Reputation:
How do I properly set the default character encoding used by the JVM (1.5.x) programmatically?
I have read that -Dfile.encoding=whatever
used to be the way to go for older JVMs. I don't have that luxury, for reasons I won't get into.
I have tried:
System.setProperty("file.encoding", "UTF-8");
And the property gets set, but it doesn't seem to cause the final getBytes() call below to use UTF-8:
System.setProperty("file.encoding", "UTF-8");
byte[] inbytes = new byte[1024];
FileInputStream fis = new FileInputStream("response.txt");
fis.read(inbytes);
FileOutputStream fos = new FileOutputStream("response-2.txt");
String in = new String(inbytes, "UTF8");
fos.write(in.getBytes()); // still uses the platform default encoding, not UTF-8
Upvotes: 415
Views: 870845
Reputation: 483
My team encountered the same issue on Windows machines and managed to resolve it in two ways:
a) Set the environment variable JAVA_TOOL_OPTIONS to -Dfile.encoding=UTF8 (this can also be done in the Windows system preferences).
b) Add -Dfile.encoding=UTF-8 within the <jvmArguments> element of your pom.xml:
<jvmArguments>
-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8001
-Dfile.encoding=UTF-8
</jvmArguments>
Upvotes: 5
Reputation: 269627
Unfortunately, the file.encoding property has to be specified as the JVM starts up; by the time your main method is entered, the character encoding used by String.getBytes() and the default constructors of InputStreamReader and OutputStreamWriter has been permanently cached.
As another user points out, in a special case like this, the environment variable JAVA_TOOL_OPTIONS can be used to specify this property, but it's normally done like this:
java -Dfile.encoding=UTF-8 … com.x.Main
Charset.defaultCharset() will reflect changes to the file.encoding property, but most of the code in the core Java libraries that needs to determine the default character encoding does not use this mechanism.
When you are encoding or decoding, you can query the file.encoding property or Charset.defaultCharset() to find the current default encoding, and use the appropriate method or constructor overload to specify it.
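For illustration, here is a minimal sketch of that advice (the file names are borrowed from the question; every call takes an explicit charset name, so nothing depends on file.encoding):
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.Charset;

public class ExplicitEncodingCopy {
    public static void main(String[] args) throws Exception {
        // Informational only: what the JVM cached at startup.
        System.out.println("Default charset: " + Charset.defaultCharset());

        // Pass the charset explicitly instead of relying on the default.
        BufferedReader reader = new BufferedReader(
                new InputStreamReader(new FileInputStream("response.txt"), "UTF-8"));
        Writer writer = new OutputStreamWriter(new FileOutputStream("response-2.txt"), "UTF-8");
        try {
            int c;
            while ((c = reader.read()) != -1) {
                writer.write(c);
            }
        } finally {
            reader.close();
            writer.close();
        }
    }
}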
Upvotes: 369
Reputation: 361
Try this:
new OutputStreamWriter(new FileOutputStream("Your_file_fullpath"), Charset.forName("UTF-8"))
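For context, a fuller sketch of the same idea (the placeholder path is kept from the snippet above, and the sample text is only an example):
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.Charset;

public class WriteUtf8Example {
    public static void main(String[] args) throws IOException {
        // "Your_file_fullpath" is the placeholder path from the snippet above.
        Writer writer = new OutputStreamWriter(
                new FileOutputStream("Your_file_fullpath"), Charset.forName("UTF-8"));
        try {
            writer.write("Sample text written as UTF-8 regardless of the platform default");
        } finally {
            writer.close();
        }
    }
}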
Upvotes: 15
Reputation: 137
Setting JVM arguments while starting the application helped me resolve this issue: java -Dfile.encoding=UTF-8 -Dsun.jnu.encoding=UTF-8
file.encoding=UTF-8 - this allows Unicode characters in the file contents.
sun.jnu.encoding=UTF-8 - this allows Unicode characters in file names on the file system.
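As a quick sanity check, assuming the application was started with those flags, you can print the values at runtime:
// Expected output is UTF-8 for both properties if the flags were picked up.
System.out.println("file.encoding    = " + System.getProperty("file.encoding"));
System.out.println("sun.jnu.encoding = " + System.getProperty("sun.jnu.encoding"));
System.out.println("defaultCharset() = " + java.nio.charset.Charset.defaultCharset());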
Upvotes: 0
Reputation: 330
Solved this problem in my project; hope it helps someone.
I use the LIBGDX Java framework and also had this issue in my Android Studio project. On Mac OS the encoding is correct, but on Windows 10 special characters, symbols, and Russian characters show up as question marks (?????) and other incorrect symbols.
Change the Android Studio project settings:
File -> Settings... -> Editor -> File Encodings
to UTF-8 in all three fields (Global Encoding, Project Encoding, and the Default below).
In any Java file, set:
System.setProperty("file.encoding","UTF-8");
And to verify, print a debug log:
System.out.println("My project encoding is : "+ Charset.defaultCharset());
Upvotes: 1
Reputation: 71
We were having the same issues. We methodically tried several suggestions from this article (and others) to no avail. We also tried adding -Dfile.encoding=UTF8, and nothing seemed to work.
For people having this issue, the following article finally helped us track it down; it describes how the locale setting can break Unicode/UTF-8 in Java/Tomcat:
http://www.jvmhost.com/articles/locale-breaks-unicode-utf-8-java-tomcat
Setting the locale correctly in the ~/.bashrc file worked for us.
Upvotes: 7
Reputation: 33
I'm using Amazon (AWS) Elastic Beanstalk and successfully changed the encoding to UTF-8.
In Elastic Beanstalk, go to Configuration > Software, "Environment properties". Add (name) JAVA_TOOL_OPTIONS with (value) -Dfile.encoding=UTF8.
After saving, the environment will restart with the UTF-8 encoding.
Upvotes: 2
Reputation: 549
mvn clean install -Dfile.encoding=UTF-8 -Dmaven.repo.local=/path-to-m2
This command worked with exec-maven-plugin to resolve the following error while configuring a Jenkins task:
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
Error occurred during initialization of VM
java.nio.charset.IllegalCharsetNameException: "UTF-8"
at java.nio.charset.Charset.checkName(Charset.java:315)
at java.nio.charset.Charset.lookup2(Charset.java:484)
at java.nio.charset.Charset.lookup(Charset.java:464)
at java.nio.charset.Charset.defaultCharset(Charset.java:609)
at sun.nio.cs.StreamEncoder.forOutputStreamWriter(StreamEncoder.java:56)
at java.io.OutputStreamWriter.<init>(OutputStreamWriter.java:111)
at java.io.PrintStream.<init>(PrintStream.java:104)
at java.io.PrintStream.<init>(PrintStream.java:151)
at java.lang.System.newPrintStream(System.java:1148)
at java.lang.System.initializeSystemClass(System.java:1192)
Upvotes: 1
Reputation: 12181
In case you are using Spring Boot and want to pass the file.encoding argument to the JVM, you have to run it like this:
mvn spring-boot:run -Drun.jvmArguments="-Dfile.encoding=UTF-8"
This was needed for us since we were using JTwig templates and the operating system had ANSI_X3.4-1968, which we found out through System.out.println(System.getProperty("file.encoding"));
Hope this helps someone!
Upvotes: 7
Reputation: 1
Recently I bumped into a local company's Notes 6.5 system and found that its webmail would show unidentifiable characters on a Windows installation with a non-Chinese locale. After digging online for several weeks, I figured it out just a few minutes ago:
In the Java properties, add the following string to Runtime Parameters:
-Dfile.encoding=MS950 -Duser.language=zh -Duser.country=TW -Dsun.jnu.encoding=MS950
A UTF-8 setting would not work in this case.
Upvotes: 0
Reputation: 12662
Following @Caspar's comment on the accepted answer, the preferred way to fix this according to Sun is to
"change the locale of the underlying platform before starting your Java program."
http://bugs.java.com/view_bug.do?bug_id=4163515
For Docker, see:
http://jaredmarkell.com/docker-and-locales/
Upvotes: 0
Reputation: 2682
I think a better approach than setting the platform's default character set, especially as you seem to have restrictions on affecting the application deployment, let alone the platform, is to call the much safer String.getBytes("charsetName"). That way your application is not dependent on things beyond its control.
I personally feel that String.getBytes() should be deprecated, as it has caused serious problems in a number of cases I have seen, where the developer did not account for the default charset possibly changing.
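For illustration, a small self-contained sketch of that approach (the sample string is just an example):
import java.io.UnsupportedEncodingException;

public class ExplicitCharsetRoundTrip {
    public static void main(String[] args) throws UnsupportedEncodingException {
        String text = "déjà vu";
        // Encode and decode with an explicit charset name; nothing depends on file.encoding.
        byte[] utf8Bytes = text.getBytes("UTF-8");
        String decoded = new String(utf8Bytes, "UTF-8");
        System.out.println(text.equals(decoded)); // true on any platform default
    }
}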
Upvotes: 40
Reputation: 1398
I have tried a lot of things, but the sample code here works perfectly. Link
The crux of the code is:
String s = "एक गाव में एक किसान";
String out = new String(s.getBytes("UTF-8"), "ISO-8859-1"); // encodes to UTF-8 bytes, then reinterprets those bytes as ISO-8859-1
Upvotes: 7
Reputation: 955
I have a hacky way that definitely works!
import java.lang.reflect.Field;
import java.nio.charset.Charset;

System.setProperty("file.encoding", "UTF-8");
// Clear the cached default charset so the JVM recomputes it from file.encoding on the next lookup
Field charset = Charset.class.getDeclaredField("defaultCharset");
charset.setAccessible(true);
charset.set(null, null);
This way you trick the JVM, which thinks the charset has not been set yet and sets it again to UTF-8, at runtime!
Upvotes: 83
Reputation: 69
We set these two system properties together and it makes the system handle everything in UTF-8:
file.encoding=UTF8
client.encoding.override=UTF-8
Upvotes: 0
Reputation: 2290
From the JVM™ Tool Interface documentation…
Since the command-line cannot always be accessed or modified, for example in embedded VMs or simply VMs launched deep within scripts, a JAVA_TOOL_OPTIONS variable is provided so that agents may be launched in these cases.
By setting the (Windows) environment variable JAVA_TOOL_OPTIONS to -Dfile.encoding=UTF8, the (Java) System property will be set automatically every time a JVM is started. You will know that the parameter has been picked up because the following message will be posted to System.err:
Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=UTF8
Upvotes: 195
Reputation: 2682
Not clear on what you do and don't have control over at this point. If you can interpose a different OutputStream class on the destination file, you could use a subtype of OutputStream which converts Strings to bytes under a charset you define, say UTF-8 by default. If modified UTF-8 is sufficient for your needs, you can use DataOutputStream.writeUTF(String):
byte[] inbytes = new byte[1024];
FileInputStream fis = new FileInputStream("response.txt");
fis.read(inbytes);
String in = new String(inbytes, "UTF8");
DataOutputStream out = new DataOutputStream(new FileOutputStream("response-2.txt"));
out.writeUTF(in); // no getBytes() here; writeUTF emits a 2-byte length prefix followed by modified UTF-8
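For completeness, a brief sketch of reading such a file back: the output of writeUTF is length-prefixed modified UTF-8, not a plain UTF-8 text file, so it should be read with readUTF:
DataInputStream din = new DataInputStream(new FileInputStream("response-2.txt"));
String roundTripped = din.readUTF(); // reads the length prefix, then decodes the modified UTF-8 bytes
din.close();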
If this approach is not feasible, it may help if you clarify here exactly what you can and can't control in terms of data flow and execution environment (though I know that's sometimes easier said than determined). Good luck.
Upvotes: 1
Reputation: 45398
I can't answer your original question, but I would like to offer you some advice: don't depend on the JVM's default encoding. It's always best to explicitly specify the desired encoding (e.g. "UTF-8") in your code. That way, you know it will work even across different systems and JVM configurations.
Upvotes: 20