Roger Costello

Reputation: 3209

ANTLR4: Using non-ASCII characters in token rules

On page 74 of the ANTLR4 book it says that any Unicode character can be used in a grammar simply by specifying its codepoint in this manner:

'\uxxxx'

where xxxx is the hexadecimal value for the Unicode codepoint.

So I used that technique in a token rule for an ID token:

grammar ID;

id : ID EOF ;

ID : ('a' .. 'z' | 'A' .. 'Z' | '\u0100' .. '\u017E')+ ;
WS : [ \t\r\n]+ -> skip ;

When I tried to parse this input:

Gŭnter

ANTLR throws an error saying that it does not recognize ŭ. (The ŭ character is hex 016D, so it is within the specified range.)

What am I doing wrong please?

Upvotes: 10

Views: 10695

Answers (4)

Aubin

Reputation: 14883

Grammar:

NAME:
   [A-Za-z][0-9A-Za-z\u0080-\uFFFF_]+
;

Java:

import org.antlr.v4.runtime.CharStream;
import org.antlr.v4.runtime.CharStreams;
import org.antlr.v4.runtime.CommonTokenStream;
import org.antlr.v4.runtime.TokenStream;

final class RequirementParser {

   static SystemContext parse( String requirement ) {
      requirement = requirement.replaceAll( "\t", "   " );
      final CharStream     charStream = CharStreams.fromString( requirement );
      final StimulusLexer  lexer      = new StimulusLexer( charStream );
      final TokenStream    tokens     = new CommonTokenStream( lexer );
      final StimulusParser parser     = new StimulusParser( tokens );
      final SystemContext  system     = parser.system();
      if( parser.getNumberOfSyntaxErrors() > 0 ) {
         Debug.format( requirement );
      }
      return system;
   }

   private RequirementParser() {/**/}
}

Source:

Lexers and Unicode text

Upvotes: 4

lenny kovac

Reputation: 11

You can specify the encoding when actually reading the file, so there is no need to specify the encoding in the grammar. For Kotlin/Java that could look like this:

val inputStream: CharStream = CharStreams.fromFileName(fileName, Charset.forName("UTF-16LE"))
val lexer = BlastFeatureGrammarLexer(inputStream)

Supported Charsets by Java/Kotlin

Upvotes: 1

Alice Oualouest

Reputation: 918

For those having the same problem using ANTLR4 from Java code, with ANTLRInputStream being deprecated, here is a working way to pass multi-character Unicode data from a String to the MyLexer lexer:

import java.nio.CharBuffer;

import org.antlr.v4.runtime.CodePointBuffer;
import org.antlr.v4.runtime.CodePointCharStream;
import org.antlr.v4.runtime.CommonTokenStream;

String myString = "\u2013";

CharBuffer charBuffer = CharBuffer.wrap(myString.toCharArray());
CodePointBuffer codePointBuffer = CodePointBuffer.withChars(charBuffer);
CodePointCharStream cpcs = CodePointCharStream.fromBuffer(codePointBuffer);

MyLexer lexer = new MyLexer(cpcs);
CommonTokenStream tokens = new CommonTokenStream(lexer);
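
If you are on a runtime that provides CharStreams (ANTLR 4.7 or later), the same code-point stream can be built in one call with CharStreams.fromString. A minimal sketch, assuming the generated lexer is called MyLexer as above:

import org.antlr.v4.runtime.CharStream;
import org.antlr.v4.runtime.CharStreams;
import org.antlr.v4.runtime.CommonTokenStream;

// fromString wraps the String in a code-point stream, so non-ASCII
// characters (including supplementary ones) are handled correctly.
CharStream cpcs = CharStreams.fromString("\u2013");
MyLexer lexer = new MyLexer(cpcs);
CommonTokenStream tokens = new CommonTokenStream(lexer);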

Upvotes: 5

Terence Parr

Reputation: 5962

ANTLR is ready to accept 16-bit characters, but by default many locales will read characters in as bytes (8 bits). You need to specify the appropriate encoding when you read from the file using the Java libraries. If you are using the TestRig, perhaps through the grun alias/script, pass the argument -encoding utf-8 (or whichever encoding applies). If you look at the source code of that class, you will see the following mechanism:

InputStream is = new FileInputStream(inputFile);
Reader r = new InputStreamReader(is, encoding); // e.g., euc-jp or utf-8
ANTLRInputStream input = new ANTLRInputStream(r);
XLexer lexer = new XLexer(input);
CommonTokenStream tokens = new CommonTokenStream(lexer);
...
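
ANTLRInputStream has since been deprecated. On ANTLR 4.7+ the same mechanism is expressed with CharStreams, and the TestRig's -encoding option does this for you (for the ID grammar in the question, something like grun ID id -encoding utf-8 input.txt, where input.txt stands for whatever file holds the text). A rough sketch of the programmatic form, assuming the generated lexer is still called XLexer:

import java.nio.charset.StandardCharsets;

import org.antlr.v4.runtime.CharStream;
import org.antlr.v4.runtime.CharStreams;
import org.antlr.v4.runtime.CommonTokenStream;

// Decode the file with an explicit charset (e.g., UTF-8 or euc-jp)
// while building the code-point stream the lexer reads from.
CharStream input = CharStreams.fromFileName(inputFile, StandardCharsets.UTF_8);
XLexer lexer = new XLexer(input);
CommonTokenStream tokens = new CommonTokenStream(lexer);
...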

Upvotes: 11
