This Bugzilla instance is a read-only archive of historic NetBeans bug reports. To report a bug in NetBeans please follow the project's instructions for reporting issues.
Currently there is a significant problem: the Schliemann lexer cannot properly lex tokens that span sections of a higher-level embedding language. For example, a JSP file containing <script> var x = "${"ahoj"}"; </script> causes the JS lexer to create error tokens at the boundaries with the expression language. From what I have been told, the Schliemann lexer needs to look across the "gap" to determine the token type correctly. However, the current lexer architecture doesn't allow this: the input character stream provided to the lexer implementation ends at the end of the section, so the lexer cannot look ahead. There is a lexer issue filed for this problem, #117450 (Provide unified LexerInput across multiple joined embedded sections). Since it doesn't look like #117450 can be fully fixed in 6.0, we may need a workaround. Preprocessing the document (extracting the embedded language pieces, concatenating them into a single character stream, lexing that stream separately, and then translating the artificial offsets back to the real ones) seems doable. Any opinions? Ideas? For completeness: without fixing this problem we likely won't be able to fully fix #111546 and #117802.
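To make the proposed workaround concrete, here is a minimal sketch of the offset-translation part: joining the embedded sections into one virtual character stream and mapping offsets in that stream back to real document offsets. All names here (Section, join, toDocumentOffset) are hypothetical illustrations, not part of the actual Lexer API:

```java
import java.util.List;

// Sketch of the proposed workaround: concatenate the embedded-language
// sections into one virtual character stream, lex that stream as a whole,
// then translate the virtual (artificial) offsets back to document offsets.
public class JoinedSections {

    // One embedded section: its offset in the real document plus its text.
    public record Section(int documentOffset, String text) {}

    // Concatenate all sections into the virtual input handed to the lexer,
    // so the lexer can look across the gaps between sections.
    public static String join(List<Section> sections) {
        StringBuilder sb = new StringBuilder();
        for (Section s : sections) {
            sb.append(s.text());
        }
        return sb.toString();
    }

    // Map an offset in the joined stream back to a real document offset,
    // used when creating tokens for the original document.
    public static int toDocumentOffset(List<Section> sections, int joinedOffset) {
        int consumed = 0;
        for (Section s : sections) {
            if (joinedOffset < consumed + s.text().length()) {
                return s.documentOffset() + (joinedOffset - consumed);
            }
            consumed += s.text().length();
        }
        throw new IllegalArgumentException("offset past end of joined input");
    }
}
```

A token produced by the lexer on the joined stream would then be split back into per-section parts by translating its start and end offsets this way.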
RFE? P3?
Blocks P2s ...
*** Issue 118914 has been marked as a duplicate of this issue. ***
fixed.