The supplied dates indicate when the API change was made on the CVS trunk. From this you can generally tell whether the change is present in a given build: for trunk builds, check whether the build was made before or after the change; for builds on a stabilization branch, check whether the branch was created before or after the given date. In some cases a corresponding API change has been made both on the trunk and on an in-progress stabilization branch when it was needed for a bug fix; this ought to be marked in this list.
Fuller descriptions of all changes can be found below (follow links).
Not all deprecations are listed here; deprecated APIs are assumed to continue to work in essence. For a full list of deprecations, please consult the Javadoc.
These API specification versions may be used to indicate that a module requires a certain API feature in order to function. For example, if you see here a feature you need which is labelled 1.20, your manifest should contain in its main attributes the line:
OpenIDE-Module-Module-Dependencies: org.netbeans.modules.lexer/1 > 1.20
Embeddings that request input sections to be joined before lexing
are now lexed as a single section.
Token.isRemoved() was added to check whether a particular token is still present in the token hierarchy or was removed as part of a modification.
Support for token hierarchy snapshots and generic character preprocessing was removed from the API and SPI, since there were no use cases yet and it should be possible to add the functionality later in a backward-compatible way. Some further changes regarding generification etc. were also performed.
LexerInput.integerState() was removed.
TokenSequence.removeEmbedding() was added as a counterpart to embedding creation.
TokenSequence.isValid() was added to check whether the token sequence can still be used for iteration (i.e. the underlying input has not been modified in the meantime).
Embeddings with joined sections are now supported, along with some minor related additions.
Some is* methods with trivial implementations were removed from LanguagePath.
TokenChange.embeddedChange(Language) was removed because
there might be multiple such changes and they can be gathered
with existing methods.
The EmbeddingPresence enum was added to speed up queries for embedded token sequences.
The previously added Language.refresh() was removed; LanguageProvider.firePropertyChange(PROP_LANGUAGE) provides an alternative.
Language.refresh() was added to allow the languages framework and other clients to update the contents of a language dynamically.
TokenHierarchy.tokenSequenceList() was added to find token sequences with a certain language path throughout the whole input source or just within given offset bounds.
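The lookup just described can be sketched in a small standalone model. Plain Java is used here; the Section record, its fields, and the method shape are illustrative stand-ins, not the real org.netbeans.api.lexer types:

```java
import java.util.ArrayList;
import java.util.List;

public class PathLookupDemo {
    /** Illustrative stand-in for an embedded token sequence with its language path. */
    public record Section(String languagePath, int startOffset, int endOffset) {}

    /** Collect the sections with the given language path that overlap the offset bounds. */
    public static List<Section> tokenSequenceList(List<Section> all, String path,
                                                  int startOffset, int endOffset) {
        List<Section> result = new ArrayList<>();
        for (Section s : all) {
            if (s.languagePath().equals(path)
                    && s.endOffset() > startOffset && s.startOffset() < endOffset) {
                result.add(s);
            }
        }
        return result;
    }
}
```

Passing 0 and Integer.MAX_VALUE as the bounds corresponds to searching the whole input source.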
TokenChange.isBoundsChange() was added to check for changes that only modify token bounds (see the method's javadoc).
Improved incrementality for embedded sections for bounds changes.
The PartType enum was added; it identifies whether a token is COMPLETE, or which part of a complete token a partial token represents (START, INNER, or END).
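A minimal standalone sketch of the idea, assuming a token split into some number of pieces; the enum here is a local stand-in, not the real API type:

```java
import java.util.ArrayList;
import java.util.List;

public class PartTypeDemo {
    /** Local stand-in for the API's PartType enum. */
    public enum PartType { COMPLETE, START, INNER, END }

    /** Label the pieces of a token that was split into pieceCount parts. */
    public static List<PartType> label(int pieceCount) {
        List<PartType> parts = new ArrayList<>();
        if (pieceCount <= 1) {
            parts.add(PartType.COMPLETE);          // token was not split at all
            return parts;
        }
        for (int i = 0; i < pieceCount; i++) {
            if (i == 0) parts.add(PartType.START);                  // first piece
            else if (i == pieceCount - 1) parts.add(PartType.END);  // last piece
            else parts.add(PartType.INNER);                         // middle pieces
        }
        return parts;
    }
}
```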
TokenSequence.move() now positions the sequence before the particular token that "contains" the given offset (or after the last token if the offset is too high). An additional moveNext() call is necessary to actually move to the next token.
TokenSequence.moveIndex() was modified in a similar way. The previous positioning methods were replaced by moveStart(), which positions before the first token, and moveEnd(), which positions after the last token.
TokenSequence.isEmpty() was added to check whether the token sequence contains no tokens.
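The positioning semantics described in the entries above can be modeled in a self-contained sketch. CursorModel and its internals are illustrative stand-ins built on plain Java collections, not the real TokenSequence:

```java
import java.util.List;

public class CursorModel {
    private final List<String> tokens;   // token texts; concatenated they form the input
    private int index;                   // the cursor sits BEFORE tokens.get(index)
    private String current;              // token landed on by the last moveNext()

    public CursorModel(List<String> tokens) { this.tokens = tokens; }

    /** Position before the token containing the offset (or after the last token). */
    public void move(int offset) {
        int start = 0;
        for (int i = 0; i < tokens.size(); i++) {
            if (offset < start + tokens.get(i).length()) { index = i; return; }
            start += tokens.get(i).length();
        }
        index = tokens.size();           // offset too high: position after the last token
    }

    public void moveStart() { index = 0; }              // before the first token
    public void moveEnd()   { index = tokens.size(); }  // after the last token
    public boolean isEmpty() { return tokens.isEmpty(); }

    /** Required after move()/moveStart() to actually land on a concrete token. */
    public boolean moveNext() {
        if (index >= tokens.size()) return false;
        current = tokens.get(index++);
        return true;
    }

    public String token() { return current; }
}
```

Note that after move(offset) the cursor is only *positioned*; token() is valid only once moveNext() has returned true.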
Lexer.release() was added; it is useful for caching of lexer instances.
The TokenHierarchyEvent.Type inner class was replaced by the TokenHierarchyEventType top-level class.
A method for creation of a custom embedding was added; an event is fired after embedding creation.
Affected-offset information is now provided for changes. There can now be more than one embedded change in a TokenChange.
The tokenComplete parameter was removed from LanguageHierarchy.embedding() because token incompleteness will be handled in a different way.
The order of parameters in LanguageProvider was swapped to be in sync with related SPI methods.
LanguageEmbedding is now a final class (instead of an abstract class) with a private constructor and a static create() method. That allows better control over the evolution of the class, and it also makes it possible to cache the created embeddings to save memory.
LanguageEmbedding is now generified with T extends TokenId, parameterizing it by the language that it contains.
The TokenHierarchy.languagePaths() set contains all language paths used in the token hierarchy; an event is fired after that set changes.
The LanguageDescription.find(String mimePath) method was added; it can be used for looking up LanguageDescriptions by their mime path.
Generification of methods of LanguagePath, TokenSequence and other classes has been improved.
The original API and SPI were rebuilt completely (under editor_api branch)
to comply with the standard requirements for the NetBeans APIs and allow
for better API evolution in the future.
The major version of the lexer module was increased to 2.
LexerInput.getReadText(int start, int end) was added, and LexerInput.backup(int count) now also accepts negative values, to redo character backups. These methods are necessary for more efficient handling of the input. Unfortunately this change is incompatible, because LexerInput is an interface.
LexerInput.createToken(TokenId id, int tokenLength) was removed; tokens are now created through TokenFactory.
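The read/backup behavior described above can be sketched in a standalone model. InputModel is an illustrative stand-in for the real LexerInput interface, implemented over a plain CharSequence:

```java
public class InputModel {
    public static final int EOF = -1;    // sentinel returned at end of input
    private final CharSequence text;
    private int readIndex;               // index of the next character to be read

    public InputModel(CharSequence text) { this.text = text; }

    /** Return the next character, or EOF when the input is exhausted. */
    public int read() {
        return readIndex < text.length() ? text.charAt(readIndex++) : EOF;
    }

    /** A positive count un-reads characters; a negative count redoes backups. */
    public void backup(int count) {
        int target = readIndex - count;
        if (target < 0 || target > text.length()) {
            throw new IndexOutOfBoundsException("backup(" + count + ")");
        }
        readIndex = target;
    }

    /** Text between the given offsets, akin to getReadText(start, end). */
    public CharSequence readText(int start, int end) {
        return text.subSequence(start, end);
    }
}
```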
TokenTextMatcher was renamed; the new name should be better than the original one. The documentation was also updated. It should now be clearer that there can be zero or more samples for the text of each token, and that the matcher holds the given samples and can check whether a token's text matches one of them.
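The contract just described can be sketched as a tiny standalone class. The name SampleTextMatcher and its methods are illustrative, not the actual renamed API class:

```java
public class SampleTextMatcher {
    private final CharSequence[] samples;  // zero or more known texts for a token

    public SampleTextMatcher(CharSequence... samples) { this.samples = samples; }

    /** True if tokenText is character-for-character equal to some sample. */
    public boolean matches(CharSequence tokenText) {
        outer:
        for (CharSequence s : samples) {
            if (s.length() != tokenText.length()) continue;
            for (int i = 0; i < s.length(); i++) {
                if (s.charAt(i) != tokenText.charAt(i)) continue outer;
            }
            return true;
        }
        return false;
    }
}
```

CharSequence comparison (rather than String.equals) matters here because token text in a lexer is typically exposed as a CharSequence over the underlying input.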
The LanguageProvider.findEmbeddedLanguage() method signature was changed: the method is now called findLanguageEmbedding() and returns LanguageEmbedding instead of just LanguageDescription.
TokenSequence.moveOffset() was renamed to TokenSequence.move(). The original TokenSequence.move() which is seldom used was renamed to TokenSequence.moveIndex().
The LanguageProvider class was added to the SPI package. Instances of this class can be registered in the default lookup; the lexer module will use them to find LanguageDescriptions for documents (according to their mime types) and for tokens that contain an embedded language.
The TokenIdFilter class was removed from the API. Instead, Set&lt;? extends TokenId&gt; should be used where appropriate.
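A small self-contained sketch of this replacement pattern; the TokenId interface and JavaId enum here are illustrative stand-ins, not the real API types:

```java
import java.util.Set;

public class SetFilterDemo {
    /** Illustrative stand-in for the API's TokenId interface. */
    public interface TokenId { String name(); }

    /** A concrete id enum as a language would define it. */
    public enum JavaId implements TokenId { KEYWORD, IDENTIFIER, WHITESPACE }

    /** Any id set works where a filter is expected; no dedicated filter class needed. */
    public static boolean isSkipped(TokenId id, Set<? extends TokenId> skipIds) {
        return skipIds.contains(id);
    }
}
```

Using the bounded wildcard lets callers pass an EnumSet of their concrete id type directly, which is both type-safe and efficient.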
With TokenFactory now being final, the TokenHandler is no longer needed. The few remaining overridable SPI methods were moved to LanguageHierarchy and the TokenHandler class was removed.
LanguagePath and InputAttributes parameters were added to LanguageHierarchy.createLexer() (to the end of the existing parameters) in order to allow the lexer to react to input attributes.
For consistency the parameters of LanguageHierarchy.embedding() were reordered so that the LanguagePath and InputAttributes parameters are also at the end of the list and in the same order.
A new API subpackage, org.netbeans.api.lexer.swing, will contain the Swing-related API of the lexer.
I have removed Language.find(String mimeType) - it never worked.
I have added Language.getValidId(int intId).
The whole API and implementation was moved from libsrc to src (libsrc is now abandoned) in order to better adhere to module conventions.