This Bugzilla instance is a read-only archive of historic NetBeans bug reports. To report a bug in NetBeans please follow the project's instructions for reporting issues.
A few seconds of startup are wasted processing the content of the Services/ directory, even though it was already processed in the previous session. We could speed up startup by storing all the FolderLookup.Pairs persistently and simply reloading them at startup, instead of initializing the whole XML settings mechanism.
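The proposal above can be sketched in plain JDK terms. This is a hypothetical simplification, not the actual FolderLookup code: persist the parsed entries at shutdown so the next startup can reload them instead of re-scanning the folder (here the entries are just strings; the real pairs would have to be serializable themselves).

```java
import java.io.*;
import java.util.*;

/** Hypothetical sketch of a persistent lookup cache: store the parsed
 *  entries at shutdown, reload them at startup, and fall back to the
 *  full XML settings machinery only on a cache miss. */
public class PairCache {
    private final File cacheFile;

    public PairCache(File cacheFile) {
        this.cacheFile = cacheFile;
    }

    /** Store the (serializable) entries at shutdown; false on I/O failure. */
    public boolean store(List<String> pairs) {
        try (ObjectOutputStream out =
                new ObjectOutputStream(new FileOutputStream(cacheFile))) {
            out.writeObject(new ArrayList<>(pairs));
            return true;
        } catch (IOException ex) {
            return false;
        }
    }

    /** Reload at startup; null means a cache miss, i.e. rebuild from disk. */
    @SuppressWarnings("unchecked")
    public List<String> load() {
        if (!cacheFile.isFile()) {
            return null;
        }
        try (ObjectInputStream in =
                new ObjectInputStream(new FileInputStream(cacheFile))) {
            return (List<String>) in.readObject();
        } catch (IOException | ClassNotFoundException ex) {
            return null; // corrupt or stale cache: ignore and rebuild
        }
    }
}
```

The interesting engineering is not in the serialization itself but in invalidation, which is exactly what the rest of this thread debates.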
First of all, one has to measure what the possible gain would be, but I expect it to be huge...
Measured. The implementation seems to shave 5% off the startup time. However, issue 20537 still remains: it causes a few exceptions on startup that prevent the persistent state from being restored correctly, and thus slows things down, because the main point of the persistent state - not having to check the files on disk at all - cannot be fully achieved. Even so, the partial implementation reduced the startup time from 20.1s to 19.1s on my computer.
Created attachment 4703 [details] The patch of core and openide that demonstrates the 5% speedup
To use the patch, start the IDE with: -J-Dnetbeans.lookup=quite -J-Dorg.netbeans.log.startup=print On exit it will store the state of the lookup to $USER_DIR/lookup.ser, and if you start the IDE again it will reuse lookup.ser and restore its content in NbTopManager.modulesClassPathInitialized().
What happens if the user has modified some settings on disk while the IDE was shut down? E.g. the user keeps $userdir/system/ under CVS. While the IDE is not running, the user does a cvs up and gets new settings from other people, or reverts some changes. Will they be visible when the IDE next starts up, or will the lookup cache need to be cleared? Also, isn't adding "implements Serializable" to various public Lookup impls an incompatible API change if these classes are not final? An existing subclass might be written to contain nonserializable instance fields.
Ad. changes while the IDE is shut down: the lookup should think that there is an instance, but when queried it should update its state. allInstances().size() == 0, but allItems().size() > 0 && allItems().get(0).getInstance() == null, so the lookup should slowly get into an up-to-date state. Btw. I thought that we should do some checks to find out whether the disk has changed since the last session, and in such a case disable the cached lookup. Ad. Serializable - it is not too compatible, but it was just a sketch.
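The lazy-revalidation behavior described above (an item is visible from the cache, but its instance resolves to null until the backing file is re-read) could look roughly like this. This is an illustrative sketch, not the real Lookup.Item API; the class and method names are invented:

```java
import java.util.function.Supplier;

/** Illustrative sketch: a cached item remembers its id from the previous
 *  session, but its instance is resolved lazily. allItems() can therefore
 *  report the entry while getInstance() returns null until the backing
 *  file is actually re-checked on disk. */
class CachedItem<T> {
    private final String id;
    private final Supplier<T> loader; // re-reads the file when first queried
    private T instance;               // null until resolved

    CachedItem(String id, Supplier<T> loader) {
        this.id = id;
        this.loader = loader;
    }

    String getId() {
        return id;
    }

    T getInstance() {
        if (instance == null) {
            instance = loader.get(); // may stay null if the file was deleted
        }
        return instance;
    }
}
```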
Not for 3.4 - sorry.
If you do implement this, please do not forget to mention any possible caveats for module authors in the Upgrade Guide.
Probably this should get reassigned to me - Trung, any opinion?
Is this still planned for 4.0?
I'm trying to work on this.
Have an apparently working patch, but it doesn't seem to help much at all. Average times: unoptimized 15.572s (std. dev. 0.453), optimized 15.429s (std. dev. 0.418); improvement 0.143s, i.e. 0.92%. Also the patch does not yet check for cache hits; it assumes every run is a hit. To check properly would require checking the timestamps of all module JARs (or just piggybacking on the layer cache stamp?) and adding in a stamp of $userdir/system/Services/ (i.e. customized services), which I estimate would add at least 100msec - almost as much as is gained here. Does not look worth the trouble (and possible bugs) so far.
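The cache-hit check discussed here (and implemented in the later revised patch as "a hash of relevant files and timestamps") can be sketched as combining each file's path and last-modified time into a single stamp, then comparing that stamp against the one recorded when the cache was written. A minimal sketch, with an invented class name:

```java
import java.io.File;

/** Hypothetical sketch of a cache validity stamp: fold the paths and
 *  last-modified times of the relevant files (module JARs, the
 *  customized Services/ folder, ...) into one long. If the stamp
 *  computed at startup differs from the stored one, the cache is stale. */
public class CacheStamp {
    public static long compute(File... files) {
        long h = 17L;
        for (File f : files) {
            h = h * 31 + f.getAbsolutePath().hashCode();
            h = h * 31 + f.lastModified(); // 0L if the file is missing
        }
        return h;
    }
}
```

Note the trade-off the comment above points out: stat-ing every module JAR has a real cost of its own, which can eat most of the saving.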
Created attachment 8394 [details] Current working patch (against dev sources)
However, with -nogui -nosplash the numbers are much clearer. Average times: unoptimized 11.675s (std. dev. 0.257), optimized 10.749s (std. dev. 0.021); improvement 0.926s, i.e. 7.93%.
Jesse, perhaps you should do the measurements on S1S with a bigger set of modules to see how it scales.
"perhaps you should do the measurements on S1S with a bigger set of modules to see how it scales" - good idea, thanx.
Created attachment 8460 [details] Revised working patch, which keeps a hash of relevant files and timestamps
Stats with the timestamp check, on a dev S1SEE build in a ramdisk: unoptimized 18.323s (std. dev. 0.037), optimized 18.097s (std. dev. 0.013); improvement 0.226s, i.e. 1.23%. Still an improvement, but not too exciting.
Same test in NB (stable-with-apisupport): unoptimized 11.803s (std. dev. 0.190), optimized 10.895s (std. dev. 0.009); improvement 0.908s, i.e. 7.69%. So it seems the timestamp check is not a major factor, but something (as yet TBD) in S1S actually slows things down - note that the absolute savings in NB is much larger. Perhaps lookup deserialization is slower in S1S for some reason - but this would seem odd, since the S1S lookup cache is 22K compared to NB's 18K, hardly a big difference: a few extra executors.
Extra time loading the lookup cache in S1S: 1130ms rather than 793ms; extra time storing: 115ms rather than 78ms. That explains only about half the loss of improvement, even assuming that loading the lookup directly from disk (parsing *.settings etc.) is free, which it certainly isn't.
Profiling on S1SEE with the lookup cache on reveals that shutdown is very slow:
1. XMLSettingsHandler.searchFolder is rather slow, because it calls getCookie on every .settings file, forcing them all to be parsed though they had not been before! This could surely be optimized.
2. Many modules do stuff in close() or closing() involving system options, forcing the options to be read for the first time, and are really dumb about it. E.g. DatabaseOption checks all JDBC drivers! The Database module tries to store its system option, it looks like.
2a. By far the worst, however, accounting for no less than 9.7% of *total time to start up and shut down again*, is jwd's installer, which in close() seems to do a lot of class loading, compounded by some apparent performance problems in ProxyClassLoader.getPackage for deeply dependent modules (jwd appears to depend on just about everything else).
I suspect that the improvement would be much more marked if the shutdown sequence (post-GUI-hiding stuff) were excluded from the timing results.
"I suspect that improvement would be much more marked if the shutdown sequence (post-GUI hiding stuff) were excluded from timing results." - actually probably not, since ModuleSystem.shutDown must complete successfully before WindowUtils.hideAllFrames. Probably that should be changed: ModuleManager.closing(), if successful, should be followed by WindowUtils.hideAllFrames, then by .close(). Most of the wasted time in S1SEE (maybe 80%) was in close(), not closing().
> Probably that should be changed: ModuleManager.closing(), if > successful, should be followed by WindowUtils.hideAllFrames, > then by .close() Dafe?
No, the exit code is not really part of the window system; it is more tied to the module system etc. There is nothing wrong with hideFrames, it is simply being called a bit too early by NbTopManager. I will commit what I have, since it seems to work fine and does improve startup time - though shutdown time for S1SEE may be slowed down by almost as much.
committed * Up-To-Date 1.13 core/manifest.mf
added * Up-To-Date 1.1 core/src/org/netbeans/core/LookupCache.java
committed * Up-To-Date 1.176 core/src/org/netbeans/core/NbTopManager.java
committed * Up-To-Date 1.92 openide/openide-spec-vers.properties
committed * Up-To-Date 1.124 openide/api/doc/changes/apichanges.xml
committed * Up-To-Date 1.20 openide/src/org/openide/loaders/FolderLookup.java
committed * Up-To-Date 1.126 openide/src/org/openide/loaders/XMLDataObject.java
committed * Up-To-Date 1.26 openide/src/org/openide/util/lookup/AbstractLookup.java
committed * Up-To-Date 1.16 openide/src/org/openide/util/lookup/InheritanceTree.java
committed * Up-To-Date 1.12 openide/src/org/openide/util/lookup/ProxyLookup.java
change lookup/core -> lookup/openide
Note: affected source code is in core, not openide