Discussion about official Mozilla Firefox builds
Ver Greeneyes
Posts: 1027 | Joined: June 28th, 2008, 4:57 am
Posted March 23rd, 2016, 5:35 am
Harsh86 wrote:I'd been watching that bug for a while too. Terrence Cole recently posted that after landing this enhancement, telemetry figures have improved as follows: "Median GC_MS seems to have gone from 185ms to 150ms. More importantly, the 95th percentile numbers have gone from 1s down to 0.85s."
Yep, also the percentage of full GCs dropped from something like 90% to 50%. It'd be nice if it could be reduced further (we really don't want full GCs except in extreme cases, like running out of memory), but I haven't heard any ideas on how to do that. I think this is still a long term goal, though some of the ongoing refactoring to make the GC less crazy is almost certainly keeping it in mind.
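To make that concrete, here's a rough sketch of the full-vs-zone decision; the names and the heuristic below are made up for illustration, not SpiderMonkey's actual code:
Code: Select all
#include <cstddef>
#include <vector>

// A zone groups related objects so they can be collected without touching
// the rest of the heap (a "partial" GC).
struct Zone {
    size_t bytesAllocated = 0;
    size_t gcTriggerBytes = 0;
    bool needsCollection() const { return bytesAllocated >= gcTriggerBytes; }
};

enum class GCKind { Full, PerZone };

// Only fall back to a full collection in extreme situations, e.g. when we
// are close to running out of memory; otherwise collect just the zones that
// asked for it. The telemetry improvement above amounts to taking the
// PerZone path far more often.
GCKind chooseCollection(size_t heapBytes, size_t heapLimitBytes) {
    if (heapBytes > heapLimitBytes / 10 * 9)  // nearly out of memory
        return GCKind::Full;
    return GCKind::PerZone;
}

// Gather the zones whose own triggers fired; only these get collected.
std::vector<const Zone*> zonesToCollect(const std::vector<Zone>& zones) {
    std::vector<const Zone*> out;
    for (const Zone& zone : zones)
        if (zone.needsCollection())
            out.push_back(&zone);
    return out;
}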
Grantius

Posts: 1544 | Joined: June 28th, 2011, 4:14 pm
Posted March 23rd, 2016, 6:23 am
Ver Greeneyes wrote:Harsh86 wrote:I'd been watching that bug for a while too. Terrence Cole recently posted that after landing this enhancement, telemetry figures have improved as follows: "Median GC_MS seems to have gone from 185ms to 150ms. More importantly, the 95th percentile numbers have gone from 1s down to 0.85s."
Yep, also the percentage of full GCs dropped from something like 90% to 50%. It'd be nice if it could be reduced further (we really don't want full GCs except in extreme cases, like running out of memory), but I haven't heard any ideas on how to do that. I think this is still a long term goal, though some of the ongoing refactoring to make the GC less crazy is almost certainly keeping it in mind.
> we really don't want full GCs except in extreme cases, like running out of memory
Not even an occasional one? Say if the browser is idle for x minutes?
Micro gaming box: AMD A10-7800 APU, 8GB RAM, M350 ITX case (size of a book), Windows 10/Ubuntu. Tablet/Laptop: Asus Transformer T100, Intel Atom, 2GB RAM, Windows 10 x86. Mobile: Xiaomi Redmi Note 3 Pro
Ver Greeneyes
Posts: 1027 | Joined: June 28th, 2008, 4:57 am
Posted March 23rd, 2016, 7:04 am
Grantius wrote:Not even an occasional one? Say if the browser is idle for x minutes?
True, we probably want a shrinking GC in that case, which is a full GC. IIRC it should still be incremental though. I guess the real question is how many GCs do more work than necessary, and how many GCs are non-incremental.
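Something like this toy sketch is what I have in mind; the timer class, the five-minute threshold and the requestIncrementalFullGC() call are all invented for the example:
Code: Select all
#include <chrono>

// Hypothetical engine hook: asks for a full but still incremental collection
// that also returns unused pages to the OS when `shrinking` is true.
struct Engine {
    void requestIncrementalFullGC(bool shrinking);
};

class IdleShrinkingGCTimer {
    std::chrono::steady_clock::time_point lastActivity_ = std::chrono::steady_clock::now();
    static constexpr auto kIdleThreshold = std::chrono::minutes(5);  // the "x minutes"

public:
    void noteUserActivity() { lastActivity_ = std::chrono::steady_clock::now(); }

    // Called periodically; once the browser has been idle long enough,
    // schedule the occasional shrinking GC Grantius is asking about.
    void tick(Engine& engine) {
        if (std::chrono::steady_clock::now() - lastActivity_ >= kIdleThreshold) {
            engine.requestIncrementalFullGC(/* shrinking = */ true);
            lastActivity_ = std::chrono::steady_clock::now();  // don't re-trigger every tick
        }
    }
};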
Timvde
Posts: 269 | Joined: February 7th, 2013, 3:19 pm
Posted April 17th, 2016, 2:31 am
Does anyone know what's up with the big SunSpider regression? It appears to be part 11 of bug 887016. Was it expected? Is it a hit we're willing to take?
Ver Greeneyes
Posts: 1027 | Joined: June 28th, 2008, 4:57 am
Posted April 17th, 2016, 9:19 am
Yeah, basically the standard changed, and now they have to do lots of extra checks that they could optimize out before. There are some plans to reduce the regression - and I think the spec itself is still in flux, and has changed in a way that should improve performance somewhat - but performance might not get back to what it was. IIRC Nicolas Pierron's profile-guided branch pruning is expected to help as well, but that still isn't quite ready yet (aside from exposing a lot of pre-existing bugs that are now fixed, it currently makes the interpreter slower, which requires some changes to fix).
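For the branch pruning part, the rough idea as I understand it is to use execution counts gathered while running baseline code to avoid compiling blocks that never ran; all names and the cutoff below are illustrative only:
Code: Select all
#include <cstdint>
#include <vector>

// One basic block of the compiler's graph, with a counter that baseline
// execution has been incrementing.
struct Block {
    uint64_t hitCount = 0;   // how often this block ran under baseline
    bool pruned = false;     // if true, emit a bailout instead of compiling it
};

// Mark blocks that were (almost) never taken as cold so the optimizing
// compiler skips them; if execution ever reaches one, it bails out to
// baseline and the script can be recompiled with the block included.
void pruneColdBranches(std::vector<Block>& blocks, uint64_t scriptRuns) {
    for (Block& block : blocks) {
        if (block.hitCount * 100 < scriptRuns)   // ran on <1% of script executions
            block.pruned = true;
    }
}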
Omega X

Posts: 8108 | Joined: October 18th, 2007, 2:38 pm | Location: A Parallel Dimension...
Posted April 18th, 2016, 7:57 pm
Is it even worth optimizing for SunSpider anymore? It's not exactly a modern JS test these days.
Latest: Firefox/65.0.1 *ESR/60.5.1 - Mobile/65.0.1 - Thunderbird/60.5.1 Nightly: Nightly/67.0a1 - Mobile/67.0a1 - Daily/67.0a1
Timvde
Posts: 269 | Joined: February 7th, 2013, 3:19 pm
Posted April 19th, 2016, 2:37 am
Ver Greeneyes wrote:Yeah, basically the standard changed, and now they have to do lots of extra checks that they could optimize out before. There are some plans to reduce the regression - and I think the spec itself is still in flux, and has changed in a way that should improve performance somewhat - but performance might not get back to what it was. IIRC Nicolas Pierron's profile-guided branch pruning is expected to help as well, but that still isn't quite ready yet (aside from exposing a lot of pre-existing bugs that are now fixed, it currently makes the interpreter slower, which requires some changes to fix).
So other JS engines will regress too when they implement the new spec?
Omega X wrote:Is it even worth optimizing for SunSpider anymore? It's not exactly a modern JS test these days.
Optimizing: no. Not regressing: yes. Too many ignorant users and tech sites still use SunSpider to compare JS performance.
Omega X

Posts: 8108 | Joined: October 18th, 2007, 2:38 pm | Location: A Parallel Dimension...
Posted April 19th, 2016, 5:23 am
Timvde wrote:Optimizing: no. Not regressing: yes. Too many ignorant users and tech sites still use SunSpider to compare JS performance.
Fair point. Though I haven't seen a browser comparison in a while now.
Latest: Firefox/65.0.1 *ESR/60.5.1 - Mobile/65.0.1 - Thunderbird/60.5.1 Nightly: Nightly/67.0a1 - Mobile/67.0a1 - Daily/67.0a1
Ver Greeneyes
Posts: 1027 | Joined: June 28th, 2008, 4:57 am
Posted April 19th, 2016, 6:39 am
Timvde wrote:So other JS engines will regress too, when they implement the new spec?
Well, it might be possible to get around those checks; there are some fixes in the works, and they're considering backing the change out for a release to get more time to implement those (bug 1265307). Still, the Chrome devs stopped caring about SunSpider perf a long time ago, so it's possible they already took the hit - I'm not really sure. There are also other regressions though - 4% on Kraken, 8.6% on Octane - so I don't think they're just going to leave things as they are.
h4writer
Posts: 23 | Joined: September 13th, 2011, 4:25 pm
Posted April 20th, 2016, 7:30 am
mayankleoboy1 wrote:https://bugzilla.mozilla.org/show_bug.cgi?id=1161516#c4 looks unbelievable. Or am I reading it wrong? (score on Octane increases by 51%???!!!)
Please note that the 51% isn't on normal run time! The improvement is in the mode where we force IonMonkey to use caches, which is quite similar to running IonMonkey without any type information. This matters because sometimes we have bad TI information and fall back to less specialized code - often 10x slower than specialized code. By reusing the baseline stubs (shared stubs) we get a faster fallback. The shared stub initiative will not help benchmarks, but it helps regular JS code that happens to fall off the specialized code paths to be less slow.
To put the numbers in perspective:
Normal Octane score: 27229
Force using caches, before: 3780 (about 7 times slower)
Force using caches, after: 5708 (about 5 times slower)
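For anyone who wants to check where the 51% and the "times slower" figures come from, it's just ratios of the scores above (higher Octane score is better):
Code: Select all
#include <cstdio>

int main() {
    const double normal       = 27229.0;  // normal Octane score
    const double cachesBefore = 3780.0;   // forced-caches mode, before shared stubs
    const double cachesAfter  = 5708.0;   // forced-caches mode, after shared stubs

    std::printf("before: %.1fx slower than normal\n", normal / cachesBefore);  // ~7.2x
    std::printf("after:  %.1fx slower than normal\n", normal / cachesAfter);   // ~4.8x
    std::printf("forced-caches score improved by %.0f%%\n",
                (cachesAfter - cachesBefore) / cachesBefore * 100.0);          // ~51%
    return 0;
}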
Ver Greeneyes
Posts: 1027 | Joined: June 28th, 2008, 4:57 am
Posted April 22nd, 2016, 5:07 am
Interesting bug by h4writer to compile small functions faster. As he explains in the bug, small functions don't take all that long to compile, yet SM currently uses the same warm-up time for them as for larger functions (1000 runs). It'll be interesting to see how much difference this makes; it'll make Ion kick in much sooner (and Ion code is much faster than Baseline code), but I guess type information stability might suffer (leading to more recompilations?).
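The gist, as a toy sketch; the small-script cutoff and the lower threshold below are invented for illustration, only the 1000-run default comes from the discussion in the bug:
Code: Select all
#include <cstddef>
#include <cstdint>

constexpr uint32_t kDefaultWarmUpRuns   = 1000;  // runs before Ion compiles a script today
constexpr uint32_t kSmallScriptWarmUp   = 100;   // hypothetical lower threshold
constexpr size_t   kSmallScriptMaxBytes = 120;   // hypothetical "small function" cutoff

// Small scripts are cheap to compile, so let Ion kick in much sooner for
// them; the tradeoff is that their type information may be less settled,
// which could mean more recompilations later.
uint32_t warmUpThresholdFor(size_t bytecodeLength) {
    return bytecodeLength <= kSmallScriptMaxBytes ? kSmallScriptWarmUp
                                                  : kDefaultWarmUpRuns;
}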
Ver Greeneyes
Posts: 1027 | Joined: June 28th, 2008, 4:57 am
Posted April 22nd, 2016, 7:53 am
IIUC that kept regressing SS at the time. From discussion on IRC, the fact that IonBuilder runs on the main thread bottlenecks things somewhat (but moving it off the main thread would take a lot of work; IIRC bhackett did that, but it added a lot of extra complexity).
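To illustrate the shape of that bottleneck (the types, names and threading API below are all invented; this is not how SpiderMonkey is actually structured):
Code: Select all
#include <future>
#include <utility>

struct MIRGraph {};       // placeholder for the IR that IonBuilder-style code produces
struct CompiledCode {};   // placeholder for the finished optimized code

// The graph-building step has to run on the main thread in this model...
MIRGraph buildGraphOnMainThread() { return {}; }

// ...while the expensive optimization and codegen can happen elsewhere.
CompiledCode optimizeAndCodegen(MIRGraph graph) { (void)graph; return {}; }

std::future<CompiledCode> compileOffThread() {
    MIRGraph graph = buildGraphOnMainThread();   // main thread pays this cost
    return std::async(std::launch::async,        // rest runs on a helper thread
                      optimizeAndCodegen, std::move(graph));
}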
Timvde
Posts: 269 | Joined: February 7th, 2013, 3:19 pm
Posted May 5th, 2016, 3:55 pm
New line on AWFY: "Flow sensitive aa". More info in bug 1255008. (I haven't read it myself yet, doing so now.)