JavaScript Micro Performance Testing, History, and Limitations

PHPz
Release: 2024-09-11 06:42:03

I think performance optimization interests many developers as they learn more about different ways to accomplish a task. Some internal voice asks, "Which way is best?" While there are many shifting metrics for "best", like those in Douglas Crockford's 2008 JavaScript: The Good Parts, performance is accessible because we can test it ourselves.

However, testing and proving performance are not always easy to get right.

A Bit of History

Browser Wars

By the early 2000s, Internet Explorer had won the first browser wars. IE was even the default browser on Macs for a while. Once-dominant Netscape was sold to AOL and eventually shut down. Their spin-off Mozilla was in a years-long beta for their new standalone browser, first named Phoenix, then Firebird, and finally Firefox.

In 2003 Opera 7 came out with Presto, a new, faster rendering engine. Also Apple released Safari, a performance-focused browser for Macs built on the little-known Konqueror KHTML engine. Firefox officially launched in 2004. Microsoft released IE 7 in 2006, and Opera 9 shipped a faster JavaScript engine. 2007 brought Safari on both Windows and the new iPhone. 2008 saw Google Chrome and the Android browser.

With more browsers and more platforms, performance was a key part of this period. New browser versions regularly announced they were the new fastest browser. Benchmarks like Apple's SunSpider and Mozilla's Kraken were frequently cited in releases and Google maintained their own Octane test suite. In 2010 the Chrome team even made a series of "speed test" experiments to demonstrate the performance of the browser.

High Performance JavaScript

Micro performance testing saw a lot of attention in the 2010s. The Web was shifting from limited on-page interactivity to full client-side Single Page Applications. Books like Nicholas Zakas's 2010 High Performance JavaScript demonstrated how seemingly small design choices and coding practices could have meaningful performance impacts.

Constant Change

Before long, the competition between JavaScript engines was addressing some of the key performance concerns raised in High Performance JavaScript, and the rapid changes in the engines made it difficult to know what was best right now. With new browser versions and mobile devices all around, micro performance testing was a hot topic. By 2015, the now-closed performance testing site jsperf.com was so popular it started having its own performance issues due to spamming.

Test The Right Thing

With JavaScript engines evolving, it was easy to write tests, but hard to make sure your tests were fair or even valid. If your tests consumed a lot of memory, later tests might see delays from garbage collection. Was setup time counted or excluded from all tests? Were the tests even producing the same output? Did the context of the test matter? If we tested !~arr.indexOf(val) vs arr.indexOf(val) === -1, did it make a difference if we were just running the expression or consuming it in an if condition?
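As a sketch of the kind of test in question (the harness, array size, and iteration count here are illustrative, not a rigorous benchmark), comparing the two "not found" idioms while consuming the result in an if condition might look like this:

```javascript
// Illustrative micro-benchmark: two equivalent "value not found" idioms,
// consumed inside an if condition so the result is actually used.
const arr = Array.from({ length: 100 }, (_, i) => i);
const missing = -5; // not in the array, so indexOf scans the whole thing

function bench(label, check, iterations = 100000) {
  let hits = 0;
  const start = performance.now();
  for (let i = 0; i < iterations; i += 1) {
    if (check(arr, missing)) hits += 1; // consume the result
  }
  const ms = performance.now() - start;
  console.log(`${label}: ${ms.toFixed(2)} ms, hits=${hits}`);
  return hits;
}

bench("!~indexOf     ", (a, v) => !~a.indexOf(v));
bench("indexOf === -1", (a, v) => a.indexOf(v) === -1);
```

Even a small harness like this raises the fairness questions above: both callbacks must produce the same boolean, setup (building arr) sits outside the timed loop, and the timings for each idiom can swap places from one engine release to the next.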

Compiler Optimization

As the script interpreters were replaced with various compilers, we started to see some of the benefits — and side-effects — of compiled code: optimizations. Code running in a loop that has no side-effects, for instance, might be optimized out completely.

```javascript
// Testing the speed of different comparison operators
for (let i = 0; i < 10000; i += 1) {
  a === 10;
}
```

Because this is performing an operation 10000 times with no output or side effects, optimization could discard it completely. It wasn't a guarantee, though.
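One common workaround (a sketch, not a guarantee against every optimizer) is to feed each iteration's result into a value the program actually reports, so the work has an observable effect:

```javascript
// Accumulate results into a "sink" the program reports afterwards,
// so the comparison contributes to observable output and is harder
// for the engine to discard as dead code.
const a = 10;
let sink = 0;

for (let i = 0; i < 10000; i += 1) {
  sink += a === 10 ? 1 : 0; // each comparison now affects the result
}

console.log(sink); // using the value keeps the loop body "live"
```

This only makes elimination less likely; a sufficiently clever compiler can still fold the whole loop into a constant, which is part of why micro-benchmarks are so hard to trust.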

Moving Targets

Also, micro-optimizations can change significantly from release to release. The unfortunate shuttering of jsperf.com meant millions of historical test comparisons over different browser versions were lost, but the drift is still something we can observe today as engines change over time.

It's important to keep in mind that micro-optimization performance testing comes with a lot of caveats.

As performance improvements started to level off, we saw test results bounce around. Part of this was improvements in the engines, but we also saw engines optimizing code for common patterns. Even if better-coded solutions existed, there was a real benefit to users in optimizing common code patterns rather than expecting every site to make changes.

Shifting Landscape

Worse than the shifting browser performance, 2018 brought changes to the accuracy and precision of timers to mitigate speculative execution attacks like Spectre and Meltdown. I wrote a separate article about these timing issues, if that interests you.

Split Focus

To complicate matters, do you test and optimize for the latest browser, or your project's lowest supported browser? Similarly, as smartphones gained popularity, handheld devices with significantly less processing power became important considerations. Knowing where to allocate your time for the best results — or most impactful results — became even more difficult.

Premature Optimization?

Premature optimization is the root of all evil.
-- Donald Knuth

This gets quoted frequently. People use it to suggest that whenever we think about optimization, we are probably wasting time and making our code worse for the sake of an imaginary or insignificant gain. This is probably true in many cases. But there is more to the quote:

We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.

The more complete quote adds critical context. We can spend a lot of time on small efficiencies if we allow ourselves to do so. This often takes time from the goal of the project without providing much value.

Diminishing Returns

I personally spent a lot of time on these optimizations, and in the moment it didn't seem like a waste. But in retrospect, it's not clear how much of that work was worthwhile. I'm sure some of the code I wrote back then shaved milliseconds off the execution time, but I couldn't really say if the time saved was important.

Google even talks about diminishing returns in their 2017 retirement of the Octane test suite. I strongly recommend reading this post for some great insight into limitations and problems in performance optimization that were experienced by teams dedicated to that work.

So how do we focus on that "critical 3%"?

Application not Operation

Understanding how and when the code is used helps us make better decisions about where to focus.

Tools Not Rules

It wasn't long before the performance increases and variations of new browsers started pushing us away from these kinds of micro-tests and into broader tools like flame charts.
If you have 30 minutes, I recommend this 2015 Chrome DevSummit presentation on the V8 engine. It talks about exactly these issues... that the browsers keep changing, and keeping up with those details can be difficult.

Using performance monitoring and analysis of your running application can help you quickly identify what parts of your code are running slowly or running frequently. This puts you in a great position to look at optimizations.

Focus

Using performance monitoring tools and libraries lets you see how the code runs and which parts need work. It also gives you a chance to see whether different areas need work on different platforms or browsers. Perhaps localStorage is much slower on a Chromebook with limited memory and eMMC storage. Perhaps you need to cache more information to combat slow or spotty cellular service. We can make guesses at what is wrong, but measuring is a much better solution.
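In the browser (and in modern Node), the User Timing API offers a lightweight way to instrument a suspect code path yourself; the mark names and the stand-in work below are illustrative:

```javascript
// Wrap a suspect operation in marks, then read back the measurement.
performance.mark("cart-update:start");

// ...the code under investigation; a stand-in calculation here...
let total = 0;
for (let i = 0; i < 100000; i += 1) {
  total += i % 7;
}

performance.mark("cart-update:end");
const measure = performance.measure(
  "cart-update",
  "cart-update:start",
  "cart-update:end"
);
console.log(`${measure.name}: ${measure.duration.toFixed(2)} ms`);
```

Marks and measures also show up on the browser's performance timeline, so the same instrumentation feeds both your logs and the DevTools flame chart view.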

If your customer base is large enough, you might find benefit in Real User Monitoring (RUM) tools, which can let you know what the actual customer experience is like. These are outside the scope of this article, but I have used them at several companies to understand the range of customer experience and focus efforts on real-world performance and error handling.

Alternatives

It's easy to dive into "how do I improve this thing", but that isn't always the best answer. You may save a lot of time by stepping back and asking, "Is this the right solution for this problem?"

Issues loading a very large list of elements on the DOM? Maybe a virtualized list where only the visible elements are loaded on the page would resolve the performance issue.

Performing many complex operations on the client? Would it be faster to calculate some or all of this on the server? Can some of the work be cached?

Taking a bigger step back: Is this the right user interface for this task? If you designed a dropdown to expect twenty entries and you now have three thousand, maybe you need a different component or experience for making a selection.

Good Enough?

With any performance work, there is a secondary question of "what is enough?" There's an excellent video from Matt Parker of Stand-up Maths talking about some code he wrote and how his community improved it from weeks of runtime to milliseconds. While it's incredible that such an optimization was possible, there's also a point for nearly all projects at which you reach "good enough".

For a program that only runs once, weeks might be acceptable and hours would be better, but how much time you spend getting there quickly becomes an important consideration.

You could think of it like tolerances in engineering. We have a target, and we have a range of acceptance. We can strive for perfection while understanding that success and perfection are not the same thing.

Identifying Performance Goals

Goals are a crucial part of optimization. If all you know is that the current state is bad, "make it better" is an open-ended goal. Without a target for your optimization journey, you can waste time chasing more performance or more optimization when you could be working on something more important.

I don't have a good metric for this, since performance optimization can vary so much, but try not to get lost in the weeds. This is really more about project management and planning than about coding solutions, but developer input matters when defining optimization goals. As suggested in the "Alternatives" section, the solution may not be "make it faster".

Setting Limits

In Matt Parker's case, he eventually needed the answer and didn't need the machine for anything else. In our world, we frequently weigh visitor performance and its likely financial impact against developer/team time and its opportunity costs. So the measurement isn't that simple.

Imagine we know that cutting our add-to-cart time by 50% would increase our revenue by 10%, but that the work will take two months to complete. Is there something that could have a bigger financial impact than two months of optimization work? Can you deliver value in less time? Again, this is about project management, not code.

Isolating Complexity

If you find that you need to optimize code, it's also a good time to check whether you can separate that code from other parts of your project. If you know you have to write complex optimizations that make the code hard to follow, extracting them into a utility or library can ease reuse and lets you update that optimization in one place when it needs to change over time.
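As a hypothetical example of that separation, an optimization like replacing repeated indexOf scans with a Set can live behind a small utility (the function name here is made up), keeping the clever part in one place:

```javascript
// Hypothetical utility: hide the optimization details behind a plain
// function so callers never see them.
function createMembershipCheck(values) {
  // The Set lookup is the "optimized" part; if a future engine makes a
  // different approach faster, only this function needs to change.
  const lookup = new Set(values);
  return (value) => lookup.has(value);
}

// Caller code stays simple and readable:
const isReserved = createMembershipCheck(["admin", "root", "system"]);
console.log(isReserved("root"));  // true
console.log(isReserved("guest")); // false
```

Callers depend only on the function's contract, so the optimization can be revisited, or removed, without touching the rest of the project.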

Conclusion

Performance is a complicated topic with many twists and turns. If you aren't careful, you can spend a lot of energy for very little practical benefit. Curiosity can be a good teacher, but it doesn't always lead to results. There is value in playing around with code performance, but there is also value in analyzing the actual causes of slowness in your project and using the available tools to fix them.

Resources

  • Addy Osmani – Visualizing JS processing over time with DevTools Flame Charts
  • Stand-up Maths – Someone improved my code by 40,832,277,770%
  • Cover image created with Microsoft Copilot

Source: dev.to