April 20, 2024

DevTools: “Browser wars”, new tools and changing sites in the browser
DevTools, or “developer tools” (the browser’s built-in way to view a site’s source code, inspect the details of network requests, and make live changes to HTML, JavaScript, and CSS), did not always exist.

In the noughties, Microsoft’s Internet Explorer was the most popular browser: it held almost 90% of the market because it shipped as part of the popular Windows operating system. However, its shortcomings were already being discussed on professional forums: slow page loading, conflicts when installing plugins, and a bare-bones, clunky interface all kept it from being called the best solution of its time. In addition, Internet Explorer did not fully comply with the web standards set by the World Wide Web Consortium (W3C): it lacked support for SVG and for correct CSS processing, which is why page elements could render unpredictably on screen.

But the main factor that cost IE its lead over Firefox, released in 2004, was that the latter supported a wealth of free extensions. For example, the powerful Firebug development tool (there was a version for Internet Explorer, but it was noticeably slower and had reduced functionality) let developers edit code directly in the browser and inspect HTTP headers, including those of AJAX requests.

It sounds crazy now, but before Firebug appeared, the main method of JavaScript debugging was the snippet “a → alert(…);”, which let me output a value in a dialog box (and at the same time interrupted complex interactions in the browser, since it stole focus). For CSS debugging there was the snippet “b → border: 1px solid red;”, which helped show how the browser perceived a particular HTML element (and could critically change the element’s size, which is why I later switched to “b → background-color: pink;”).
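A toy reconstruction of that era’s workflow is below; the function and its arguments are purely illustrative, and alert() is given a console fallback only so the sketch runs outside a browser:

```javascript
// A toy sketch of pre-Firebug debugging (names are illustrative).
// In a real browser, alert() froze the page and stole focus; here we
// fall back to console.log so the example also runs under Node.js.
const alert = globalThis.alert ?? ((msg) => console.log("[alert]", msg));

function calculatePrice(base, discount) {
  const result = base * (1 - discount);
  // The only "debugger" of the era: dump the intermediate value into a dialog.
  alert("result = " + result);
  return result;
}

calculatePrice(100, 0.5);
```

Every extra value to inspect meant another blocking dialog, which is exactly why Firebug’s console felt revolutionary.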

By 2008, the “fire fox” had acquired a lot of new features but lost speed along the way, so when a new browser from Google entered the arena, a significant share of Firefox users switched to the competitor. Chrome turned out to be nimble and opened quickly even on “weak” devices. This was achieved, among other things, thanks to a completely new JavaScript engine, V8, which later formed the basis of Node.js (the runtime environment that brought JavaScript to the server).

BEM: meanings are primary, technologies are secondary
The BEM approach is the pride of Yandex: today it helps thousands of web developers around the world. In 2006, while working on Yandex services, we identified two problems: it was difficult to change the code of one page without affecting the code of another, and it was almost impossible to pick unique names for the huge number of interface elements being created.

So Vitaly Kharisov, other colleagues, and I came up with a new methodology for naming the semantic parts of interfaces: BEM (short for “Block, Element, Modifier”). BEM is based on dividing the interface into independent components. A block is a component of a web page that can be reused repeatedly.

The name of a block should reflect its meaning, for example Contacts, Menu, Promo, and so on.

An element is part of a block, for example Contacts__address or Promo__item.
A modifier describes the appearance, state, or behavior of a block or element, for example Logo_type_big or Promo__item_theme_dark.
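The naming scheme above is easy to mechanize. Here is a minimal sketch (not Yandex’s actual tooling; the helper name and the lowercase convention are my own choices for the example) of a function that builds BEM class names from a block, an optional element, and modifiers:

```javascript
// Minimal BEM class-name builder (illustrative, not real Yandex tooling).
//   block:    "promo"
//   element:  "promo__item"
//   modifier: "promo__item_theme_dark"
function bemClass(block, element, modifiers = {}) {
  const base = element ? `${block}__${element}` : block;
  const classes = [base];
  for (const [name, value] of Object.entries(modifiers)) {
    if (value === true) classes.push(`${base}_${name}`);       // boolean modifier
    else if (value) classes.push(`${base}_${name}_${value}`);  // key_value modifier
  }
  return classes.join(" ");
}

// bemClass("promo", "item", { theme: "dark" })
// → "promo__item promo__item_theme_dark"
```

The point of the scheme is that every class name is globally unique by construction, which solves the naming-collision problem described above.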

At the same time, blocks, elements, and modifiers exist in every technology used to build interfaces. So instead of separate css/ and js/ folders, each holding one or a few large files for the whole project or its various pages, we had many folders for the different blocks, elements, and modifiers, each containing a small piece of CSS and JavaScript. And for the user to eventually see a complete page, we implemented a process that assembled the necessary large files from these pieces.
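That assembly step can be sketched roughly as follows. This is a deliberately simplified, in-memory model: the real pipeline read per-block files from disk, while here the “file system” is just an object so the example stays self-contained, and all paths and selectors are invented for illustration:

```javascript
// Simplified sketch of assembling one big CSS bundle from per-block pieces.
// Each block/element lives in its own "folder" with a small CSS fragment.
const files = {
  "menu/menu.css": ".menu { display: inline-block; }",
  "menu/__item/menu__item.css": ".menu__item { padding: 4px; }",
  "promo/promo.css": ".promo { color: #333; }",
};

// A page declares which pieces it uses; the bundle contains only those.
function buildBundle(fileMap, usedPaths) {
  return usedPaths
    .map((path) => `/* ${path} */\n${fileMap[path]}`)
    .join("\n\n");
}

const bundle = buildBundle(files, [
  "menu/menu.css",
  "menu/__item/menu__item.css",
]);
```

A page that never uses the Promo block simply never pulls promo.css into its bundle, which is the practical payoff of the per-block file layout.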

That is how the concept of assembling and “compiling” web technologies firmly entered our work at Yandex. Although, I remember, colleagues from backend development kept asking, “What are you compiling in your JS/CSS? Those are interpreted languages!”

Cross-browser compatibility, or a perfectionist’s dream
In the early 2010s, the approach to interface layout began to change more rapidly: new parts of web technology standards appeared and were implemented in browsers. For example, the Flexbox CSS module made it possible to lay out elements on the page both horizontally and vertically, a far more flexible process than the tables or floated blocks used before.

But the joy of such progress was quickly overshadowed by worsening problems of cross-browser compatibility: the inability to write one piece of code that would work the same in all, or at least most, current browsers. The old tricks for working around the quirks of different rendering engines were no longer enough: engine functionality was updated frequently and implemented differently from team to team.

To address this, browser developers came up with prefixes for new CSS properties (for example, “-webkit-” for Safari and Chrome, and “-o-” for Opera); these were meant to let site creators write separate code that a particular browser would understand.

But even here things were not simple: with this approach there was more code to write by hand, so tools soon appeared that added the necessary prefixes at the CSS compilation and assembly stage (for example, Autoprefixer). Incidentally, these same tools introduced the broad mass of webmasters to the very idea that the CSS we hand to the browser does not necessarily have to be written “by hand”.
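A toy version of what such a tool does is shown below. Real Autoprefixer parses full stylesheets and consults browser-support data; here the prefix table is hard-coded and the function works on a single declaration, purely for illustration:

```javascript
// Toy autoprefixer: emits vendor-prefixed copies of selected declarations.
// The prefix table is illustrative, not Autoprefixer's real database.
const PREFIXES = {
  "box-shadow": ["-webkit-", "-moz-"],
  "transform": ["-webkit-", "-o-"],
};

function addPrefixes(property, value) {
  const prefixed = (PREFIXES[property] ?? []).map(
    (p) => `${p}${property}: ${value};`
  );
  // The unprefixed, standard declaration always goes last so it wins
  // in browsers that support it.
  return [...prefixed, `${property}: ${value};`].join("\n");
}

console.log(addPrefixes("transform", "rotate(5deg)"));
// -webkit-transform: rotate(5deg);
// -o-transform: rotate(5deg);
// transform: rotate(5deg);
```

Properties not in the table pass through unchanged, which mirrors how such tools only touch declarations that actually need prefixes.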

Interface frameworks: faster, easier, more beautiful
“Write less, do more”: web developers proclaimed this motto in 2006, when the jQuery JavaScript library appeared. Interface creators now had a convenient way to wire up interaction between HTML and JavaScript and easy access to all DOM elements. In addition, jQuery provided a convenient API for working with AJAX. It was a “wrapper” that freed us from worrying about browser quirks. jQuery is still “in use” today, but with the arrival of the “big three” interface frameworks (React, Vue, and Angular), it has faded into the background when new sites are created.
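The essence of that “wrapper” can be illustrated without a real DOM. The sketch below is not jQuery itself: it wraps plain objects instead of DOM nodes so it runs anywhere, but it shows the chainable API shape that made “write less, do more” possible:

```javascript
// A tiny jQuery-flavoured chainable wrapper (over plain objects, for
// runnability; real jQuery wraps live DOM nodes).
function $(elements) {
  return {
    elements,
    addClass(name) {
      for (const el of elements) {
        el.className = el.className ? `${el.className} ${name}` : name;
      }
      return this; // returning `this` is what enables chaining
    },
    css(prop, value) {
      for (const el of elements) (el.style ??= {})[prop] = value;
      return this;
    },
  };
}

const items = [{ className: "" }, { className: "menu" }];
// One fluent chain applies every operation to the whole set at once.
$(items).addClass("active").css("color", "red");
```

Compare this one line with the loops and browser-specific branches it replaced, and the motto explains itself.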

The most popular framework today is React: it made interaction with the DOM more declarative, and made the approach of dividing code into parts (which we came to love so much after inventing the BEM methodology) feel more natural.
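What “more declarative” means can be shown with a minimal sketch: instead of mutating the DOM step by step, you describe the desired tree and let a render function produce it as a whole. The names here are illustrative, not React’s API, although `h` loosely mirrors `React.createElement`; rendering to an HTML string keeps the example self-contained:

```javascript
// Minimal declarative rendering: the UI is a plain data tree,
// and render() turns the whole description into markup at once.
function h(tag, props, ...children) {
  return { tag, props: props ?? {}, children };
}

function render(node) {
  if (typeof node === "string") return node; // text node
  const attrs = Object.entries(node.props)
    .map(([k, v]) => ` ${k}="${v}"`)
    .join("");
  const inner = node.children.map(render).join("");
  return `<${node.tag}${attrs}>${inner}</${node.tag}>`;
}

// The view is a pure description; no manual DOM mutation anywhere.
const view = h("ul", { class: "menu" },
  h("li", null, "Home"),
  h("li", null, "Contacts"));
```

To change the page, you describe a new tree and re-render; deciding which real DOM nodes to touch is the framework’s job, not yours.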

True, React did not get everything right at once. I remember both fundamental performance problems in large applications, caused by the browser being blocked while interface fragments re-rendered, and minor annoyances, for example the inability of one component to render two adjacent elements. Because of them, for a long time we could not commit to using React across all Yandex projects. But, just as with jQuery, the community of involved developers fixed most of these problems over time. Even in our Fall Web Development Schools, we have lately been betting on React.


Technology is changing our world and reality very quickly. Even 15 years ago, few believed that smartphones would take over the market, that mobile Internet would become the most popular means of communication, or that ordering food at home would take just a couple of “taps” on the screen. That is why it is important for modern specialists, and for those who dream of tying their lives to IT, to keep learning and to closely follow the development of technology. Remember the words of Lewis Carroll’s Red Queen? “Here you have to run as fast as you can just to stay in place, but if you want to move somewhere, you have to run at least twice as fast.”