Overscripted! Digging into JavaScript execution at scale

This research was performed in partnership with the UCOSP (Undergraduate Capstone Open Source Projects) initiative. UCOSP facilitates open source software development by connecting Canadian undergraduate students with industry mentors to practice distributed development and data projects.

The group consisted of the following Mozilla staff: Martin Lopatka, David Zeber, Dorothy Bird, Luke Crouch, Jason Thomas

2017 student interns — crawler implementation and data collection: Ruizhi You, Louis Belleville, Calvin Luo, Zejun (Thomas) Yu

2018 student interns — exploratory data analysis projects: Vivian Jin, Tyler Rubenuik, Kyle Kung, Alex McCallum

As champions of a healthy Internet, we at Mozilla have been increasingly concerned about the current advertisement-centric web content ecosystem. Web-based ad technologies continue to evolve increasingly sophisticated programmatic models for targeting individuals based on their demographic characteristics and interests. The financial underpinnings of the current system incentivise optimizing on engagement above all else. This, in turn, has evolved an insatiable appetite for data among advertisers aggressively iterating on models to drive human clicks.

Most of the content, products, and services we use online, whether provided by media organisations or by technology companies, are funded entirely or in part by advertising and various forms of marketing.

– Timothy Libert and Rasmus Kleis Nielsen [link]

We’ve talked about the potentially negative effects on the Web’s morphology and how content silos can impede a diversity of viewpoints. Now, the Mozilla Systems Research Group is raising a call to action. Help us seek out patterns that describe, expose, and illuminate the complex interactions between people and pages!

Inspired by the Web Census recently published by Steven Englehardt and Arvind Narayanan of Princeton University, we adapted the OpenWPM crawler framework to perform a comparable crawl gathering a rich set of information about the JavaScript execution on various websites. This enables us to delve into further analysis of web tracking, as well as a general exploration of client-page interactions and a survey of different APIs employed on the modern Web.

In short, we set out to explore the unseen or otherwise not obvious series of JavaScript execution events that are triggered once a user visits a webpage, and all the first- and third-party events that are set in motion when people retrieve content. To help enable more exploration and analysis, we are providing our full set of data about JavaScript executions open source.

The following sections will introduce the data set, how it was collected, and the decisions made along the way. We’ll share examples of insights we’ve discovered, and we’ll showcase how to participate in the associated “Overscripted Web: A Mozilla Data Analysis Challenge”, which we’ve launched today with Mozilla’s Open Innovation Team.

The Dataset

In October 2017, several Mozilla staff and a group of Canadian undergraduate students forked the OpenWPM crawler repository to begin tinkering, in order to collect a plethora of information about the unseen interactions between modern websites and the Firefox web browser.

Preparing the seed list

The master list of pages we crawled in preparing the dataset was itself generated from a preliminary shallow crawl we performed in November 2017. We ran a depth-1 crawl, seeded by Alexa’s top 10,000 site list, using 4 different machines at 4 different IP addresses (all in residential non-Amazon IP addresses served by Canadian internet service providers). The crawl was implemented using the Requests Python library and collected no information except for an indication of successful page loads.

From the 2,150,251 pages represented in the union of the 4 parallel shallow crawls, we opted to use the intersection of the four lists in order to prune out dynamically generated (e.g. personalized) outbound links that varied between them. This meant a reduction to 981,545 URLs, which formed the seed list for our main OpenWPM crawl.
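The pruning step amounts to a set intersection over the four crawls' URL lists. A minimal sketch, with toy stand-ins for the real crawl outputs:

```python
# Prune dynamically generated (e.g. personalized) outbound links by keeping
# only URLs that appear in all four parallel shallow crawls. The sets below
# are toy stand-ins for the real per-machine crawl outputs.

crawls = [
    {"http://a.example/", "http://b.example/", "http://c.example/?session=1"},
    {"http://a.example/", "http://b.example/", "http://c.example/?session=2"},
    {"http://a.example/", "http://b.example/", "http://c.example/?session=3"},
    {"http://a.example/", "http://b.example/", "http://c.example/?session=4"},
]

union = set().union(*crawls)           # analogous to the 2,150,251-URL union
seed_list = set.intersection(*crawls)  # analogous to the 981,545-URL seed list

print(len(union), len(seed_list))  # → 6 2
```

The session-stamped URLs vary between crawls and drop out of the intersection, which is exactly the personalization effect the pruning was designed to remove.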

The Main Collection

The following workflow describes (at a high level) the collection of web page information contained in this dataset.

  1. Alexa top 10k (10,000 high-traffic pages as of November 1st, 2017)
  2. Precrawl using the Python Requests library, visiting each one of those pages
    1. The Requests library requests that page
    2. That page sends a response
    3. All of the href tags in the response are captured to a depth of 1 (away from the Alexa page)
      1. For each of those href tags, all of the valid pages (starting with “http”) are added to the link set.
      2. The link set union (2,150,251) was examined using the Requests library in parallel, which gives us the intersection list of 981,545.
      3. The set of URLs in the list of 981,545 is passed to the deeper crawl for JavaScript analysis in a parallelized form.
  3. Each of these pages was sent to our adapted version of OpenWPM to have the execution of JavaScript recorded for 10 seconds.
  4. The window.location was hashed as the unique identifier of the location where the JavaScript was executed (to ensure unique source attribution).
    1. When OpenWPM hits content that is inside an iFrame, the location of the content is reported.
    2. Since we use window.location to determine the location element of the content, every time an iFrame is encountered, that location can be split into the parent location of the page and the iFrame location.
    3. Data collection and aggregation performed through a websocket associates all the activity linked to a location hash for compilation of the crawl dataset.
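Steps 2.1–2.3 and the location hashing in step 4 can be sketched as follows. The inline HTML stands in for a response body that the Requests library would return, and the choice of SHA-256 is an assumption for illustration (the workflow above does not specify the hash function used):

```python
import hashlib
from html.parser import HTMLParser

# Stand-in for a page body fetched in the precrawl
# (in the real crawl: resp = requests.get(seed_url); html = resp.text).
html = """
<a href="http://example.com/article">kept</a>
<a href="https://example.org/page">kept</a>
<a href="/relative/path">skipped</a>
<a href="javascript:void(0)">skipped</a>
"""

class HrefCollector(HTMLParser):
    """Collects absolute outbound links, mirroring the 'starts with http' rule."""
    def __init__(self):
        super().__init__()
        self.links = set()

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value and value.startswith("http"):
                    self.links.add(value)

collector = HrefCollector()
collector.feed(html)

# Hash each location as a unique identifier for source attribution.
for url in sorted(collector.links):
    location_hash = hashlib.sha256(url.encode()).hexdigest()
    print(url, location_hash[:12])
```

Relative and javascript: links are discarded, matching the rule that only pages starting with “http” enter the link set.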

Interestingly, for the Alexa top 10,000 sites, our depth-1 crawl yielded properties hosted on 41,166 TLDs across the union of our 4 replicates, whereas only 34,809 unique TLDs remain among the 981,545 pages belonging to their intersection.
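Counting distinct domains for a figure like this can be done by parsing each URL's host. This is a naive sketch that takes the last two host labels; a production version would consult the Public Suffix List, since two-label truncation misclassifies hosts under suffixes like .co.uk:

```python
from urllib.parse import urlparse

urls = [
    "http://news.example.com/story",
    "http://example.com/",
    "http://shop.example.org/item",
]

def registered_domain(url):
    # Naive: last two labels of the host. Real analyses should use the
    # Public Suffix List to handle multi-label suffixes correctly.
    host = urlparse(url).netloc
    return ".".join(host.split(".")[-2:])

domains = {registered_domain(u) for u in urls}
print(sorted(domains))  # → ['example.com', 'example.org']
```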

A modified version of OpenWPM was used to record JavaScript calls potentially used for browsers tracking data from these pages. The collected JavaScript execution trace was written into an S3 bucket for later aggregation and analysis. Several additional parameters were defined based on cursory ad hoc analyses.

For example, the minimum dwell time per page required to capture the majority of JavaScript activity was set at 10 seconds per page. This was based on a random sample of the seed list URLs and showed a large variation in time until no new JavaScript was being executed (from no JavaScript, to what appeared to be an infinite loop of self-referential JavaScript calls). This dwell time was chosen to balance between capturing the majority of JavaScript activity on a majority of the pages and minimizing the time required to complete the full crawl.

Several of the probes instrumented in the Data Leak repo were ported over to our hybrid crawler, including instrumentation to monitor JavaScript execution occurring inside an iFrame element (potentially hosted on a third-party domain). This could prove to provide much insight into the relationships between pages in the crawl data.

Exploratory work

In January 2018, we got to work analyzing the dataset we had created. After significant data cleaning to work through the messiness of real world variation, we were left with a gigantic Parquet dataset (around 70GB) containing an immense diversity of potential insights. Three example analyses are summarized below. The most important finding is that we have only just scratched the surface of the insights this data may hold.

Examining session replay activity

Session replay is a service that lets websites track users’ interactions with the page — from how they navigate the site, to their searches, to the input they provide. Think of it as a “video replay” of a user’s entire session on the webpage. Since some session replay providers may record personal information such as personal addresses, credit card information, and passwords, this can present a significant risk to both privacy and security.

We explored the incidence of session replay usage, and a few associated features, across the pages in our crawl dataset. To identify potential session replay, we obtained the Princeton WebTAP project list, containing 14 Alexa top-10,000 session replay providers, and checked for calls to script URLs belonging to the list.
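The matching step amounts to checking each script reference's host against the provider list. A minimal sketch, with a toy provider list and toy script references standing in for the WebTAP list and the crawl's six million references:

```python
from urllib.parse import urlparse

# Toy stand-ins: the real list is the Princeton WebTAP session replay
# provider list; script_refs would be the distinct script references
# observed across page loads in the crawl dataset.
replay_providers = {"replay-provider.example", "session-cam.example"}

script_refs = [
    "https://replay-provider.example/js/record.js",
    "https://cdn.site.example/app.js",
    "https://session-cam.example/tracker.min.js",
]

def is_replay_script(url, providers):
    host = urlparse(url).netloc
    # Match the provider host itself or any subdomain of it.
    return any(host == p or host.endswith("." + p) for p in providers)

hits = [u for u in script_refs if is_replay_script(u, replay_providers)]
print(len(hits), "/", len(script_refs))  # → 2 / 3
```

As the caveat below notes, a host match only shows that a provider's script was loaded, not that replay functionality was actually active.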

Out of 6,064,923 distinct script references among page loads in our dataset, we found that 95,570 (1.6%) were to session replay providers. This translated to 4,857 distinct domain names (netloc) making such calls, out of a total of 87,325, or 5.6%. Note that even if scripts belonging to session replay providers are being accessed, this does not necessarily mean that session replay functionality is being used on the site.

Given the set of pages making calls to session replay providers, we also looked into the consistency of SSL usage across these calls. Interestingly, the majority of such calls were made over HTTPS (75.7%), and 49.9% of the pages making these calls were accessed over HTTPS. Additionally, we found no pages accessed over HTTPS making calls to session replay scripts over HTTP, which was surprising but encouraging.

Finally, we examined the distribution of TLDs across sites making calls to session replay providers, and compared this to TLDs over the full dataset. We found that, along with .com, .ru accounted for a surprising proportion of sites accessing such scripts (around 33%), whereas .ru domain names made up only 3% of all pages crawled. This implies that 65.6% of .ru sites in our dataset were making calls to potential session replay provider scripts. However, this may be explained by the fact that Yandex is one of the primary session replay providers, and it offers a range of other analytics services of interest to Russian-language websites.

Eval and dynamically created function calls

JavaScript allows a function call to be dynamically created from a string with the eval() function or by creating a new Function() object. For example, this code will print hello twice:

eval("console.log('hello')")
var my_func = new Function("console.log('hello')")
my_func()

While dynamic function creation has its uses, it also opens up users to injection attacks, such as cross-site scripting, and can potentially be used to hide malicious code.

In order to understand how dynamic function creation is being used on the Web, we analyzed its prevalence, location, and distribution in our dataset. The analysis was initially performed on 10,000 randomly selected pages and validated against the entire dataset. In terms of prevalence, we found that 3.72% of overall function calls were created dynamically, and these originated from across 8.76% of the websites crawled in our dataset.

These results suggest that, while dynamic function creation is not used heavily, it is still common enough on the Web to be a potential concern. Looking at call frequency per page showed that, while some Web pages create all their function calls dynamically, the majority tend to have only 1 or 2 dynamically generated calls (which is generally 1-5% of all calls made by a page).
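Per-page frequencies of this kind can be computed along the following lines. The call records here are toy examples; in practice they would be derived from the Parquet dataset, with each call flagged as dynamic if it originated from eval() or a new Function() object:

```python
from collections import defaultdict

# Toy (page, is_dynamic) call records standing in for the crawl's
# JavaScript call logs.
calls = [
    ("http://a.example", False), ("http://a.example", False),
    ("http://a.example", True),
    ("http://b.example", False), ("http://b.example", False),
]

per_page = defaultdict(lambda: [0, 0])  # page -> [dynamic calls, total calls]
for page, is_dynamic in calls:
    per_page[page][1] += 1
    if is_dynamic:
        per_page[page][0] += 1

for page, (dyn, total) in sorted(per_page.items()):
    print(page, f"{dyn}/{total} dynamic ({100 * dyn / total:.0f}%)")
```

Aggregating the per-page fractions over all pages yields the kind of distribution summarized above (most pages at 1-5% dynamic calls).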

We also examined the extent of this practice among the scripts that are being called. We found that they belong to a relatively small subset of script hosts (at an average ratio of about 33 calls per URL), indicating that the same JavaScript files are being used by multiple webpages. Furthermore, around 40% of these are known trackers (identified using the disconnectme entity list), although only 33% are hosted on a different domain from the webpage that uses them. This suggests that web developers may not even know that they are using dynamically generated functions.


Cryptojacking

Cryptojacking refers to the unauthorized use of a user’s computer or mobile device to mine cryptocurrency. More and more websites are using browser-based cryptojacking scripts as cryptocurrencies rise in popularity. It is an easy way to generate revenue and a viable alternative to bloating a website with ads. An excellent contextualization of crypto-mining via client-side JavaScript execution can be found in the unabridged cryptojacking analysis prepared by Vivian Jin.

We investigated the prevalence of cryptojacking among the websites represented in our dataset. A list of potential cryptojacking hosts (212 sites total) was obtained from the adblock-nocoin-list GitHub repo. For each script call initiated on a page visit event, we checked whether the script host belonged to the list. Among 6,069,243 distinct script references on page loads in our dataset, only 945 (0.015%) were identified as cryptojacking hosts. Over half of these belonged to CoinHive, the original script developer. Only one use of AuthedMine was found. Viewed in terms of domains reached in the crawl, we found calls to cryptojacking scripts being made from 49 out of 29,483 distinct domains (0.16%).

However, it is important to note that cryptojacking code can be executed in other ways than by including the host script in a script tag. It can be disguised, stealthily executed in an iframe, or directly used in a function of a first-party script. Users may also face redirect loops that eventually lead to a page with a mining script. The low detection rate could also be due to the popularity of the sites covered by the crawl, which might deter site owners from implementing obvious cryptojacking scripts. It is likely that the actual rate of cryptojacking is higher.

The majority of the domains we found using cryptojacking are streaming sites. This is unsurprising, as users keep streaming sites open for longer while they watch video content, and mining scripts can be executed longer. A Chinese variety site called 52pk.com accounted for 207 out of the overall 945 cryptojacking script calls we found in our analysis, by far the largest domain we observed for cryptojacking calls.

Another interesting fact: although our cryptojacking host list contained 212 candidates, we found only 11 of them to be active in our dataset, or about 5%.

Limitations and future directions

While this is a rich dataset allowing for a number of interesting analyses, it is limited in visibility mainly to behaviours that occur via JS API calls.

Another feature we investigated using our dataset is the presence of Evercookies. Evercookies are a tracking tool used by websites to ensure that user data, such as a user ID, remains permanently stored on a computer. Evercookies persist in the browser by leveraging a series of tricks including Web API calls to a variety of available storage mechanisms. An initial attempt was made to search for evercookies in this data by searching for consistent values being passed to suspect Web API calls.

Acar et al., “The Web Never Forgets: Persistent Tracking Mechanisms in the Wild” (2014) developed techniques for looking at evercookies at scale. First, they proposed a mechanism for detecting identifiers. They applied this mechanism to HTTP cookies but noted that it could also be applied to other storage mechanisms, although some modification would be required. For example, they look at cookie expiration, which would not be applicable in the case of localStorage. For this dataset we could try replicating their methodology for set calls to window.document.cookie and window.localStorage.
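A first pass at that replication could look like the sketch below. The records are toy examples, and the heuristic (a value that recurs across separate visits to the same site) is a deliberate simplification of the identifier-detection rules in Acar et al.:

```python
from collections import defaultdict

# Toy set-call records: (site, visit_id, api, value). Real records would
# come from the crawl's instrumented window.document.cookie and
# window.localStorage set calls.
set_calls = [
    ("site-a.example", 1, "window.localStorage", "uid=4f2a9c"),
    ("site-a.example", 2, "window.localStorage", "uid=4f2a9c"),  # persists
    ("site-b.example", 1, "window.document.cookie", "ts=1001"),
    ("site-b.example", 2, "window.document.cookie", "ts=2002"),  # changes
]

seen = defaultdict(set)  # (site, api, value) -> set of visits it appeared in
for site, visit, api, value in set_calls:
    seen[(site, api, value)].add(visit)

# Flag values that recur across distinct visits as identifier candidates.
candidates = [key for key, visits in seen.items() if len(visits) > 1]
print(candidates)
```

A fuller replication would also apply Acar et al.'s filters, e.g. discarding values that are identical across different users (not identifiers) or that change within a single user's visits (timestamps rather than IDs).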

They also looked at Flash cookies respawning HTTP cookies and HTTP respawning Flash cookies. Our dataset contains no information on the presence of Flash cookies, so additional crawls would be required to obtain this information. In addition, they used multiple crawls to study Flash respawning, so we would have to replicate that procedure.

In addition to our lack of information on Flash cookies, we have no information about HTTP cookies, the first mechanism by which cookies are set. Knowing which HTTP cookies are initially set can serve as an important complement and validation for investigating other storage techniques then used for respawning and evercookies.

Beyond HTTP and Flash, Samy Kamkar’s evercookie library documents a number of mechanisms for storing an id to be used as an evercookie. Many of these are not detectable by our current dataset, e.g. HTTP Cookies, HSTS Pinning, Flash Cookies, Silverlight Storage, ETags, Web cache, Internet Explorer userData storage, etc. An evaluation of the prevalence of each technique would be a useful contribution to the literature. We also see the value of an ongoing repeated crawl to identify changes in prevalence and to account for new techniques as they are discovered.

However, it is possible to continue analyzing the current dataset for some of the techniques described by Samy. For example, window.name caching is listed as a technique. We can look at this property in our dataset, perhaps by applying the same ID technique outlined by Acar et al., or perhaps by looking at sequences of calls.

Conclusions

Throughout our preliminary exploration of this data it became quickly apparent that the amount of superficial JavaScript execution on a Web page only tells part of the story. We have observed several examples of scripts running parallel to the content-serving functionality of webpages; these appear to fulfill a variety of other functions. The analyses performed so far have led to some exciting discoveries, but so much more information remains hidden in the immense dataset available.

We are calling on any interested individuals to be part of the exploration. You’re invited to participate in the Overscripted Web: A Mozilla Data Analysis Challenge and help us better understand some of the hidden workings of the modern Web!


Extra special thanks to Steven Englehardt for his contributions to the OpenWPM tool and advice throughout this project. We also thank Havi Hoffman for valuable editorial contributions to earlier versions of this post. Finally, thanks to Karen Reid of University of Toronto for coordinating the UCOSP program.

More articles by Martin Lopatka…

