Understanding the Execution Context in JavaScript

The execution context is arguably the most important concept in JavaScript to understand, as a firm grasp of it gives you the foundation you need to comprehend more complex topics such as hoisting and closures.

Before we start, it’s important to mention that this article focuses on how execution contexts work at the language level, not at the engine level. While the core principles are the same, implementations may vary between engines.

In a previous article, we discussed how the JS Engine works. If you’re not already familiar, I recommend you start there. Otherwise, let’s jump in and talk about the ‘execution context’!

What is the Execution Context?

In layman’s terms, the execution context represents the environment in which our code runs.

The deeper we dive into this subject, the clearer it will become what exactly that environment is. For now, you can think of it as a box that contains all our code.

In JS, we have different types of code. There’s code that’s in the global context. Then there’s code that’s inside a function context. There’s also code that’s within an eval function.

Each of these different types of code is evaluated within a dedicated execution context. 

Every time your app calls a function, a new execution context is created. In a recursive function, every call the function makes to itself creates another execution context, so contexts can pile up quickly, in theory without limit, though in practice engines cap the stack size.
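We can see that practical limit directly. In the sketch below (the `recurse` function is hypothetical, just for illustration), each recursive call pushes another execution context until the engine gives up; the exact depth where this happens varies by engine.

```javascript
// Each recursive call pushes a fresh execution context onto the stack.
function recurse(depth) {
  return 1 + recurse(depth + 1); // not a tail call, so every call stays on the stack
}

let error;
try {
  recurse(0);
} catch (e) {
  error = e; // RangeError: Maximum call stack size exceeded
}

console.log(error instanceof RangeError); // true
```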

So to sum it up, you can have 3 different types of execution contexts:

  • Global Execution Context – Any code that’s not inside a function runs in the global context. There can only be one global context. It contains a global object (window in the browser), and in non-strict mode the value of this at the top level is the global object.
  • Function Execution Context – Every time a function is executed, a new execution context is created for that call. So every function gets an execution context of its own, created when the function is called, not before.
  • Eval Execution Context – This is created when code is passed to eval, but since most developers avoid eval, we won’t cover it in this article.
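To make the three types concrete, here’s a minimal sketch (the variable and function names are made up for illustration):

```javascript
// Global context: this line runs in the single global execution context.
var globalVar = 'global';

// Function context: a fresh one is created for every call to sayHi().
function sayHi() {
  var local = 'created per call';
  return local;
}

// Eval context: created whenever eval() runs (shown for completeness only).
const fromEval = eval("'evaluated inside an eval context'");

console.log(globalVar); // 'global'
console.log(sayHi());   // 'created per call'
console.log(fromEval);  // 'evaluated inside an eval context'
```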

The Execution Stack

The execution stack (also called the call stack) is a stack data structure that holds all the execution contexts that are active while the code runs.

A stack works in LIFO order (Last In, First Out). What that means is that the last item pushed onto the stack is also the first one popped off.
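A plain array can model this LIFO behavior, with the context names below standing in for the execution contexts the engine pushes and pops:

```javascript
// A plain array models the LIFO behavior of the execution stack:
const stack = [];

stack.push('global');     // the global context is pushed first
stack.push('firstFunc');  // calling a function pushes its context
stack.push('secondFunc'); // the most recently pushed context...

console.log(stack.pop()); // 'secondFunc' ...is the first one popped
console.log(stack.pop()); // 'firstFunc'
console.log(stack);       // only ['global'] remains
```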

Here’s a visual representation of how it works:

execution stack

How the Stack Works

Now that you have a basic understanding of how a stack looks, we can examine how it works. 

In general, we can think of each context as either a caller or a callee. If a certain code calls another function, then the context of that code is the caller, and the context of the code which is called is the callee.

A context can be a caller and a callee at the same time, e.g. a function which is called from the global context, and then calls another function.

When a caller calls a function, the caller pauses its own execution and effectively hands control flow to the callee. At that moment the callee is pushed onto the execution stack and becomes the active execution context.

Once the code in the active execution context finishes running, it is popped off the stack, control flow returns to the caller, and the caller resumes where it left off.

Let’s take the following code as an example:

function firstFunc() {
    console.log('Executing first function')
    secondFunc();
}

function secondFunc() {
    console.log('Executing second function')
}

firstFunc();

Here’s how it will look in terms of the call stack:

call stack visual

As you can see, the call stack works synchronously: the top of the stack always represents the currently active context, in the order in which the calls were made.

You might be wondering how the stack behaves when you have asynchronous code. That’s definitely interesting and a subject for another article, but in short: the call stack only runs synchronous operations. When you perform an asynchronous operation, its callback enters the stack only after the stack is free and all synchronous code has completed. We’ll elaborate more on this in the future.
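A quick sketch of that ordering, using `setTimeout` with a zero delay (the `order` array is just a device to record what ran):

```javascript
const order = [];

order.push('sync start');

// The callback is handed to the environment; it only enters the call
// stack after the stack is empty and all synchronous code has finished:
setTimeout(() => order.push('async callback'), 0);

order.push('sync end');

console.log(order); // ['sync start', 'sync end'] — the callback hasn't run yet
```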

The Process of Creating an Execution Context

Now that we understand what an execution context is and how the call stack works, a few important questions still linger. Why do we even need the execution context? What is it responsible for? And what does an execution context contain?

Every time an execution context is created, it happens in two phases:

  1. The Creation Phase
  2. The Execution Phase

The creation phase begins when an execution context is created but before the code runs. Let’s take for example a function call.

When you call a function, you might think the code runs immediately, but in reality the creation phase starts first and the code doesn’t execute yet. There are a few things that happen before execution begins.

I like to think of the creation phase as a form of a template. In the creation phase a template is created, and in the execution phase the template is filled with the relevant information.

What is this template? 

During the creation phase the engine goes over the code, and every time it comes across a variable or function declaration, it registers the name without its actual value (except function arguments, whose values are saved).

Then in the execution phase, the engine will run over that template and execute each relevant part.

This process repeats every time a new execution context is created: the engine builds a template of the variable and function declarations, and only then, in the execution phase, assigns the variables their values and actually executes the code.
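This template is exactly what makes hoisting possible. A minimal sketch (the names `hoistedVar` and `hoistedFunc` are invented for the example): both names are registered during the creation phase, so the code can reference them before the lines that declare them.

```javascript
// The creation phase has already registered these names, so they can be
// referenced before the lines that declare them:
const valueBeforeAssignment = hoistedVar; // undefined: declared, not yet assigned
const callResult = hoistedFunc();         // works: function declarations are saved whole

var hoistedVar = 'assigned in the execution phase';

function hoistedFunc() {
  return 'callable before its declaration';
}

console.log(valueBeforeAssignment); // undefined
console.log(callResult);            // 'callable before its declaration'
```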

Example Code

Let’s take a look at a quick example to see how this works.

function helloWorld(world) {
    var foo = 'foo'
    const bar = 'bar'
    let fooBar = 'fooBar'
    console.log('Hello, ' + world)
}

helloWorld('earth')
  1. When this code runs, a global execution context is created first, and in its creation phase the engine saves the declaration of the helloWorld function.
  2. Then, in the execution phase, the engine reaches the helloWorld('earth') call, and a new execution context is created for that function, with ‘earth’ bound to the world parameter.
  3. In the creation phase of the helloWorld execution context, the engine saves the variable foo (declared with var) with an initial value of undefined.
  4. The let and const variables are saved with an initial value of uninitialized.
  5. Once the creation phase has completed, the execution phase starts. This is where variables are assigned and the code gets executed.
  6. foo is assigned ‘foo’.
  7. bar is assigned ‘bar’.
  8. fooBar is assigned ‘fooBar’.
  9. Then the console.log call executes and ‘Hello, earth’ is printed.
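The uninitialized state from step 4 is observable: reading a let or const binding before its declaration line runs throws a ReferenceError (the so-called temporal dead zone), unlike var, which reads as undefined. A sketch, with a hypothetical `readTooEarly` function:

```javascript
// var is saved as undefined, but let/const stay <uninitialized> until
// their declaration runs. Reading them earlier throws:
function readTooEarly() {
  try {
    return tooEarly; // the binding exists from the creation phase...
  } catch (e) {
    return e.constructor.name; // ...but reading it before its declaration throws
  }
  let tooEarly = 'fooBar'; // never reached, yet its declaration still creates the binding
}

console.log(readTooEarly()); // 'ReferenceError'
```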

Visual Representation

Here’s a visual representation that can help us understand what the function execution context looks like in the creation phase and in the execution phase.

First, in the creation phase:

FunctionExecutionContext = {
    foo: undefined,
    bar: <uninitialized>,
    fooBar: <uninitialized>,
    Arguments: {0: 'earth', length: 1},
}

Then, in the execution phase:

FunctionExecutionContext = {
    foo: 'foo',
    bar: 'bar',
    fooBar: 'fooBar',
    Arguments: {0: 'earth', length: 1},
}

Keep in mind that this is only the function execution context; in reality there’s also a global execution context that’s always created before any code runs.

It’s also important to mention that this visual representation is still an oversimplification of how the process works. In order to understand the entire picture there are more concepts you’ll need to understand such as the LexicalEnvironment and this binding. We’ll talk about these subjects in future articles.

Components of the Execution Context

Technically, an execution context contains the following things:

ExecutionContext = {
    ThisBinding: <this value>,
    VariableEnvironment: { ... },
    LexicalEnvironment: { ... }
}

All of these are created during the creation phase, and each serves a different role. The VariableEnvironment, for example, is what actually holds the variables and their values.

Right now, you should have a general view of how an execution context works. It’s not practical to cover all of these subjects in one article, and I know they can get confusing, so for now don’t worry about the details; just remember that there’s more to it.

Summary

There are 3 types of execution context:

  1. Global execution context – for code that’s in the global context
  2. Functional Execution Context – for code that’s inside a function
  3. Eval execution context – for code that’s inside the eval function

Each execution context is managed by the execution stack, in the form of a caller and a callee. We learned that there are two phases that happen every time an execution context is created:

  1. The creation phase
  2. The execution phase

In the creation phase, a template of the variables and functions is created. Variables declared with var are saved with an initial value of undefined, while const and let are saved with an initial value of uninitialized.

The function’s declarations are also saved, as well as the value of the arguments.

In the execution phase, the engine goes through the code, performs an assignment of the variables and executes the code.

We also mentioned that technically an execution context contains three things: LexicalEnvironment, VariableEnvironment, and ThisBinding. Since each of them is a subject of its own, we’ll continue to talk about them in the next articles.

More Useful Resources:

  1. Understanding Execution Context and Execution Stack in Javascript
  2. JavaScript The Core 2nd Edition
  3. What is the Execution Context & Stack in JavaScript? by David Shariff

Web Assembly Deep Dive – How it Works, And Is It The Future?

You’ve most likely heard of Web Assembly. Maybe you’ve heard what a game-changing technology it is, and maybe you’ve heard about how it’s going to change the web.

Is it true? The answer to this question is not as simple as a yes or no, but we can definitely tell a lot as it’s been around for a while now. Since November 2017, Web Assembly has been supported in all major browsers, and even mobile web browsers for iOS and Android.

In this article, I’ll explain what Web Assembly is, how it works, what its advantages are, and how it’s being used today.

So with that being said, let’s get started!

What is Web Assembly?

Let’s take a look at how MDN explains it:

“WebAssembly is a new type of code that can be run in modern web browsers — it is a low-level assembly-like language with a compact binary format that runs with near-native performance and provides languages such as C/C++, C# and Rust with a compilation target so that they can run on the web. It is also designed to run alongside JavaScript, allowing both to work together.”

We can learn a few important things from their explanation:

First, Web Assembly (or WASM for short) is not a language you write by hand, but a compilation target for other languages. Here’s a list of all the languages that can compile to WASM: https://github.com/appcypher/awesome-wasm-langs

You’ll probably be surprised to find out that you can even write WASM using TypeScript-like syntax, via AssemblyScript.

Second, WASM is faster than JS and runs with near-native performance in the browser. This is a big deal and is what makes WASM so attractive.

Third, WASM is designed to run alongside JavaScript, and not to replace it. One of the misconceptions about WASM is that it is somehow a competitor to JS. The truth is, WASM has been designed to run alongside JavaScript from the get-go.

WASM and JS can even communicate with each other, so WASM code can indirectly access JS features, including Web APIs such as the DOM, Audio, and Web Sockets.
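Here’s a minimal sketch of that communication: a tiny hand-assembled WASM binary (the standard “add two i32s” example module) that exports an `add` function, which JS then calls directly through the `WebAssembly` API.

```javascript
// A hand-assembled WASM binary that exports a single function add(a, b):
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic number + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // body: local.get 0, local.get 1, i32.add
]);

// The binary skips JS parsing entirely: the engine compiles it directly.
const module = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(module);

// JS calling into WASM:
console.log(instance.exports.add(2, 3)); // 5
```

In a real project you would of course generate the binary with a toolchain (e.g. from Rust or AssemblyScript) and load it with `WebAssembly.instantiateStreaming`, rather than writing bytes by hand.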

How Web Assembly Works

So by now you probably get that WASM is fast, but why? How does it actually work?

In order to understand that, you first need to understand how the JS engine works behind the scenes. Since this is not the main topic of this article, I’ll only touch on it at a high level. I explained it in much more depth in my article about how the JS engine works, and I highly recommend reading it first if you’re not familiar with the subject at all.

Also keep in mind that engine implementations differ; the steps I’m describing here are how V8 works, but other engines might do things differently.

How the JS Engine Works

In order to run JS code, there are a few things the engine must do:

  1. The Parser – the engine first passes the code through a parser. The parser goes through the code line by line and checks that the syntax is valid. If it is, the parser creates a tree data structure called an Abstract Syntax Tree (AST).
  2. AST to Intermediate Representation (IR) – the engine’s interpreter then takes the AST and turns it into Bytecode, which is an intermediate representation of the code (an IR is an abstraction of machine code).
  3. Compiling the IR to Machine Code – only then does the engine’s compiler take the Bytecode and turn it into code a machine can run on its processor.

How WASM Works

The reason WASM is faster is that WASM code goes almost directly to the compiler, effectively skipping steps 1 and 2.

But you might be wondering: why is WASM able to skip steps 1 and 2 while JS is not?

The reason is that JS is a dynamically-typed language, which means the types of variables are only known, and checked, at run-time.

In contrast, statically-typed languages require you to declare types in advance, so the types are known and can be checked at compile time.

So the way WASM works is:

  1. You write code with explicit types, usually in a statically-typed language.
  2. Then you compile it ahead of time into a WASM module.
  3. The engine’s compiler can then run this module directly, skipping the parsing and the transformation to an Intermediate Representation.

Where is Web Assembly Today?

But did WASM pass the test of time?

In 2019, researchers from Braunschweig in Germany looked at Alexa’s top 1 million websites and their use of Web Assembly. They wanted to see what WASM is being used for today.

They analyzed 947,407 websites and 3,465,320 pages. They found that 1,639 websites are loading 1,950 modules of WASM.

The two main uses of WASM were:

  1. Crypto Mining – 32% of the websites used WASM for cryptocurrency mining (mostly maliciously, on hacked websites). The appeal is that WASM’s speed lets attackers exploit visitors’ hardware more effectively for cryptojacking.
  2. Gaming – 29.3% of the websites used WASM for gaming. One example is the popular game Doom 3.

However, there are more uses for Web Assembly than just gaming and crypto mining. In fact, Figma, a popular web design tool, reported that they tripled their loading speed with WASM.

Fastq.bio reported a 20-fold performance improvement, and AutoCAD reported that WASM let them reach native-like speed in the browser.

Conclusion

Web Assembly is definitely a revolutionary technology that allows heavy software to run in the browser, something that was not possible before.

With that said, if you’re not trying to run super heavy computation then you probably don’t need to use WASM.

JS is still very fast and still dominates the browser, and that doesn’t seem to be changing with the arrival of WASM. Rather than replacing JS, WASM allows for better integrations, and software that uses WASM will most likely use JS too.

How JS Works Behind The Scenes — The Engine

Have you ever asked yourself “how does this all work behind the scenes?”. I know I have.

Having a deep understanding of certain concepts allows us to read code with much better comprehension and perform better at our jobs, and on top of that it’s super helpful in job interviews.

And also it can be a super fun subject to learn… So with that being said, let’s get into the details of how the JS engine works.

In this post we will dive deep into the world of JS, how it works behind the scenes, from the Engine, to concepts like hoisting, execution context, lexical environment and more.

Here’s a general overview of how a JS engine works:

js overview

Don’t worry if you don’t understand all of this yet; by the end of the article you’ll understand every step of this diagram.

Let’s go!

Environment

A computer, a compiler, or even a browser can’t actually ‘understand’ code written in JS. So how does the code run?

Behind the scenes, JS always runs in a certain environment. The most common ones are:

  1. Browser (by far the most common)
  2. Node.js (a runtime environment that allows you to run JS outside of the browser, usually on servers)

Engine

js engine

So JS needs to run in a certain environment, but what exactly is in the environment?

When you write code in JS, you write it in human-readable syntax, with letters and numbers. As mentioned, a machine cannot understand this kind of code.

This is why each environment has an engine.

In general, the engine’s job is to take that code and transform it into machine code which can eventually be run by the computer processor.

Each environment has its own engine. The most common ones are Chrome’s V8 (which Node.js also uses), Firefox’s SpiderMonkey, Safari’s JavaScriptCore, and Microsoft’s Chakra (used by Edge and IE).

All engines work in a similar fashion but there are differences between each engine.

It’s also important to keep in mind that behind the scenes an engine is simply software; Chrome’s V8, for example, is written in C++.

Parser

js parser

So we have an environment, and we have an engine inside that environment. The first thing the engine does upon executing your code is check the code using the parser.

The parser knows JS syntax and rules, and its job is to go through the code line by line and check if the syntax of the code is correct.

If the parser comes across an error, it stops and throws a syntax error. If the code is valid, the parser generates something called an Abstract Syntax Tree (or AST for short).
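You can observe this parse-time rejection from JS itself. The `Function` constructor hands its body to the parser, so invalid syntax is rejected before any of the code runs:

```javascript
// Invalid syntax is rejected at parse time, before execution begins:
let caught = null;
try {
  new Function('const const = 1;'); // 'const' can't be used as a variable name
} catch (e) {
  caught = e;
}

console.log(caught instanceof SyntaxError); // true
```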

Abstract Syntax Tree (AST)

abstract syntax tree

So our environment has an engine, which has a parser, which generates an AST. But what is an AST and why do we need it?

An AST is a data structure that is not unique to JS; it’s actually used by many other languages (among them Java, C#, Ruby, and Python).

An AST is simply a tree representation of your code. The main reason the engine creates an AST instead of compiling directly to machine code is that it’s easier to convert code to machine code when it’s held in a tree data structure.

You can actually see what an AST looks like: just paste any code into the AST Explorer website and check out the data structure it creates:

astexplorer
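To get a feel for the idea, here’s a toy, hand-written AST for the expression `1 + 2`, together with a tiny evaluator that walks it. The node names are simplified for illustration and are not exact ESTree output; real engines walk a much richer tree like this when generating bytecode.

```javascript
// A hand-written AST for the expression 1 + 2 (node names simplified):
const ast = {
  type: 'BinaryExpression',
  operator: '+',
  left:  { type: 'NumericLiteral', value: 1 },
  right: { type: 'NumericLiteral', value: 2 },
};

// A toy evaluator that walks the tree recursively:
function evaluate(node) {
  switch (node.type) {
    case 'NumericLiteral':
      return node.value;
    case 'BinaryExpression':
      return evaluate(node.left) + evaluate(node.right);
  }
}

console.log(evaluate(ast)); // 3
```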

The Interpreter

js interpreter

The interpreter’s job is to take the AST that was created and transform it into an Intermediate Representation (IR) of the code.

We will learn more about the interpreter later on, as further context is required in order to fully understand what it is.

The Intermediate Representation (IR)

So what is this IR the interpreter generates from the AST?

An IR is a data structure or code which represents the source code. Its role is to be an intermediate step between code that’s written in an abstract language such as JS and machine code.

Essentially you can think of IR as an abstraction of machine code.

There are many types of IR; a very popular one among JS engines is Bytecode. Here’s a picture demonstrating the IR’s role in the V8 engine:

intermediate representation IR

But you might be asking: why do we need an IR at all? Why not compile straight to machine code? There are two primary reasons why engines use an IR as an intermediate step between high-level code and machine code:

  1. Mobility — when code gets compiled to machine code, it needs to match the hardware it runs on. Machine code written for an Intel processor and machine code written for an ARM processor are different. An IR, on the other hand, is universal and can match any platform, which makes the conversion process easier and more portable.

  2. Optimizations — it’s easier to run optimizations on an IR than on machine code, both from the point of view of code optimizations and of hardware optimizations.

Fun fact: JS engines are not the only ones using Bytecode as an IR, among the languages which also use Bytecode you will find C#, Ruby, Java, and more.

The Compiler

js compiler

The compiler’s job is to take the IR the interpreter created (in our case, Bytecode) and transform it into machine code, applying certain optimizations along the way.

Let’s talk about code compilation and some fundamental concepts. Keep in mind that this is a huge subject that takes a lot of time to master, so I’ll only touch on it generally for our use case.

Interpreters vs Compilers

There are two ways to translate code into something that a machine can run, using a compiler and using an interpreter.

The difference between an interpreter and a compiler is that an interpreter translates and executes your code line by line, while a compiler translates all the code into machine code up front, before executing it.

There are pros and cons to each: compiled code runs fast, but compilation itself is complex and slow to start, while an interpreter starts immediately but executes code more slowly.

With that being said, there are 3 ways to turn high-level code into machine code and run it:

  1. Interpretation — with this strategy you have an interpreter that goes through the code line by line and executes it (not very efficient).
  2. Ahead of Time Compilation (AOT) — here you have a compiler first compiling the entire code, and only then executing it.
  3. Just-In-Time Compilation (JIT) — a combination of the AOT and interpretation strategies. A JIT compilation strategy attempts to take the best of both worlds: it compiles dynamically but also allows certain optimizations, which really speeds up execution. We’ll explain more about JIT compilation below.

Most JS engines use a JIT compiler but not all of them. For example Hermes, the engine which React Native uses, doesn’t use a JIT compiler.

To summarize, in JS Engines the compiler takes the IR created by the interpreter and generates optimized machine code from it.

JIT Compiler

Like we said, most JS Engines use a JIT compilation method. The JIT combines both the AOT strategy and interpretation, allowing for certain optimizations to happen. Let’s dive deeper into these optimizations and what exactly the compiler does.

JIT optimizations work by finding code that repeats itself and optimizing it. The process works as follows: the JIT compiler collects profiling data (feedback) about the code as it executes. When it comes across a hot code segment (code that repeats itself), it uses that information to re-compile the segment more optimally.

Let’s say you have a function, which returns a property of an object:

function load(obj) {
    return obj.x;
}

Looks simple? Maybe to us, but for the compiler this is not a simple task. If the compiler sees an object it knows nothing about, it has to figure out where the property x is: does the object even have such a property, where does it live in memory, is it on the prototype chain, and much more.

So what does it do to optimize it?

In order to understand that, we must know that in the compiled machine code, the object is saved together with its property types.

Let’s assume we have an object with x and y properties, the x is of type number and the y is of type string. Theoretically, the object will be represented in machine code like this:

obj:
    x: number
    y: string

Optimization can be done if we call the function with objects of the same structure: the properties are the same and in the same order, but the values can differ, like this:

load({x: 1, y: 'hello'});
load({x: 5, y: 'world'});
load({x: 3, y: 'foo'});
load({x: 9, y: 'bar'});

Here’s how it works. Once we call the function again, the optimizing compiler recognizes that a function that’s already been called is being called once more.

It will then proceed to check whether the object that’s being passed as an argument has the same properties. 

If so, it can access the property’s location in memory directly, instead of looking through the prototype chain and doing all the other work required for unknown objects.

Essentially the compiler runs through a process of optimization and de-optimization. 

When we run code, the compiler assumes a function will use the same types it used before, so it saves the compiled code with those types baked in. This kind of code is called optimized machine code.

Every time the code calls the same function again, the optimized compiler will then try to access the same place in memory. 

But because JS is a dynamically-typed language, at some point we might want to use the same function with different types. In such a case the compiler will do a process of de-optimization, and compile the code normally.

To summarize the part about the JIT compiler: its job is to improve performance by profiling hot code segments. When it executes code that’s been executed before, it assumes the types are the same and uses the optimized code it already generated. If the types are different, the JIT de-optimizes and compiles the code normally.
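We can’t observe the optimization itself from JS, since the engine decides internally when to optimize, but we can sketch the calling pattern that typically triggers it (the `add` function and iteration count are illustrative assumptions, not engine thresholds):

```javascript
function add(a, b) {
  return a + b;
}

// Repeated calls with the same types give the JIT profiling feedback,
// letting it emit optimized machine code that assumes number + number:
for (let i = 0; i < 10000; i++) {
  add(i, i + 1);
}

// A call with different types breaks that assumption; the engine
// de-optimizes and falls back to generic code, but results stay correct:
console.log(add(1, 2));        // 3
console.log(add('de', 'opt')); // 'deopt'
```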

A Note About Performance

One way to improve the performance of your app is to use the same object structure in different places. If you have two different objects with the same shape (the same properties, in the same order, with the same types), the compiler sees them as objects with an equal structure, even though their values differ, and it can access them faster.

For example:

const obj = {
    x: 1,
    a: true,
    b: 'hey'
}

const obj2 = {
    x: 7,
    a: false,
    b: 'hello'
}

As you can see in the example, we have two objects with different values, but because the property order and types are the same, the compiler can handle these objects faster.

Although it’s possible to optimize code this way, in my opinion there are far more impactful things you can do for performance, and something as minor as this shouldn’t concern you.

It’s also hard to enforce something like this in a team, and overall doesn’t seem to make a big difference as the engine is very fast.

With that being said I’ve seen this tip being recommended by a V8 team member, so maybe you do want to try to follow it sometimes. I see no harm in following it when possible, but definitely not at the cost of clean code and architectural decisions.

Summary

  1. JS code has to run in an environment, the most common ones are browsers and Node.js.
  2. The environment needs to have an engine, which takes the JS code that’s written in human-readable syntax and turns it into machine code.
  3. The engine uses a parser to go through the code line by line and check if the syntax is correct. If there are any errors, code will stop executing and an error will be thrown.
  4. If all checks pass, the parser creates a tree data structure called an Abstract Syntax Tree (AST).
  5. The AST is a data structure which represents the code in a tree like structure. It’s easier to turn code into machine code from an AST.
  6. The interpreter then takes the AST and turns it into an IR, which is an abstraction of machine code and an intermediary between JS code and machine code. An IR also makes optimizations possible and is more portable.
  7. The JIT compiler then takes the IR generated and turns it into machine code, by compiling the code, getting feedback on the fly and using that feedback to improve the compilation process.

This post was originally published at borderlessengineer.com