Electron apps have transformed desktop development by combining web technologies with Node.js and Chromium. However, that convenience brings security risks when the framework is not configured well. In this blog, we’ll walk through the most common misconfigurations in Electron apps that attackers can abuse to gain code execution or disclose sensitive information.
Electron is a free, open-source, cross-platform framework developed and maintained by the OpenJS Foundation. It lets you build desktop applications with web technologies: the front end renders in an embedded version of the Chromium browser engine, and the back end runs on the Node.js runtime environment.
You are probably aware that these two components, Chromium and Node.js, are extremely widely used, which enlarges the attack surface of the stack; multiple CVEs affect these products every year. They are the two major building blocks of every Electron app, and if they are not well configured and hardened, the application is left vulnerable.
But why Electron?
Mainly for the following reasons:
- Performance
- Cross-Platform Compatibility
- Easy Development Experience
- Friendly Community and Ecosystem
For example, the easiest way to get started with Electron is Electron Fiddle, which lets you experiment with direct API calls so you can see how modules like `BrowserView` or `desktopCapturer` behave. In simple terms, creating Electron apps with Fiddle is like creating a website with WordPress: mostly drag and drop, with no installation dependencies.
When Web meets Node
The core problem is a web browser with access to native operating system APIs. A client-side vulnerability in the renderer, such as XSS, can then escalate to RCE (remote code execution) through the Node.js runtime, which was enabled in the renderer by default in older Electron versions.
So the issue is simple: if an Electron application behaves just like a web application and does not need any OS-level APIs, there is no point in exposing Node to the renderer at all.
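For context, a hardened renderer disables Node integration and turns on context isolation so that an XSS never sees `require()` in the first place. Below is a minimal sketch of what that looks like in a main process; the file names `main.js` and `index.html` are illustrative, not taken from any specific app.

```js
// main.js — minimal sketch of a hardened BrowserWindow (illustrative, not from a real app)
const { app, BrowserWindow } = require('electron');

app.whenReady().then(() => {
  const win = new BrowserWindow({
    webPreferences: {
      nodeIntegration: false,  // renderer scripts get no direct require()/Node APIs
      contextIsolation: true,  // keeps preload/Electron internals out of page scripts
      sandbox: true            // run the renderer inside the Chromium sandbox
    }
  });
  win.loadFile('index.html');
});
```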
Great! So far we know what Electron apps are, what their underlying tech stack (Chromium and Node.js) looks like, and the major problem that leaves Electron apps prone to multiple attacks when they are not configured properly.
Oops! Electron
Modern desktop applications invite modern attacks! In the image below, based on a keyword search for “electron”, there are more than 55 CVE records registered against various Electron apps due to misconfigurations leading to information disclosure or code execution.
Tooling, really?
So, what tooling is required to review or assess an Electron application? For static analysis, Electron ships with its own archive mechanism, Asar.
@electron/asar - Electron Archive. Asar is a simple, tar-like archive format that can be extracted with tools such as 7-Zip or Keka. Just sip your coffee and review the code.
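If you prefer the terminal over an archive GUI, the official asar CLI unpacks the archive just as easily; the paths below are placeholders for whatever target you are reviewing.

```bash
# Extract the packaged sources into ./app-src for review (paths are illustrative)
$ npx @electron/asar extract /Applications/Sample.app/Contents/Resources/app.asar ./app-src
$ ls ./app-src
```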
For dynamic analysis, the usual starting point is to locate `main.js`, set `devTools: true` in the window’s `webPreferences`, and then work with DevTools plus a Burp Suite proxy.
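For the proxy part, Electron generally passes standard Chromium switches through to the embedded browser, so the app can usually be pointed at Burp at launch time. The application path below is a placeholder, and some apps enforce their own certificate pinning and need extra work.

```bash
# Route the embedded Chromium through Burp (flags are standard Chromium switches)
$ /Applications/Sample.app/Contents/MacOS/Sample \
    --proxy-server=127.0.0.1:8080 \
    --ignore-certificate-errors
```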
Next, I will discuss the top five common misconfigurations on my checklist when performing a review against an Electron app.
#1 Debug mode enabled
Electron apps are based on Chromium, so we can debug the renderer process if the application is not hardened well. A classic example is CVE-2024-36287. Leaving debug mode enabled in an Electron application that ships to end consumers is not a good idea.
Ideally, these debug flags should be set to `false` when releasing the binary. On its own this has no significant direct impact, but it gives attackers something to chain with other vulnerabilities that do have real impact.
To identify whether the application has debug mode enabled, simply browse through the application’s functionality until you see a bar offering to open “Dev Tools”.
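If you have already unpacked the app.asar (as sketched earlier, with ./app-src as the illustrative output directory), a quick grep over the extracted sources also tells you whether the developer explicitly disabled the flag; if `devTools: false` never appears in the `webPreferences`, Electron’s default leaves it enabled.

```bash
# Look for explicit devTools settings in the unpacked sources
$ grep -rn --include="*.js" "devTools" ./app-src | head
```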
#2 Reading Runtime Logs Through CLI
There is a rather hackish way to do this. It is similar in spirit to using “logcat” during a mobile application pentest, though not exactly the same.
Just run the binary directly from the command line using its absolute path, e.g. “/Applications/Discord.app/Contents/MacOS/Discord”, as shown in the proof-of-concept below.
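On macOS that looks like the following; the `tee` into a log file is just a convenience (with an illustrative path) so the runtime output can be grepped afterwards.

```bash
# Launch the Electron binary directly and capture its runtime chatter
$ /Applications/Discord.app/Contents/MacOS/Discord 2>&1 | tee /tmp/discord-runtime.log
```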
Such output can reveal details like which modules are loaded, verbose messages, and whether the application is updating itself. What is the absolute temporary path for the update package? In this case, the absolute path is “/Users/zero/Library/Caches/com.hnc.Discord.ShipIt/update.xxxxx/Discord.zip”.
With that knowledge, we can check whether these paths are world writable, which would allow any low-privileged user to modify or plant packages. Some legacy applications still stage updates under paths like “/tmp/xxxx.zip”, and a low-privileged user could swap in a crafted malicious package to achieve code execution.
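A quick sanity check, using the staging paths revealed in the logs above, is simply to look at the permission bits:

```bash
# -ld shows the directory/file entry itself; world-writable bits look like drwxrwxrwx / -rw-rw-rw-
$ ls -ld /Users/zero/Library/Caches/com.hnc.Discord.ShipIt/
$ ls -ld /tmp/*.zip
```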
#3 Injection Based Attacks
As mentioned earlier, client-side attacks against Electron applications are considered HIGH severity. This is because Electron applications have native API access to the OS, which can allow attackers to invoke OS commands if the application is prone to injection-based attacks (i.e. cross-site scripting).
In this example, we are using Notable version 1.8.4 for macOS; since the project is no longer heavily maintained, it makes a convenient demonstration target. Notable is a markdown-based note-taking app, so we know for a fact that markdown will be rendered in the context of the application. If the application does not properly sanitize user input, it may end up rendering malicious JavaScript. Below, we try basic cross-site scripting payloads and see whether Notable’s markdown renderer executes them.
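The first probe does not need to be fancy; a note containing a plain script tag is enough to tell whether HTML reaches the renderer unsanitized. The payload below is just a generic example, not the only option.

```html
<!-- dropped into a new Notable note as markdown content -->
<script>alert('xss')</script>
```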
In the above proof-of-concept, we check whether a basic payload gets rendered. When we save the MD file, as shown below, the injected JS payload is not executed, which means some basic checks are implemented to protect against such an attack.
In such cases, I check whether the filter can be escaped through tags or comments, as shown in the proof-of-concept below.
However, our basic escaping didn’t work either, and no JS was executed. Let’s move further by changing the payload pattern from the `<script>` tag to something else.
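A common next step when `<script>` tags are filtered is to switch to event-handler based payloads, for example something along these lines:

```html
<!-- fires when the (intentionally broken) image fails to load -->
<img src=x onerror=alert('xss')>
```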
Great, at least we now know that the application is prone to injection-based attacks. Let’s see whether we can craft a payload that reaches Electron APIs to perform command execution. In this case, I have crafted the payload below:
```html
<a onmouseover="try{ const {shell} = require('electron'); shell.openExternal('file:/System/Applications/Calculator.app/') }catch(e){alert(e)}">Calling native APIs to pop calc</a>
```
Let’s break down the code snippet.
- `<a>` tag: an anchor tag, typically used for creating hyperlinks.
- `onmouseover` event: triggers the JavaScript when a user hovers over the anchor; everything below runs inside this handler.
- `try { ... } catch(e) { ... }`: a basic try/catch statement, so any error is surfaced via `alert(e)`.
- `const { shell } = require('electron')`: assumes the page is running inside an Electron environment with Node access; Electron’s `shell` module allows you to open external applications or URLs.
- `shell.openExternal('file:/System/Applications/Calculator.app/')`: `openExternal` is the method of the `shell` module that opens external applications or URLs, in this case the macOS Calculator.
Putting it all together, below is a glimpse of how the attack looks when it achieves code execution.
#4 DYLIB Hijacking
As you may know, a DYLIB (dynamic library) is a type of shared library used primarily on macOS. In simple terms, it is the counterpart of the shared libraries found on other operating systems (.dll files on Windows, .so files on Linux).
Software is ultimately just a set of instructions, and dynamic libraries exist to provide shared functionality to running programs. DYLIB hijacking is a technique in which an unsigned dynamic library is loaded into the context of a running program by hijacking its execution flow. Just like DLL hijacking, it comes in multiple variants.
When an Electron application is built, a macOS security mechanism called the Hardened Runtime (`hardenedRuntime`) should be enabled; it restricts certain behaviors and gives granular control over what an application can do and load. If this control is not in place, attackers can use the `DYLD_INSERT_LIBRARIES` environment variable to inject a malicious DYLIB into a running process, hijacking the application’s execution flow and executing the malicious DYLIB.
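For reference, if the application is packaged with electron-builder (a common choice, assumed here purely for illustration), the hardened runtime is switched on in the mac build configuration, for example:

```yaml
# electron-builder.yml — illustrative; the entitlements path is a typical but hypothetical value
mac:
  hardenedRuntime: true
  entitlements: build/entitlements.mac.plist
```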
Let’s prepare a sample DYLIB file to demonstrate this.
```bash
$ cat poc.c
#include <syslog.h>
#include <stdio.h>

/* Constructor runs as soon as the dylib is loaded into the process */
__attribute__((constructor))
static void poc(void)
{
    printf("Malicious DYLIB Inserted, Cobalt\n");
}

$ gcc -dynamiclib -arch x86_64 -o poc.dylib poc.c

bash-3.2$ DYLD_INSERT_LIBRARIES=poc.dylib /Applications/Sample.app/Contents/MacOS/Sample
Malicious DYLIB Inserted, Cobalt
[INF] 18:53:31: (front-end) Initialising Sample core
[INF] 18:53:31: (front-end) Starting with language: en-US
[INF] 18:53:31: (front-end) Window opened and initialised
[INF] 18:53:31: (front-end) Setting body theme: dark
[INF] 18:53:31: (front-end) Updated 1 vaults from back-end
[INF] 18:53:31: (front-end) Rendering application
[INF] 18:53:31: (front-end) Auto-nav: First vault in order: 85732f9b-c426-49a1-9c8a-a0d1ee444046
[INF] 18:53:31: (front-end) Toggling auto-update for vault editing (editing=false, auto-update=true)
[INF] 18:53:31: Enabled vault auto-update
bash-3.2$
```
In the sample macOS application above, as soon as we inject our DYLIB through `DYLD_INSERT_LIBRARIES`, the loader accepts it, maps it into the process, and executes the malicious DYLIB. This is a classic vanilla example of a program missing a basic hardening control, the hardened runtime.
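A quick way to check whether a given binary opts into the hardened runtime, and will therefore ignore `DYLD_INSERT_LIBRARIES`-style injection, is to inspect its code-signing flags; the app path is a placeholder.

```bash
# Hardened binaries show flags=0x10000(runtime) on the CodeDirectory line
$ codesign -d --verbose /Applications/Sample.app
```

If the runtime flag is missing, the environment-variable injection shown above is worth a try.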
To learn more about DYLIB hijacking on macOS, there is a great DEF CON talk by Patrick Wardle.
These are some of the common misconfigurations I look for when doing research or pentesting against an Electron application. I will be back with Part 2 of “Hacking Electron Apps”, where I will be hunting for LPE bugs in Electron applications.
In Review
While Electron simplifies cross-platform development, its misconfigurations can expose applications to various attacks. Developers can significantly reduce such risks by addressing common misconfigurations and security gaps. Stay tuned for Part 2, where we’ll explore advanced vulnerabilities like privilege escalation.