Chrome extension - implementing an extension
In the previous post, I showed you how to set up a Chromium extension project so that it supports TypeScript, offers autocompletion wherever possible, and just works nicely as a starter. Now, I'll briefly show the implementation of my simple Page Audio extension.
Intro
Idea

What I wanted from my extension was very simple - when I go to a specific website, it should start playing predefined audio. Hard-coded website name and audio are completely fine.
In a bit more detail, the audio should start playing when I open www.example.com, stop when I switch to a different tab, and resume when I go back to www.example.com. Also, if I have two (or more) tabs with www.example.com opened and I switch between them, the audio should keep playing without restarting. In other words, audio should be played on the whole extension level, not individual tabs.
General technical approach
In short, we need to create an HTMLAudioElement somewhere and play/pause it depending on the website open in the current tab.
It is doable with a service worker and content scripts - we could have a content script create an HTMLAudioElement on every page and use the service worker to coordinate playback. When a tab loses focus, it passes the current playback position to the service worker, and when another tab with a matching URL gains focus, it asks the service worker for that position and resumes playback from there.
However, I think this approach is a bit convoluted and might be prone to errors. It would be much nicer if we could have only one HTMLAudioElement and play/pause it globally, not from individual tabs. Luckily, there's an interesting API that will greatly help us - the Offscreen API.
The Offscreen API lets the extension create one invisible HTML document. Using it, we'll have a place to keep our HTMLAudioElement and just play/pause it when needed. Bear in mind that the service worker still can't do any DOM operations, so we'll need a helper script in our offscreen document to receive service worker messages and control the player accordingly.
Implementation

Needed permissions in manifest.json
My extension needs two entries in the permissions array (see the manifest fragment right after this list):
- tabs - it needs to know when the user switches and/or updates tabs
- offscreen - it needs the ability to create an offscreen document and play the audio from there
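For reference, the relevant fragment of manifest.json could look something like this (the rest of the manifest from the setup post stays unchanged):
{
  "manifest_version": 3,
  "permissions": ["tabs", "offscreen"]
}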
If you open extension details in the browser, you'll see permissions described as:
Read your browsing history
It might look a bit scary, but that's what adding the tabs permission causes. Unfortunately, I wasn't able to figure out a different approach with less concerning permissions - the other ideas I had resulted in even scarier permission sets. In this thread you can read why the tabs permission causes that entry.
Managing offscreen documents
As I've mentioned, I would like to have only one HTMLAudioElement and play the audio from it. To make it tab-independent, I'll use the Offscreen API to create a document where the element will be kept and controlled by messages from the service worker.
I feel like doing some object-oriented programming, so here's an OffscreenDoc class that helps with offscreen document management. In essence, it just creates the offscreen document if it doesn't exist yet.
// ts/offscreen-doc.ts
/**
 * Static class to manage the offscreen document
 */
export class OffscreenDoc {
  private static isCreating: Promise<boolean | void> | null;

  private constructor() {
    // private constructor to prevent instantiation
  }

  /**
   * Sets up the offscreen document if it doesn't exist
   * @param path - path to the offscreen document
   */
  static async setup(path: string) {
    if (!(await this.isDocumentCreated(path))) {
      await this.createOffscreenDocument(path);
    }
  }

  private static async createOffscreenDocument(path: string) {
    if (OffscreenDoc.isCreating) {
      await OffscreenDoc.isCreating;
    } else {
      OffscreenDoc.isCreating = chrome.offscreen.createDocument({
        url: path,
        reasons: ['AUDIO_PLAYBACK'],
        justification: 'Used to play audio independently from the opened tabs',
      });
      await OffscreenDoc.isCreating;
      OffscreenDoc.isCreating = null;
    }
  }

  private static async isDocumentCreated(path: string) {
    // Check all windows controlled by the service worker to see if one
    // of them is the offscreen document with the given path
    const offscreenUrl = chrome.runtime.getURL(path);
    const existingContexts = await chrome.runtime.getContexts({
      contextTypes: ['OFFSCREEN_DOCUMENT'],
      documentUrls: [offscreenUrl],
    });
    return existingContexts.length > 0;
  }
}
As you can see, the only public method is setup and it needs some path when called. That's a path to an HTML document template that will be used to create our offscreen document. It's gonna be super simple in our case:
<!-- offscreen.html -->
<script src="dist/offscreen.js" type="module"></script>
Literally, just one script tag. This script will be used to receive service worker messages, create HTMLAudioElement, and play/pause the music. It also has type="module" as I will import something there.
But to receive messages, we should probably send them first.
Message interface
There isn't any strict interface for messages. We just need to make sure they are JSON-serializable. However, I would like to be as type-safe as possible, so I defined a simple interface for messages passed in my extension:
// ts/audio-message.ts
export interface AudioMessage {
  /**
   * Command to be executed on the audio element.
   */
  command: 'play' | 'pause';
  /**
   * Source of the audio file.
   */
  source?: string;
}
You'll see in a moment that the sendMessage method isn't a great fit for typing, but there's an easy workaround to still benefit from type safety there.
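As a quick illustration of that workaround, you can type the payload explicitly before handing it to sendMessage - a hypothetical helper, where the source path is just a placeholder:
// Hypothetical example - typing the message object keeps the call type-safe
import { AudioMessage } from './audio-message';

async function sendPlayCommand() {
  const message: AudioMessage = { command: 'play', source: 'audio/music.mp3' };
  await chrome.runtime.sendMessage(message);
}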
Sending messages from the service worker
The service worker is the "brain" of our extension - it knows what is happening and when, and should send appropriate messages as needed. But when exactly is that?
We should change the playback state in three situations:
- when a new tab is activated, i.e. the user simply switches from tab A to tab B,
- when the current tab is updated, i.e. its URL has changed, or
- when a tab is closed - that's a bit of a tricky case, as it might happen without triggering either of the two cases above, e.g. when the user closes the last incognito window while the audio is playing.
In all of these situations, we might have just landed on the website where we want the audio to play, or we might have just closed or left it.
Without further ado, here's how the updated ts/background.ts script reacts to these events.
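The listing below is a minimal sketch of what such a service worker could look like - the constant values, the exact listener wiring, and the URL check are illustrative assumptions, so adjust them to your own setup:
// ts/background.ts (sketch)
import { AudioMessage } from './audio-message';
import { OffscreenDoc } from './offscreen-doc';

// The two constants: which website should trigger the audio and what to play
const TARGET_URL = 'https://www.example.com';
const AUDIO_SOURCE = 'audio/music.mp3';

async function toggleAudio(url: string | undefined) {
  // Safe to call repeatedly - it does nothing if the document already exists
  await OffscreenDoc.setup('offscreen.html');
  const message: AudioMessage = {
    command: url?.startsWith(TARGET_URL) ? 'play' : 'pause',
    source: AUDIO_SOURCE,
  };
  await chrome.runtime.sendMessage(message);
}

// The user switched to a different tab
chrome.tabs.onActivated.addListener(async ({ tabId }) => {
  const tab = await chrome.tabs.get(tabId);
  await toggleAudio(tab.url);
});

// The URL of a tab changed
chrome.tabs.onUpdated.addListener(async (_tabId, changeInfo, tab) => {
  if (changeInfo.url && tab.active) {
    await toggleAudio(changeInfo.url);
  }
});

// A tab was closed, e.g. the last incognito window
chrome.tabs.onRemoved.addListener(async () => {
  const [activeTab] = await chrome.tabs.query({ active: true, lastFocusedWindow: true });
  await toggleAudio(activeTab?.url);
});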
As you can see, the toggleAudio function is the most important part here. First of all, it sets up the offscreen document. It's safe to call this multiple times, as it simply does nothing if the document already exists. Then it decides whether it should send the "play" or "pause" command, depending on the URL of the current tab. Finally, it sends the message. As I've mentioned, sendMessage doesn't have a generic variant, so the workaround is to type the message object as AudioMessage before passing it in.
Notice also the two constants at the top - here you specify what audio you want to play and on which website.
Receiving the messages by offscreen document
Finally, we are sending the messages, so now it's time to receive them and play some music.
To do this, we need to implement the script used by offscreen.html. It's dist/offscreen.js, which is compiled from ts/offscreen.ts.
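Here's a minimal sketch of what that script could look like, assuming the AudioMessage interface from above - the loop flag is my own addition, not a requirement:
// ts/offscreen.ts (sketch)
import { AudioMessage } from './audio-message';

let audio: HTMLAudioElement | undefined;

chrome.runtime.onMessage.addListener((message: AudioMessage) => {
  // Create the player lazily, the first time a source is provided
  if (!audio && message.source) {
    audio = new Audio(message.source);
    audio.loop = true;
  }
  if (audio) {
    if (message.command === 'play') {
      audio.play();
    } else {
      audio.pause();
    }
  }
  // Returning undefined means we won't send an asynchronous response
  return undefined;
});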
In short, if we haven't created the HTMLAudioElement yet, we create it using the provided source, and then we play/pause it. Returning undefined is needed for typing purposes. If you're interested in the meaning of the different return values, check the docs.
Summary

Try it out! Go to www.example.com (or whatever website you've set) and see if the audio is playing. Try switching tabs back and forth and verify that it correctly stops and resumes.
Take into account that if you pause the music for more than 30 seconds, it will be restarted, as the service worker will be terminated by the browser! Here are some docs about that.
To summarize what we did:
- we updated our manifest.json with the required permissions to create an offscreen document and monitor activity on tabs
- we made the service worker observe activity on tabs and send adequate commands to the script living in the offscreen document
- we started playing audio via a script that receives messages from the service worker and controls the DOM of the offscreen document
I hope it was clear and easy to follow! There's quite a natural progression of this extension - letting the user specify different websites and assign different audio to each of them. Hopefully, I'll add that when I have some time and write another post describing my approach.
For now, thanks for reading!