
Tiny, dependency-free CLI tool for working with the Netscape Bookmark File format.
Its only function is to take a JSON object from stdin and output HTML to stdout.
Install globally with npm or yarn:
# npm:
npm install -g nbf
# yarn:
yarn global add nbf
Or run it directly with npx:
npx nbf
nbf expects JSON items to have the following properties:
{
/*
Required bookmark URL.
*/
"uri": "https://...",
/*
Optional bookmark title.
*/
"title": "...",
/*
Optional bookmark description.
*/
"description": "...",
/*
Optional bookmark date in any format
parsable by JavaScript's Date() constructor.
When missing, the bookmark will be tagged
with the current timestamp.
*/
"dateAdded": "...",
/*
Optional tags, either as an array
or a comma-separated string.
*/
"tags": ["...", "..."],
"tags": "..., ..., ..."
}
Folders and subfolders are possible:
{
/*
Folder title.
*/
"title": "...",
/*
Nested bookmarks.
*/
"children": [
// ...
]
}
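Putting the two schemas together, here is a sketch of a complete input with one top-level bookmark and one folder, piped straight into nbf (assumes nbf is on your PATH via one of the install steps above; the URLs and titles are made up):

```shell
# Feed a JSON array of bookmarks and folders to nbf via a heredoc.
nbf > bookmarks.html <<'EOF'
[
  {
    "uri": "https://example.com",
    "title": "Example",
    "tags": "demo, sample"
  },
  {
    "title": "Reading list",
    "children": [
      { "uri": "https://example.org", "title": "Another example" }
    ]
  }
]
EOF
```

The top level is an array; folders are just items with a children array instead of a uri.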
Most of these recipes use jq to reshape JSON to fit the nbf schema, and curl to fetch it.
Grab an archive of your Mastodon account with mastodon-archive.
In the example below, we use jq to look at favourites that have a card attached to them, and reshape the JSON to fit our schema:
cat mastodon.social.user.danburzo.json | \
jq '[
.favourites[] |
select(.card) |
{
dateAdded: .created_at,
uri: .card.url,
title: .card.title,
description: "\(.card.description)\nvia: \(.url)",
tags: ["from:mastodon"]
}
]' | \
nbf > mastodon-faves.html
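The description field above uses jq string interpolation to append a "via" line pointing back to the original status. Given a single favourite, it produces (assumes jq is installed; the sample object is made up):

```shell
# jq's \(...) interpolates values into a string; -r prints it raw.
echo '{"url":"https://mastodon.social/@user/1","card":{"description":"A post"}}' \
| jq -r '"\(.card.description)\nvia: \(.url)"'
# → A post
# → via: https://mastodon.social/@user/1
```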
On macOS, NetNewsWire keeps user data in a SQLite database. We can browse and query it, and grab a JSON of the result, using datasette:
datasette serve ~/Library/Containers/com.ranchero.NetNewsWire-Evergreen/Data/Library/Application\ Support/NetNewsWire/Accounts/OnMyMac/DB.sqlite3
(Older versions of NNW store their database under ~/Library/Application\ Support/NetNewsWire/Accounts/OnMyMac/DB.sqlite3)
Head over to http://127.0.0.1:8001/DB and run this query:
select
a.title as title,
a.summary as description,
coalesce(a.url, a.externalURL) as uri,
a.datePublished * 1000 as dateAdded
from articles as a join statuses as s
on a.articleID = s.articleID
where s.starred = 1;
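The * 1000 is there because datePublished appears to hold Unix timestamps in seconds, while nbf's dateAdded accepts anything JavaScript's Date() constructor can parse, and Date() reads numbers as milliseconds:

```shell
# Unix seconds → milliseconds: what `a.datePublished * 1000` produces.
seconds=1700000000
echo $((seconds * 1000))   # → 1700000000000
```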
Then follow the json link and add the _shape=array query parameter, which shapes the JSON in a way we can use directly. We can then fetch it with curl -nS in our command:
curl -nS 'http://127.0.0.1:8001/DB.json?_shape=array&sql=select+%0D%0A++a.title+as+title%2C+%0D%0A++a.summary+as+description%2C+%0D%0A++coalesce(a.url%2C+a.externalURL)+as+uri%2C%0D%0A++a.datePublished+*+1000+as+dateAdded%0D%0Afrom+articles+as+a+join+statuses+as+s+%0D%0Aon+a.articleID+%3D+s.articleID+%0D%0Awhere+s.starred+%3D+1%3B' | \
nbf > nnw.html
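To avoid hand-encoding the query string, curl can build it for us with -G (send data as query parameters) and --data-urlencode; a sketch of the same request:

```shell
# curl percent-encodes each --data-urlencode value and appends it
# to the URL because of -G, so we can write the SQL in the clear.
curl -nS -G 'http://127.0.0.1:8001/DB.json' \
  --data-urlencode '_shape=array' \
  --data-urlencode 'sql=select
  a.title as title,
  a.summary as description,
  coalesce(a.url, a.externalURL) as uri,
  a.datePublished * 1000 as dateAdded
from articles as a join statuses as s
on a.articleID = s.articleID
where s.starred = 1;' \
| nbf > nnw.html
```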
GitHub CLI (currently in beta) makes it easy to collate paginated responses from the GitHub API.
gh api user/starred \
-H "Accept: application/vnd.github.v3.star+json" \
-H "Accept: application/vnd.github.mercy-preview+json" \
--paginate \
| jq '
.[] | select(.repo.private == false) | {
title: .repo.full_name,
uri: .repo.html_url,
dateAdded: .starred_at,
description: "\(.repo.description)\n\(.repo.homepage // "")",
tags: ["source:github"]
}' \
| jq -s '.' \
| nbf > stars.html
If you also want to include repository topics and/or language as tags, replace the tags value with the expression below. Be aware, though, that people can go overboard with topics in an effort to make their repos more discoverable:
tags: (
(.repo.topics // []) +
[.repo.language // ""] +
["source:github"]
) | map(. | ascii_downcase) | unique
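For a repo with topics ["CLI", "Bookmarks"] and language "JavaScript", the expression yields (assumes jq is installed; the sample object is made up, and -c just compacts the output onto one line):

```shell
echo '{"repo":{"topics":["CLI","Bookmarks"],"language":"JavaScript"}}' \
| jq -c '(
    (.repo.topics // []) +
    [.repo.language // ""] +
    ["source:github"]
  ) | map(ascii_downcase) | unique'
# → ["bookmarks","cli","javascript","source:github"]
```

Note that unique also sorts the array, and deduplicates a topic that merely repeats the language.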
If afterwards you want to unstar the repos in bulk, use:
gh api user/starred | \
jq -r '.[] | "user/starred/\(.full_name)"' | \
xargs -L1 gh api --method=DELETE
Variations:
- The --interactive flag for xargs to confirm each unstar with the y key + Enter. (Is there a way to get Yes by default? 🤔)
- The --paginate flag on gh api to go through all your starred repos, but that might get you rate-limited.

Lobste.rs offers a JSON endpoint for most things, for example lobste.rs/saved.json. Because you need the session cookie to make this request from the command line, we go to the browser's dev tools, right-click the request, and choose Copy as cURL.
curl ... | jq '[.[] | {
title: .title,
description: "\(.description)\nvia: \(.short_id_url)",
uri: .url,
tags: ((.tags // []) + ["source:lobste.rs"]),
dateAdded: .created_at
}]' | nbf
Safari offers an option to Export Bookmarks..., which produces a file already in the Netscape Bookmark Format. However, we may want to run the bookmarks through nbf to:
- add source:safari as a tag;
- give nbf a chance to timestamp each bookmark.

We can extract a JSON from the Safari Bookmarks.html file with hred, then use jq to add the tags property to each object:
cat ~d/Safari\ Bookmarks.html \
| hred "a { @href => uri, @.textContent => title }" \
| jq '[ .[] | .tags = ["source:safari"] ]' \
| nbf > safari-tagged.html
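The jq step in isolation, adding the tags property to every object in the array (assumes jq is installed; the input mimics what hred extracts):

```shell
# .tags = [...] inside .[] sets the property on each array element.
echo '[{"uri":"https://example.com","title":"Example"}]' \
| jq -c '[ .[] | .tags = ["source:safari"] ]'
# → [{"uri":"https://example.com","title":"Example","tags":["source:safari"]}]
```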