To read & normalize RSS/ATOM/JSON feed data.
feed-reader has been renamed to @extractus/feed-extractor since v6.1.4.
# npm
npm i @extractus/feed-extractor
# pnpm
pnpm i @extractus/feed-extractor
# yarn
yarn add @extractus/feed-extractor
// es6 module
import { read } from '@extractus/feed-extractor'
// CommonJS
const { read } = require('@extractus/feed-extractor')
// you can specify the exact path to the CommonJS version
const { read } = require('@extractus/feed-extractor/dist/cjs/feed-extractor.js')
// extract an RSS feed
const result = await read('https://news.google.com/rss')
console.log(result)
// deno < 1.28
import { read } from 'https://esm.sh/@extractus/feed-extractor'
// deno >= 1.28
import { read } from 'npm:@extractus/feed-extractor'
import { read } from 'https://unpkg.com/@extractus/feed-extractor@latest/dist/feed-extractor.esm.js'
Please check the examples for reference.
Load and extract feed data from a given RSS/ATOM/JSON source. Returns a Promise object.
read(String url)
read(String url, Object options)
read(String url, Object options, Object fetchOptions)
URL of a valid feed source
Feed content must be accessible and conform to one of the following standards:
- RSS Feed
- ATOM Feed
- JSON Feed
For example:
import { read } from '@extractus/feed-extractor'
const result = await read('https://news.google.com/atom')
console.log(result)
Without any options, the result should have the following structure:
{
title: String,
link: String,
description: String,
generator: String,
language: String,
published: ISO Datetime String,
entries: Array[
{
title: String,
link: String,
description: String,
published: ISO Datetime String
},
// ...
]
}
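The resolved result is a plain object in the shape above, so it can be filtered or mapped directly. A minimal sketch, using a hand-written sample object in place of a fetched feed (recentEntries is an illustrative helper, not part of the library):

```javascript
// Sample object matching the normalized result shape documented above
const result = {
  title: 'Example Feed',
  link: 'https://example.com',
  description: 'Sample',
  entries: [
    { title: 'Old', link: 'https://example.com/1', published: '2020-01-01T00:00:00.000Z' },
    { title: 'New', link: 'https://example.com/2', published: new Date().toISOString() }
  ]
}

// Pick entries published within the last `days` days
const recentEntries = (feed, days) => {
  const cutoff = Date.now() - days * 24 * 60 * 60 * 1000
  return feed.entries.filter((e) => new Date(e.published).getTime() >= cutoff)
}

console.log(recentEntries(result, 7).map((e) => e.title)) // [ 'New' ]
```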
Object with all or several of the following properties:

- normalization: Boolean, normalize feed data or keep the original. Default: true.
- useISODateFormat: Boolean, convert datetime values to ISO format. Default: true.
- descriptionMaxLen: Number, maximum length (in characters) at which to truncate the description. Default: 210.
- xmlParserOptions: Object, passed to the XML parser; see fast-xml-parser's docs.
- getExtraFeedFields: Function, to get more fields from the feed data.
- getExtraEntryFields: Function, to get more fields from each feed entry.
For example:
import { read } from '@extractus/feed-extractor'
await read('https://news.google.com/atom', {
useISODateFormat: false
})
await read('https://news.google.com/rss', {
useISODateFormat: false,
getExtraFeedFields: (feedData) => {
return {
subtitle: feedData.subtitle || ''
}
},
getExtraEntryFields: (feedEntry) => {
const {
enclosure,
category
} = feedEntry
return {
enclosure: {
url: enclosure['@_url'],
type: enclosure['@_type'],
length: enclosure['@_length']
},
category: typeof category === 'string' ? category : {
text: category['@_text'],
domain: category['@_domain']
}
}
}
})
You can use this parameter to pass request headers to fetch.
For example:
import { read } from '@extractus/feed-extractor'
const url = 'https://news.google.com/rss'
await read(url, null, {
headers: {
'user-agent': 'Opera/9.60 (Windows NT 6.0; U; en) Presto/2.1.1'
}
})
You can also specify a proxy endpoint to load remote content instead of fetching it directly.
For example:
import { read } from '@extractus/feed-extractor'
const url = 'https://news.google.com/rss'
await read(url, null, {
headers: {
'user-agent': 'Opera/9.60 (Windows NT 6.0; U; en) Presto/2.1.1'
},
proxy: {
target: 'https://your-secret-proxy.io/loadXml?url=',
headers: {
'Proxy-Authorization': 'Bearer YWxhZGRpbjpvcGVuc2VzYW1l...'
}
}
})
Routing requests through a proxy is useful when running @extractus/feed-extractor in the browser, where cross-origin restrictions often block direct feed requests. See examples/browser-feed-reader for a reference example.
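The proxy target and the feed URL are assumed to be combined roughly as sketched below, with the encoded feed URL appended to proxy.target and proxy.headers sent along with the request (buildProxyRequest is an illustrative helper, not a library export; the exact internal behavior may differ between versions):

```javascript
// Sketch of how a proxied request could be composed from the proxy options
const buildProxyRequest = (feedUrl, proxy) => ({
  url: proxy.target + encodeURIComponent(feedUrl),
  headers: proxy.headers || {}
})

const req = buildProxyRequest('https://news.google.com/rss', {
  target: 'https://your-secret-proxy.io/loadXml?url=',
  headers: { 'Proxy-Authorization': 'Bearer YWxhZGRpbjpvcGVuc2VzYW1l...' }
})
console.log(req.url)
// https://your-secret-proxy.io/loadXml?url=https%3A%2F%2Fnews.google.com%2Frss
```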
git clone https://github.com/extractus/feed-extractor.git
cd feed-extractor
npm i
npm test
git clone https://github.com/extractus/feed-extractor.git
cd feed-extractor
npm install
npm run eval https://news.google.com/rss
The MIT License (MIT)