%global _empty_manifest_terminate_build 0
Name: python-recipe-scrapers
Version: 14.36.1
Release: 1
Summary: Python package for scraping recipes from all over the internet
License: MIT
URL: https://github.com/hhursev/recipe-scrapers/
Source0: https://mirrors.nju.edu.cn/pypi/web/packages/99/a0/082537aaeeb21ea6feff914375c12feb9e09c02b441fc7724faa3411f7fb/recipe_scrapers-14.36.1.tar.gz
BuildArch: noarch
Requires: python3-beautifulsoup4
Requires: python3-extruct
Requires: python3-isodate
Requires: python3-requests
%description
A simple web scraping tool for recipe sites. Install it with:
pip install recipe-scrapers
then use it like this:
from recipe_scrapers import scrape_me
scraper = scrape_me('https://www.allrecipes.com/recipe/158968/spinach-and-feta-turkey-burgers/')
# Q: What if the recipe site I want to extract information from is not supported?
# A: Give it a try with the wild_mode option! If the page exposes Schema.org Recipe markup, it will work just fine.
scraper = scrape_me('https://www.feastingathome.com/tomato-risotto/', wild_mode=True)
scraper.host()
scraper.title()
scraper.total_time()
scraper.image()
scraper.ingredients()
scraper.instructions()
scraper.instructions_list()
scraper.yields()
scraper.to_json()
scraper.links()
scraper.nutrients() # if available
You also have the option to scrape HTML content you have already fetched:
import requests
from recipe_scrapers import scrape_html
url = "https://www.allrecipes.com/recipe/158968/spinach-and-feta-turkey-burgers/"
html = requests.get(url).content
scraper = scrape_html(html=html, org_url=url)
scraper.title()
scraper.total_time()
# etc...
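Note that some sites reject the default requests User-Agent, so the plain requests.get() call above can fail; a minimal sketch of sending your own header (the User-Agent string below is only an illustrative placeholder, not something the package requires):
headers = {"User-Agent": "my-recipe-client/1.0"}  # hypothetical client identifier
html = requests.get(url, headers=headers).content
scraper = scrape_html(html=html, org_url=url)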
Notes:
- ``scraper.links()`` returns a list of dictionaries containing all of the attributes of each <a> tag. The attribute names are the dictionary keys.
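As a rough illustration of that note (a sketch, not taken from the package's own docs), the href values can be pulled out of those attribute dictionaries like this:
# each entry is a plain dict of tag attributes, so guard against missing keys
hrefs = [link["href"] for link in scraper.links() if "href" in link]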
%package -n python3-recipe-scrapers
Summary: Python package for scraping recipes from all over the internet
Provides: python-recipe-scrapers
BuildRequires: python3-devel
BuildRequires: python3-setuptools
BuildRequires: python3-pip
%description -n python3-recipe-scrapers
A simple web scraping tool for recipe sites. Install it with:
pip install recipe-scrapers
then use it like this:
from recipe_scrapers import scrape_me
scraper = scrape_me('https://www.allrecipes.com/recipe/158968/spinach-and-feta-turkey-burgers/')
# Q: What if the recipe site I want to extract information from is not supported?
# A: Give it a try with the wild_mode option! If the page exposes Schema.org Recipe markup, it will work just fine.
scraper = scrape_me('https://www.feastingathome.com/tomato-risotto/', wild_mode=True)
scraper.host()
scraper.title()
scraper.total_time()
scraper.image()
scraper.ingredients()
scraper.instructions()
scraper.instructions_list()
scraper.yields()
scraper.to_json()
scraper.links()
scraper.nutrients() # if available
You also have the option to scrape HTML content you have already fetched:
import requests
from recipe_scrapers import scrape_html
url = "https://www.allrecipes.com/recipe/158968/spinach-and-feta-turkey-burgers/"
html = requests.get(url).content
scraper = scrape_html(html=html, org_url=url)
scraper.title()
scraper.total_time()
# etc...
Notes:
- ``scraper.links()`` returns a list of dictionaries containing all of the attributes of each <a> tag. The attribute names are the dictionary keys.
%package help
Summary: Development documentation and examples for recipe-scrapers
Provides: python3-recipe-scrapers-doc
%description help
A simple web scraping tool for recipe sites. Install it with:
pip install recipe-scrapers
then use it like this:
from recipe_scrapers import scrape_me
scraper = scrape_me('https://www.allrecipes.com/recipe/158968/spinach-and-feta-turkey-burgers/')
# Q: What if the recipe site I want to extract information from is not supported?
# A: Give it a try with the wild_mode option! If the page exposes Schema.org Recipe markup, it will work just fine.
scraper = scrape_me('https://www.feastingathome.com/tomato-risotto/', wild_mode=True)
scraper.host()
scraper.title()
scraper.total_time()
scraper.image()
scraper.ingredients()
scraper.instructions()
scraper.instructions_list()
scraper.yields()
scraper.to_json()
scraper.links()
scraper.nutrients() # if available
You also have the option to scrape HTML content you have already fetched:
import requests
from recipe_scrapers import scrape_html
url = "https://www.allrecipes.com/recipe/158968/spinach-and-feta-turkey-burgers/"
html = requests.get(url).content
scraper = scrape_html(html=html, org_url=url)
scraper.title()
scraper.total_time()
# etc...
Notes:
- ``scraper.links()`` returns a list of dictionaries containing all of the attributes of each <a> tag. The attribute names are the dictionary keys.
%prep
%autosetup -n recipe-scrapers-14.36.1
%build
%py3_build
%install
%py3_install
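# Copy any documentation or example directories shipped in the sdist into the package doc directory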
install -d -m755 %{buildroot}/%{_pkgdocdir}
if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
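# Record every installed library and executable file so %files can be driven by filelist.lst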
pushd %{buildroot}
if [ -d usr/lib ]; then
find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/lib64 ]; then
find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/bin ]; then
find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/sbin ]; then
find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
fi
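# Man pages are gzip-compressed by rpm's brp-compress, so record them with a .gz suffix for the help package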
touch doclist.lst
if [ -d usr/share/man ]; then
find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
fi
popd
mv %{buildroot}/filelist.lst .
mv %{buildroot}/doclist.lst .
%files -n python3-recipe-scrapers -f filelist.lst
%dir %{python3_sitelib}/*
%files help -f doclist.lst
%{_docdir}/*
%changelog
* Wed May 10 2023 Python_Bot - 14.36.1-1
- Package Spec generated