%global _empty_manifest_terminate_build 0
Name: python-gazpacho
Version: 1.1
Release: 1
Summary: The simple, fast, and modern web scraping library
License: MIT
URL: https://github.com/maxhumber/gazpacho
Source0: https://mirrors.nju.edu.cn/pypi/web/packages/1d/65/3151b3837e9fa0fa535524c56e535f88910c10a3703487d9aead154c1339/gazpacho-1.1.tar.gz
BuildArch: noarch
%description
## About
gazpacho is a simple, fast, and modern web scraping library. The library is stable, actively maintained, and installed with **zero** dependencies.
## Install
Install with `pip` at the command line:
```
pip install -U gazpacho
```
## Quickstart
Give this a try:
```python
from gazpacho import get, Soup
url = 'https://scrape.world/books'
html = get(url)
soup = Soup(html)
books = soup.find('div', {'class': 'book-'}, partial=True)
def parse(book):
    name = book.find('h4').text
    price = float(book.find('p').text[1:].split(' ')[0])
    return name, price

[parse(book) for book in books]
```
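The price line in `parse` does several things at once; here is a standalone sketch of just that string handling (the sample text below is a hypothetical shape for illustration, not scraped output):

```python
# Suppose a book's <p> text reads "$19.99 Walk in the Park":
# [1:] drops the leading currency symbol, split(' ')[0] keeps the
# first whitespace-separated token, and float() converts it.
text = "$19.99 Walk in the Park"
price = float(text[1:].split(' ')[0])
```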
## Tutorial
#### Import
Import gazpacho following the convention:
```python
from gazpacho import get, Soup
```
#### get
Use the `get` function to download raw HTML:
```python
url = 'https://scrape.world/soup'
html = get(url)
print(html[:50])
# '<!DOCTYPE html>\n<html lang="en">\n  <head>\n    <title>Soup'
```
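Because gazpacho installs with zero dependencies, `get` is built on the standard library. A rough stand-in (an illustrative sketch, not gazpacho's actual implementation) could look like:

```python
from urllib.request import Request, urlopen

def fetch(url: str) -> str:
    # Illustrative stand-in for gazpacho.get: download a page
    # and decode the response body into a string.
    request = Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urlopen(request) as response:
        return response.read().decode("utf-8")
```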
#### attrs=
Use the `attrs` argument to isolate tags that contain specific HTML element attributes:
```python
soup = Soup(html)
soup.find('div', attrs={'class': 'section-'})
```
#### partial=
Element attributes are partially matched by default. Turn this off by setting `partial` to `False`:
```python
soup.find('div', {'class': 'soup'}, partial=False)
```
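Partial matching means the queried value only needs to appear inside the element's attribute value. The comparison can be sketched in plain Python (a hypothetical helper for illustration, not gazpacho's internal code):

```python
def attr_matches(query: str, value: str, partial: bool = True) -> bool:
    # partial=True: the query need only be contained in the value;
    # partial=False: the two strings must be identical.
    return query in value if partial else query == value

attr_matches('book-', 'book-hardcover')                 # contained, so it matches
attr_matches('book-', 'book-hardcover', partial=False)  # exact comparison fails
```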
#### mode=
Override the mode argument {`'auto', 'first', 'all'`} to guarantee return behaviour:
```python
print(soup.find('span', mode='first'))
# <span class="navbar-toggler-icon"></span>
len(soup.find('span', mode='all'))
# 8
```
#### dir()
`Soup` objects have `html`, `tag`, `attrs`, and `text` attributes:
```python
h1 = soup.find('h1')
dir(h1)
# ['attrs', 'find', 'get', 'html', 'strip', 'tag', 'text']
```
Use them accordingly:
```python
print(h1.html)
# '<h1 id="firstHeading" class="firstHeading" lang="en">Soup</h1>'
print(h1.tag)
# h1
print(h1.attrs)
# {'id': 'firstHeading', 'class': 'firstHeading', 'lang': 'en'}
print(h1.text)
# Soup
```
## Support
If you use gazpacho, consider adding the [![scraper: gazpacho](https://img.shields.io/badge/scraper-gazpacho-C6422C)](https://github.com/maxhumber/gazpacho) badge to your project README.md:
```markdown
[![scraper: gazpacho](https://img.shields.io/badge/scraper-gazpacho-C6422C)](https://github.com/maxhumber/gazpacho)
```
## Contribute
For feature requests or bug reports, please use [GitHub Issues](https://github.com/maxhumber/gazpacho/issues).
For PRs, please read the [CONTRIBUTING.md](https://github.com/maxhumber/gazpacho/blob/master/CONTRIBUTING.md) document.
%package -n python3-gazpacho
Summary: The simple, fast, and modern web scraping library
Provides: python-gazpacho
BuildRequires: python3-devel
BuildRequires: python3-setuptools
BuildRequires: python3-pip
%description -n python3-gazpacho
## About
gazpacho is a simple, fast, and modern web scraping library. The library is stable, actively maintained, and installed with **zero** dependencies.
## Install
Install with `pip` at the command line:
```
pip install -U gazpacho
```
## Quickstart
Give this a try:
```python
from gazpacho import get, Soup
url = 'https://scrape.world/books'
html = get(url)
soup = Soup(html)
books = soup.find('div', {'class': 'book-'}, partial=True)
def parse(book):
    name = book.find('h4').text
    price = float(book.find('p').text[1:].split(' ')[0])
    return name, price

[parse(book) for book in books]
```
## Tutorial
#### Import
Import gazpacho following the convention:
```python
from gazpacho import get, Soup
```
#### get
Use the `get` function to download raw HTML:
```python
url = 'https://scrape.world/soup'
html = get(url)
print(html[:50])
# '<!DOCTYPE html>\n<html lang="en">\n  <head>\n    <title>Soup'
```
#### attrs=
Use the `attrs` argument to isolate tags that contain specific HTML element attributes:
```python
soup = Soup(html)
soup.find('div', attrs={'class': 'section-'})
```
#### partial=
Element attributes are partially matched by default. Turn this off by setting `partial` to `False`:
```python
soup.find('div', {'class': 'soup'}, partial=False)
```
#### mode=
Override the mode argument {`'auto', 'first', 'all'`} to guarantee return behaviour:
```python
print(soup.find('span', mode='first'))
# <span class="navbar-toggler-icon"></span>
len(soup.find('span', mode='all'))
# 8
```
#### dir()
`Soup` objects have `html`, `tag`, `attrs`, and `text` attributes:
```python
h1 = soup.find('h1')
dir(h1)
# ['attrs', 'find', 'get', 'html', 'strip', 'tag', 'text']
```
Use them accordingly:
```python
print(h1.html)
# '<h1 id="firstHeading" class="firstHeading" lang="en">Soup</h1>'
print(h1.tag)
# h1
print(h1.attrs)
# {'id': 'firstHeading', 'class': 'firstHeading', 'lang': 'en'}
print(h1.text)
# Soup
```
## Support
If you use gazpacho, consider adding the [![scraper: gazpacho](https://img.shields.io/badge/scraper-gazpacho-C6422C)](https://github.com/maxhumber/gazpacho) badge to your project README.md:
```markdown
[![scraper: gazpacho](https://img.shields.io/badge/scraper-gazpacho-C6422C)](https://github.com/maxhumber/gazpacho)
```
## Contribute
For feature requests or bug reports, please use [GitHub Issues](https://github.com/maxhumber/gazpacho/issues).
For PRs, please read the [CONTRIBUTING.md](https://github.com/maxhumber/gazpacho/blob/master/CONTRIBUTING.md) document.
%package help
Summary: Development documents and examples for gazpacho
Provides: python3-gazpacho-doc
%description help
## About
gazpacho is a simple, fast, and modern web scraping library. The library is stable, actively maintained, and installed with **zero** dependencies.
## Install
Install with `pip` at the command line:
```
pip install -U gazpacho
```
## Quickstart
Give this a try:
```python
from gazpacho import get, Soup
url = 'https://scrape.world/books'
html = get(url)
soup = Soup(html)
books = soup.find('div', {'class': 'book-'}, partial=True)
def parse(book):
    name = book.find('h4').text
    price = float(book.find('p').text[1:].split(' ')[0])
    return name, price

[parse(book) for book in books]
```
## Tutorial
#### Import
Import gazpacho following the convention:
```python
from gazpacho import get, Soup
```
#### get
Use the `get` function to download raw HTML:
```python
url = 'https://scrape.world/soup'
html = get(url)
print(html[:50])
# '<!DOCTYPE html>\n<html lang="en">\n  <head>\n    <title>Soup'
```
#### attrs=
Use the `attrs` argument to isolate tags that contain specific HTML element attributes:
```python
soup = Soup(html)
soup.find('div', attrs={'class': 'section-'})
```
#### partial=
Element attributes are partially matched by default. Turn this off by setting `partial` to `False`:
```python
soup.find('div', {'class': 'soup'}, partial=False)
```
#### mode=
Override the mode argument {`'auto', 'first', 'all'`} to guarantee return behaviour:
```python
print(soup.find('span', mode='first'))
# <span class="navbar-toggler-icon"></span>
len(soup.find('span', mode='all'))
# 8
```
#### dir()
`Soup` objects have `html`, `tag`, `attrs`, and `text` attributes:
```python
h1 = soup.find('h1')
dir(h1)
# ['attrs', 'find', 'get', 'html', 'strip', 'tag', 'text']
```
Use them accordingly:
```python
print(h1.html)
# '<h1 id="firstHeading" class="firstHeading" lang="en">Soup</h1>'
print(h1.tag)
# h1
print(h1.attrs)
# {'id': 'firstHeading', 'class': 'firstHeading', 'lang': 'en'}
print(h1.text)
# Soup
```
## Support
If you use gazpacho, consider adding the [![scraper: gazpacho](https://img.shields.io/badge/scraper-gazpacho-C6422C)](https://github.com/maxhumber/gazpacho) badge to your project README.md:
```markdown
[![scraper: gazpacho](https://img.shields.io/badge/scraper-gazpacho-C6422C)](https://github.com/maxhumber/gazpacho)
```
## Contribute
For feature requests or bug reports, please use [GitHub Issues](https://github.com/maxhumber/gazpacho/issues).
For PRs, please read the [CONTRIBUTING.md](https://github.com/maxhumber/gazpacho/blob/master/CONTRIBUTING.md) document.
%prep
%autosetup -n gazpacho-1.1
%build
%py3_build
%install
%py3_install
install -d -m755 %{buildroot}/%{_pkgdocdir}
if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
pushd %{buildroot}
if [ -d usr/lib ]; then
find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/lib64 ]; then
find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/bin ]; then
find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/sbin ]; then
find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
fi
touch doclist.lst
if [ -d usr/share/man ]; then
find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
fi
popd
mv %{buildroot}/filelist.lst .
mv %{buildroot}/doclist.lst .
%files -n python3-gazpacho -f filelist.lst
%dir %{python3_sitelib}/*
%files help -f doclist.lst
%{_docdir}/*
%changelog
* Wed Apr 12 2023 Python_Bot - 1.1-1
- Package Spec generated