%global _empty_manifest_terminate_build 0
Name:		python-autoscraper
Version:	1.1.14
Release:	1
Summary:	A Smart, Automatic, Fast and Lightweight Web Scraper for Python
License:	MIT
URL:		https://github.com/alirezamika/autoscraper
Source0:	https://mirrors.aliyun.com/pypi/web/packages/d1/36/b459ec778bd7b0bfac01358dfdee9d26075f4f2c6ee72e7742e3274a9d41/autoscraper-1.1.14.tar.gz
BuildArch:	noarch

Requires:	python3-requests
Requires:	python3-bs4
Requires:	python3-lxml

%description
# AutoScraper: A Smart, Automatic, Fast and Lightweight Web Scraper for Python

![img](https://user-images.githubusercontent.com/17881612/91968083-5ee92080-ed29-11ea-82ec-d99ec85367a5.png)

This project is made for automatic web scraping to make scraping easy. It takes a URL or the HTML content of a web page and a list of sample data that we want to scrape from that page. **This data can be text, a URL, or any HTML tag value of that page.** It learns the scraping rules and returns similar elements. You can then use this learned object with new URLs to get similar content or the exact same elements from those new pages.

## Installation

It's compatible with Python 3.

- Install the latest version from the git repository using pip:
```bash
$ pip install git+https://github.com/alirezamika/autoscraper.git
```

- Install from PyPI:
```bash
$ pip install autoscraper
```

- Install from source:
```bash
$ python setup.py install
```

## How to use

### Getting similar results

Say we want to fetch all related post titles on a Stack Overflow page:

```python
from autoscraper import AutoScraper

url = 'https://stackoverflow.com/questions/2081586/web-scraping-with-python'

# We can add one or multiple candidates here.
# You can also put urls here to retrieve urls.
wanted_list = ["What are metaclasses in Python?"]

scraper = AutoScraper()
result = scraper.build(url, wanted_list)
print(result)
```

Here's the output:

```python
[
    'How do I merge two dictionaries in a single expression in Python (taking union of dictionaries)?',
    'How to call an external command?',
    'What are metaclasses in Python?',
    'Does Python have a ternary conditional operator?',
    'How do you remove duplicates from a list whilst preserving order?',
    'Convert bytes to a string',
    'How to get line count of a large file cheaply in Python?',
    "Does Python have a string 'contains' substring method?",
    'Why is “1000000000000000 in range(1000000000000001)” so fast in Python 3?'
]
```

Now you can use the `scraper` object to get related topics of any Stack Overflow page:

```python
scraper.get_result_similar('https://stackoverflow.com/questions/606191/convert-bytes-to-a-string')
```

### Getting exact result

Say we want to scrape live stock prices from Yahoo Finance:

```python
from autoscraper import AutoScraper

url = 'https://finance.yahoo.com/quote/AAPL/'
wanted_list = ["124.81"]

scraper = AutoScraper()

# Here we can also pass html content via the html parameter instead of the url (html=html_content)
result = scraper.build(url, wanted_list)
print(result)
```

Note that you should update the `wanted_list` if you want to copy this code, as the content of the page changes dynamically.

You can also pass any custom `requests` module parameter. For example, you may want to use proxies or custom headers:

```python
proxies = {
    "http": 'http://127.0.0.1:8001',
    "https": 'https://127.0.0.1:8001',
}

result = scraper.build(url, wanted_list, request_args=dict(proxies=proxies))
```

Now we can get the price of any symbol:

```python
scraper.get_result_exact('https://finance.yahoo.com/quote/MSFT/')
```

**You may want to get other info as well.** For example, if you want to get the market cap too, you can just append it to the wanted list.
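As a sketch of that idea (the price and market-cap strings below are placeholders, not current values; copy fresh samples from the live page before building, since the page content changes):

```python
# Hypothetical wanted_list extended with a market-cap sample value.
# Both strings are placeholders that must be updated from the live page.
wanted_list = ["124.81", "2.06T"]

# Building and querying need network access, so they are shown commented out:
# from autoscraper import AutoScraper
# scraper = AutoScraper()
# scraper.build('https://finance.yahoo.com/quote/AAPL/', wanted_list)
# price, market_cap = scraper.get_result_exact('https://finance.yahoo.com/quote/MSFT/')
```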
The `get_result_exact` method retrieves the data in the same order as the wanted list.

**Another example:** Say we want to scrape the about text, the number of stars, and the link to the issues page of a GitHub repo:

```python
from autoscraper import AutoScraper

url = 'https://github.com/alirezamika/autoscraper'

wanted_list = ['A Smart, Automatic, Fast and Lightweight Web Scraper for Python', '2.5k', 'https://github.com/alirezamika/autoscraper/issues']

scraper = AutoScraper()
scraper.build(url, wanted_list)
```

Simple, right?

### Saving the model

We can now save the built model to use it later.

To save:

```python
# Give it a file path
scraper.save('yahoo-finance')
```

And to load:

```python
scraper.load('yahoo-finance')
```

## Tutorials

- See [this gist](https://gist.github.com/alirezamika/72083221891eecd991bbc0a2a2467673) for more advanced usage examples.
- [AutoScraper and Flask: Create an API From Any Website in Less Than 5 Minutes](https://medium.com/better-programming/autoscraper-and-flask-create-an-api-from-any-website-in-less-than-5-minutes-3f0f176fc4a3)

## Issues

Feel free to open an issue if you have any problem using the module.

## Support the project

Buy Me A Coffee

#### Happy Coding ♥️

%package -n python3-autoscraper
Summary:	A Smart, Automatic, Fast and Lightweight Web Scraper for Python
Provides:	python-autoscraper
BuildRequires:	python3-devel
BuildRequires:	python3-setuptools
BuildRequires:	python3-pip
%description -n python3-autoscraper
# AutoScraper: A Smart, Automatic, Fast and Lightweight Web Scraper for Python

![img](https://user-images.githubusercontent.com/17881612/91968083-5ee92080-ed29-11ea-82ec-d99ec85367a5.png)

This project is made for automatic web scraping to make scraping easy. It takes a URL or the HTML content of a web page and a list of sample data that we want to scrape from that page. **This data can be text, a URL, or any HTML tag value of that page.** It learns the scraping rules and returns similar elements.
You can then use this learned object with new URLs to get similar content or the exact same elements from those new pages.

## Installation

It's compatible with Python 3.

- Install the latest version from the git repository using pip:
```bash
$ pip install git+https://github.com/alirezamika/autoscraper.git
```

- Install from PyPI:
```bash
$ pip install autoscraper
```

- Install from source:
```bash
$ python setup.py install
```

## How to use

### Getting similar results

Say we want to fetch all related post titles on a Stack Overflow page:

```python
from autoscraper import AutoScraper

url = 'https://stackoverflow.com/questions/2081586/web-scraping-with-python'

# We can add one or multiple candidates here.
# You can also put urls here to retrieve urls.
wanted_list = ["What are metaclasses in Python?"]

scraper = AutoScraper()
result = scraper.build(url, wanted_list)
print(result)
```

Here's the output:

```python
[
    'How do I merge two dictionaries in a single expression in Python (taking union of dictionaries)?',
    'How to call an external command?',
    'What are metaclasses in Python?',
    'Does Python have a ternary conditional operator?',
    'How do you remove duplicates from a list whilst preserving order?',
    'Convert bytes to a string',
    'How to get line count of a large file cheaply in Python?',
    "Does Python have a string 'contains' substring method?",
    'Why is “1000000000000000 in range(1000000000000001)” so fast in Python 3?'
]
```

Now you can use the `scraper` object to get related topics of any Stack Overflow page:

```python
scraper.get_result_similar('https://stackoverflow.com/questions/606191/convert-bytes-to-a-string')
```

### Getting exact result

Say we want to scrape live stock prices from Yahoo Finance:

```python
from autoscraper import AutoScraper

url = 'https://finance.yahoo.com/quote/AAPL/'
wanted_list = ["124.81"]

scraper = AutoScraper()

# Here we can also pass html content via the html parameter instead of the url (html=html_content)
result = scraper.build(url, wanted_list)
print(result)
```

Note that you should update the `wanted_list` if you want to copy this code, as the content of the page changes dynamically.

You can also pass any custom `requests` module parameter. For example, you may want to use proxies or custom headers:

```python
proxies = {
    "http": 'http://127.0.0.1:8001',
    "https": 'https://127.0.0.1:8001',
}

result = scraper.build(url, wanted_list, request_args=dict(proxies=proxies))
```

Now we can get the price of any symbol:

```python
scraper.get_result_exact('https://finance.yahoo.com/quote/MSFT/')
```

**You may want to get other info as well.** For example, if you want to get the market cap too, you can just append it to the wanted list. The `get_result_exact` method retrieves the data in the same order as the wanted list.

**Another example:** Say we want to scrape the about text, the number of stars, and the link to the issues page of a GitHub repo:

```python
from autoscraper import AutoScraper

url = 'https://github.com/alirezamika/autoscraper'

wanted_list = ['A Smart, Automatic, Fast and Lightweight Web Scraper for Python', '2.5k', 'https://github.com/alirezamika/autoscraper/issues']

scraper = AutoScraper()
scraper.build(url, wanted_list)
```

Simple, right?

### Saving the model

We can now save the built model to use it later.
To save:

```python
# Give it a file path
scraper.save('yahoo-finance')
```

And to load:

```python
scraper.load('yahoo-finance')
```

## Tutorials

- See [this gist](https://gist.github.com/alirezamika/72083221891eecd991bbc0a2a2467673) for more advanced usage examples.
- [AutoScraper and Flask: Create an API From Any Website in Less Than 5 Minutes](https://medium.com/better-programming/autoscraper-and-flask-create-an-api-from-any-website-in-less-than-5-minutes-3f0f176fc4a3)

## Issues

Feel free to open an issue if you have any problem using the module.

## Support the project

Buy Me A Coffee

#### Happy Coding ♥️

%package help
Summary:	Development documents and examples for autoscraper
Provides:	python3-autoscraper-doc
%description help
# AutoScraper: A Smart, Automatic, Fast and Lightweight Web Scraper for Python

![img](https://user-images.githubusercontent.com/17881612/91968083-5ee92080-ed29-11ea-82ec-d99ec85367a5.png)

This project is made for automatic web scraping to make scraping easy. It takes a URL or the HTML content of a web page and a list of sample data that we want to scrape from that page. **This data can be text, a URL, or any HTML tag value of that page.** It learns the scraping rules and returns similar elements. You can then use this learned object with new URLs to get similar content or the exact same elements from those new pages.

## Installation

It's compatible with Python 3.

- Install the latest version from the git repository using pip:
```bash
$ pip install git+https://github.com/alirezamika/autoscraper.git
```

- Install from PyPI:
```bash
$ pip install autoscraper
```

- Install from source:
```bash
$ python setup.py install
```

## How to use

### Getting similar results

Say we want to fetch all related post titles on a Stack Overflow page:

```python
from autoscraper import AutoScraper

url = 'https://stackoverflow.com/questions/2081586/web-scraping-with-python'

# We can add one or multiple candidates here.
# You can also put urls here to retrieve urls.
wanted_list = ["What are metaclasses in Python?"]

scraper = AutoScraper()
result = scraper.build(url, wanted_list)
print(result)
```

Here's the output:

```python
[
    'How do I merge two dictionaries in a single expression in Python (taking union of dictionaries)?',
    'How to call an external command?',
    'What are metaclasses in Python?',
    'Does Python have a ternary conditional operator?',
    'How do you remove duplicates from a list whilst preserving order?',
    'Convert bytes to a string',
    'How to get line count of a large file cheaply in Python?',
    "Does Python have a string 'contains' substring method?",
    'Why is “1000000000000000 in range(1000000000000001)” so fast in Python 3?'
]
```

Now you can use the `scraper` object to get related topics of any Stack Overflow page:

```python
scraper.get_result_similar('https://stackoverflow.com/questions/606191/convert-bytes-to-a-string')
```

### Getting exact result

Say we want to scrape live stock prices from Yahoo Finance:

```python
from autoscraper import AutoScraper

url = 'https://finance.yahoo.com/quote/AAPL/'
wanted_list = ["124.81"]

scraper = AutoScraper()

# Here we can also pass html content via the html parameter instead of the url (html=html_content)
result = scraper.build(url, wanted_list)
print(result)
```

Note that you should update the `wanted_list` if you want to copy this code, as the content of the page changes dynamically.

You can also pass any custom `requests` module parameter. For example, you may want to use proxies or custom headers:

```python
proxies = {
    "http": 'http://127.0.0.1:8001',
    "https": 'https://127.0.0.1:8001',
}

result = scraper.build(url, wanted_list, request_args=dict(proxies=proxies))
```

Now we can get the price of any symbol:

```python
scraper.get_result_exact('https://finance.yahoo.com/quote/MSFT/')
```

**You may want to get other info as well.** For example, if you want to get the market cap too, you can just append it to the wanted list.
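As with the proxies example above, other `requests` options such as custom headers can be supplied through the same `request_args` mechanism. A minimal sketch (the header values and the `timeout` are illustrative, not anything autoscraper requires):

```python
# Illustrative headers; any keyword accepted by requests.get can be
# placed in request_args.
headers = {
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64)",
    "Accept-Language": "en-US,en;q=0.9",
}
request_args = dict(headers=headers, timeout=10)

# The actual call needs network access, so it is shown commented out:
# result = scraper.build(url, wanted_list, request_args=request_args)
```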
The `get_result_exact` method retrieves the data in the same order as the wanted list.

**Another example:** Say we want to scrape the about text, the number of stars, and the link to the issues page of a GitHub repo:

```python
from autoscraper import AutoScraper

url = 'https://github.com/alirezamika/autoscraper'

wanted_list = ['A Smart, Automatic, Fast and Lightweight Web Scraper for Python', '2.5k', 'https://github.com/alirezamika/autoscraper/issues']

scraper = AutoScraper()
scraper.build(url, wanted_list)
```

Simple, right?

### Saving the model

We can now save the built model to use it later.

To save:

```python
# Give it a file path
scraper.save('yahoo-finance')
```

And to load:

```python
scraper.load('yahoo-finance')
```

## Tutorials

- See [this gist](https://gist.github.com/alirezamika/72083221891eecd991bbc0a2a2467673) for more advanced usage examples.
- [AutoScraper and Flask: Create an API From Any Website in Less Than 5 Minutes](https://medium.com/better-programming/autoscraper-and-flask-create-an-api-from-any-website-in-less-than-5-minutes-3f0f176fc4a3)

## Issues

Feel free to open an issue if you have any problem using the module.
## Support the project

Buy Me A Coffee

#### Happy Coding ♥️

%prep
%autosetup -n autoscraper-1.1.14

%build
%py3_build

%install
%py3_install
install -d -m755 %{buildroot}/%{_pkgdocdir}
if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
pushd %{buildroot}
if [ -d usr/lib ]; then
	find usr/lib -type f -printf "\"/%h/%f\"\n" >> filelist.lst
fi
if [ -d usr/lib64 ]; then
	find usr/lib64 -type f -printf "\"/%h/%f\"\n" >> filelist.lst
fi
if [ -d usr/bin ]; then
	find usr/bin -type f -printf "\"/%h/%f\"\n" >> filelist.lst
fi
if [ -d usr/sbin ]; then
	find usr/sbin -type f -printf "\"/%h/%f\"\n" >> filelist.lst
fi
touch doclist.lst
if [ -d usr/share/man ]; then
	find usr/share/man -type f -printf "\"/%h/%f.gz\"\n" >> doclist.lst
fi
popd
mv %{buildroot}/filelist.lst .
mv %{buildroot}/doclist.lst .

%files -n python3-autoscraper -f filelist.lst
%dir %{python3_sitelib}/*

%files help -f doclist.lst
%{_docdir}/*

%changelog
* Thu Jun 08 2023 Python_Bot - 1.1.14-1
- Package Spec generated