author     CoprDistGit <infra@openeuler.org>    2023-05-05 09:03:37 +0000
committer  CoprDistGit <infra@openeuler.org>    2023-05-05 09:03:37 +0000
commit     670c30ffdf316769e254f1ac759ca74f851c0ac6 (patch)
tree       640dd4f0bfeda49314abd24c30a0ab0ed83e7434 /python-comcrawl.spec
parent     e0e10ff99ad5a1fa3272747a6c692bf130465431 (diff)
automatic import of python-comcrawl (openeuler20.03)
Diffstat (limited to 'python-comcrawl.spec')
-rw-r--r--  python-comcrawl.spec  433
1 file changed, 433 insertions(+), 0 deletions(-)
diff --git a/python-comcrawl.spec b/python-comcrawl.spec
new file mode 100644
index 0000000..993378c
--- /dev/null
+++ b/python-comcrawl.spec
@@ -0,0 +1,433 @@
+%global _empty_manifest_terminate_build 0
+Name: python-comcrawl
+Version: 1.0.2
+Release: 1
+Summary: A python utility for downloading Common Crawl data.
+License: MIT
+URL: https://github.com/michaelharms/comcrawl
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/7c/46/0c519595db0a5e217ab43b0755f7d8d3be305e0da98caee31df0454d20b5/comcrawl-1.0.2.tar.gz
+BuildArch: noarch
+
+Requires: python3-requests
+
+%description
+# comcrawl
+
+![GitHub Workflow Status](https://img.shields.io/github/workflow/status/michaelharms/comcrawl/CI)
+[![codecov](https://codecov.io/gh/michaelharms/comcrawl/branch/master/graph/badge.svg?token=FEw4KEcpRm)](https://codecov.io/gh/michaelharms/comcrawl)
+![GitHub](https://img.shields.io/github/license/michaelharms/comcrawl)
+
+_comcrawl_ is a python package for easily querying and downloading pages from [commoncrawl.org](https://commoncrawl.org).
+
+## Introduction
+
+I was inspired to make _comcrawl_ by reading this [article](https://www.bellingcat.com/resources/2015/08/13/using-python-to-mine-common-crawl/).
+
+**Note:** I made this for personal projects and for fun. This package is therefore intended for small to medium projects, because it is not optimized for handling gigabytes or terabytes of data. For such cases you might want to check out [cdx-toolkit](https://pypi.org/project/cdx-toolkit/) or [cdx-index-client](https://github.com/ikreymer/cdx-index-client).
+
+### What is Common Crawl?
+
+The Common Crawl project is an _"open repository of web crawl data that can be accessed and analyzed by anyone"_.
+It contains billions of web pages and is often used for NLP projects to gather large amounts of text data.
+
+Common Crawl provides a [search index](https://index.commoncrawl.org), which you can use to search for certain URLs in their crawled data.
+Each search result contains a link and byte offset to a specific location in their [AWS S3 buckets](https://commoncrawl.s3.amazonaws.com/cc-index/collections/index.html) to download the page.
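+
+As a rough illustration of what _comcrawl_ automates, the sketch below fetches a single record by hand. It is not part of this package; the bucket URL and the `filename`, `offset` and `length` fields are assumptions based on the public CDX index format, and the record values are made up.
+
+```python
+import gzip
+
+import requests
+
+# One hypothetical CDX search result; real results carry these fields.
+record = {
+    "filename": "crawl-data/CC-MAIN-2019-51/segments/EXAMPLE/warc/EXAMPLE.warc.gz",
+    "offset": "1234567",
+    "length": "4321",
+}
+
+start = int(record["offset"])
+end = start + int(record["length"]) - 1
+
+# Ask the bucket for just the bytes of this record instead of the whole WARC file.
+response = requests.get(
+    "https://commoncrawl.s3.amazonaws.com/" + record["filename"],
+    headers={"Range": f"bytes={start}-{end}"},
+)
+
+# Each record is its own gzip member; decompressing it yields the WARC
+# headers, the archived HTTP headers and the HTML body.
+page = gzip.decompress(response.content).decode("utf-8", errors="replace")
+```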
+
+### What does _comcrawl_ offer?
+
+_comcrawl_ simplifies this process of searching and downloading from Common Crawl by offering a simple API you can use in your Python program.
+
+## Installation
+
+_comcrawl_ is available on PyPI.
+
+Install it via pip by running the following command from your terminal:
+
+```
+pip install comcrawl
+```
+
+## Usage
+
+### Basic
+
+After calling the `download` method, the HTML of each page is available as a string under the `html` key of each result dictionary.
+
+```python
+from comcrawl import IndexClient
+
+client = IndexClient()
+
+client.search("reddit.com/r/MachineLearning/*")
+client.download()
+
+first_page_html = client.results[0]["html"]
+```
+
+### Multithreading
+
+You can leverage multithreading while searching or downloading by specifying the number of threads you want to use.
+
+Please don't overdo this, so you don't put too much stress on the Common Crawl servers (have a look at the [Code of Conduct](#code-of-conduct)).
+
+```python
+from comcrawl import IndexClient
+
+client = IndexClient()
+
+client.search("reddit.com/r/MachineLearning/*", threads=4)
+client.download(threads=4)
+```
+
+### Removing duplicates & Saving
+
+You can easily combine this package with the [pandas](https://github.com/pandas-dev/pandas) library to filter out duplicate results and persist them to disk:
+
+```python
+from comcrawl import IndexClient
+import pandas as pd
+
+client = IndexClient()
+client.search("reddit.com/r/MachineLearning/*")
+
+client.results = (pd.DataFrame(client.results)
+ .sort_values(by="timestamp")
+ .drop_duplicates("urlkey", keep="last")
+ .to_dict("records"))
+
+client.download()
+
+pd.DataFrame(client.results).to_csv("results.csv")
+```
+
+The `urlkey` alone might not be sufficient for deduplication, so you might want to write a function that computes a custom id from each result's properties.
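+
+For example, here is a minimal, hypothetical sketch of such a custom id, reusing the `client` and results from the snippet above. It assumes the CDX `url` and `timestamp` fields and deduplicates on host plus path, ignoring scheme and query string:
+
+```python
+from urllib.parse import urlsplit
+
+
+def custom_id(result):
+    # Hypothetical rule: treat http/https and query-string variants
+    # of the same host + path as duplicates.
+    parts = urlsplit(result["url"])
+    return parts.netloc.lower() + parts.path
+
+
+# Keep only the newest capture per custom id; CDX timestamps are
+# fixed-width strings (YYYYMMDDhhmmss), so they sort correctly as text.
+newest = {}
+for result in sorted(client.results, key=lambda r: r["timestamp"]):
+    newest[custom_id(result)] = result
+
+client.results = list(newest.values())
+```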
+
+### Searching subsets of Indexes
+
+By default, the `IndexClient` fetches a list of all currently available Common Crawl indexes to search when it is instantiated. You can restrict the search to specific Common Crawl indexes by passing them as a list.
+
+```python
+from comcrawl import IndexClient
+
+client = IndexClient(["2019-51", "2019-47"])
+client.search("reddit.com/r/MachineLearning/*")
+client.download()
+```
+
+### Logging HTTP requests
+
+When debugging your code, you can enable logging of all HTTP requests that are made.
+
+```python
+from comcrawl import IndexClient
+
+client = IndexClient(verbose=True)
+client.search("reddit.com/r/MachineLearning/*")
+client.download()
+```
+
+## Code of Conduct
+
+When accessing Common Crawl, please be aware of these guidelines posted by one of the Common Crawl maintainers:
+
+https://groups.google.com/forum/#!msg/common-crawl/3QmQjFA_3y4/vTbhGqIBBQAJ
+
+
+%package -n python3-comcrawl
+Summary: A python utility for downloading Common Crawl data.
+Provides: python-comcrawl
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-comcrawl
+# comcrawl
+
+![GitHub Workflow Status](https://img.shields.io/github/workflow/status/michaelharms/comcrawl/CI)
+[![codecov](https://codecov.io/gh/michaelharms/comcrawl/branch/master/graph/badge.svg?token=FEw4KEcpRm)](https://codecov.io/gh/michaelharms/comcrawl)
+![GitHub](https://img.shields.io/github/license/michaelharms/comcrawl)
+
+_comcrawl_ is a python package for easily querying and downloading pages from [commoncrawl.org](https://commoncrawl.org).
+
+## Introduction
+
+I was inspired to make _comcrawl_ by reading this [article](https://www.bellingcat.com/resources/2015/08/13/using-python-to-mine-common-crawl/).
+
+**Note:** I made this for personal projects and for fun. This package is therefore intended for small to medium projects, because it is not optimized for handling gigabytes or terabytes of data. For such cases you might want to check out [cdx-toolkit](https://pypi.org/project/cdx-toolkit/) or [cdx-index-client](https://github.com/ikreymer/cdx-index-client).
+
+### What is Common Crawl?
+
+The Common Crawl project is an _"open repository of web crawl data that can be accessed and analyzed by anyone"_.
+It contains billions of web pages and is often used for NLP projects to gather large amounts of text data.
+
+Common Crawl provides a [search index](https://index.commoncrawl.org), which you can use to search for certain URLs in their crawled data.
+Each search result contains a link and byte offset to a specific location in their [AWS S3 buckets](https://commoncrawl.s3.amazonaws.com/cc-index/collections/index.html) to download the page.
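+
+As a rough illustration of what _comcrawl_ automates, the sketch below fetches a single record by hand. It is not part of this package; the bucket URL and the `filename`, `offset` and `length` fields are assumptions based on the public CDX index format, and the record values are made up.
+
+```python
+import gzip
+
+import requests
+
+# One hypothetical CDX search result; real results carry these fields.
+record = {
+    "filename": "crawl-data/CC-MAIN-2019-51/segments/EXAMPLE/warc/EXAMPLE.warc.gz",
+    "offset": "1234567",
+    "length": "4321",
+}
+
+start = int(record["offset"])
+end = start + int(record["length"]) - 1
+
+# Ask the bucket for just the bytes of this record instead of the whole WARC file.
+response = requests.get(
+    "https://commoncrawl.s3.amazonaws.com/" + record["filename"],
+    headers={"Range": f"bytes={start}-{end}"},
+)
+
+# Each record is its own gzip member; decompressing it yields the WARC
+# headers, the archived HTTP headers and the HTML body.
+page = gzip.decompress(response.content).decode("utf-8", errors="replace")
+```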
+
+### What does _comcrawl_ offer?
+
+_comcrawl_ simplifies this process of searching and downloading from Common Crawl by offering a simple API you can use in your Python program.
+
+## Installation
+
+_comcrawl_ is available on PyPI.
+
+Install it via pip by running the following command from your terminal:
+
+```
+pip install comcrawl
+```
+
+## Usage
+
+### Basic
+
+After calling the `download` method, the HTML of each page is available as a string under the `html` key of each result dictionary.
+
+```python
+from comcrawl import IndexClient
+
+client = IndexClient()
+
+client.search("reddit.com/r/MachineLearning/*")
+client.download()
+
+first_page_html = client.results[0]["html"]
+```
+
+### Multithreading
+
+You can leverage multithreading while searching or downloading by specifying the number of threads you want to use.
+
+Please don't overdo this, so you don't put too much stress on the Common Crawl servers (have a look at the [Code of Conduct](#code-of-conduct)).
+
+```python
+from comcrawl import IndexClient
+
+client = IndexClient()
+
+client.search("reddit.com/r/MachineLearning/*", threads=4)
+client.download(threads=4)
+```
+
+### Removing duplicates & Saving
+
+You can easily combine this package with the [pandas](https://github.com/pandas-dev/pandas) library to filter out duplicate results and persist them to disk:
+
+```python
+from comcrawl import IndexClient
+import pandas as pd
+
+client = IndexClient()
+client.search("reddit.com/r/MachineLearning/*")
+
+client.results = (pd.DataFrame(client.results)
+ .sort_values(by="timestamp")
+ .drop_duplicates("urlkey", keep="last")
+ .to_dict("records"))
+
+client.download()
+
+pd.DataFrame(client.results).to_csv("results.csv")
+```
+
+The `urlkey` alone might not be sufficient for deduplication, so you might want to write a function that computes a custom id from each result's properties.
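+
+For example, here is a minimal, hypothetical sketch of such a custom id, reusing the `client` and results from the snippet above. It assumes the CDX `url` and `timestamp` fields and deduplicates on host plus path, ignoring scheme and query string:
+
+```python
+from urllib.parse import urlsplit
+
+
+def custom_id(result):
+    # Hypothetical rule: treat http/https and query-string variants
+    # of the same host + path as duplicates.
+    parts = urlsplit(result["url"])
+    return parts.netloc.lower() + parts.path
+
+
+# Keep only the newest capture per custom id; CDX timestamps are
+# fixed-width strings (YYYYMMDDhhmmss), so they sort correctly as text.
+newest = {}
+for result in sorted(client.results, key=lambda r: r["timestamp"]):
+    newest[custom_id(result)] = result
+
+client.results = list(newest.values())
+```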
+
+### Searching subsets of Indexes
+
+By default, the `IndexClient` fetches a list of all currently available Common Crawl indexes to search when it is instantiated. You can restrict the search to specific Common Crawl indexes by passing them as a list.
+
+```python
+from comcrawl import IndexClient
+
+client = IndexClient(["2019-51", "2019-47"])
+client.search("reddit.com/r/MachineLearning/*")
+client.download()
+```
+
+### Logging HTTP requests
+
+When debugging your code, you can enable logging of all HTTP requests that are made.
+
+```python
+from comcrawl import IndexClient
+
+client = IndexClient(verbose=True)
+client.search("reddit.com/r/MachineLearning/*")
+client.download()
+```
+
+## Code of Conduct
+
+When accessing Common Crawl, please be aware of these guidelines posted by one of the Common Crawl maintainers:
+
+https://groups.google.com/forum/#!msg/common-crawl/3QmQjFA_3y4/vTbhGqIBBQAJ
+
+
+%package help
+Summary: Development documents and examples for comcrawl
+Provides: python3-comcrawl-doc
+%description help
+# comcrawl
+
+![GitHub Workflow Status](https://img.shields.io/github/workflow/status/michaelharms/comcrawl/CI)
+[![codecov](https://codecov.io/gh/michaelharms/comcrawl/branch/master/graph/badge.svg?token=FEw4KEcpRm)](https://codecov.io/gh/michaelharms/comcrawl)
+![GitHub](https://img.shields.io/github/license/michaelharms/comcrawl)
+
+_comcrawl_ is a python package for easily querying and downloading pages from [commoncrawl.org](https://commoncrawl.org).
+
+## Introduction
+
+I was inspired to make _comcrawl_ by reading this [article](https://www.bellingcat.com/resources/2015/08/13/using-python-to-mine-common-crawl/).
+
+**Note:** I made this for personal projects and for fun. This package is therefore intended for small to medium projects, because it is not optimized for handling gigabytes or terabytes of data. For such cases you might want to check out [cdx-toolkit](https://pypi.org/project/cdx-toolkit/) or [cdx-index-client](https://github.com/ikreymer/cdx-index-client).
+
+### What is Common Crawl?
+
+The Common Crawl project is an _"open repository of web crawl data that can be accessed and analyzed by anyone"_.
+It contains billions of web pages and is often used for NLP projects to gather large amounts of text data.
+
+Common Crawl provides a [search index](https://index.commoncrawl.org), which you can use to search for certain URLs in their crawled data.
+Each search result contains a link and byte offset to a specific location in their [AWS S3 buckets](https://commoncrawl.s3.amazonaws.com/cc-index/collections/index.html) to download the page.
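+
+As a rough illustration of what _comcrawl_ automates, the sketch below fetches a single record by hand. It is not part of this package; the bucket URL and the `filename`, `offset` and `length` fields are assumptions based on the public CDX index format, and the record values are made up.
+
+```python
+import gzip
+
+import requests
+
+# One hypothetical CDX search result; real results carry these fields.
+record = {
+    "filename": "crawl-data/CC-MAIN-2019-51/segments/EXAMPLE/warc/EXAMPLE.warc.gz",
+    "offset": "1234567",
+    "length": "4321",
+}
+
+start = int(record["offset"])
+end = start + int(record["length"]) - 1
+
+# Ask the bucket for just the bytes of this record instead of the whole WARC file.
+response = requests.get(
+    "https://commoncrawl.s3.amazonaws.com/" + record["filename"],
+    headers={"Range": f"bytes={start}-{end}"},
+)
+
+# Each record is its own gzip member; decompressing it yields the WARC
+# headers, the archived HTTP headers and the HTML body.
+page = gzip.decompress(response.content).decode("utf-8", errors="replace")
+```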
+
+### What does _comcrawl_ offer?
+
+_comcrawl_ simplifies this process of searching and downloading from Common Crawl by offering a simple API you can use in your Python program.
+
+## Installation
+
+_comcrawl_ is available on PyPI.
+
+Install it via pip by running the following command from your terminal:
+
+```
+pip install comcrawl
+```
+
+## Usage
+
+### Basic
+
+After calling the `download` method, the HTML of each page is available as a string under the `html` key of each result dictionary.
+
+```python
+from comcrawl import IndexClient
+
+client = IndexClient()
+
+client.search("reddit.com/r/MachineLearning/*")
+client.download()
+
+first_page_html = client.results[0]["html"]
+```
+
+### Multithreading
+
+You can leverage multithreading while searching or downloading by specifying the number of threads you want to use.
+
+Please don't overdo this, so you don't put too much stress on the Common Crawl servers (have a look at the [Code of Conduct](#code-of-conduct)).
+
+```python
+from comcrawl import IndexClient
+
+client = IndexClient()
+
+client.search("reddit.com/r/MachineLearning/*", threads=4)
+client.download(threads=4)
+```
+
+### Removing duplicates & Saving
+
+You can easily combine this package with the [pandas](https://github.com/pandas-dev/pandas) library to filter out duplicate results and persist them to disk:
+
+```python
+from comcrawl import IndexClient
+import pandas as pd
+
+client = IndexClient()
+client.search("reddit.com/r/MachineLearning/*")
+
+client.results = (pd.DataFrame(client.results)
+ .sort_values(by="timestamp")
+ .drop_duplicates("urlkey", keep="last")
+ .to_dict("records"))
+
+client.download()
+
+pd.DataFrame(client.results).to_csv("results.csv")
+```
+
+The `urlkey` alone might not be sufficient for deduplication, so you might want to write a function that computes a custom id from each result's properties.
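+
+For example, here is a minimal, hypothetical sketch of such a custom id, reusing the `client` and results from the snippet above. It assumes the CDX `url` and `timestamp` fields and deduplicates on host plus path, ignoring scheme and query string:
+
+```python
+from urllib.parse import urlsplit
+
+
+def custom_id(result):
+    # Hypothetical rule: treat http/https and query-string variants
+    # of the same host + path as duplicates.
+    parts = urlsplit(result["url"])
+    return parts.netloc.lower() + parts.path
+
+
+# Keep only the newest capture per custom id; CDX timestamps are
+# fixed-width strings (YYYYMMDDhhmmss), so they sort correctly as text.
+newest = {}
+for result in sorted(client.results, key=lambda r: r["timestamp"]):
+    newest[custom_id(result)] = result
+
+client.results = list(newest.values())
+```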
+
+### Searching subsets of Indexes
+
+By default, the `IndexClient` fetches a list of all currently available Common Crawl indexes to search when it is instantiated. You can restrict the search to specific Common Crawl indexes by passing them as a list.
+
+```python
+from comcrawl import IndexClient
+
+client = IndexClient(["2019-51", "2019-47"])
+client.search("reddit.com/r/MachineLearning/*")
+client.download()
+```
+
+### Logging HTTP requests
+
+When debugging your code, you can enable logging of all HTTP requests that are made.
+
+```python
+from comcrawl import IndexClient
+
+client = IndexClient(verbose=True)
+client.search("reddit.com/r/MachineLearning/*")
+client.download()
+```
+
+## Code of Conduct
+
+When accessing Common Crawl, please be aware of these guidelines posted by one of the Common Crawl maintainers:
+
+https://groups.google.com/forum/#!msg/common-crawl/3QmQjFA_3y4/vTbhGqIBBQAJ
+
+
+%prep
+%autosetup -n comcrawl-1.0.2
+
+%build
+%py3_build
+
+%install
+%py3_install
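+# Ship any bundled doc/docs/example/examples directories as package documentation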
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
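+# Record everything installed under the buildroot so %files can consume the generated lists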
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+ find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+ find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+ find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+ find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
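+# Man pages get compressed during packaging, so list them with a .gz suffix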
+if [ -d usr/share/man ]; then
+ find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-comcrawl -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Fri May 05 2023 Python_Bot <Python_Bot@openeuler.org> - 1.0.2-1
+- Package Spec generated