author    CoprDistGit <infra@openeuler.org>    2023-05-05 07:54:53 +0000
committer CoprDistGit <infra@openeuler.org>    2023-05-05 07:54:53 +0000
commit    79d2cf54b56099e3dabd5e2666258a22c7e6f9d9 (patch)
tree      a48474fb9a3218960e5e3e9e3f8e9d1d2c514a54
parent    906a0353bb1ea4043090778a1a5e5898671636aa (diff)
automatic import of python-spyse-python (openeuler20.03)
-rw-r--r--  .gitignore               |   1
-rw-r--r--  python-spyse-python.spec | 611
-rw-r--r--  sources                  |   1
3 files changed, 613 insertions, 0 deletions
diff --git a/.gitignore b/.gitignore
index e69de29..caa7b2e 100644
--- a/.gitignore
+++ b/.gitignore
@@ -0,0 +1 @@
+/spyse-python-2.2.3.tar.gz
diff --git a/python-spyse-python.spec b/python-spyse-python.spec
new file mode 100644
index 0000000..0f5851f
--- /dev/null
+++ b/python-spyse-python.spec
@@ -0,0 +1,611 @@
+%global _empty_manifest_terminate_build 0
+Name: python-spyse-python
+Version: 2.2.3
+Release: 1
+Summary: Python wrapper for spyse.com
+License: MIT
+URL: https://github.com/spyse-com/spyse-python
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/95/4b/4153143ff130e4f60266bd514472884e1dbfcb62fdf91f58e052971776a6/spyse-python-2.2.3.tar.gz
+BuildArch: noarch
+
+Requires: python3-requests
+Requires: python3-dataclasses
+Requires: python3-dataclasses-json
+Requires: python3-responses
+Requires: python3-limiter
+
+%description
+# Python wrapper for Spyse API
+
+The official wrapper for the [spyse.com](https://spyse.com/) API, written in Python and aimed at helping developers
+build their integrations with Spyse.
+
+[Spyse](https://spyse.com/) is the most complete Internet assets search engine for every cybersecurity
+professional.
+
+Examples of data Spyse delivers:
+
+* List of the 300+ most popular open ports found on 3.5 billion publicly accessible IPv4 hosts.
+* Technologies used on the 300+ most popular open ports, and the IP addresses and domains that use a particular technology.
+* Security score for each IP host and website, calculated from the vulnerabilities found.
+* List of websites hosted on each IPv4 host.
+* DNS and WHOIS records of the domain names.
+* SSL certificates provided by the website hosts.
+* Structured content of the website homepages.
+* Abuse reports associated with IPv4 hosts.
+* Organizations and industries associated with the domain names.
+* Email addresses found during Internet scanning and associated with a domain name.
+
+More information about the data Spyse collects is available on the [Our data](https://spyse.com/our-data) page.
+
+Spyse provides an API accessible via **token-based authentication**.
+API tokens are **available only for registered users** on their [account page](https://spyse.com/user).
+
+For more information about the API, please check the [API Reference](https://spyse-dev.readme.io/reference/quick-start).
+
+## Installation
+
+```bash
+pip3 install spyse-python
+```
+
+## Updating
+
+```bash
+pip3 install --no-cache-dir spyse-python
+```
+
+
+## Quick start
+```python
+from spyse import Client
+
+client = Client("your-api-token-here")
+
+d = client.get_domain_details('tesla.com')
+
+print(f"Domain details:")
+print(f"Website title: {d.http_extract.title}")
+print(f"Alexa rank: {d.alexa.rank}")
+print(f"Certificate subject org: {d.cert_summary.subject.organization}")
+print(f"Certificate issuer org: {d.cert_summary.issuer.organization}")
+print(f"Updated at: {d.updated_at}")
+print(f"DNS Records: {d.dns_records}")
+print(f"Technologies: {d.technologies}")
+print(f"Vulnerabilities: {d.cve_list}")
+print(f"Trackers: {d.trackers}")
+# ...
+
+```
+
+## Examples
+
+- [Check your API quotas](https://github.com/spyse-com/spyse-python/tree/main/examples/get_account_quotas.py)
+- [Subdomains lookup ('Search', 'Scroll', 'Count' methods demo)](https://github.com/spyse-com/spyse-python/tree/main/examples/subdomains_lookup.py)
+- [Domain lookup](https://github.com/spyse-com/spyse-python/tree/main/examples/domain_lookup.py)
+
+
+Note: you need to export your API token as SPYSE_API_TOKEN to run any example:
+```bash
+export SPYSE_API_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+```
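+
+With the token exported, you can build the client from the environment instead of hard-coding the token. A minimal sketch (the variable name comes from the export above; Client taking the token string follows the quick-start example):
+```python
+import os
+
+from spyse import Client
+
+# Read the token exported above and fail early with a clear message if it is missing.
+token = os.environ.get("SPYSE_API_TOKEN")
+if not token:
+    raise SystemExit("SPYSE_API_TOKEN is not set; export it before running the examples")
+
+client = Client(token)
+
+# Quick sanity check that the token works: fetch the account quotas.
+print(client.get_quotas())
+```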
+
+## How to search
+Using Spyse, you can search for any Internet assets by their digital fingerprints. To do that, you need to build a search query and pass it to the 'search', 'scroll', or 'count' methods.
+
+Each search query can contain multiple search params. Each search param consists of a name, an operator, and a value.
+
+Check the API docs for all supported combinations. Here is an example for domain search: https://spyse-dev.readme.io/reference/domains#domain_search
+You may also be interested in our GUI for building and testing queries before jumping into code: https://spyse.com/advanced-search/domain
+
+Example search request to find subdomains of att.com:
+```python
+from spyse import Client, SearchQuery, QueryParam, DomainSearchParams, Operators
+
+# Prepare query
+q = SearchQuery()
+domain = "att.com"
+
+# Add param to search for att.com subdomains
+q.append_param(QueryParam(DomainSearchParams.name, Operators.ends_with, '.' + domain))
+
+# Add param to search only for alive subdomains
+q.append_param(QueryParam(DomainSearchParams.http_extract_status_code, Operators.equals, "200"))
+
+# Add param to remove subdomains seen as PTR records
+q.append_param(QueryParam(DomainSearchParams.is_ptr, Operators.equals, "False"))
+
+# Next, you can use the query to run search, count or scroll methods
+c = Client("your-api-token-here")
+total_count = c.count_domains(q)
+search_results = c.search_domains(q).results
+scroll_results = c.scroll_domains(q).results
+```
+
+Example search request to find alive IPv4 hosts in the US with port 22 open and running nginx:
+```python
+from spyse import Client, SearchQuery, QueryParam, IPSearchParams, Operators
+
+# Prepare query
+q = SearchQuery()
+
+# Add param to search for IPv4 hosts located in US
+q.append_param(QueryParam(IPSearchParams.geo_country_iso_code, Operators.equals, 'US'))
+
+# Add param to search only for hosts with port 22 open
+q.append_param(QueryParam(IPSearchParams.open_port, Operators.equals, "22"))
+
+# Add param to search only for hosts with nginx
+q.append_param(QueryParam(IPSearchParams.port_technology_name, Operators.contains, "nginx"))
+
+# Next, you can use the query to run search, count or scroll methods
+c = Client("your-api-token-here")
+total_count = c.count_ip(q)
+search_results = c.search_ip(q).results
+scroll_results = c.scroll_ip(q).results
+```
+
+## Scroll vs Search
+While a 'search' request lets you paginate over the first 10,000 results, a 'scroll search' can be used for deep pagination over a larger number of results (or even all results), in much the same way as you would use a cursor on a traditional database.
+
+To use scrolling, take the 'search_id' field returned in the initial search response and pass it in subsequent requests to iterate over the rest of the results.
+
+### Limitations
+Scroll is available only to customers with a 'Pro' subscription.
+
+Example code to check whether scroll is available for your account:
+```python
+from spyse import Client
+c = Client("your-api-token-here")
+
+if c.get_quotas().is_scroll_search_enabled:
+ print("Scroll is available")
+else:
+ print("Scroll is NOT available")
+```
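+
+Putting the pieces together, here is a minimal sketch of deep pagination with scroll. It assumes the scroll response exposes the 'search_id' field described above alongside 'results', and that scroll_domains() accepts that id on subsequent calls; verify the exact signature in the API reference:
+```python
+from spyse import Client, SearchQuery, QueryParam, DomainSearchParams, Operators
+
+c = Client("your-api-token-here")
+
+# Reuse the subdomain query pattern from the search example above.
+q = SearchQuery()
+q.append_param(QueryParam(DomainSearchParams.name, Operators.ends_with, '.att.com'))
+
+# First scroll request: no search_id yet.
+page = c.scroll_domains(q)
+all_results = list(page.results)
+
+# Keep passing the returned search_id until a page comes back empty.
+# NOTE: passing search_id as a second argument is an assumption of this sketch.
+while page.results:
+    page = c.scroll_domains(q, page.search_id)
+    all_results.extend(page.results)
+
+print(f"Collected {len(all_results)} subdomains")
+```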
+
+
+## Development
+
+### Installation
+```bash
+git clone https://github.com/spyse-com/spyse-python
+cd spyse-python
+pip install -e .
+```
+
+
+Run tests:
+```bash
+cd tests
+python client_test.py
+```
+
+## License
+
+Distributed under the MIT License. See [LICENSE](https://github.com/spyse-com/spyse-python/tree/main/LICENSE.md) for more information.
+
+## Troubleshooting and contacts
+
+For any proposals and questions, please write to:
+
+- Email: [contact@spyse.com](mailto:contact@spyse.com)
+- Discord: [channel](https://discord.gg/XqaUP8c)
+- Twitter: [@scanpatch](https://twitter.com/scanpatch), [@MrMristov](https://twitter.com/MrMristov)
+
+
+
+
+%package -n python3-spyse-python
+Summary: Python wrapper for spyse.com
+Provides: python-spyse-python
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-spyse-python
+# Python wrapper for Spyse API
+
+The official wrapper for the [spyse.com](https://spyse.com/) API, written in Python and aimed at helping developers
+build their integrations with Spyse.
+
+[Spyse](https://spyse.com/) is the most complete Internet assets search engine for every cybersecurity
+professional.
+
+Examples of data Spyse delivers:
+
+* List of the 300+ most popular open ports found on 3.5 billion publicly accessible IPv4 hosts.
+* Technologies used on the 300+ most popular open ports, and the IP addresses and domains that use a particular technology.
+* Security score for each IP host and website, calculated from the vulnerabilities found.
+* List of websites hosted on each IPv4 host.
+* DNS and WHOIS records of the domain names.
+* SSL certificates provided by the website hosts.
+* Structured content of the website homepages.
+* Abuse reports associated with IPv4 hosts.
+* Organizations and industries associated with the domain names.
+* Email addresses found during Internet scanning and associated with a domain name.
+
+More information about the data Spyse collects is available on the [Our data](https://spyse.com/our-data) page.
+
+Spyse provides an API accessible via **token-based authentication**.
+API tokens are **available only for registered users** on their [account page](https://spyse.com/user).
+
+For more information about the API, please check the [API Reference](https://spyse-dev.readme.io/reference/quick-start).
+
+## Installation
+
+```bash
+pip3 install spyse-python
+```
+
+## Updating
+
+```bash
+pip3 install --no-cache-dir spyse-python
+```
+
+
+## Quick start
+```python
+from spyse import Client
+
+client = Client("your-api-token-here")
+
+d = client.get_domain_details('tesla.com')
+
+print(f"Domain details:")
+print(f"Website title: {d.http_extract.title}")
+print(f"Alexa rank: {d.alexa.rank}")
+print(f"Certificate subject org: {d.cert_summary.subject.organization}")
+print(f"Certificate issuer org: {d.cert_summary.issuer.organization}")
+print(f"Updated at: {d.updated_at}")
+print(f"DNS Records: {d.dns_records}")
+print(f"Technologies: {d.technologies}")
+print(f"Vulnerabilities: {d.cve_list}")
+print(f"Trackers: {d.trackers}")
+# ...
+
+```
+
+## Examples
+
+- [Check your API quotas](https://github.com/spyse-com/spyse-python/tree/main/examples/get_account_quotas.py)
+- [Subdomains lookup ('Search', 'Scroll', 'Count' methods demo)](https://github.com/spyse-com/spyse-python/tree/main/examples/subdomains_lookup.py)
+- [Domain lookup](https://github.com/spyse-com/spyse-python/tree/main/examples/domain_lookup.py)
+
+
+Note: you need to export your API token as SPYSE_API_TOKEN to run any example:
+```bash
+export SPYSE_API_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+```
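+
+With the token exported, you can build the client from the environment instead of hard-coding the token. A minimal sketch (the variable name comes from the export above; Client taking the token string follows the quick-start example):
+```python
+import os
+
+from spyse import Client
+
+# Read the token exported above and fail early with a clear message if it is missing.
+token = os.environ.get("SPYSE_API_TOKEN")
+if not token:
+    raise SystemExit("SPYSE_API_TOKEN is not set; export it before running the examples")
+
+client = Client(token)
+
+# Quick sanity check that the token works: fetch the account quotas.
+print(client.get_quotas())
+```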
+
+## How to search
+Using Spyse, you can search for any Internet assets by their digital fingerprints. To do that, you need to build a search query and pass it to the 'search', 'scroll', or 'count' methods.
+
+Each search query can contain multiple search params. Each search param consists of a name, an operator, and a value.
+
+Check the API docs for all supported combinations. Here is an example for domain search: https://spyse-dev.readme.io/reference/domains#domain_search
+You may also be interested in our GUI for building and testing queries before jumping into code: https://spyse.com/advanced-search/domain
+
+Example search request to find subdomains of att.com:
+```python
+from spyse import Client, SearchQuery, QueryParam, DomainSearchParams, Operators
+
+# Prepare query
+q = SearchQuery()
+domain = "att.com"
+
+# Add param to search for att.com subdomains
+q.append_param(QueryParam(DomainSearchParams.name, Operators.ends_with, '.' + domain))
+
+# Add param to search only for alive subdomains
+q.append_param(QueryParam(DomainSearchParams.http_extract_status_code, Operators.equals, "200"))
+
+# Add param to remove subdomains seen as PTR records
+q.append_param(QueryParam(DomainSearchParams.is_ptr, Operators.equals, "False"))
+
+# Next, you can use the query to run search, count or scroll methods
+c = Client("your-api-token-here")
+total_count = c.count_domains(q)
+search_results = c.search_domains(q).results
+scroll_results = c.scroll_domains(q).results
+```
+
+Example search request to find alive IPv4 hosts in the US with port 22 open and running nginx:
+```python
+from spyse import Client, SearchQuery, QueryParam, IPSearchParams, Operators
+
+# Prepare query
+q = SearchQuery()
+
+# Add param to search for IPv4 hosts located in US
+q.append_param(QueryParam(IPSearchParams.geo_country_iso_code, Operators.equals, 'US'))
+
+# Add param to search only for hosts with port 22 open
+q.append_param(QueryParam(IPSearchParams.open_port, Operators.equals, "22"))
+
+# Add param to search only for hosts with nginx
+q.append_param(QueryParam(IPSearchParams.port_technology_name, Operators.contains, "nginx"))
+
+# Next, you can use the query to run search, count or scroll methods
+c = Client("your-api-token-here")
+total_count = c.count_ip(q)
+search_results = c.search_ip(q).results
+scroll_results = c.scroll_ip(q).results
+```
+
+## Scroll vs Search
+While a 'search' request lets you paginate over the first 10,000 results, a 'scroll search' can be used for deep pagination over a larger number of results (or even all results), in much the same way as you would use a cursor on a traditional database.
+
+To use scrolling, take the 'search_id' field returned in the initial search response and pass it in subsequent requests to iterate over the rest of the results.
+
+### Limitations
+Scroll is available only to customers with a 'Pro' subscription.
+
+Example code to check whether scroll is available for your account:
+```python
+from spyse import Client
+c = Client("your-api-token-here")
+
+if c.get_quotas().is_scroll_search_enabled:
+ print("Scroll is available")
+else:
+ print("Scroll is NOT available")
+```
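+
+Putting the pieces together, here is a minimal sketch of deep pagination with scroll. It assumes the scroll response exposes the 'search_id' field described above alongside 'results', and that scroll_domains() accepts that id on subsequent calls; verify the exact signature in the API reference:
+```python
+from spyse import Client, SearchQuery, QueryParam, DomainSearchParams, Operators
+
+c = Client("your-api-token-here")
+
+# Reuse the subdomain query pattern from the search example above.
+q = SearchQuery()
+q.append_param(QueryParam(DomainSearchParams.name, Operators.ends_with, '.att.com'))
+
+# First scroll request: no search_id yet.
+page = c.scroll_domains(q)
+all_results = list(page.results)
+
+# Keep passing the returned search_id until a page comes back empty.
+# NOTE: passing search_id as a second argument is an assumption of this sketch.
+while page.results:
+    page = c.scroll_domains(q, page.search_id)
+    all_results.extend(page.results)
+
+print(f"Collected {len(all_results)} subdomains")
+```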
+
+
+## Development
+
+### Installation
+```bash
+git clone https://github.com/spyse-com/spyse-python
+cd spyse-python
+pip install -e .
+```
+
+
+Run tests:
+```bash
+cd tests
+python client_test.py
+```
+
+## License
+
+Distributed under the MIT License. See [LICENSE](https://github.com/spyse-com/spyse-python/tree/main/LICENSE.md) for more information.
+
+## Troubleshooting and contacts
+
+For any proposals and questions, please write to:
+
+- Email: [contact@spyse.com](mailto:contact@spyse.com)
+- Discord: [channel](https://discord.gg/XqaUP8c)
+- Twitter: [@scanpatch](https://twitter.com/scanpatch), [@MrMristov](https://twitter.com/MrMristov)
+
+
+
+
+%package help
+Summary: Development documents and examples for spyse-python
+Provides: python3-spyse-python-doc
+%description help
+# Python wrapper for Spyse API
+
+The official wrapper for the [spyse.com](https://spyse.com/) API, written in Python and aimed at helping developers
+build their integrations with Spyse.
+
+[Spyse](https://spyse.com/) is the most complete Internet assets search engine for every cybersecurity
+professional.
+
+Examples of data Spyse delivers:
+
+* List of the 300+ most popular open ports found on 3.5 billion publicly accessible IPv4 hosts.
+* Technologies used on the 300+ most popular open ports, and the IP addresses and domains that use a particular technology.
+* Security score for each IP host and website, calculated from the vulnerabilities found.
+* List of websites hosted on each IPv4 host.
+* DNS and WHOIS records of the domain names.
+* SSL certificates provided by the website hosts.
+* Structured content of the website homepages.
+* Abuse reports associated with IPv4 hosts.
+* Organizations and industries associated with the domain names.
+* Email addresses found during Internet scanning and associated with a domain name.
+
+More information about the data Spyse collects is available on the [Our data](https://spyse.com/our-data) page.
+
+Spyse provides an API accessible via **token-based authentication**.
+API tokens are **available only for registered users** on their [account page](https://spyse.com/user).
+
+For more information about the API, please check the [API Reference](https://spyse-dev.readme.io/reference/quick-start).
+
+## Installation
+
+```bash
+pip3 install spyse-python
+```
+
+## Updating
+
+```bash
+pip3 install --no-cache-dir spyse-python
+```
+
+
+## Quick start
+```python
+from spyse import Client
+
+client = Client("your-api-token-here")
+
+d = client.get_domain_details('tesla.com')
+
+print(f"Domain details:")
+print(f"Website title: {d.http_extract.title}")
+print(f"Alexa rank: {d.alexa.rank}")
+print(f"Certificate subject org: {d.cert_summary.subject.organization}")
+print(f"Certificate issuer org: {d.cert_summary.issuer.organization}")
+print(f"Updated at: {d.updated_at}")
+print(f"DNS Records: {d.dns_records}")
+print(f"Technologies: {d.technologies}")
+print(f"Vulnerabilities: {d.cve_list}")
+print(f"Trackers: {d.trackers}")
+# ...
+
+```
+
+## Examples
+
+- [Check your API quotas](https://github.com/spyse-com/spyse-python/tree/main/examples/get_account_quotas.py)
+- [Subdomains lookup ('Search', 'Scroll', 'Count' methods demo)](https://github.com/spyse-com/spyse-python/tree/main/examples/subdomains_lookup.py)
+- [Domain lookup](https://github.com/spyse-com/spyse-python/tree/main/examples/domain_lookup.py)
+
+
+Note: you need to export your API token as SPYSE_API_TOKEN to run any example:
+```bash
+export SPYSE_API_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+```
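+
+With the token exported, you can build the client from the environment instead of hard-coding the token. A minimal sketch (the variable name comes from the export above; Client taking the token string follows the quick-start example):
+```python
+import os
+
+from spyse import Client
+
+# Read the token exported above and fail early with a clear message if it is missing.
+token = os.environ.get("SPYSE_API_TOKEN")
+if not token:
+    raise SystemExit("SPYSE_API_TOKEN is not set; export it before running the examples")
+
+client = Client(token)
+
+# Quick sanity check that the token works: fetch the account quotas.
+print(client.get_quotas())
+```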
+
+## How to search
+Using Spyse, you can search for any Internet assets by their digital fingerprints. To do that, you need to build a search query and pass it to the 'search', 'scroll', or 'count' methods.
+
+Each search query can contain multiple search params. Each search param consists of a name, an operator, and a value.
+
+Check the API docs for all supported combinations. Here is an example for domain search: https://spyse-dev.readme.io/reference/domains#domain_search
+You may also be interested in our GUI for building and testing queries before jumping into code: https://spyse.com/advanced-search/domain
+
+Example search request to find subdomains of att.com:
+```python
+from spyse import Client, SearchQuery, QueryParam, DomainSearchParams, Operators
+
+# Prepare query
+q = SearchQuery()
+domain = "att.com"
+
+# Add param to search for att.com subdomains
+q.append_param(QueryParam(DomainSearchParams.name, Operators.ends_with, '.' + domain))
+
+# Add param to search only for alive subdomains
+q.append_param(QueryParam(DomainSearchParams.http_extract_status_code, Operators.equals, "200"))
+
+# Add param to remove subdomains seen as PTR records
+q.append_param(QueryParam(DomainSearchParams.is_ptr, Operators.equals, "False"))
+
+# Next, you can use the query to run search, count or scroll methods
+c = Client("your-api-token-here")
+total_count = c.count_domains(q)
+search_results = c.search_domains(q).results
+scroll_results = c.scroll_domains(q).results
+```
+
+Example search request to find alive IPv4 hosts in the US with port 22 open and running nginx:
+```python
+from spyse import Client, SearchQuery, QueryParam, IPSearchParams, Operators
+
+# Prepare query
+q = SearchQuery()
+
+# Add param to search for IPv4 hosts located in US
+q.append_param(QueryParam(IPSearchParams.geo_country_iso_code, Operators.equals, 'US'))
+
+# Add param to search only for hosts with port 22 open
+q.append_param(QueryParam(IPSearchParams.open_port, Operators.equals, "22"))
+
+# Add param to search only for hosts with nginx
+q.append_param(QueryParam(IPSearchParams.port_technology_name, Operators.contains, "nginx"))
+
+# Next, you can use the query to run search, count or scroll methods
+c = Client("your-api-token-here")
+total_count = c.count_ip(q)
+search_results = c.search_ip(q).results
+scroll_results = c.scroll_ip(q).results
+```
+
+## Scroll vs Search
+While a 'search' request lets you paginate over the first 10,000 results, a 'scroll search' can be used for deep pagination over a larger number of results (or even all results), in much the same way as you would use a cursor on a traditional database.
+
+To use scrolling, take the 'search_id' field returned in the initial search response and pass it in subsequent requests to iterate over the rest of the results.
+
+### Limitations
+Scroll is available only to customers with a 'Pro' subscription.
+
+Example code to check whether scroll is available for your account:
+```python
+from spyse import Client
+c = Client("your-api-token-here")
+
+if c.get_quotas().is_scroll_search_enabled:
+ print("Scroll is available")
+else:
+ print("Scroll is NOT available")
+```
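+
+Putting the pieces together, here is a minimal sketch of deep pagination with scroll. It assumes the scroll response exposes the 'search_id' field described above alongside 'results', and that scroll_domains() accepts that id on subsequent calls; verify the exact signature in the API reference:
+```python
+from spyse import Client, SearchQuery, QueryParam, DomainSearchParams, Operators
+
+c = Client("your-api-token-here")
+
+# Reuse the subdomain query pattern from the search example above.
+q = SearchQuery()
+q.append_param(QueryParam(DomainSearchParams.name, Operators.ends_with, '.att.com'))
+
+# First scroll request: no search_id yet.
+page = c.scroll_domains(q)
+all_results = list(page.results)
+
+# Keep passing the returned search_id until a page comes back empty.
+# NOTE: passing search_id as a second argument is an assumption of this sketch.
+while page.results:
+    page = c.scroll_domains(q, page.search_id)
+    all_results.extend(page.results)
+
+print(f"Collected {len(all_results)} subdomains")
+```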
+
+
+## Development
+
+### Installation
+```bash
+git clone https://github.com/spyse-com/spyse-python
+cd spyse-python
+pip install -e .
+```
+
+
+Run tests:
+```bash
+cd tests
+python client_test.py
+```
+
+## License
+
+Distributed under the MIT License. See [LICENSE](https://github.com/spyse-com/spyse-python/tree/main/LICENSE.md) for more information.
+
+## Troubleshooting and contacts
+
+For any proposals and questions, please write to:
+
+- Email: [contact@spyse.com](mailto:contact@spyse.com)
+- Discord: [channel](https://discord.gg/XqaUP8c)
+- Twitter: [@scanpatch](https://twitter.com/scanpatch), [@MrMristov](https://twitter.com/MrMristov)
+
+
+
+
+%prep
+%autosetup -n spyse-python-2.2.3
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+ find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+ find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+ find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+ find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
+if [ -d usr/share/man ]; then
+ find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-spyse-python -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Fri May 05 2023 Python_Bot <Python_Bot@openeuler.org> - 2.2.3-1
+- Package Spec generated
diff --git a/sources b/sources
new file mode 100644
index 0000000..bbda3d1
--- /dev/null
+++ b/sources
@@ -0,0 +1 @@
+14815251a6c9fb00cb402b027357190e spyse-python-2.2.3.tar.gz