-rw-r--r-- | .gitignore                  |   1
-rw-r--r-- | python-twitter-scraper.spec | 533
-rw-r--r-- | sources                     |   1
3 files changed, 535 insertions, 0 deletions
diff --git a/.gitignore b/.gitignore
new file mode 100644
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1 @@
+/twitter-scraper-0.4.4.tar.gz
diff --git a/python-twitter-scraper.spec b/python-twitter-scraper.spec
new file mode 100644
index 0000000..348a15e
--- /dev/null
+++ b/python-twitter-scraper.spec
@@ -0,0 +1,533 @@
+%global _empty_manifest_terminate_build 0
+Name:           python-twitter-scraper
+Version:        0.4.4
+Release:        1
+Summary:        Scrape the Twitter Frontend API without authentication.
+License:        MIT
+URL:            https://github.com/bisguzar/twitter-scraper
+Source0:        https://mirrors.nju.edu.cn/pypi/web/packages/fb/53/cbe5ecbbe361c23db0f4000a5f9073dc13d0a719b9eb51798f2334c245af/twitter-scraper-0.4.4.tar.gz
+BuildArch:      noarch
+
+Requires:       python3-requests-html
+Requires:       python3-MechanicalSoup
+
+%description
+
+# Twitter Scraper
+
+[🇰🇷 Read Korean Version](https://github.com/bisguzar/twitter-scraper/blob/master/twitter_scraper/__init__.py)
+
+Twitter's API is annoying to work with and has lots of limitations. Luckily, their frontend (JavaScript) has its own API, which I reverse-engineered. No API rate limits. No restrictions. Extremely fast.
+
+You can use this library to get the text of any user's Tweets trivially.
+
+## Prerequisites
+
+Before you begin, ensure you have met the following requirements:
+
+* Internet connection
+* Python 3.6+
+
+## Installing twitter-scraper
+
+If you want to use the latest version, install from source. To install twitter-scraper from source, follow these steps:
+
+Linux and macOS:
+```bash
+git clone https://github.com/bisguzar/twitter-scraper.git
+cd twitter-scraper
+sudo python3 setup.py install
+```
+
+Alternatively, you can install it from PyPI:
+
+```bash
+pip3 install twitter_scraper
+```
+
+## Using twitter_scraper
+
+Just import **twitter_scraper** and call its functions!
+
+### → function **get_tweets(query: str [, pages: int])** -> dictionary
+You can get tweets from a profile or from a hashtag. **get_tweets** takes a username or hashtag as its first parameter (a string) and the number of pages to scan as its second parameter (an integer).
+
+#### Keep in mind:
+* The first parameter must start with # (the number sign) if you want to get tweets from a hashtag.
+* The **pages** parameter is optional.
+
+```python
+Python 3.7.3 (default, Mar 26 2019, 21:43:19)
+[GCC 8.2.1 20181127] on linux
+Type "help", "copyright", "credits" or "license" for more information.
+>>> from twitter_scraper import get_tweets
+>>>
+>>> for tweet in get_tweets('twitter', pages=1):
+...     print(tweet['text'])
+...
+spooky vibe check
+…
+```
+
+It yields a dictionary for each tweet. Keys of the dictionary:
+
+| Key       | Type       | Description                                                          |
+|-----------|------------|----------------------------------------------------------------------|
+| tweetId   | string     | Tweet's identifier; visit twitter.com/USERNAME/ID to view the tweet. |
+| userId    | string     | Tweet's userId                                                       |
+| username  | string     | Tweet's username                                                     |
+| tweetUrl  | string     | Tweet's URL                                                          |
+| isRetweet | boolean    | True if it is a retweet, False otherwise                             |
+| isPinned  | boolean    | True if it is a pinned tweet, False otherwise                        |
+| time      | datetime   | Published date of the tweet                                          |
+| text      | string     | Content of the tweet                                                 |
+| replies   | integer    | Reply count of the tweet                                             |
+| retweets  | integer    | Retweet count of the tweet                                           |
+| likes     | integer    | Like count of the tweet                                              |
+| entries   | dictionary | Has hashtags, videos, photos and urls keys; each value is a list     |
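+
+For hashtags the call looks the same; only the leading # changes. A minimal sketch (the hashtag and the printed keys are illustrative; the keys are the ones documented above):
+
+```python
+from twitter_scraper import get_tweets
+
+# '#python' starts with the number sign, so tweets come from that hashtag
+for tweet in get_tweets('#python', pages=2):
+    # each yielded item is a dictionary with the keys listed above
+    print(tweet['time'], tweet['username'], tweet['likes'])
+    print(tweet['entries']['hashtags'])
+```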
+
+### → function **get_trends()** -> list
+You can get the trends of your area simply by calling `get_trends()`. It returns a list of strings.
+
+```python
+Python 3.7.3 (default, Mar 26 2019, 21:43:19)
+[GCC 8.2.1 20181127] on linux
+Type "help", "copyright", "credits" or "license" for more information.
+>>> from twitter_scraper import get_trends
+>>> get_trends()
+['#WHUTOT', '#ARSSOU', 'West Ham', '#AtalantaJuve', '#バビロニア', '#おっさんずラブinthasky', 'Southampton', 'Valverde', '#MMKGabAndMax', '#23NParoNacional']
+```
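+
+Since the trends are plain strings, a hashtag trend can be fed straight back into `get_tweets`. A small sketch (trend names vary by region and time, so the hashtag picked here is just whatever comes back first):
+
+```python
+from twitter_scraper import get_trends, get_tweets
+
+# pick the first trend that is a hashtag and scan one page of its tweets
+hashtag_trends = [trend for trend in get_trends() if trend.startswith('#')]
+if hashtag_trends:
+    for tweet in get_tweets(hashtag_trends[0], pages=1):
+        print(tweet['text'])
+```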
+
+### → class **Profile(username: str)** -> class instance
+You can get a profile's personal information, such as birthday and biography, if it exists and is public. This class takes a username parameter and returns an instance; access the information through its attributes.
+
+```python
+Python 3.7.3 (default, Mar 26 2019, 21:43:19)
+[GCC 8.2.1 20181127] on linux
+Type "help", "copyright", "credits" or "license" for more information.
+>>> from twitter_scraper import Profile
+>>> profile = Profile('bugraisguzar')
+>>> profile.location
+'Istanbul'
+>>> profile.name
+'Buğra İşgüzar'
+>>> profile.username
+'bugraisguzar'
+```
+
+#### → **.to_dict()** -> dict
+
+**to_dict** is a method of the *Profile* class. It returns the profile data as a Python dictionary.
+
+```python
+Python 3.7.3 (default, Mar 26 2019, 21:43:19)
+[GCC 8.2.1 20181127] on linux
+Type "help", "copyright", "credits" or "license" for more information.
+>>> from twitter_scraper import Profile
+>>> profile = Profile("bugraisguzar")
+>>> profile.to_dict()
+{'name': 'Buğra İşgüzar', 'username': 'bugraisguzar', 'birthday': None, 'biography': 'geliştirici@peptr', 'website': 'bisguzar.com', 'profile_photo': 'https://pbs.twimg.com/profile_images/1199305322474745861/nByxOcDZ_400x400.jpg', 'banner_photo': 'https://pbs.twimg.com/profile_banners/1019138658/1555346657/1500x500', 'likes_count': 2512, 'tweets_count': 756, 'followers_count': 483, 'following_count': 255, 'is_verified': False, 'is_private': False, 'user_id': '1019138658'}
+```
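+
+Because **to_dict** returns a plain dictionary, it can be handed to the standard library directly, for example to save a profile snapshot as JSON (a small sketch; the file name is arbitrary, and `default=str` is only a guard in case a value is not JSON-serializable):
+
+```python
+import json
+
+from twitter_scraper import Profile
+
+profile = Profile('bugraisguzar')
+
+# to_dict() gives a plain dict, so json.dump works on it directly
+with open('profile.json', 'w', encoding='utf-8') as fp:
+    json.dump(profile.to_dict(), fp, ensure_ascii=False, default=str)
+```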
+
+## Contributing to twitter-scraper
+To contribute to twitter-scraper, follow these steps:
+
+1. Fork this repository.
+2. Create a branch with a clear name: `git checkout -b <branch_name>`.
+3. Make your changes and commit them: `git commit -m '<commit_message>'`.
+4. Push to the original branch: `git push origin <project_name>/<location>`.
+5. Create the pull request.
+
+Alternatively, see the GitHub documentation on [creating a pull request](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request).
+
+## Contributors
+
+Thanks to the following people who have contributed to this project:
+
+* @kennethreitz (author)
+* @bisguzar (maintainer)
+* @lionking6792
+* @ozanbayram
+
+## Contact
+If you want to contact me, you can reach me at [@bugraisguzar](https://twitter.com/bugraisguzar).
+
+## License
+This project uses the following license: [MIT](https://github.com/bisguzar/twitter-scraper/blob/master/LICENSE).
+
+
+%package -n python3-twitter-scraper
+Summary:        Scrape the Twitter Frontend API without authentication.
+Provides:       python-twitter-scraper
+BuildRequires:  python3-devel
+BuildRequires:  python3-setuptools
+BuildRequires:  python3-pip
+%description -n python3-twitter-scraper
+
+# Twitter Scraper
+
+[🇰🇷 Read Korean Version](https://github.com/bisguzar/twitter-scraper/blob/master/twitter_scraper/__init__.py)
+
+Twitter's API is annoying to work with and has lots of limitations. Luckily, their frontend (JavaScript) has its own API, which I reverse-engineered. No API rate limits. No restrictions. Extremely fast.
+
+You can use this library to get the text of any user's Tweets trivially.
+
+## Prerequisites
+
+Before you begin, ensure you have met the following requirements:
+
+* Internet connection
+* Python 3.6+
+
+## Installing twitter-scraper
+
+If you want to use the latest version, install from source. To install twitter-scraper from source, follow these steps:
+
+Linux and macOS:
+```bash
+git clone https://github.com/bisguzar/twitter-scraper.git
+cd twitter-scraper
+sudo python3 setup.py install
+```
+
+Alternatively, you can install it from PyPI:
+
+```bash
+pip3 install twitter_scraper
+```
+
+## Using twitter_scraper
+
+Just import **twitter_scraper** and call its functions!
+
+### → function **get_tweets(query: str [, pages: int])** -> dictionary
+You can get tweets from a profile or from a hashtag. **get_tweets** takes a username or hashtag as its first parameter (a string) and the number of pages to scan as its second parameter (an integer).
+
+#### Keep in mind:
+* The first parameter must start with # (the number sign) if you want to get tweets from a hashtag.
+* The **pages** parameter is optional.
+
+```python
+Python 3.7.3 (default, Mar 26 2019, 21:43:19)
+[GCC 8.2.1 20181127] on linux
+Type "help", "copyright", "credits" or "license" for more information.
+>>> from twitter_scraper import get_tweets
+>>>
+>>> for tweet in get_tweets('twitter', pages=1):
+...     print(tweet['text'])
+...
+spooky vibe check
+…
+```
+
+It yields a dictionary for each tweet. Keys of the dictionary:
+
+| Key       | Type       | Description                                                          |
+|-----------|------------|----------------------------------------------------------------------|
+| tweetId   | string     | Tweet's identifier; visit twitter.com/USERNAME/ID to view the tweet. |
+| userId    | string     | Tweet's userId                                                       |
+| username  | string     | Tweet's username                                                     |
+| tweetUrl  | string     | Tweet's URL                                                          |
+| isRetweet | boolean    | True if it is a retweet, False otherwise                             |
+| isPinned  | boolean    | True if it is a pinned tweet, False otherwise                        |
+| time      | datetime   | Published date of the tweet                                          |
+| text      | string     | Content of the tweet                                                 |
+| replies   | integer    | Reply count of the tweet                                             |
+| retweets  | integer    | Retweet count of the tweet                                           |
+| likes     | integer    | Like count of the tweet                                              |
+| entries   | dictionary | Has hashtags, videos, photos and urls keys; each value is a list     |
+
+### → function **get_trends()** -> list
+You can get the trends of your area simply by calling `get_trends()`. It returns a list of strings.
+
+```python
+Python 3.7.3 (default, Mar 26 2019, 21:43:19)
+[GCC 8.2.1 20181127] on linux
+Type "help", "copyright", "credits" or "license" for more information.
+>>> from twitter_scraper import get_trends
+>>> get_trends()
+['#WHUTOT', '#ARSSOU', 'West Ham', '#AtalantaJuve', '#バビロニア', '#おっさんずラブinthasky', 'Southampton', 'Valverde', '#MMKGabAndMax', '#23NParoNacional']
+```
+
+### → class **Profile(username: str)** -> class instance
+You can get a profile's personal information, such as birthday and biography, if it exists and is public. This class takes a username parameter and returns an instance; access the information through its attributes.
+
+```python
+Python 3.7.3 (default, Mar 26 2019, 21:43:19)
+[GCC 8.2.1 20181127] on linux
+Type "help", "copyright", "credits" or "license" for more information.
+>>> from twitter_scraper import Profile
+>>> profile = Profile('bugraisguzar')
+>>> profile.location
+'Istanbul'
+>>> profile.name
+'Buğra İşgüzar'
+>>> profile.username
+'bugraisguzar'
+```
+
+#### → **.to_dict()** -> dict
+
+**to_dict** is a method of the *Profile* class. It returns the profile data as a Python dictionary.
+
+```python
+Python 3.7.3 (default, Mar 26 2019, 21:43:19)
+[GCC 8.2.1 20181127] on linux
+Type "help", "copyright", "credits" or "license" for more information.
+>>> from twitter_scraper import Profile
+>>> profile = Profile("bugraisguzar")
+>>> profile.to_dict()
+{'name': 'Buğra İşgüzar', 'username': 'bugraisguzar', 'birthday': None, 'biography': 'geliştirici@peptr', 'website': 'bisguzar.com', 'profile_photo': 'https://pbs.twimg.com/profile_images/1199305322474745861/nByxOcDZ_400x400.jpg', 'banner_photo': 'https://pbs.twimg.com/profile_banners/1019138658/1555346657/1500x500', 'likes_count': 2512, 'tweets_count': 756, 'followers_count': 483, 'following_count': 255, 'is_verified': False, 'is_private': False, 'user_id': '1019138658'}
+```
+
+## Contributing to twitter-scraper
+To contribute to twitter-scraper, follow these steps:
+
+1. Fork this repository.
+2. Create a branch with a clear name: `git checkout -b <branch_name>`.
+3. Make your changes and commit them: `git commit -m '<commit_message>'`.
+4. Push to the original branch: `git push origin <project_name>/<location>`.
+5. Create the pull request.
+
+Alternatively, see the GitHub documentation on [creating a pull request](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request).
+
+## Contributors
+
+Thanks to the following people who have contributed to this project:
+
+* @kennethreitz (author)
+* @bisguzar (maintainer)
+* @lionking6792
+* @ozanbayram
+
+## Contact
+If you want to contact me, you can reach me at [@bugraisguzar](https://twitter.com/bugraisguzar).
+
+## License
+This project uses the following license: [MIT](https://github.com/bisguzar/twitter-scraper/blob/master/LICENSE).
+
+
+%package help
+Summary:        Development documents and examples for twitter-scraper
+Provides:       python3-twitter-scraper-doc
+%description help
+
+# Twitter Scraper
+
+[🇰🇷 Read Korean Version](https://github.com/bisguzar/twitter-scraper/blob/master/twitter_scraper/__init__.py)
+
+Twitter's API is annoying to work with and has lots of limitations. Luckily, their frontend (JavaScript) has its own API, which I reverse-engineered. No API rate limits. No restrictions. Extremely fast.
+
+You can use this library to get the text of any user's Tweets trivially.
+
+## Prerequisites
+
+Before you begin, ensure you have met the following requirements:
+
+* Internet connection
+* Python 3.6+
+
+## Installing twitter-scraper
+
+If you want to use the latest version, install from source. To install twitter-scraper from source, follow these steps:
+
+Linux and macOS:
+```bash
+git clone https://github.com/bisguzar/twitter-scraper.git
+cd twitter-scraper
+sudo python3 setup.py install
+```
+
+Alternatively, you can install it from PyPI:
+
+```bash
+pip3 install twitter_scraper
+```
+
+## Using twitter_scraper
+
+Just import **twitter_scraper** and call its functions!
+
+### → function **get_tweets(query: str [, pages: int])** -> dictionary
+You can get tweets from a profile or from a hashtag. **get_tweets** takes a username or hashtag as its first parameter (a string) and the number of pages to scan as its second parameter (an integer).
+
+#### Keep in mind:
+* The first parameter must start with # (the number sign) if you want to get tweets from a hashtag.
+* The **pages** parameter is optional.
+
+```python
+Python 3.7.3 (default, Mar 26 2019, 21:43:19)
+[GCC 8.2.1 20181127] on linux
+Type "help", "copyright", "credits" or "license" for more information.
+>>> from twitter_scraper import get_tweets
+>>>
+>>> for tweet in get_tweets('twitter', pages=1):
+...     print(tweet['text'])
+...
+spooky vibe check
+…
+```
+
+It yields a dictionary for each tweet. Keys of the dictionary:
+
+| Key       | Type       | Description                                                          |
+|-----------|------------|----------------------------------------------------------------------|
+| tweetId   | string     | Tweet's identifier; visit twitter.com/USERNAME/ID to view the tweet. |
+| userId    | string     | Tweet's userId                                                       |
+| username  | string     | Tweet's username                                                     |
+| tweetUrl  | string     | Tweet's URL                                                          |
+| isRetweet | boolean    | True if it is a retweet, False otherwise                             |
+| isPinned  | boolean    | True if it is a pinned tweet, False otherwise                        |
+| time      | datetime   | Published date of the tweet                                          |
+| text      | string     | Content of the tweet                                                 |
+| replies   | integer    | Reply count of the tweet                                             |
+| retweets  | integer    | Retweet count of the tweet                                           |
+| likes     | integer    | Like count of the tweet                                              |
+| entries   | dictionary | Has hashtags, videos, photos and urls keys; each value is a list     |
+
+### → function **get_trends()** -> list
+You can get the trends of your area simply by calling `get_trends()`. It returns a list of strings.
+
+```python
+Python 3.7.3 (default, Mar 26 2019, 21:43:19)
+[GCC 8.2.1 20181127] on linux
+Type "help", "copyright", "credits" or "license" for more information.
+>>> from twitter_scraper import get_trends
+>>> get_trends()
+['#WHUTOT', '#ARSSOU', 'West Ham', '#AtalantaJuve', '#バビロニア', '#おっさんずラブinthasky', 'Southampton', 'Valverde', '#MMKGabAndMax', '#23NParoNacional']
+```
+
+### → class **Profile(username: str)** -> class instance
+You can get a profile's personal information, such as birthday and biography, if it exists and is public. This class takes a username parameter and returns an instance; access the information through its attributes.
+
+```python
+Python 3.7.3 (default, Mar 26 2019, 21:43:19)
+[GCC 8.2.1 20181127] on linux
+Type "help", "copyright", "credits" or "license" for more information.
+>>> from twitter_scraper import Profile
+>>> profile = Profile('bugraisguzar')
+>>> profile.location
+'Istanbul'
+>>> profile.name
+'Buğra İşgüzar'
+>>> profile.username
+'bugraisguzar'
+```
+
+#### → **.to_dict()** -> dict
+
+**to_dict** is a method of the *Profile* class. It returns the profile data as a Python dictionary.
+
+```python
+Python 3.7.3 (default, Mar 26 2019, 21:43:19)
+[GCC 8.2.1 20181127] on linux
+Type "help", "copyright", "credits" or "license" for more information.
+>>> from twitter_scraper import Profile
+>>> profile = Profile("bugraisguzar")
+>>> profile.to_dict()
+{'name': 'Buğra İşgüzar', 'username': 'bugraisguzar', 'birthday': None, 'biography': 'geliştirici@peptr', 'website': 'bisguzar.com', 'profile_photo': 'https://pbs.twimg.com/profile_images/1199305322474745861/nByxOcDZ_400x400.jpg', 'banner_photo': 'https://pbs.twimg.com/profile_banners/1019138658/1555346657/1500x500', 'likes_count': 2512, 'tweets_count': 756, 'followers_count': 483, 'following_count': 255, 'is_verified': False, 'is_private': False, 'user_id': '1019138658'}
+```
+
+## Contributing to twitter-scraper
+To contribute to twitter-scraper, follow these steps:
+
+1. Fork this repository.
+2. Create a branch with a clear name: `git checkout -b <branch_name>`.
+3. Make your changes and commit them: `git commit -m '<commit_message>'`.
+4. Push to the original branch: `git push origin <project_name>/<location>`.
+5. Create the pull request.
+
+Alternatively, see the GitHub documentation on [creating a pull request](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request).
+
+## Contributors
+
+Thanks to the following people who have contributed to this project:
+
+* @kennethreitz (author)
+* @bisguzar (maintainer)
+* @lionking6792
+* @ozanbayram
+
+## Contact
+If you want to contact me, you can reach me at [@bugraisguzar](https://twitter.com/bugraisguzar).
+
+## License
+This project uses the following license: [MIT](https://github.com/bisguzar/twitter-scraper/blob/master/LICENSE).
+
+
+%prep
+%autosetup -n twitter-scraper-0.4.4
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+	find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+	find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+	find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+	find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
+if [ -d usr/share/man ]; then
+	find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-twitter-scraper -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Tue Apr 11 2023 Python_Bot <Python_Bot@openeuler.org> - 0.4.4-1
+- Package Spec generated
diff --git a/sources b/sources
new file mode 100644
--- /dev/null
+++ b/sources
@@ -0,0 +1 @@
+9a9f9aba414d324754ffc07fe32759e7 twitter-scraper-0.4.4.tar.gz
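The sources file pairs a checksum with the tarball name. A quick way to check a downloaded tarball against that entry (a minimal sketch; it assumes the sources file and the tarball sit in the current directory, and it reads the 32-character value as an MD5 hash, which its length suggests):

```python
import hashlib

# read the "checksum filename" pair recorded in the sources file
with open('sources', encoding='utf-8') as fp:
    expected, filename = fp.read().split()

# hash the tarball and compare it against the recorded checksum
with open(filename, 'rb') as tarball:
    actual = hashlib.md5(tarball.read()).hexdigest()

print('OK' if actual == expected else 'MISMATCH')
```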