%global _empty_manifest_terminate_build 0
Name:           python-bcpy
Version:        0.1.8
Release:        1
Summary:        Microsoft SQL Server bcp (Bulk Copy) wrapper
License:        MIT
URL:            https://github.com/titan550/bcpy
Source0:        https://mirrors.nju.edu.cn/pypi/web/packages/f5/55/2ce7ad290d4907cd420424a8e685c2b39c453cebda3aa0e549a424fddf9d/bcpy-0.1.8.tar.gz
BuildArch:      noarch

%description
# bcpy
## What is it?

This package is a wrapper for Microsoft's SQL Server bcp utility. The database drivers currently available in Python are not fast enough for transferring millions of records (yes, I have tried [pyodbc fast_executemany](https://github.com/mkleehammer/pyodbc/wiki/Features-beyond-the-DB-API#fast_executemany)). Despite the IO hits, the fastest option by far is saving the data to a CSV file on the file system (preferably a /dev/shm tmpfs) and using the bcp utility to transfer the CSV file to SQL Server.

## How Can I Install It?

1. Make sure your computer has the [requirements](#requirements).
1. Download and install this package from the PyPI repository by running the command below.

```bash
pip install bcpy
```

## Examples

The following examples show how to load (1) flat files and (2) DataFrame objects to SQL Server using this package.

### Flat File

The following example assumes that you have a comma-separated file with no qualifier at the path 'tests/data1.csv'. The code below sends the file to SQL Server.

```python
import bcpy


sql_config = {
    'server': 'sql_server_hostname',
    'database': 'database_name',
    'username': 'test_user',
    'password': 'test_user_password1234'
}
sql_table_name = 'test_data1'
csv_file_path = 'tests/data1.csv'
flat_file = bcpy.FlatFile(qualifier='', path=csv_file_path)
sql_table = bcpy.SqlTable(sql_config, table=sql_table_name)
flat_file.to_sql(sql_table)
```

### DataFrame

The following example creates a DataFrame with 100 rows and 4 columns populated with random data and then sends it to SQL Server.
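Before the full example, here is a minimal, bcpy-independent sketch of the CSV staging step that the tmpfs recommendation above refers to. The file name is illustrative; /dev/shm is used when present, with a temp-directory fallback on systems without it.

```python
import os
import tempfile

import pandas as pd

# Stage a DataFrame as CSV on tmpfs (/dev/shm) when available, so a
# subsequent bcp transfer reads from memory-backed storage instead of disk.
staging_dir = "/dev/shm" if os.path.isdir("/dev/shm") else tempfile.gettempdir()
csv_path = os.path.join(staging_dir, "bcpy_staging.csv")  # illustrative name

df = pd.DataFrame({"A": [1, 2], "B": [3, 4]})
df.to_csv(csv_path, index=False)  # no index column; bcp expects data columns only
```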
```python
import bcpy
import numpy as np
import pandas as pd


sql_config = {
    'server': 'sql_server_hostname',
    'database': 'database_name',
    'username': 'test_user',
    'password': 'test_user_password1234'
}
table_name = 'test_dataframe'
df = pd.DataFrame(np.random.randint(-100, 100, size=(100, 4)),
                  columns=list('ABCD'))
bdf = bcpy.DataFrame(df)
sql_table = bcpy.SqlTable(sql_config, table=table_name)
bdf.to_sql(sql_table)
```

## Requirements

You need a working version of Microsoft's bcp utility installed on your system, and your PATH environment variable must contain the directory of the bcp utility. The following are installation tutorials for different operating systems.

- [Dockerfile (Ubuntu 18.04)](./bcp.Dockerfile)
- [Linux](https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-setup-tools)
- [Mac](https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-setup-tools?view=sql-server-2017#macos)
- [Windows](https://docs.microsoft.com/en-us/sql/tools/bcp-utility)

%package -n python3-bcpy
Summary:        Microsoft SQL Server bcp (Bulk Copy) wrapper
Provides:       python-bcpy
BuildRequires:  python3-devel
BuildRequires:  python3-setuptools
BuildRequires:  python3-pip

%description -n python3-bcpy
# bcpy
## What is it?

This package is a wrapper for Microsoft's SQL Server bcp utility. The database drivers currently available in Python are not fast enough for transferring millions of records (yes, I have tried [pyodbc fast_executemany](https://github.com/mkleehammer/pyodbc/wiki/Features-beyond-the-DB-API#fast_executemany)). Despite the IO hits, the fastest option by far is saving the data to a CSV file on the file system (preferably a /dev/shm tmpfs) and using the bcp utility to transfer the CSV file to SQL Server.

## How Can I Install It?

1. Make sure your computer has the [requirements](#requirements).
1. Download and install this package from the PyPI repository by running the command below.

```bash
pip install bcpy
```

## Examples

The following examples show how to load (1) flat files and (2) DataFrame objects to SQL Server using this package.

### Flat File

The following example assumes that you have a comma-separated file with no qualifier at the path 'tests/data1.csv'. The code below sends the file to SQL Server.

```python
import bcpy


sql_config = {
    'server': 'sql_server_hostname',
    'database': 'database_name',
    'username': 'test_user',
    'password': 'test_user_password1234'
}
sql_table_name = 'test_data1'
csv_file_path = 'tests/data1.csv'
flat_file = bcpy.FlatFile(qualifier='', path=csv_file_path)
sql_table = bcpy.SqlTable(sql_config, table=sql_table_name)
flat_file.to_sql(sql_table)
```

### DataFrame

The following example creates a DataFrame with 100 rows and 4 columns populated with random data and then sends it to SQL Server.
```python
import bcpy
import numpy as np
import pandas as pd


sql_config = {
    'server': 'sql_server_hostname',
    'database': 'database_name',
    'username': 'test_user',
    'password': 'test_user_password1234'
}
table_name = 'test_dataframe'
df = pd.DataFrame(np.random.randint(-100, 100, size=(100, 4)),
                  columns=list('ABCD'))
bdf = bcpy.DataFrame(df)
sql_table = bcpy.SqlTable(sql_config, table=table_name)
bdf.to_sql(sql_table)
```

## Requirements

You need a working version of Microsoft's bcp utility installed on your system, and your PATH environment variable must contain the directory of the bcp utility. The following are installation tutorials for different operating systems.

- [Dockerfile (Ubuntu 18.04)](./bcp.Dockerfile)
- [Linux](https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-setup-tools)
- [Mac](https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-setup-tools?view=sql-server-2017#macos)
- [Windows](https://docs.microsoft.com/en-us/sql/tools/bcp-utility)

%package help
Summary:        Development documents and examples for bcpy
Provides:       python3-bcpy-doc

%description help
# bcpy
## What is it?

This package is a wrapper for Microsoft's SQL Server bcp utility. The database drivers currently available in Python are not fast enough for transferring millions of records (yes, I have tried [pyodbc fast_executemany](https://github.com/mkleehammer/pyodbc/wiki/Features-beyond-the-DB-API#fast_executemany)). Despite the IO hits, the fastest option by far is saving the data to a CSV file on the file system (preferably a /dev/shm tmpfs) and using the bcp utility to transfer the CSV file to SQL Server.

## How Can I Install It?

1. Make sure your computer has the [requirements](#requirements).
1. Download and install this package from the PyPI repository by running the command below.

```bash
pip install bcpy
```

## Examples

The following examples show how to load (1) flat files and (2) DataFrame objects to SQL Server using this package.

### Flat File

The following example assumes that you have a comma-separated file with no qualifier at the path 'tests/data1.csv'. The code below sends the file to SQL Server.

```python
import bcpy


sql_config = {
    'server': 'sql_server_hostname',
    'database': 'database_name',
    'username': 'test_user',
    'password': 'test_user_password1234'
}
sql_table_name = 'test_data1'
csv_file_path = 'tests/data1.csv'
flat_file = bcpy.FlatFile(qualifier='', path=csv_file_path)
sql_table = bcpy.SqlTable(sql_config, table=sql_table_name)
flat_file.to_sql(sql_table)
```

### DataFrame

The following example creates a DataFrame with 100 rows and 4 columns populated with random data and then sends it to SQL Server.
```python
import bcpy
import numpy as np
import pandas as pd


sql_config = {
    'server': 'sql_server_hostname',
    'database': 'database_name',
    'username': 'test_user',
    'password': 'test_user_password1234'
}
table_name = 'test_dataframe'
df = pd.DataFrame(np.random.randint(-100, 100, size=(100, 4)),
                  columns=list('ABCD'))
bdf = bcpy.DataFrame(df)
sql_table = bcpy.SqlTable(sql_config, table=table_name)
bdf.to_sql(sql_table)
```

## Requirements

You need a working version of Microsoft's bcp utility installed on your system, and your PATH environment variable must contain the directory of the bcp utility. The following are installation tutorials for different operating systems.

- [Dockerfile (Ubuntu 18.04)](./bcp.Dockerfile)
- [Linux](https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-setup-tools)
- [Mac](https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-setup-tools?view=sql-server-2017#macos)
- [Windows](https://docs.microsoft.com/en-us/sql/tools/bcp-utility)

%prep
%autosetup -n bcpy-0.1.8

%build
%py3_build

%install
%py3_install
install -d -m755 %{buildroot}/%{_pkgdocdir}
if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
pushd %{buildroot}
if [ -d usr/lib ]; then
    find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/lib64 ]; then
    find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/bin ]; then
    find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/sbin ]; then
    find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
fi
touch doclist.lst
if [ -d usr/share/man ]; then
    find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
fi
popd
mv %{buildroot}/filelist.lst .
mv %{buildroot}/doclist.lst .
%files -n python3-bcpy -f filelist.lst
%dir %{python3_sitelib}/*

%files help -f doclist.lst
%{_docdir}/*

%changelog
* Mon Apr 10 2023 Python_Bot - 0.1.8-1
- Package Spec generated