Project Background
==================
A Python package that helps data engineers and data scientists accelerate data-pipeline development
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The goal of this Python project is to build a set of wrappers that can be
reused for building data pipelines from:

- Relational databases: postgres, mysql, greenplum, redshift, etc.
- NoSQL databases: hive, mongo, etc.
- Messaging sources and caches: kafka, redis, rabbitmq, etc.
- Cloud service providers: salesforce, mixpanel, jira, google-drive,
  delighted, wootric, etc.

Installation
------------
There are 3 ways to install the dattasa package
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

1) The easiest way is to install from pypi using pip

   ::

      pip install dattasa

2) Download from github and build from scratch

   ::

      git clone git@github.com:kartikra/dattasa.git
      cd dattasa
      python setup.py build
      python setup.py clean
      python setup.py install

3) Download from github and install using pip

   ::

      git clone git@github.com:kartikra/dattasa.git
      cd dattasa
      pip install -e .
      pip install -U -e .   # if upgrading

Config Files
------------
By default dattasa expects the config files to be in the home directory
of the user. These can be overridden. See the links to sample code in the
README file below to find out more. There are 2 yaml config files:

- database.yaml - consists of database credentials and api keys needed for
  making connections. See
  `sample database config <documentation/database.yaml>`__
- ftpsites.yaml - needed for performing sftp transfers. See
  `sample ftpsites config <documentation/ftpsites.yaml>`__
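
As a purely illustrative sketch, a database.yaml entry might look like the
following; the section and key names here are assumptions, so rely on the
linked sample config for the actual layout.

::

    # hypothetical entry -- key names are illustrative only
    my_postgres_db:
        type: postgres
        host: localhost
        port: 5432
        database: analytics
        user: pipeline_user
        password: change_me
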
Environment Variables
---------------------
The dattasa package relies on the following environment variables. Make sure
to set these in your bash profile (a sample export block follows the list):

- GPLOAD_HOME: path to the gpload package (needed only if using gpload
  utilities for greenplum or redshift)
- PROJECT_HOME: path to the python project directory
- PROJECT_HOME/python_bash_scripts: python scripts to invoke gpload (needed
  only if using gpload utilities for greenplum or redshift)
- SQL_DIR: place to keep all sql scripts
- TEMP_DIR: all temp files are created in this folder
- LOG_DIR: all log files are created in this folder
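
For example, the following could go in ``~/.bash_profile``; the paths shown
are placeholders and should point at your own directories.

::

    export GPLOAD_HOME=/usr/local/greenplum-loaders     # only if using gpload utilities
    export PROJECT_HOME=$HOME/projects/my-data-project
    export SQL_DIR=$PROJECT_HOME/sql
    export TEMP_DIR=$PROJECT_HOME/tmp
    export LOG_DIR=$PROJECT_HOME/logs
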
Description of classes
----------------------
v1.0 of the package comprises the following classes. Please see the links
to sample code for details on how to use each of them.

.. list-table::
   :header-rows: 1
   :widths: 20 50 30

   * - class
     - Description
     - Sample Code
   * - environment
     - Lets you source all the os environment variables
     - `see first row in mongo example <documentation/mongo_example.ipynb>`__
   * - postgres_client
     - Lets you use psql and gpload utilities provided by
       `pivotal greenplum <https://gpdb.docs.pivotal.io/4350/common/client-docs-unix.html>`__.
       Make connections to postgres / greenplum database using psycopg2 or
       sqlalchemy. Use the connections to interact with the database in an
       interactive program or run queries from a sql file using the connection
     - `sample postgres code <documentation/postgres_client.ipynb>`__
   * - greenplum_client (inherits postgres_client)
     - Lets you use psql and gpload utilities provided by
       `pivotal greenplum <https://gpdb.docs.pivotal.io/4350/common/client-docs-unix.html>`__.
       Make connections to postgres / greenplum database using psycopg2 or
       sqlalchemy. Use the connections to interact with the database in an
       interactive program or run queries from a sql file using the connection
     - `sample greenplum code <documentation/greenplum_client.ipynb>`__
   * - mysql_client
     - Lets you use mysql and other methods provided by the PyMySQL package
     - `sample mysql code <documentation/mysql_client.ipynb>`__
   * - file_processor
     - Create sftp connections using the
       `paramiko <https://github.com/paramiko/paramiko.git>`__ package.
       Other file manipulations like row_count, encryption, archive (File class)
     - `see file processing example <documentation/file_processing.ipynb>`__
   * - notification
     - Send email notifications
     -
   * - mongo_client
     - Load data to mongodb using bulk load. Run javascript queries
     - `see mongo example <documentation/mongo_example.ipynb>`__
   * - redis_client
     - Read data from a redis cache or load a redis cache
     - `see redis example <documentation/redis_example.ipynb>`__
   * - kafka_system
     - Currently allows Publisher and Consumer to use kafka in batch mode
     - `see kafka example <documentation/kafka_example.ipynb>`__
   * - rabbitmq_system
     - Currently has Publisher to publish messages in rabbitmq
     -
   * - mixpanel_client
     - Connect to the mixpanel api and fetch data using jql or export raw
       events data.
       `mixpanel api documentation <https://mixpanel.com/help/reference/jql/api-reference>`__
     - `see mixpanel section in api example <documentation/api_examples.ipynb>`__
   * - salesforce_client
     - Create a connection to salesforce using the
       `simple_salesforce <https://github.com/simple-salesforce/simple-salesforce>`__ package
     - `see salesforce section in api example <documentation/api_examples.ipynb>`__
   * - delighted_client
     - Gets nps scores and survey responses from delighted.
       `api documentation <https://delighted.com/docs/api/>`__
     - `see delighted section in api example <documentation/api_examples.ipynb>`__
   * - wootric_client
     - Gets nps scores and survey responses from wootric.
       `api documentation <http://docs.wootric.com/api>`__
     - `see wootric section in api example <documentation/api_examples.ipynb>`__
   * - dag_controller
     - Functions needed to integrate this package within an airflow dag.
       `airflow documentation <https://airflow.apache.org/>`__ and
       `github project <https://github.com/apache/incubator-airflow>`__
     -

data_pipeline class
~~~~~~~~~~~~~~~~~~~
This is the main class that's accessible to other projects. The data
pipeline consists of data from components and APIs. Each data-processor
object can consume individual data streams and process them.
data_pipeline decides which modules to call based on the type of database
(as defined in the config file). data_pipeline comprises 3 classes
(a usage sketch follows the list):

- DataComponent: each database connection is treated as a data-component
  object. See the examples for postgres, mysql, greenplum, etc. above.
- APICall: each api call is an apicall object. See the examples for
  mixpanel, delighted, salesforce and wootric above.
- DataProcessor: transfers and loads data between data components.
  `see examples <documentation/data_processor.ipynb>`__
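
The snippet below is a hypothetical sketch of how these classes might be
combined; the constructor arguments and method names are assumptions made
for illustration, so consult the linked notebooks for the actual API.

::

    from dattasa import data_pipeline

    # Hypothetical usage -- argument and method names are illustrative only.
    # Each DataComponent wraps one connection defined in database.yaml, and a
    # DataProcessor moves data between two such components.
    source = data_pipeline.DataComponent('my_mysql_db')       # hypothetical entry in database.yaml
    target = data_pipeline.DataComponent('my_greenplum_db')   # hypothetical entry in database.yaml
    processor = data_pipeline.DataProcessor(source, target)
    processor.transfer('source_table', 'target_table')        # illustrative method name
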
Adding ipython notebook files to github
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Use git lfs. See the
`documentation <https://git-lfs.github.com/?utm_source=github_site&utm_medium=jupyter_blog_link&utm_campaign=gitlfs>`__.
The full command sequence is shown after this list.

- if using mac, install git-lfs using brew: ``brew install git-lfs``
- install lfs: ``git lfs install``
- track ipynb files in your project. go to the project folder and do
  ``git lfs track "*.ipynb"``
- add ``.ipynb_checkpoints/`` to the .gitignore file
- finally, add the .gitattributes file: ``git add .gitattributes``
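
Putting those steps together (this assumes a mac with Homebrew; adjust the
install step for other platforms, and replace the project path placeholder):

::

    brew install git-lfs             # install the git-lfs binary (mac)
    git lfs install                  # enable lfs hooks for your user
    cd /path/to/your/project         # placeholder path
    git lfs track "*.ipynb"          # track notebooks with lfs
    echo ".ipynb_checkpoints/" >> .gitignore
    git add .gitattributes
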
Deploying code in pypi
~~~~~~~~~~~~~~~~~~~~~~

- build the code:
  ``python setup.py build && python setup.py clean && python setup.py install``
- push to pypitest: ``python setup.py sdist upload -r pypitest``
- push to pypi prod: ``python setup.py sdist upload -r pypi``
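
The ``-r pypitest`` and ``-r pypi`` flags refer to repository sections defined
in a ``~/.pypirc`` file. A minimal sketch (the usernames are placeholders, and
the repository URLs shown are the current PyPI upload endpoints):

::

    [distutils]
    index-servers =
        pypi
        pypitest

    [pypi]
    repository = https://upload.pypi.org/legacy/
    username = your-username

    [pypitest]
    repository = https://test.pypi.org/legacy/
    username = your-username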