datascientist-0.2.7


Description

A light set of enablers based on Cloudframe's proprietary data science codebase.
| Attribute | Value |
| --- | --- |
| Operating system | - |
| File name | datascientist-0.2.7 |
| Name | datascientist |
| Library version | 0.2.7 |
| Maintainer | [] |
| Maintainer email | [] |
| Author | Cloudframe Analytics |
| Author email | info@cloudframe.io |
| Homepage | https://github.com/cloudframe/datascientist |
| Package URL | https://pypi.org/project/datascientist/ |
| License | MIT |
# The Ephemerai Data Scientist Enabler

At Ephemerai we employ teams of Data Scientists, Data Engineers, and Software Developers. Check us out at [http://ephemer.ai](http://ephemer.ai "Ephemerai website").

If you're interested in joining our team as a Data Scientist, see the [Bid Prediction Repo](https://github.com/ephemer-ai/texas-bid-prediction). There you'll find a fun problem and more info about our evergreen positions for Data Scientists, Data Engineers, and Software Developers.

This package contains some convenience functions meant to help a Data Scientist:

* get data into a format that is useful for training models,
* track experiments as a natural workflow, and
* use common cloud resources like AWS S3.

It is a light version of some of our proprietary enablers that we use to deliver data-informed products to our clients. The `workflow` sub-module contains `tracker`, which is intended to support data science experimentation.

## Installation

`pip install datascientist`

## Dependencies

In addition to the following packages, `datascientist` requires that you have the credentials (et cetera) to perform the operation required. For example, when connecting to a Redshift database you must have the correct credentials stored either as environment variables (see the example bash profile) or in an `rs_creds.json` file located in the home directory.

* `pandas`
* `numpy`
* `psycopg2`
* `PyYAML`

## Structure

```
data-scientist/
|
|-- connections/
|   |-- __init__.py
|   |-- rsconnect.py
|
|-- workflow/
|   |-- __init__.py
|   |-- tracker.py
|
|-- special/
|   |-- __init__.py
|   |-- s3session.py
|
|-- Manifest.in
|-- README.md
|-- setup.py
|-- bash_profile_example
```

## Usage

### `connections.rsconnect`

A set of convenience functions for interacting with a Redshift database. In addition to merely establishing connections and fetching data, this sub-module can do things like:

* infer the schema of your DataFrame,
* CREATE and DROP tables,
* WRITE data to a table,
* perform an UPSERT operation,
* get the names of tables in your cluster,
* et cetera.

For example, upsert data or write a new table:

```
import connections.rsconnect as rs

### Store a local file to S3
bucket, key = rs.df_to_s3(
    df,
    bucket = 'my-bucket',
    key = 'location/on/s3/my-file.csv',
    primary = 'my_primary_key'
)

### If the table exists, perform an upsert operation from the CSV
### If it doesn't, create a table from the CSV
tname = 'my_table'
fields = rs.infer_schema(df)

if rs.table_check(tname):
    _ = rs.upsert_table(
        tname,
        fields,
        bucket = bucket,
        key = key,
        primary = 'my_primary_key'
    )
else:
    _ = rs.create_table(
        tname,
        fields,
        primary = 'my_primary_key'
    )
    _ = rs.write_data(
        tname,
        bucket,
        key
    )
```

Note also that the function to fetch data is `rs.sql_to_df()`.

### `workflow.tracker`

The `workflow.tracker` provides a lightweight tool for tracking a data science workflow. It is intended to help data scientists produce human-readable artifacts and obviate the need for things like complex naming conventions to keep track of the state of modeling experiments. It also has features to enable reproducibility, iterative improvement, and model deployment in a cloud environment (AWS right now).

The fundamental object of this library is the `Project` class. A Project is conceptually a single effort to build a Machine Learning function to address a particular problem. Individual experiments are conceptualized as 'runs'. A Run covers the data science workflow from data conditioning (post ETL and feature generation) through model validation and testing.
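The tracker's API isn't documented on this page, so the snippet below is only an illustrative sketch of the Project/Run idea: `Project` is the class named above, but the constructor arguments and the `new_run`/`log` methods are hypothetical placeholders rather than the library's confirmed interface (see the notebooks linked below for real usage).

```
# Illustrative sketch only: `Project` is named in the README, but the
# constructor arguments and the `new_run`/`log` methods shown here are
# hypothetical placeholders, not the library's documented interface.
from workflow.tracker import Project

project = Project(name='churn-prediction')                          # hypothetical
run = project.new_run(description='baseline logistic regression')   # hypothetical

# ...condition data, train, and validate the model here...

run.log(metric='auc', value=0.87)                                   # hypothetical
```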
For more information and to learn how to use the Workflow Tracker, see the sample notebooks in the ['cloud-event-modeling'](https://github.com/ephemer-ai/cloud-event-modeling/) repository.

### `special.s3session`

The `special.s3session` module contains a set of convenience functions for creating an S3 session with credentials, checking a bucket's existence, listing a bucket's objects, and the like.
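The individual function names in `special.s3session` aren't listed on this page, so rather than guess at them, the sketch below performs the same kinds of operations directly with `boto3` (which appears in the requirements below): create a session, check that a bucket exists, and list its objects. Treat it as an illustration of what the convenience functions wrap, not as the module's API.

```
# Roughly what special.s3session's convenience functions wrap, done here
# directly with boto3 (a listed dependency).
import boto3
from botocore.exceptions import ClientError

session = boto3.session.Session()   # picks up AWS credentials from the environment/config
s3 = session.client('s3')

bucket = 'my-bucket'                # placeholder bucket name
try:
    s3.head_bucket(Bucket=bucket)   # raises ClientError if missing or not accessible
    print(f'{bucket} exists')
except ClientError:
    print(f'{bucket} is missing or not accessible')

# List up to 1,000 objects under a prefix
resp = s3.list_objects_v2(Bucket=bucket, Prefix='location/on/s3/')
for obj in resp.get('Contents', []):
    print(obj['Key'], obj['Size'])
```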


Requirements

| Name | Version |
| --- | --- |
| pandas | - |
| numpy | - |
| boto3 | - |
| psycopg2-binary | - |
| PyYAML | - |


How to install


Install the datascientist-0.2.7 wheel (.whl) package:

    pip install datascientist-0.2.7.whl


Install the datascientist-0.2.7 source distribution (.tar.gz) package:

    pip install datascientist-0.2.7.tar.gz
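
As a quick sanity check after installation, try importing the sub-modules listed in the package structure above; the import paths below mirror the README's own usage example (`import connections.rsconnect as rs`), so adjust them if your installed layout differs:

    # Sanity check: import paths mirror the README's usage example;
    # adjust if your installed layout differs.
    import connections.rsconnect
    import workflow.tracker
    import special.s3session
    print('datascientist sub-modules imported successfully')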