datasets-2.9.0



Description

HuggingFace community-driven open-source library of datasets
| Feature | Value |
|---|---|
| Operating system | - |
| File name | datasets-2.9.0 |
| Name | datasets |
| Library version | 2.9.0 |
| Maintainer | [] |
| Maintainer email | [] |
| Author | HuggingFace Inc. |
| Author email | thomas@huggingface.co |
| Homepage | https://github.com/huggingface/datasets |
| PyPI URL | https://pypi.org/project/datasets/ |
| License | Apache 2.0 |
<p align="center">
  <picture>
    <source media="(prefers-color-scheme: dark)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/datasets-logo-dark.svg">
    <source media="(prefers-color-scheme: light)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/datasets-logo-light.svg">
    <img alt="Hugging Face Datasets Library" src="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/datasets-logo-light.svg" width="352" height="59" style="max-width: 100%;">
  </picture>
  <br/>
  <br/>
</p>

<p align="center">
  <a href="https://github.com/huggingface/datasets/actions/workflows/ci.yml?query=branch%3Amain"><img alt="Build" src="https://github.com/huggingface/datasets/actions/workflows/ci.yml/badge.svg?branch=main"></a>
  <a href="https://github.com/huggingface/datasets/blob/main/LICENSE"><img alt="GitHub" src="https://img.shields.io/github/license/huggingface/datasets.svg?color=blue"></a>
  <a href="https://huggingface.co/docs/datasets/index.html"><img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/datasets/index.html.svg?down_color=red&down_message=offline&up_message=online"></a>
  <a href="https://github.com/huggingface/datasets/releases"><img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/datasets.svg"></a>
  <a href="https://huggingface.co/datasets/"><img alt="Number of datasets" src="https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/datasets&color=brightgreen"></a>
  <a href="CODE_OF_CONDUCT.md"><img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-2.0-4baaaa.svg"></a>
  <a href="https://zenodo.org/badge/latestdoi/250213286"><img src="https://zenodo.org/badge/250213286.svg" alt="DOI"></a>
</p>

🤗 Datasets is a lightweight library providing **two** main features:

- **One-line dataloaders for many public datasets**: one-liners to download and pre-process any of the ![number of datasets](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/datasets&color=brightgreen) major public datasets (image datasets, audio datasets, text datasets in 467 languages and dialects, etc.) provided on the [HuggingFace Datasets Hub](https://huggingface.co/datasets). With a simple command like `squad_dataset = load_dataset("squad")`, get any of these datasets ready to use in a dataloader for training/evaluating an ML model (Numpy/Pandas/PyTorch/TensorFlow/JAX).
- **Efficient data pre-processing**: simple, fast and reproducible data pre-processing for the public datasets as well as your own local datasets in CSV, JSON, text, PNG, JPEG, WAV, MP3, Parquet, etc. With simple commands like `processed_dataset = dataset.map(process_example)`, efficiently prepare the dataset for inspection and ML model evaluation and training.

[🎓 **Documentation**](https://huggingface.co/docs/datasets/) [🕹 **Colab tutorial**](https://colab.research.google.com/github/huggingface/datasets/blob/main/notebooks/Overview.ipynb)

[🔎 **Find a dataset in the Hub**](https://huggingface.co/datasets) [🌟 **Add a new dataset to the Hub**](https://huggingface.co/docs/datasets/share.html)

<h3 align="center">
  <a href="https://hf.co/course"><img src="https://raw.githubusercontent.com/huggingface/datasets/main/docs/source/imgs/course_banner.png"></a>
</h3>

🤗 Datasets is designed to let the community easily add and share new datasets.
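To make the two features above concrete for local data as well as Hub datasets, here is a minimal sketch; the file name `my_data.csv` and the `text` column are hypothetical examples, not part of the library:

```python
from datasets import load_dataset

# Load a local CSV file with the same one-line API used for Hub datasets
# ("my_data.csv" and its "text" column are hypothetical placeholders)
my_dataset = load_dataset("csv", data_files="my_data.csv")

# Pre-process it with map(); here we add a derived "length" column
my_dataset = my_dataset.map(lambda x: {"length": len(x["text"])})
```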
🤗 Datasets has many additional interesting features:

- Thrive on large datasets: 🤗 Datasets naturally frees the user from RAM limitations; all datasets are memory-mapped using an efficient zero-serialization-cost backend (Apache Arrow).
- Smart caching: never wait for your data to be processed several times.
- Lightweight and fast, with a transparent and Pythonic API (multi-processing/caching/memory-mapping).
- Built-in interoperability with NumPy, pandas, PyTorch, TensorFlow 2 and JAX.
- Native support for audio and image data.
- Enable streaming mode to save disk space and start iterating over the dataset immediately.

🤗 Datasets originated from a fork of the awesome [TensorFlow Datasets](https://github.com/tensorflow/datasets), and the HuggingFace team wants to deeply thank the TensorFlow Datasets team for building this amazing library. More details on the differences between 🤗 Datasets and `tfds` can be found in the section [Main differences between 🤗 Datasets and `tfds`](#main-differences-between--datasets-and-tfds).

# Installation

## With pip

🤗 Datasets can be installed from PyPI and should be installed in a virtual environment (venv or conda, for instance):

```bash
pip install datasets
```

## With conda

🤗 Datasets can be installed using conda as follows:

```bash
conda install -c huggingface -c conda-forge datasets
```

Follow the installation pages of TensorFlow and PyTorch to see how to install them with conda.

For more details on installation, check the installation page in the documentation: https://huggingface.co/docs/datasets/installation

## Installation to use with PyTorch/TensorFlow/pandas

If you plan to use 🤗 Datasets with PyTorch (1.0+), TensorFlow (2.2+) or pandas, you should also install PyTorch, TensorFlow or pandas.

For more details on using the library with NumPy, pandas, PyTorch or TensorFlow, check the quick start page in the documentation: https://huggingface.co/docs/datasets/quickstart

# Usage

🤗 Datasets is made to be very simple to use. The main methods are:

- `datasets.list_datasets()` to list the available datasets
- `datasets.load_dataset(dataset_name, **kwargs)` to instantiate a dataset

This library can be used for text/image/audio/etc. datasets.
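Before the quick-start example below, here is a minimal sketch of the framework interoperability mentioned in the feature list above, assuming PyTorch is installed (`with_format` also accepts `"numpy"`, `"pandas"`, `"tf"` and `"jax"`):

```python
from datasets import load_dataset

# Load the training split of SQuAD (any dataset works the same way)
dataset = load_dataset("squad", split="train")

# Return a view over the same memory-mapped Arrow data whose numeric
# columns are presented as torch tensors instead of Python lists
torch_dataset = dataset.with_format("torch")
```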
Here is a quick example to load a text dataset:

```python
from datasets import list_datasets, load_dataset

# Print all the available datasets
print(list_datasets())

# Load a dataset and print the first example in the training set
squad_dataset = load_dataset('squad')
print(squad_dataset['train'][0])

# Process the dataset - add a column with the length of the context texts
dataset_with_length = squad_dataset.map(lambda x: {"length": len(x["context"])})

# Process the dataset - tokenize the context texts (using a tokenizer from the 🤗 Transformers library)
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')

tokenized_dataset = squad_dataset.map(lambda x: tokenizer(x['context']), batched=True)
```

If your dataset is bigger than your disk, or if you don't want to wait to download the data, you can use streaming:

```python
# If you want to use the dataset immediately and efficiently stream the data as you iterate over the dataset
image_dataset = load_dataset('cifar100', streaming=True)
for example in image_dataset["train"]:
    break
```

For more details on using the library, check the quick start page in the documentation: https://huggingface.co/docs/datasets/quickstart.html and the specific pages on:

- Loading a dataset: https://huggingface.co/docs/datasets/loading
- What's in a Dataset: https://huggingface.co/docs/datasets/access
- Processing data with 🤗 Datasets: https://huggingface.co/docs/datasets/process
- Processing audio data: https://huggingface.co/docs/datasets/audio_process
- Processing image data: https://huggingface.co/docs/datasets/image_process
- Processing text data: https://huggingface.co/docs/datasets/nlp_process
- Streaming a dataset: https://huggingface.co/docs/datasets/stream
- Writing your own dataset loading script: https://huggingface.co/docs/datasets/dataset_script
- etc.

Another introduction to 🤗 Datasets is the tutorial on Google Colab here: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/datasets/blob/main/notebooks/Overview.ipynb)

# Add a new dataset to the Hub

We have a very detailed step-by-step guide to add a new dataset to the ![number of datasets](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/datasets&color=brightgreen) datasets already provided on the [HuggingFace Datasets Hub](https://huggingface.co/datasets). You can find:

- [how to upload a dataset to the Hub using your web browser or Python](https://huggingface.co/docs/datasets/upload_dataset) and also
- [how to upload it using Git](https://huggingface.co/docs/datasets/share).

# Main differences between 🤗 Datasets and `tfds`

If you are familiar with the great TensorFlow Datasets, here are the main differences between 🤗 Datasets and `tfds`:

- the scripts in 🤗 Datasets are not provided within the library but are queried, downloaded/cached and dynamically loaded upon request;
- 🤗 Datasets also provides evaluation metrics in a similar fashion to the datasets, i.e. as dynamically installed scripts with a unified API. This gives access to the pair of a benchmark dataset and a benchmark metric, for instance for benchmarks like [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) or [GLUE](https://gluebenchmark.com/);
- the backend serialization of 🤗 Datasets is based on [Apache Arrow](https://arrow.apache.org/) instead of TF Records, and leverages Python dataclasses for info and features, with some diverging features (we mostly don't do encoding and store the raw data as much as possible in the backend serialization cache);
- the user-facing dataset object of 🤗 Datasets is not a `tf.data.Dataset` but a built-in framework-agnostic dataset class with methods inspired by what we like in `tf.data` (like a `map()` method). It basically wraps a memory-mapped Arrow table cache.

# Disclaimers

Similar to TensorFlow Datasets, 🤗 Datasets is a utility library that downloads and prepares public datasets. We do not host or distribute most of these datasets, vouch for their quality or fairness, or claim that you have license to use them. It is your responsibility to determine whether you have permission to use a dataset under its license.

Moreover, 🤗 Datasets may run Python code defined by the dataset authors to parse certain data formats or structures. For security reasons, we ask users to:

- check the dataset scripts they're going to run beforehand, and
- pin the `revision` of the repositories they use.

If you're a dataset owner and wish to update any part of it (description, citation, license, etc.), or do not want your dataset to be included in the Hugging Face Hub, please get in touch by opening a discussion or a pull request in the Community tab of the dataset page. Thanks for your contribution to the ML community!

## BibTeX

If you want to cite our 🤗 Datasets library, you can use our [paper](https://arxiv.org/abs/2109.02846):

```bibtex
@inproceedings{lhoest-etal-2021-datasets,
    title = "Datasets: A Community Library for Natural Language Processing",
    author = "Lhoest, Quentin and Villanova del Moral, Albert and Jernite, Yacine and Thakur, Abhishek and von Platen, Patrick and Patil, Suraj and Chaumond, Julien and Drame, Mariama and Plu, Julien and Tunstall, Lewis and Davison, Joe and {\v{S}}a{\v{s}}ko, Mario and Chhablani, Gunjan and Malik, Bhavitvya and Brandeis, Simon and Le Scao, Teven and Sanh, Victor and Xu, Canwen and Patry, Nicolas and McMillan-Major, Angelina and Schmid, Philipp and Gugger, Sylvain and Delangue, Cl{\'e}ment and Matussi{\`e}re, Th{\'e}o and Debut, Lysandre and Bekman, Stas and Cistac, Pierric and Goehringer, Thibault and Mustar, Victor and Lagunas, Fran{\c{c}}ois and Rush, Alexander and Wolf, Thomas",
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-demo.21",
    pages = "175--184",
    abstract = "The scale, variety, and quantity of publicly-available NLP datasets has grown rapidly as researchers propose new tasks, larger models, and novel benchmarks. Datasets is a community library for contemporary NLP designed to support this ecosystem. Datasets aims to standardize end-user interfaces, versioning, and documentation, while providing a lightweight front-end that behaves similarly for small datasets as for internet-scale corpora. The design of the library incorporates a distributed, community-driven approach to adding datasets and documenting usage. After a year of development, the library now includes more than 650 unique datasets, has more than 250 contributors, and has helped support a variety of novel cross-dataset research projects and shared tasks. The library is available at https://github.com/huggingface/datasets.",
    eprint={2109.02846},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
}
```

If you need to cite a specific version of our 🤗 Datasets library for reproducibility, you can use the corresponding version Zenodo DOI from this [list](https://zenodo.org/search?q=conceptrecid:%224817768%22&sort=-version&all_versions=True).
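Returning to the security note in the disclaimers above, here is a minimal sketch of pinning the `revision` of a dataset repository when loading it; the revision string below is a placeholder, not a real tag of this dataset:

```python
from datasets import load_dataset

# Pin the dataset to a specific git revision (a tag, branch or commit sha)
# so later updates to the repository cannot change the code or data you run.
# "some-tag-or-commit-sha" is a placeholder value.
dataset = load_dataset("squad", revision="some-tag-or-commit-sha")
```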


Requirements

| Name | Version |
|---|---|
| numpy | >=1.17 |
| pyarrow | >=8.0.0 |
| dill | <0.3.7,>=0.3.0 |
| pandas | - |
| requests | >=2.19.0 |
| tqdm | >=4.62.1 |
| xxhash | - |
| multiprocess | - |
| fsspec[http] | >=2021.11.1 |
| aiohttp | - |
| huggingface-hub | <1.0.0,>=0.11.0 |
| packaging | - |
| responses | <0.19 |
| pyyaml | >=5.1 |
| importlib-metadata | - |
| apache-beam | <2.44.0,>=2.26.0 |
| soundfile | >=0.12.1 |
| librosa | - |
| numpy | ==1.18.5 |
| tensorflow | ==2.3.0 |
| torch | ==1.7.1 |
| transformers | ==3.0.2 |
| protobuf | ==3.20.3 |
| absl-py | - |
| pytest | - |
| pytest-datadir | - |
| pytest-xdist | - |
| elasticsearch | <8.0.0 |
| faiss-cpu | >=1.6.4 |
| lz4 | - |
| pyspark | >=3.4 |
| py7zr | - |
| rarfile | >=4.0 |
| sqlalchemy | <2.0.0 |
| torch | - |
| soundfile | >=0.12.1 |
| transformers | - |
| zstandard | - |
| Pillow | >=6.2.1 |
| librosa | - |
| black | ~=23.1 |
| ruff | >=0.0.241 |
| pyyaml | >=5.3.1 |
| s3fs | - |
| apache-beam | <2.44.0,>=2.26.0 |
| s3fs | >=2021.11.1 |
| tiktoken | - |
| tensorflow | !=2.6.0,!=2.6.1,>=2.3 |
| tensorflow-macos | - |
| s3fs | - |
| jax | !=0.3.2,<=0.3.25,>=0.2.8 |
| jaxlib | <=0.3.25,>=0.1.65 |
| bert-score | >=0.3.6 |
| jiwer | - |
| langdetect | - |
| mauve-text | - |
| nltk | - |
| rouge-score | - |
| sacrebleu | - |
| sacremoses | - |
| scikit-learn | - |
| scipy | - |
| sentencepiece | - |
| seqeval | - |
| spacy | >=3.0.0 |
| tldextract | - |
| toml | >=0.10.1 |
| typer | <0.5.0 |
| requests-file | >=1.5.1 |
| tldextract | >=3.1.0 |
| texttable | >=1.6.3 |
| Werkzeug | >=1.0.1 |
| six | ~=1.15.0 |
| black | ~=23.1 |
| ruff | >=0.0.241 |
| pyyaml | >=5.3.1 |
| s3fs | - |
| tensorflow | !=2.6.0,!=2.6.1,>=2.2.0 |
| tensorflow-macos | - |
| tensorflow-gpu | !=2.6.0,!=2.6.1,>=2.2.0 |
| absl-py | - |
| pytest | - |
| pytest-datadir | - |
| pytest-xdist | - |
| elasticsearch | <8.0.0 |
| faiss-cpu | >=1.6.4 |
| lz4 | - |
| pyspark | >=3.4 |
| py7zr | - |
| rarfile | >=4.0 |
| sqlalchemy | <2.0.0 |
| torch | - |
| soundfile | >=0.12.1 |
| transformers | - |
| zstandard | - |
| Pillow | >=6.2.1 |
| librosa | - |
| apache-beam | <2.44.0,>=2.26.0 |
| s3fs | >=2021.11.1 |
| tiktoken | - |
| tensorflow | !=2.6.0,!=2.6.1,>=2.3 |
| tensorflow-macos | - |
| torch | - |
| Pillow | >=6.2.1 |
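The repeated blocks in this table likely come from the package's optional extras (tests, dev, docs, framework-specific groups, etc.) flattened into one list. To see which of these requirements actually resolve in your environment, here is a small sketch using only the standard library:

```python
from importlib.metadata import requires, version, PackageNotFoundError
import re

# Requirement strings as declared by the installed datasets distribution
# (extras are included, which explains the repeated names in the table above)
for req in requires("datasets") or []:
    name = re.split(r"[ ;\[<>=!~]", req, maxsplit=1)[0]  # bare distribution name
    try:
        print(f"{name}: {version(name)}")
    except PackageNotFoundError:
        print(f"{name}: not installed")
```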


Required Language

| Name | Version |
|---|---|
| Python | >=3.7.0 |


How to Install

To install the datasets-2.9.0 whl package:

    pip install datasets-2.9.0.whl

To install the datasets-2.9.0 tar.gz package:

    pip install datasets-2.9.0.tar.gz
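After installing either archive, a minimal sanity check from Python confirms the expected version is importable:

```python
import datasets

# Should print "2.9.0" if the wheel or sdist above was installed correctly
print(datasets.__version__)
```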