<p align="center" style="font-size: 500%">🧞‍♂️</p>
# <p align="center">𝔸𝕤𝕤𝕚𝕤𝕥𝕒𝕟𝕥</p>
<p align="center">Your very own Assistant. Because you deserve it.</p>
## Requirements
You need the following:
- `Python 3.x`
- [`Domain Management Tool`](https://gitlab.com/waser-technologies/technologies/dmt)
- (optional) [`say`](https://gitlab.com/waser-technologies/technologies/say)
- (optional) [`listen`](https://gitlab.com/waser-technologies/technologies/listen)
- min. 12 GB RAM
- min. 30 GB available disk space
- (optional, recommended) min. 11 GB VRAM on an NVIDIA GPU
- an internet connection (to download the models on first run)
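A quick pre-flight check against these minimums can be scripted. The snippet below is illustrative (it assumes a Linux system and checks free space under your home directory; adjust the path if you install elsewhere):

```shell
# Rough pre-flight check against the minimums above (12 GB RAM, 30 GB disk).
mem_gb=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 1024 / 1024 ))
disk_gb=$(( $(df --output=avail -k "$HOME" | tail -n 1) / 1024 / 1024 ))
echo "RAM: ${mem_gb} GB (need >= 12)"
echo "Free disk: ${disk_gb} GB (need >= 30)"
```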
## Installation
To install `Assistant`, use `pip`:
```bash
pip install assistant
```
On an Arch-based distro (available on the [AUR](https://aur.archlinux.org/packages/python-assistant) and pre-built on [Singularity](https://github.com/wasertech/singularity/releases/tag/x86_64)):
```zsh
pacman -S python-assistant
```
From source:
```bash
pip install -U git+https://gitlab.com/waser-technologies/technologies/assistant.git
```
From a local source:
```bash
git clone https://gitlab.com/waser-technologies/technologies/assistant.git ./assistant
cd assistant
pip install -U .
```
## Start the service
To talk with Assistant, you first need to start the service.
```bash
cp ./assistant.service.example /usr/lib/systemd/user/assistant.service
systemctl --user enable --now dmt assistant
```
(Optional) Enable `listen` for Assistant:
```bash
cp ./assistant.listen.service.example /usr/lib/systemd/user/assistant.listen.service
systemctl --user enable --now assistant.listen
```
Or start it manually from Python:
```bash
dmt -S &                           # load the NLP models
sleep 60 &&                        # wait for the models to load
python -m assistant.as_service &   # start Assistant as a service
python -m listen.STT.as_service &  # (optional) speech-to-text service
python -m assistant.listen         # (optional) let Assistant listen when you speak
```
Once the service is up, you can type or say `assistant`.
Note that you need the `dmt` service loaded first. The first time you use `dmt` to serve NLP models with `rasa`, it will download the latest pre-built models for your language and install the required domains.
It is recommended, however, that you actually use `dmt` to train your own models based on the domains you have installed and want to use.
```bash
# List installed domains
dmt -L
# Validate the data and train
dmt -V -T
# You now can serve your last trained model
dmt -S
```
## Usage
Just call `Assistant` like any other shell.
```bash
❯ assistant --help
usage: assistant [-h] [-V] [-c COMMAND] [-i] [-l] [--rc RC [RC ...]] [--no-rc]
                 [--no-script-cache] [--cache-everything] [-D ITEM]
                 [--shell-type {b,best,d,dumb,ptk,ptk1,ptk2,prompt-toolkit,prompt_toolkit,prompt-toolkit1,prompt-toolkit2,prompt-toolkit3,prompt_toolkit3,ptk3,rand,random,rl,readline}]
                 [--timings]
                 [script-file] ...

Assistant: a clever shell implementation

positional arguments:
  script-file           If present, execute the script in script-file or (if
                        not present) execute as a command and exit.
  args                  Additional arguments to the script (or command)
                        specified by script-file.

optional arguments:
  -h, --help            Show help and exit.
  -V, --version         Show version information and exit.
  -c COMMAND            Run a single command and exit.
  -i, --interactive     Force running in interactive mode.
  -l, --login           Run as a login shell.
  --rc RC [RC ...]      RC files to load.
  --no-rc               Do not load any rc files.
  --no-script-cache     Do not cache scripts as they are run.
  --cache-everything    Use a cache, even for interactive commands.
  -D ITEM               Define an environment variable, in the form of
                        -DNAME=VAL. May be used many times.
  --shell-type {b,best,d,dumb,ptk,ptk1,ptk2,prompt-toolkit,prompt_toolkit,prompt-toolkit1,prompt-toolkit2,prompt-toolkit3,prompt_toolkit3,ptk3,rand,random,rl,readline}
                        What kind of shell should be used. Possible options:
                        readline, prompt_toolkit, random. Warning! If set this
                        overrides $SHELL_TYPE variable.
  --timings             Prints timing information before the prompt is shown.
                        This is useful while tracking down performance issues
                        and investigating startup times.
❯ assistant Hi
Hey, how are you today?
❯ assistant -c "what time is it"
The current time is 1:35 p.m.
❯ assistant -i -l --no-rc --no-script-cache -DPATH="PATH:/share/assistant/"
❯ assistant script.nlp
```
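The `script.nlp` invocation above runs a file of the same phrases you would type interactively. A hypothetical example (the filename and phrases are illustrative, not a fixed format):

```assistant
echo Starting my morning routine
what time is it
Open Documents
```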
## Examples
The examples below are produced in interactive mode.
### Simon says: respond
```assistant
❯ echo Hello
Hello
❯ say Hello World # This requires say to be installed
Hello World
❯ Hi Assistant.
Hello $USERNAME.
```
### Navigate files
```assistant
❯ Where am I
You are at home.
❯ Open Documents
You are now in the Documents directory inside home.
❯ What do we have here?
Listing the current working directory.
...
```
### Get to the bottom of things
Using [WebSearch](https://gitlab.com/waser-technologies/data/nlu/en/web-search) domain, you can get pretty meaningful answers.
```assistant
❯ How many moons does Saturn have?
I'll need a moment.
❯ Take your time.
I think I have found the answer to how many moons Saturn has.
Saturn has 83 moons.
❯ How old is the universe?
Give me a moment.
I know how old the universe is.
The universe is 13.7 billion years old.
```
### Exit the session
To exit the current session, you can type pretty much anything, as long as `Assistant` can reasonably understand your intent.
*i.e.* :
```assistant
❯ exit
❯ Q
❯ :q
❯ quit
❯ stop()
❯ terminate
❯ This conversation is over.
❯ Stop this session.
```
## Using voice
### Text-To-Speech
Assistant can talk. Just install [`say`](https://gitlab.com/waser-technologies/technologies/say) and authorize the system to speak. Make sure the service is running and Assistant should be able to connect to it.
Within an interactive session with Assistant, you can toggle TTS using any of:
- the `Interface` menu,
- the keyboard shortcut `[Ctrl] + [S]`,
- Assistant itself, by authorizing it to speak (e.g. `assistant you can speak now`).
```assistant
assistant say Hello World and welcome to everyone.
```
### Speech-To-Text
Assistant can also understand when you talk. Just install [`listen`](https://gitlab.com/waser-technologies/technologies/listen) and authorize the system to listen. Make sure `listen.service`, `assistant.service` and `assistant.listen.service` are enabled for Assistant to be able to pick up what you say.
By default, neither the acoustic model nor the language model is adjusted for Assistant, so it's a good idea to at least create a custom scorer using the STT Training Wizard.
```zsh
trainer=~/.assistant/trainers/stt.train
git archive --remote=git@gitlab.com:waser-technologies/models/en/stt.git HEAD | tar xvf - model.train
mv model.train $trainer
chmod +x $trainer
zsh $trainer
```
## Use Assistant as your default shell
> **This is not recommended in alpha!**
You should be able to add the location of `assistant` to the end of `/etc/shells`. You'll then be able to set `Assistant` as your default shell using `chsh`.
```bash
sudo sh -c 'w=$(which assistant); echo $w >> /etc/shells'
chsh -s $(which assistant)
```
Log out and when you come back, `Assistant` will be your default shell.
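To confirm the change took effect, you can read your login shell back from the passwd database (this only checks the result of `chsh`; the path printed will be wherever `assistant` was installed):

```shell
# Print the current user's login shell as recorded in the passwd database
getent passwd "$(id -un)" | cut -d: -f7
```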
## Contributions
Do you like the project and want to improve it?
Check out [`CONTRIBUTING.md`](CONTRIBUTING.md) to see how you might be able to help.
## Credits
Thanks to all the projects that make this possible:
- [Xonsh](https://github.com/xonsh/xonsh): the best snail in the jungle
- [RASA](https://github.com/RasaHQ/rasa): so Assistant can answer at all
- [coqui-STT](https://github.com/coqui-ai/STT): so you can speak too
- [coqui-TTS](https://github.com/coqui-ai/TTS): so Assistant can reply out-loud
- And many many many more.