# Baarutil
**This custom library is created for developers/users of BAAR, a product of [Allied Media Inc](https://www.alliedmedia.com/).**
<h2>
Authors:
</h2>
**Souvik Roy [sroy-2019](https://github.com/sroy-2019)**
**Zhaoyu (Thomas) Xu [xuzhaoyu](https://github.com/xuzhaoyu)**
<h2>
Additional Info:
</h2>
Throughout an automation workflow designed in BAAR, developers/users pass tabular data around as a delimited string: each column name is joined to its value with `__=__`, columns are separated by `__$$__`, and rows are separated by `__::__`. For example:
~~~
"Column_1__=__abc__$$__Column_2__=__def__::__Column_1__=__hello__$$__Column_2__=__world"
~~~
<h2>
The available functions, along with examples, are listed below:
</h2>
<h3>
1. read_convert(string), Output Data Type: list of dictionaries
</h3>
**Attributes:**
*i. **string:** Input String, Data Type = String*
~~~
Input: "Column_1__=__abc__$$__Column_2__=__def__::__Column_1__=__hello__$$__Column_2__=__world"
Output: [{"Column_1":"abc", "Column_2":"def"}, {"Column_1":"hello", "Column_2":"world"}]
~~~
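Under the hood, `read_convert` splits the input on the three delimiters described above. A minimal sketch of that parsing logic in plain Python (the name `read_convert_sketch` is illustrative, not the library's actual implementation):

```python
def read_convert_sketch(baar_string):
    """Parse a BAAR string into a list of dictionaries (illustrative only)."""
    records = []
    for row in baar_string.split("__::__"):           # rows are separated by __::__
        record = {}
        for field in row.split("__$$__"):             # columns are separated by __$$__
            key, _, value = field.partition("__=__")  # name/value joined by __=__
            record[key] = value
        records.append(record)
    return records

data = "Column_1__=__abc__$$__Column_2__=__def__::__Column_1__=__hello__$$__Column_2__=__world"
print(read_convert_sketch(data))
# [{'Column_1': 'abc', 'Column_2': 'def'}, {'Column_1': 'hello', 'Column_2': 'world'}]
```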
<h3>
2. write_convert(input_list), Output Data Type: string
</h3>
**Attributes:**
*i. **input_list:** List that contains the Dictionaries of Data, Data Type = List*
~~~
Input: [{"Column_1":"abc", "Column_2":"def"}, {"Column_1":"hello", "Column_2":"world"}]
Output: "Column_1__=__abc__$$__Column_2__=__def__::__Column_1__=__hello__$$__Column_2__=__world"
~~~
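`write_convert` is the inverse operation: join each record's fields with the same delimiters. A minimal sketch (`write_convert_sketch` is illustrative only):

```python
def write_convert_sketch(records):
    """Serialize a list of dictionaries into a BAAR string (illustrative only)."""
    rows = []
    for record in records:
        # join each column as name__=__value, columns with __$$__
        rows.append("__$$__".join(f"{k}__=__{v}" for k, v in record.items()))
    return "__::__".join(rows)  # rows are separated by __::__

records = [{"Column_1": "abc", "Column_2": "def"}, {"Column_1": "hello", "Column_2": "world"}]
print(write_convert_sketch(records))
# Column_1__=__abc__$$__Column_2__=__def__::__Column_1__=__hello__$$__Column_2__=__world
```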
<h3>
3. string_to_df(string, rename_cols, drop_dupes), Output Data Type: pandas DataFrame
</h3>
**Attributes:**
*i. **string:** Input String, Data Type = String*
*ii. **rename_cols:** Dictionary mapping old column names to new column names, Data Type = Dictionary, Default Value = {}*
*iii. **drop_dupes:** Drop duplicate rows from the final dataframe, Data Type = Bool, Default Value = False*
~~~
Input: "Column_1__=__abc__$$__Column_2__=__def__::__Column_1__=__hello__$$__Column_2__=__world"
~~~
Output:
<table>
<thead>
<tr>
<th>Column_1</th>
<th>Column_2</th>
</tr>
</thead>
<tbody>
<tr>
<td>abc</td>
<td>def</td>
</tr>
<tr>
<td>hello</td>
<td>world</td>
</tr>
</tbody>
</table>
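The same parsing, ending in a pandas DataFrame, can be sketched as follows (assumes pandas is installed; `string_to_df_sketch` is illustrative, not the library's implementation):

```python
import pandas as pd

def string_to_df_sketch(baar_string, rename_cols=None, drop_dupes=False):
    """Build a DataFrame from a BAAR string (illustrative only)."""
    records = []
    for row in baar_string.split("__::__"):
        record = {}
        for field in row.split("__$$__"):
            key, _, value = field.partition("__=__")
            record[key] = value
        records.append(record)
    df = pd.DataFrame(records)
    if rename_cols:                     # optional old-name -> new-name mapping
        df = df.rename(columns=rename_cols)
    if drop_dupes:                      # optionally drop duplicate rows
        df = df.drop_duplicates().reset_index(drop=True)
    return df

data = "Column_1__=__abc__$$__Column_2__=__def__::__Column_1__=__hello__$$__Column_2__=__world"
df = string_to_df_sketch(data)
print(df)
```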
<h3>
4. df_to_string(input_df, rename_cols, drop_dupes), Output Data Type: string
</h3>
**Attributes:**
*i. **input_df:** Input DataFrame, Data Type = pandas DataFrame*
*ii. **rename_cols:** Dictionary mapping old column names to new column names, Data Type = Dictionary, Default Value = {}*
*iii. **drop_dupes:** Drop duplicate rows from the final dataframe, Data Type = Bool, Default Value = False*
Input:
<table>
<thead>
<tr>
<th>Column_1</th>
<th>Column_2</th>
</tr>
</thead>
<tbody>
<tr>
<td>abc</td>
<td>def</td>
</tr>
<tr>
<td>hello</td>
<td>world</td>
</tr>
</tbody>
</table>
~~~
Output: "Column_1__=__abc__$$__Column_2__=__def__::__Column_1__=__hello__$$__Column_2__=__world"
~~~
<h3>
5. df_to_listdict(input_df, rename_cols, drop_dupes), Output Data Type: list
</h3>
**Attributes:**
*i. **input_df:** Input DataFrame, Data Type = pandas DataFrame*
*ii. **rename_cols:** Dictionary mapping old column names to new column names, Data Type = Dictionary, Default Value = {}*
*iii. **drop_dupes:** Drop duplicate rows from the final dataframe, Data Type = Bool, Default Value = False*
Input:
<table>
<thead>
<tr>
<th>Column_1</th>
<th>Column_2</th>
</tr>
</thead>
<tbody>
<tr>
<td>abc</td>
<td>def</td>
</tr>
<tr>
<td>hello</td>
<td>world</td>
</tr>
</tbody>
</table>
~~~
Output: [{"Column_1":"abc", "Column_2":"def"}, {"Column_1":"hello", "Column_2":"world"}]
~~~
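With pandas, this conversion is essentially `DataFrame.to_dict(orient="records")`. A minimal sketch (`df_to_listdict_sketch` is illustrative only):

```python
import pandas as pd

def df_to_listdict_sketch(input_df, rename_cols=None, drop_dupes=False):
    """Convert a DataFrame into a list of row dictionaries (illustrative only)."""
    df = input_df.rename(columns=rename_cols or {})
    if drop_dupes:
        df = df.drop_duplicates()
    return df.to_dict(orient="records")  # one dict per row

df = pd.DataFrame({"Column_1": ["abc", "hello"], "Column_2": ["def", "world"]})
print(df_to_listdict_sketch(df))
# [{'Column_1': 'abc', 'Column_2': 'def'}, {'Column_1': 'hello', 'Column_2': 'world'}]
```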
<h3>
6. decrypt_vault(encrypted_message, config_file), Output Data Type: string
</h3>
**Attributes:**
*i. **encrypted_message:** Encrypted Baar Vault Data, Data Type = string*
*ii. **config_file:** Keys that need to be provided by [Allied Media](https://www.alliedmedia.com/).*
This function can also be called from a Robot Framework script by importing the baarutil library and using the *Decrypt Vault* keyword. When invoked, it sets the Robot Framework log level to NONE for security reasons; developers must use *Set Log Level INFO* in the robot script to re-enable logging.
~~~
Input: <<Encrypted Text>>
Output: <<Decrypted Text>>
~~~
<h3>
7. generate_password(password_size, upper, lower, digits, symbols, exclude_chars), Output Data Type: string
</h3>
**Attributes:**
*i. **password_size:** Password Length, Data Type = int, Default Value = 10, (Should be greater than 4)*
*ii. **upper:** Are Uppercase characters required?, Data Type = Bool (True/False), Default Value = True*
*iii. **lower:** Are Lowercase characters required?, Data Type = Bool (True/False), Default Value = True*
*iv. **digits:** Are Digits characters required?, Data Type = Bool (True/False), Default Value = True*
*v. **symbols:** Are Symbols/ Special characters required?, Data Type = Bool (True/False), Default Value = True*
*vi. **exclude_chars:** List of characters to be excluded from the final password, Data Type = List, Default Value = []*
This function can also be called from a Robot Framework script by importing the baarutil library and using the *Generate Password* keyword. When invoked, it sets the Robot Framework log level to NONE for security reasons; developers must use *Set Log Level INFO* in the robot script to re-enable logging.
~~~
Input (Optional): <<Password Length>>, <<Uppercase Required?>>, <<Lowercase Required?>>, <<Digits Required?>>, <<Symbols Required?>>
Output: <<Password String>>
~~~
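A password generator with these options can be sketched with the standard-library `secrets` module (illustrative only; the actual library may build passwords differently):

```python
import secrets
import string

def generate_password_sketch(password_size=10, upper=True, lower=True,
                             digits=True, symbols=True, exclude_chars=()):
    """Generate a random password from the requested character pools (illustrative only)."""
    if password_size <= 4:
        raise ValueError("password_size should be greater than 4")
    pools = []
    if upper:
        pools.append(string.ascii_uppercase)
    if lower:
        pools.append(string.ascii_lowercase)
    if digits:
        pools.append(string.digits)
    if symbols:
        pools.append(string.punctuation)
    alphabet = [c for pool in pools for c in pool if c not in exclude_chars]
    # guarantee one character from each requested pool, then fill the rest
    chars = [secrets.choice([c for c in pool if c not in exclude_chars]) for pool in pools]
    chars += [secrets.choice(alphabet) for _ in range(password_size - len(chars))]
    secrets.SystemRandom().shuffle(chars)
    return "".join(chars)

pw = generate_password_sketch(12)
print(pw)
```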
<h3>
8. generate_report(data_df, file_name, path, file_type, detailed_report, replace_old_file, final_file_name_case, time_stamp, encoding, index, engine, max_new_files_count, sheet_name), Output Data Type: Bool or, Dictionary (based on the input value of detailed_report)
</h3>
**Attributes:**
*i. **data_df:** Input Dataframe, Data Type = pandas.DataFrame()*
*ii. **file_name:** Final file name, Data Type = str*
*iii. **path:** Final file path, Data Type = str, Default Value = Current working directory*
*iv. **file_type:** Final file extension/ file type, Data Type = str, Default Value = 'csv', Available Options = 'csv' or, 'xlsx'*
*v. **detailed_report:** Is detailed status of the run required?, Data Type = Bool (True/False), Default Value = False*
*vi. **replace_old_file:** Should the program replace old files after each run, or keep creating new files (applies only when the final file name is the same each time)?, Data Type = Bool (True/False)*
*vii. **final_file_name_case:** Font case of the final file name, Data Type = str, Default Value = 'unchanged', Available Options = 'upper' or, 'lower' or, 'unchanged'*
*viii. **time_stamp:** Time stamp at the end of the filename to make each file unique, Data Type = Bool (True/False), Default Value = False*
*ix. **encoding:** Encoding of the file, Data Type = str, Default Value = 'utf-8'*
*x. **index:** Dataframe index in the final file, Data Type = Bool (True/False), Default Value = False*
*xi. **engine:** Engine of the excelwriter for pandas to_excel function, Data Type = str, Default Value = 'openpyxl'*
*xii. **max_new_files_count:** Count of maximum new files if the replace_old_file is False, Data Type = int, Default Value = 100*
*xiii. **sheet_name:** Sheet name in the final excel, Data Type = str, Default Value = 'Sheet1'*
This function can also be called from a Robot Framework script by importing the baarutil library and using the *Generate Report* keyword.
~~~
Input: Mandatory arguments -> data_df, file_name
Output (if detailed_report==False): True/ False
Output (if detailed_report==True): {'file_path': <<Absolute path of the newly generated file>>, 'file_generation_status': True/ False, 'message': <<Detailed message>>, 'start_time': <<Start time when the function was initiated>>, 'end_time': <<End time when the function was completed>>}
~~~
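The shape of such a report writer, returning the detailed status dictionary shown above, can be sketched as follows (assumes pandas; `generate_report_sketch` is illustrative and covers only a subset of the listed arguments):

```python
import os
import tempfile
from datetime import datetime

import pandas as pd

def generate_report_sketch(data_df, file_name, path=".", file_type="csv",
                           time_stamp=False, encoding="utf-8", index=False):
    """Write a DataFrame to disk and return a detailed status dict (illustrative only)."""
    start_time = datetime.now()
    if time_stamp:  # make the file name unique with a timestamp postfix
        file_name = file_name + "_" + start_time.strftime("%d-%m-%Y_%H.%M.%S")
    file_path = os.path.abspath(os.path.join(path, file_name + "." + file_type))
    try:
        if file_type == "csv":
            data_df.to_csv(file_path, encoding=encoding, index=index)
        else:
            data_df.to_excel(file_path, index=index)
        status, message = True, "File generated successfully"
    except Exception as exc:
        status, message = False, str(exc)
    return {"file_path": file_path, "file_generation_status": status,
            "message": message, "start_time": start_time, "end_time": datetime.now()}

result = generate_report_sketch(pd.DataFrame({"a": [1]}), "report_demo",
                                path=tempfile.gettempdir())
print(result["file_generation_status"])  # True
```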
<h3>
9. string_to_report(data, file_name, path, file_type, detailed_report, replace_old_file, final_file_name_case, time_stamp, encoding, index, engine, max_new_files_count, sheet_name, rename_cols, drop_dupes), Output Data Type: Bool or, Dictionary (based on the input value of detailed_report)
</h3>
**Attributes:**
*i. **data:** Input BAAR string, Data Type = str*
*ii. **file_name:** Final file name, Data Type = str*
*iii. **path:** Final file path, Data Type = str, Default Value = Current working directory*
*iv. **file_type:** Final file extension/ file type, Data Type = str, Default Value = 'csv', Available Options = 'csv' or, 'xlsx'*
*v. **detailed_report:** Is detailed status of the run required?, Data Type = Bool (True/False), Default Value = False*
*vi. **replace_old_file:** Should the program replace old files after each run, or keep creating new files (applies only when the final file name is the same each time)?, Data Type = Bool (True/False)*
*vii. **final_file_name_case:** Font case of the final file name, Data Type = str, Default Value = 'unchanged', Available Options = 'upper' or, 'lower' or, 'unchanged'*
*viii. **time_stamp:** Time stamp at the end of the filename to make each file unique, Data Type = Bool (True/False), Default Value = False*
*ix. **encoding:** Encoding of the file, Data Type = str, Default Value = 'utf-8'*
*x. **index:** Dataframe index in the final file, Data Type = Bool (True/False), Default Value = False*
*xi. **engine:** Engine of the excelwriter for pandas to_excel function, Data Type = str, Default Value = 'openpyxl'*
*xii. **max_new_files_count:** Count of maximum new files if the replace_old_file is False, Data Type = int, Default Value = 100*
*xiii. **sheet_name:** Sheet name in the final excel, Data Type = str, Default Value = 'Sheet1'*
*xiv. **rename_cols:** Dictionary mapping old column names to new column names, Data Type = Dictionary, Default Value = {}*
*xv. **drop_dupes:** Drop duplicate rows from the final dataframe, Data Type = Bool, Default Value = False*
This function can also be called from a Robot Framework script by importing the baarutil library and using the *String To Report* keyword.
~~~
Input: Mandatory arguments -> data (BAAR String: Column_1__=__abc__$$__Column_2__=__def__::__Column_1__=__hello__$$__Column_2__=__world), file_name
Output (if detailed_report==False): True/ False
Output (if detailed_report==True): {'file_path': <<Absolute path of the newly generated file>>, 'file_generation_status': True/ False, 'message': <<Detailed message>>, 'start_time': <<Start time when the function was initiated>>, 'end_time': <<End time when the function was completed>>}
~~~
<h3>
10. clean_directory(path, remove_directory), Output Data Type: boolean
</h3>
**Attributes:**
*i. **path:** Absolute paths of the target directories separated by "|", Data Type = str*
*ii. **remove_directory:** Should the nested directories be deleted?, Data Type = Bool (True/False), Default Value = False*
This function can also be called from a Robot Framework script by importing the baarutil library and using the *Clean Directory* keyword.
~~~
Input: "C:/Path1|C:/Path2|C:/Path3"
Output: True/False
~~~
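A minimal sketch of this behaviour using the standard library (`clean_directory_sketch` is illustrative only):

```python
import os
import shutil
import tempfile

def clean_directory_sketch(path, remove_directory=False):
    """Empty each directory in a '|'-separated list of paths (illustrative only)."""
    try:
        for directory in path.split("|"):
            for entry in os.listdir(directory):
                full = os.path.join(directory, entry)
                if os.path.isfile(full):
                    os.remove(full)
                elif os.path.isdir(full) and remove_directory:
                    shutil.rmtree(full)   # delete nested directories only on request
        return True
    except OSError:
        return False

# demo: create a scratch directory with one file, then clean it
demo = tempfile.mkdtemp()
open(os.path.join(demo, "old.txt"), "w").close()
print(clean_directory_sketch(demo))  # True
```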
<h3>
11. archive(source, destination, operation_type, dynamic_folder, dynamic_filename, custom_folder_name_prefix, timestamp_format), Output Data Type: boolean & string
</h3>
**Attributes:**
*i. **source:** Absolute source path, Data Type = str*
*ii. **destination:** Absolute destination path, Data Type = str*
*iii. **operation_type:** What type of operation?, Data Type = str, Default Value = 'cut', Available Options = 'cut' or, 'copy'*
*iv. **dynamic_folder:** Should a folder be created within the destination folder to hold the archived files?, Data Type = Bool (True/False), Default Value = True*
*v. **dynamic_filename:** Should the archived files be renamed with a timestamp postfix?, Data Type = Bool (True/False), Default Value = False*
*vi. **custom_folder_name_prefix:** What should be the name of the dynamic custom folder if dynamic_folder = True?, Data Type = str, Default Value = 'Archive'*
*vii. **timestamp_format:** Format of the timestamp for the folder name/ file name postfixes, Data Type = str, Default Value = '%d-%m-%Y_%H.%M.%S', Available Options = any python datetime formats*
This function can also be called from a Robot Framework script by importing the baarutil library and using the *Archive* keyword.
~~~
Input: source="C:/Path1", destination="C:/Path2"
Output1 (completion_flag), Output2 (final_destination): True/False, "C:/Path2/Archive_24-02-2022_17.44.07"
~~~
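A minimal sketch of the cut/copy archive behaviour using the standard library (`archive_sketch` is illustrative and covers only a subset of the listed arguments):

```python
import os
import shutil
import tempfile
from datetime import datetime

def archive_sketch(source, destination, operation_type="cut", dynamic_folder=True,
                   custom_folder_name_prefix="Archive",
                   timestamp_format="%d-%m-%Y_%H.%M.%S"):
    """Move or copy all files from source into the destination (illustrative only)."""
    try:
        final_destination = destination
        if dynamic_folder:  # place files in a timestamped subfolder
            stamp = datetime.now().strftime(timestamp_format)
            final_destination = os.path.join(destination,
                                             custom_folder_name_prefix + "_" + stamp)
        os.makedirs(final_destination, exist_ok=True)
        for entry in os.listdir(source):
            src = os.path.join(source, entry)
            if os.path.isfile(src):
                if operation_type == "cut":
                    shutil.move(src, os.path.join(final_destination, entry))
                else:
                    shutil.copy2(src, os.path.join(final_destination, entry))
        return True, final_destination
    except OSError:
        return False, ""

# demo on scratch directories
src_dir, dst_dir = tempfile.mkdtemp(), tempfile.mkdtemp()
open(os.path.join(src_dir, "report.txt"), "w").close()
flag, final_destination = archive_sketch(src_dir, dst_dir)
print(flag, final_destination)
```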