Indago-0.2.7



Description

Numerical optimization framework

| Field | Value |
| --- | --- |
| Operating system | OS Independent |
| File name | Indago-0.2.7 |
| Package name | Indago |
| Version | 0.2.7 |
| Maintainer | [] |
| Maintainer email | [] |
| Author | sim.riteh.hr |
| Author email | stefan.ivic@riteh.hr |
| Homepage | http://sim.riteh.hr/ |
| PyPI page | https://pypi.org/project/Indago/ |
| License | - |
# Indago

Indago is a Python 3 module for numerical optimization. Indago contains several modern algorithms for real fitness function optimization over a real parameter domain. It was developed at the Department for Fluid Mechanics and Computational Engineering of the University of Rijeka, Faculty of Engineering, by Stefan Ivić, Siniša Družeta, and others.

Indago is developed for in-house research and teaching purposes, is not officially supported in any way, comes with no guarantees whatsoever and is not properly documented. However, we use it regularly and it seems to be working fine. Hopefully you will find it useful as well.

**Important**: After every Indago update please check this document, since Indago methods and APIs can undergo significant changes at any time.

## Installation

For the easiest install use
```
pip3 install indago
```
or, if you wish to update your existing Indago installation,
```
pip3 install indago --upgrade
```

<!--In order to obtain Indago code, clone the Gitlab repository by executing the following command in the directory where you want to locate the Indago root directory:
```
git clone https://gitlab.com/sivic/indago.git
```
For building and installing the Indago package into your Python environment:
```
python setup.py build
python setup.py install
```
Or for continuous testing/developing:
```
python setup.py clean build install
```-->

## Dependencies

The following packages should be installed using `apt`:
- `python3`
- `python3-pip`
- `python3-tk`
```
sudo apt install python3 python3-pip python3-tk
```
After installing these packages with the above command, additional Python packages should be installed using `pip` from `requirements.txt`:
```
pip install -r requirements.txt
```

## Optimization problem setup

Using Indago is easy. The setup of the optimization problem in Indago is the same regardless of the used optimization algorithm (a.k.a. optimizer):
```python
import numpy as np

# Evaluation function
def evaluation(x):
    obj = np.sum(x ** 2)   # minimization objective
    constr1 = x[0] - x[1]  # constraint x_0 - x_1 <= 0
    constr2 = - np.sum(x)  # constraint sum x_i >= 0
    return obj, constr1, constr2

# Initialize the chosen algorithm
from indago import PSO  # ...or any other Indago algorithm
optimizer = PSO()

# Optimization variables settings
optimizer.dimensions = 10  # number of variables (i.e. size of design vector x)
optimizer.lb = -10  # lower bound, given as scalar (equal for all variables)
optimizer.ub = 10 + np.arange(optimizer.dimensions)  # upper bounds, given as np.array (one bound value per variable)

# Set evaluation function
optimizer.evaluation_function = evaluation

# Objectives and constraints settings
optimizer.objectives = 1  # number of objectives (optional parameter, default objectives=1), this is obj in the evaluation function
optimizer.objective_labels = ['Squared sum minimization']  # labels for objectives (optional parameter, used in reporting)
optimizer.constraints = 2  # number of constraints (optional parameter, default constraints=0), these are constr1 and constr2 in the evaluation function
optimizer.constraint_labels = ['Constraint 1', 'Constraint 2']  # labels for constraints (optional parameter, used in reporting)

# Print optimizer parameters
print(optimizer)  # not necessary, but useful for checking the setup of the optimizer

# Run optimization
result = optimizer.optimize()  # (using default algorithm parameters)

# Extract results
print(result.f)  # minimum of obj with constr1 and constr2 satisfied
print(result.X)  # design vector at minimum
```
## Algorithms

As of now, Indago consists of several stochastic optimizers:

- Particle Swarm Optimization (PSO)
- Fireworks Algorithm (FWA)
- Squirrel Search Algorithm (SSA)
- Differential Evolution (DE)
- Bat Algorithm (BA)
- Electromagnetic Field Optimization (EFO)
- Manta Ray Foraging Optimization (MRFO)

These algorithms are available through a unified API, which was designed to be as accessible as possible. Indago relies heavily on NumPy, so the inputs and outputs of the optimizers are mostly NumPy arrays. Besides NumPy and a few other things here and there (a couple of SciPy functions and the `rich` module for fancy monitoring), Indago is pure Python. Indago optimizers also include some of our original research improvements, so feel free to try those as well. And don't forget to cite. :)
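
As an illustration of this unified API, here is a minimal sketch (using the sphere function as a stand-in goal function) that runs a few of the listed optimizers with an identical setup; only the imported class changes, all other settings are the documented ones:

```python
import numpy as np
from indago import PSO, FWA, DE

def sphere(x):           # simple stand-in goal function
    return np.sum(x**2)

for Optimizer in (PSO, FWA, DE):
    opt = Optimizer()
    opt.dimensions = 5
    opt.lb = -10         # scalar bounds are expanded to all variables
    opt.ub = 10
    opt.evaluation_function = sphere
    result = opt.optimize()
    print(Optimizer.__name__, result.f)  # best fitness found by each optimizer
```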
### Particle Swarm Optimization

Let us use PSO as a primary step-by-step example. First, we need to import NumPy and Indago PSO, and then initialize an optimizer object:
```python
import numpy as np
from indago import PSO
pso = PSO()
```
Then, we must provide a goal function which needs to be minimized, say:
```python
def goalfun(x):  # must take 1d np.array
    return np.sum(x**2)  # must return scalar number
pso.evaluation_function = goalfun
```
Now we can define optimizer inputs:
```python
pso.method = 'Vanilla'  # we will use Standard PSO, the other available option is 'TVAC' [1]; default method='Vanilla'
pso.dimensions = 20  # number of variables in the design vector (x)
pso.lb = np.ones(pso.dimensions) * -1  # 1d np.array of lower bound values (if scalar value is given, it will automatically be transformed to 1d np.array of size dimensions, filled with the value)
pso.ub = np.ones(pso.dimensions) * 1  # 1d np.array of upper bound values (if scalar value is given, it will automatically be transformed to 1d np.array of size dimensions, filled with the value)
pso.iterations = 1000  # default iterations=100*dimensions
pso.maximum_evaluations = 5000  # optional maximum allowed number of function evaluations; when surpassed, optimization is stopped (if reached before pso.iterations are exhausted)
pso.target_fitness = 10**-3  # optional fitness threshold; when reached, optimization is stopped (if it didn't already stop due to exhausted pso.iterations or pso.maximum_evaluations)
```
Also, we can provide optimization method parameters:
```python
pso.params['swarm_size'] = 15  # number of PSO particles; default swarm_size=dimensions
pso.params['inertia'] = 0.8  # PSO parameter known as inertia weight w (should range from 0.5 to 1.0), the other available options are 'LDIW' (w linearly decreasing from 1.0 to 0.4) and 'anakatabatic'; default inertia=0.72
pso.params['cognitive_rate'] = 1.0  # PSO parameter also known as c1 (should range from 0.0 to 2.0); default cognitive_rate=1.0
pso.params['social_rate'] = 1.0  # PSO parameter also known as c2 (should range from 0.0 to 2.0); default social_rate=1.0
```
If we want to use our novel adaptive inertia weight technique [2], which will often produce faster convergence and better accuracy, we invoke it by:
```python
pso.params['inertia'] = 'anakatabatic'
```
and then we need to also specify the anakatabatic model:
```python
pso.params['akb_model'] = 'Languid'  # other options explained below
```
Apart from `'Languid'` [3,4], we can use `'TipsySpider'`, `'FlyingStork'` or `'MessyTie'` models for Vanilla PSO, and `'RightwardPeaks'` or `'OrigamiSnake'` models for TVAC PSO [2]. According to our experience, your best bets are `'Languid'` and `'TipsySpider'`.

We can enable reporting during the optimization process by providing the monitoring argument:
```python
pso.monitoring = 'basic'  # other options are 'none' and 'dashboard'; default monitoring='none'
```
Finally, we can start the optimization and retrieve the results:
```python
result = pso.optimize()
min_f = result.f  # fitness at minimum, scalar number
x_min = result.X  # design vector at minimum, 1d np.array
```
And that's it!
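
Putting the above steps together, a minimal end-to-end PSO run might look like this (a consolidation of the snippets above; the anakatabatic inertia lines are optional):

```python
import numpy as np
from indago import PSO

def goalfun(x):
    return np.sum(x**2)  # sphere function as the minimization goal

pso = PSO()
pso.evaluation_function = goalfun
pso.dimensions = 20
pso.lb = -1   # scalar bounds are automatically expanded to all variables
pso.ub = 1
pso.iterations = 1000
pso.params['swarm_size'] = 15
pso.params['inertia'] = 'anakatabatic'
pso.params['akb_model'] = 'Languid'
pso.monitoring = 'basic'

result = pso.optimize()
print(result.f, result.X)  # best fitness and the corresponding design vector
```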
### Fireworks Algorithm

If we want to use FWA [5], we just have to import it instead of PSO:
```python
from indago import FWA
fwa = FWA()
```
Now we can proceed in the same manner as with PSO. For FWA, the only method available is basic FWA, which is implemented in two versions: 'Vanilla' (ignores constraints) and 'Rank' (supports using constraints):
```python
fwa.method = 'Rank'  # the other option is 'Vanilla', which does not support constraints; default method='Rank'
```
In FWA we can set the following method parameters:
```python
fwa.params['n'] = 20  # default n=dimensions
fwa.params['m1'] = 10  # default m1=dimensions/2
fwa.params['m2'] = 10  # default m2=dimensions/2
```

### Squirrel Search Algorithm

We can also try our luck with SSA [6]. We initialize it like this:
```python
from indago import SSA
ssa = SSA()
```
In SSA, the only available method is 'Vanilla' (which is set as default), and there is only one important method parameter:
```python
ssa.params['acorn_tree_attraction'] = 0.6  # ranges from 0.0 to 1.0; default acorn_tree_attraction=0.5
```
If we want to fine-tune the algorithm, we can define a few other SSA parameters:
```python
ssa.params['predator_presence_probability'] = 0.1  # default
ssa.params['gliding_constant'] = 1.9  # default
ssa.params['gliding_distance_limits'] = [0.5, 1.11]  # default
```

### Differential Evolution

If we want to use DE [7], we initialize it in the same way as with the other methods:
```python
from indago import DE
de = DE()
```
There are two DE methods implemented, namely SHADE and LSHADE. Say we want to use LSHADE:
```python
de.method = 'LSHADE'  # default method='SHADE'
```
Both DE methods use the following parameters:
```python
de.params['initial_population_size'] = 200  # default initial_population_size=dimensions*18
de.params['external_archive_size_factor'] = 2.6  # default
de.params['historical_memory_size'] = 4  # default historical_memory_size=6
de.params['p_mutation'] = 0.2  # default p_mutation=0.11
```
The DE implementation does not (yet) support using constraints.

### Bat Algorithm

For using BA [8], we initialize it in the same way as with the other methods:
```python
from indago import BA
ba = BA()
```
The only BA version implemented is the original Bat Algorithm [8], with mutation modified to make it fitness-scalable. We specify it as:
```python
ba.method = 'Vanilla'
```
The following parameters are used:
```python
ba.params['bat_swarm_size'] = 15  # default bat_swarm_size=dimensions
ba.params['loudness'] = 1  # default
ba.params['pulse_rate'] = 0.001  # default
ba.params['alpha'] = 0.9  # default
ba.params['gamma'] = 0.1  # default
ba.params['freq_range'] = [0, 1]  # default
```

### Electromagnetic Field Optimization

To use EFO [9], we initialize it in the same way as with the other methods:
```python
from indago import EFO
efo = EFO()
```
In EFO, the only available method is 'Vanilla' (which is set as default):
```python
efo.method = 'Vanilla'
```
The following parameters are used:
```python
efo.params['population_size'] = 100  # default population_size=10*dimensions
efo.params['R_rate'] = 0.25  # should range from 0.1 to 0.4; default R_rate=0.25
efo.params['Ps_rate'] = 0.25  # should range from 0.1 to 0.4; default Ps_rate=0.25
efo.params['P_field'] = 0.075  # should range from 0.05 to 0.1; default P_field=0.075
efo.params['N_field'] = 0.45  # should range from 0.4 to 0.5; default N_field=0.45
```
Currently, parallelization in EFO is not allowed, due to it being entirely ineffective for this method.

### Manta Ray Foraging Optimization

If we want to use MRFO [10], we initialize it in the same way as with the other methods:
```python
from indago import MRFO
mrfo = MRFO()
```
In MRFO, the only available method is 'Vanilla' (set as default):
```python
mrfo.method = 'Vanilla'
```
The following parameters are used:
```python
mrfo.params['manta_population'] = 3  # default
mrfo.params['somersault_factor'] = 2  # default (added for experimentation purposes, probably best left at default)
```

## Custom initialization

Optionally, we can provide pre-defined design vector(s) for initialization. This can be useful for boosting the optimizer by feeding it known near-optimal solutions, using non-uniform random generators, etc. The provided design vectors are injected into the algorithm's population at the start of the optimization (the rest of the population will be initialized with uniform random values, as per usual).
```python
optimizer.X0 = np.array([[1, 2, 3], [2, 3, 4]])  # 1d or 2d np.array; each row represents one design vector
```

## Multiple objectives and constraints handling

The optimization algorithms implemented in Indago are able to consider nonlinear constraints defined as `c(x) <= 0`. Constraints handling is enabled by a multi-level comparison which is able to compare multi-constraint solution candidates. Multi-objective optimization problems can also be treated in Indago via an automatically constructed weighted-sum fitness, hence reducing the problem to a single-objective one. The following example prepares a PSO optimizer for an evaluation which returns two objectives and two constraints:
```python
pso.objectives = 2
pso.objective_labels = ['Route length', 'Passing time']
pso.objective_weights = [0.4, 0.6]
pso.constraints = 2
pso.constraint_labels = ['Obstacles intersection length', 'Curvature limit']
```
The evaluation function needs to be modified accordingly:
```python
def evaluate(x):
    # Calculate minimization objectives o1 and o2
    # Calculate constraints c1 and c2
    # Constraints are defined as c1 <= 0 and c2 <= 0
    return o1, o2, c1, c2
```
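
As a concrete sketch of such an evaluation function (the objective and constraint formulas below are made up purely for illustration and do not correspond to the labels above):

```python
import numpy as np

def evaluate(x):
    # Two minimization objectives (hypothetical)
    o1 = np.sum(np.abs(x))
    o2 = np.sum(x**2)
    # Two constraints, both formulated so that feasibility means c <= 0
    c1 = 1.0 - np.sum(x)          # feasible when sum(x) >= 1
    c2 = np.max(np.abs(x)) - 5.0  # feasible when all |x_i| <= 5
    return o1, o2, c1, c2

pso.evaluation_function = evaluate  # pso configured with the objectives/weights/constraints shown above
```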
## Stopping criteria

Five distinct criteria can be enabled for stopping an Indago optimization:

- stop when the maximum number of iterations is reached (`optimizer.iterations`),
- stop when the maximum number of evaluations is reached (`optimizer.maximum_evaluations`),
- stop when the target fitness is reached (`optimizer.target_fitness`),
- stop when the maximum number of iterations with no progress is reached (`optimizer.maximum_stalled_iterations`), and
- stop when the maximum number of evaluations with no progress is reached (`optimizer.maximum_stalled_evaluations`).

The optimization stops when any of the specified criteria is reached. The maximum number of iterations (`optimizer.iterations`) is a mandatory stopping condition; if not set by the user, it is automatically set to the default value calculated as `iterations = 100 * dimensions`. The maximum number of evaluations, the target fitness criterion and the stall criteria are enabled only if they are specified by the user. Stopping criteria can be used in any combination.
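
For example, several of these criteria can be combined like so (a minimal sketch; the attribute names are the ones listed above):

```python
optimizer.iterations = 2000                 # hard cap on iterations
optimizer.maximum_evaluations = 50000       # hard cap on function evaluations
optimizer.target_fitness = 1e-6             # stop once the fitness reaches this value
optimizer.maximum_stalled_iterations = 100  # stop after 100 iterations without progress
```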
## Optimization monitoring

Three different modes of optimization process monitoring can be used by specifying the parameter `optimizer.monitoring`. The available options are:

- `'none'` - no output is displayed (this is the default behavior),
- `'basic'` - one line of output per iteration is provided, comprising basic convergence parameters, and
- `'dashboard'` - a live dashboard is shown, featuring progress bars and continuously updated values of the parameters most important for tracking optimization convergence.

## Parallel evaluation

Indago is able to evaluate a group of solution candidates (e.g. a swarm in PSO) in parallel mode. This is especially useful for computationally expensive engineering problems whose evaluation relies on simulations such as CFD or FEM. Indago utilizes the multiprocessing module for parallelization, and it can be enabled by specifying the `number_of_processes` parameter:
```python
optimizer.number_of_processes = 4  # use 'maximum' for employing all available processors/cores
```
Note that the implemented parallelization scales well only on relatively slow goal functions. Also keep in mind that Python multiprocessing sometimes does not work when initiated from imported code, so you need to have the optimization run call (`optimizer.optimize()`) wrapped in `if __name__ == '__main__':`.
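
A minimal sketch of such a guarded run (the goal function and the settings here are placeholders):

```python
import numpy as np
from indago import PSO

def evaluation(x):
    return np.sum(x**2)  # placeholder goal function

if __name__ == '__main__':
    optimizer = PSO()
    optimizer.dimensions = 10
    optimizer.lb = -10
    optimizer.ub = 10
    optimizer.evaluation_function = evaluation
    optimizer.number_of_processes = 4  # candidates evaluated in parallel
    result = optimizer.optimize()      # multiprocessing is started safely from here
    print(result.f)
```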
When dealing with numerical simulations, one mostly needs to specify input files and a directory in which the simulation runs. If execution is performed in parallel, these file/directory names need to be unique to avoid possible conflicts in simulation files. In order to facilitate this, Indago offers the option of passing a unique string to the evaluation function, thus enabling execution of simulations without conflicts. To enable the passing of a unique string to the evaluation function, set `forward_unique_str` to `True`:
```python
optimizer.forward_unique_str = True
```
Note that the evaluation function needs an additional argument through which the unique string is received:
```python
def evaluation(X, unique_str=None):
    # Prepare a simulation case in a new file and/or a new directory with names based on unique_str
    # Run external simulation and extract results
    return objective
```

## Post-iteration processing

When dealing with simulation-based evaluation functions, one often needs to keep some of the results (i.e. simulation results stored in the `unique_str` directory). However, numerous evaluations can easily consume a large amount of disk space. Bearing this in mind, Indago allows setting a post-iteration processing function which can be customized to perform the desired actions, such as cleanup, visualization, etc. The function needs to be defined to take three arguments: the iteration, a list/array of evaluated candidates (in the given iteration) and the overall best candidate, i.e. `(it, candidates, best)`. This feature works for both single- and multi-processing evaluations. The post-iteration processing function is called after each collective evaluation (possibly more than once in each iteration, depending on the optimization algorithm used). The following example shows how to clean up the evaluation results and store a convergence log:
```python
import os
import shutil
import time
import numpy as np

# test_dir is assumed to be an existing working directory for evaluation outputs

def evaluation(X, unique_str):
    # Create a directory for the calculation
    os.mkdir(f'{test_dir}/{unique_str}')
    time.sleep(0.02)
    o, c1, c2 = np.sum(X**2), X[0] - X[1] + 35, np.sum(np.cos(X) + 0.2)
    # Save stuff to the directory
    np.savetxt(f'{test_dir}/{unique_str}/in_out.txt', np.hstack((X, [o, c1, c2])))
    return o, c1, c2

def post_iteration_processing(it, candidates, best):
    if candidates[0] <= best:
        # Keeping only the overall best solution
        if os.path.exists(f'{test_dir}/best'):
            shutil.rmtree(f'{test_dir}/best')
        os.rename(f'{test_dir}/{candidates[0].unique_str}', f'{test_dir}/best')
        # Keeping the best solution of each iteration (if it is the best overall)
        # os.rename(f'{test_dir}/{candidates[0].unique_str}', f'{test_dir}/best_it{it}')
        # The log keeps track of new best solutions in each iteration
        with open(f'{test_dir}/log.txt', 'a') as log:
            X = ', '.join(f'{x:13.6e}' for x in candidates[0].X)
            O = ', '.join(f'{o:13.6e}' for o in candidates[0].O)
            C = ', '.join(f'{c:13.6e}' for c in candidates[0].C)
            log.write(f'{it:6d} X:[{X}], O:[{O}], C:[{C}], fitness:{candidates[0].f:13.6e}\n')
        candidates = np.delete(candidates, 0)  # Remove the best from candidates (since its directory is already renamed)
    # Remove the remaining candidates' directories
    for c in candidates:
        shutil.rmtree(f'{test_dir}/{c.unique_str}')
    return

optimizer.post_iteration_processing = post_iteration_processing
optimizer.optimize()
```

## Failing evaluation function

Sometimes, for whatever reason, the goal function (i.e. `optimizer.evaluation_function`) may fail to compute. Indago features a built-in (semi-experimental) scheme for handling such cases, which are identified by the evaluation function returning `np.nan`. You can control this scheme by setting `optimizer.eval_fail_behavior` to one of the following:

- `'abort'` - optimization is stopped at the first event of the evaluation function returning `np.nan` (default),
- `'ignore'` - the optimizer will ignore any `np.nan` values returned by the evaluation function (note that Vanilla FWA does not support this), and
- `'retry'` - the optimizer will try to resolve the issue by repeatedly receding a failed design vector a small step towards the best solution thus far.

When using `optimizer.eval_fail_behavior = 'retry'` the retrying mechanism can be fine-tuned by setting additional parameters:
```python
optimizer.eval_retry_attempts = 5  # retry at most 5 times to evaluate the unevaluated design vector; default eval_retry_attempts=10
optimizer.eval_retry_recede = 0.05  # at each retry move the unevaluated design vector 5% towards the hitherto best solution; any value in range (0,1) is allowed; default eval_retry_recede=0.01
```
Note that setting `optimizer.eval_retry_recede = 0` yields pure evaluation retries without design vector modification, which might be useful for randomly failing fitness functions.

Failed evaluations are detected by `optimizer.evaluation_function` returning `np.nan`, thus the function should be prepared in such a way that it returns `np.nan` if it fails to compute. However, if you want Indago to handle this for you, you can enable
```python
optimizer.safe_evaluation = True  # default safe_evaluation=False
```
Although this treatment of failing evaluation functions will probably solve the problem, it may also hamper the efficiency of the optimization algorithm. Therefore keep in mind that it is always better to make sure that your goal function never fails to compute.
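
As noted above, a failure should surface as the evaluation function returning `np.nan`. A simple pattern for wrapping a possibly failing computation yourself (instead of relying on `safe_evaluation`) could look like this; `run_simulation` is a hypothetical stand-in for the actual computation:

```python
import numpy as np

def evaluation(x):
    try:
        obj = run_simulation(x)  # hypothetical helper that may raise on failure
    except Exception:
        return np.nan  # signals a failed evaluation to Indago
    return obj
```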
## Results and convergence plot

Some intermediate optimization results are stored in the `optimizer.results` object, which can be explored/analyzed after the optimization is finished. Also, a utility function is available for visualizing optimization convergence, which produces convergence plots for all defined objectives and constraints:
```python
optimizer.results.plot_convergence()
```

## CEC 2014

Indago also includes the CEC 2014 test suite [11], comprising 30 test functions for real optimization methods. You can use it by importing it like this:
```python
from indago.benchmarks import CEC2014
```
Then, you have to initialize it for a specific dimensionality of the test functions:
```python
test = CEC2014(20)  # initialization for 20-dimensional functions; you can also use 10, 50 and 100
```
Now you can use the specific test functions (`test.F1`, `test.F2`, ... up to `test.F30`); they all take a 1d `np.array` of size 10/20/50/100 and return a scalar number. Alternatively, you can iterate through the built-in list of them all:
```python
test_results = []
for f in test.functions:
    optimizer.evaluation_function = f
    test_results.append(optimizer.optimize().f)
```
Have fun!

## References

1. Ratnaweera, A., Halgamuge, S. K., & Watson, H. C. (2004). Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Transactions on Evolutionary Computation, 8(3), 240-255.
2. Družeta, S., & Ivić, S. (2020). Anakatabatic Inertia: Particle-wise Adaptive Inertia for PSO. arXiv:2008.00979 [cs.NE].
3. Družeta, S., & Ivić, S. (2017). Examination of benefits of personal fitness improvement dependent inertia for Particle Swarm Optimization. Soft Computing, 21(12), 3387-3400.
4. Družeta, S., Ivić, S., Grbčić, L., & Lučin, I. (2019). Introducing languid particle dynamics to a selection of PSO variants. Egyptian Informatics Journal, 21(2), 119-129.
5. Tan, Y., & Zhu, Y. (2010, June). Fireworks algorithm for optimization. In International Conference in Swarm Intelligence (pp. 355-364). Springer, Berlin, Heidelberg.
6. Jain, M., Singh, V., & Rani, A. (2019). A novel nature-inspired algorithm for optimization: Squirrel search algorithm. Swarm and Evolutionary Computation, 44, 148-175.
7. Tanabe, R., & Fukunaga, A. S. (2014). Improving the search performance of SHADE using linear population size reduction. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), pp. 1658-1665, Beijing, China.
8. Yang, X. S., & Gandomi, A. H. (2012). Bat algorithm: a novel approach for global engineering optimization. Engineering Computations.
9. Abedinpourshotorban, H., Shamsuddin, S. M., Beheshti, Z., & Jawawi, D. N. (2016). Electromagnetic field optimization: a physics-inspired metaheuristic optimization algorithm. Swarm and Evolutionary Computation, 26, 8-22.
10. Zhao, W., Zhang, Z., & Wang, L. (2020). Manta ray foraging optimization: An effective bio-inspired optimizer for engineering applications. Engineering Applications of Artificial Intelligence, 87, 103300.
11. Liang, J. J., Qu, B. Y., & Suganthan, P. N. (2013). Problem definitions and evaluation criteria for the CEC 2014 special session and competition on single objective real-parameter numerical optimization. Computational Intelligence Laboratory, Zhengzhou University, Zhengzhou, China and Technical Report, Nanyang Technological University, Singapore, 635.


Requirements

| Name | Version |
| --- | --- |
| numpy | >=1.16 |
| matplotlib | >=2 |
| scipy | - |
| rich | - |


Required Python version

| Name | Version |
| --- | --- |
| Python | >=3.6 |


How to install


Installing the Indago-0.2.7 whl package:

    pip install Indago-0.2.7.whl


Installing the Indago-0.2.7 tar.gz package:

    pip install Indago-0.2.7.tar.gz
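
After installation, a quick way to confirm the package is importable (a minimal check; `PSO` is one of Indago's documented optimizer classes):

    from indago import PSO
    print(PSO)  # should print the optimizer class if the installation succeeded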