aws-cdk.aws-eks-legacy-1.99.0



Description

The CDK Construct Library for AWS::EKS (Legacy)
| Feature | Value |
| --- | --- |
| Operating system | - |
| File name | aws-cdk.aws-eks-legacy-1.99.0 |
| Name | aws-cdk.aws-eks-legacy |
| Library version | 1.99.0 |
| Maintainer | [] |
| Maintainer email | [] |
| Author | Amazon Web Services |
| Author email | - |
| Homepage | https://github.com/aws/aws-cdk |
| URL | https://pypi.org/project/aws-cdk.aws-eks-legacy/ |
| License | Apache-2.0 |
# Amazon EKS Construct Library

<!--BEGIN STABILITY BANNER-->

---

![Deprecated](https://img.shields.io/badge/deprecated-critical.svg?style=for-the-badge)

> This API may emit warnings. Backward compatibility is not guaranteed.

---

<!--END STABILITY BANNER-->

**This module is available for backwards compatibility purposes only ([details](https://github.com/aws/aws-cdk/pull/5540)). It will no longer be released with the CDK starting March 1st, 2020. See [issue #5544](https://github.com/aws/aws-cdk/issues/5544) for upgrade instructions.**

---

This construct library allows you to define [Amazon Elastic Container Service for Kubernetes (EKS)](https://aws.amazon.com/eks/) clusters programmatically. It also supports programmatically defining Kubernetes resource manifests within EKS clusters.

This example defines an Amazon EKS cluster with the following configuration:

* 2x **m5.large** instances (this instance type suits most common use-cases, and is good value for money)
* Dedicated VPC with default configuration (see [ec2.Vpc](https://docs.aws.amazon.com/cdk/api/latest/docs/aws-ec2-readme.html#vpc))
* A Kubernetes pod with a container based on the [paulbouwer/hello-kubernetes](https://github.com/paulbouwer/hello-kubernetes) image.

```python
cluster = eks.Cluster(self, "hello-eks")

cluster.add_resource("mypod", {
    "api_version": "v1",
    "kind": "Pod",
    "metadata": {"name": "mypod"},
    "spec": {
        "containers": [{
            "name": "hello",
            "image": "paulbouwer/hello-kubernetes:1.5",
            "ports": [{"container_port": 8080}]
        }]
    }
})
```

Here is a [complete sample](https://github.com/aws/aws-cdk/blob/master/packages/@aws-cdk/aws-eks-legacy/test/integ.eks-kubectl.lit.ts).

### Capacity

By default, `eks.Cluster` is created with x2 `m5.large` instances.

```python
eks.Cluster(self, "cluster-two-m5-large")
```

The quantity and instance type for the default capacity can be specified through the `defaultCapacity` and `defaultCapacityInstance` props:

```python
eks.Cluster(self, "cluster",
    default_capacity=10,
    default_capacity_instance=ec2.InstanceType("m2.xlarge")
)
```

To disable the default capacity, simply set `defaultCapacity` to `0`:

```python
eks.Cluster(self, "cluster-with-no-capacity", default_capacity=0)
```

The `cluster.defaultCapacity` property will reference the `AutoScalingGroup` resource for the default capacity. It will be `undefined` if `defaultCapacity` is set to `0`:

```python
cluster = eks.Cluster(self, "my-cluster")
cluster.default_capacity.scale_on_cpu_utilization("up",
    target_utilization_percent=80
)
```

You can add customized capacity through `cluster.addCapacity()` or `cluster.addAutoScalingGroup()`:

```python
# cluster: eks.Cluster

cluster.add_capacity("frontend-nodes",
    instance_type=ec2.InstanceType("t2.medium"),
    desired_capacity=3,
    vpc_subnets=ec2.SubnetSelection(subnet_type=ec2.SubnetType.PUBLIC)
)
```

### Spot Capacity

If `spotPrice` is specified, the capacity will be purchased from spot instances:

```python
# cluster: eks.Cluster

cluster.add_capacity("spot",
    spot_price="0.1094",
    instance_type=ec2.InstanceType("t3.large"),
    max_capacity=10
)
```

Spot instance nodes will be labeled with `lifecycle=Ec2Spot` and tainted with `PreferNoSchedule`. The [Spot Termination Handler](https://github.com/awslabs/ec2-spot-labs/tree/master/ec2-spot-eks-solution/spot-termination-handler) DaemonSet will be installed on these nodes.
The termination handler leverages [EC2 Spot Instance Termination Notices](https://aws.amazon.com/blogs/aws/new-ec2-spot-instance-termination-notices/) to gracefully stop all pods running on spot nodes that are about to be terminated.

### Bootstrapping

When adding capacity, you can specify options for [/etc/eks/bootstrap.sh](https://github.com/awslabs/amazon-eks-ami/blob/master/files/bootstrap.sh), which is responsible for associating the node with the EKS cluster. For example, you can use `kubeletExtraArgs` to add custom node labels or taints.

```python
# up to ten spot instances
# cluster: eks.Cluster

cluster.add_capacity("spot",
    instance_type=ec2.InstanceType("t3.large"),
    desired_capacity=2,
    bootstrap_options=eks.BootstrapOptions(
        kubelet_extra_args="--node-labels foo=bar,goo=far",
        aws_api_retry_attempts=5
    )
)
```

To disable bootstrapping altogether (i.e. to fully customize user-data), set `bootstrapEnabled` to `false` when you add the capacity.
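For instance, a minimal sketch (the construct ID, instance type and count below are purely illustrative):

```python
# cluster: eks.Cluster

# Illustrative only: with bootstrapping disabled, no /etc/eks/bootstrap.sh
# user-data is generated for these nodes, so joining them to the cluster
# (and any other user-data) has to be supplied separately.
cluster.add_capacity("custom-bootstrap-nodes",
    instance_type=ec2.InstanceType("t3.large"),
    desired_capacity=2,
    bootstrap_enabled=False
)
```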
### Masters Role

The Amazon EKS construct library allows you to specify an IAM role that will be granted `system:masters` privileges on your cluster. Without specifying a `mastersRole`, you will not be able to interact manually with the cluster.

The following example defines an IAM role that can be assumed by all users in the account and shows how to use the `mastersRole` property to map this role to the Kubernetes `system:masters` group:

```python
# first define the role
cluster_admin = iam.Role(self, "AdminRole",
    assumed_by=iam.AccountRootPrincipal()
)

# now define the cluster and map role to "masters" RBAC group
eks.Cluster(self, "Cluster",
    masters_role=cluster_admin
)
```

When you `cdk deploy` this CDK app, you will notice that an output will be printed with the `update-kubeconfig` command. Something like this:

```plaintext
Outputs:
eks-integ-defaults.ClusterConfigCommand43AAE40F = aws eks update-kubeconfig --name cluster-ba7c166b-c4f3-421c-bf8a-6812e4036a33 --role-arn arn:aws:iam::112233445566:role/eks-integ-defaults-Role1ABCC5F0-1EFK2W5ZJD98Y
```

Copy & paste the "`aws eks update-kubeconfig ...`" command to your shell in order to connect to your EKS cluster with the "masters" role.

Now, given [AWS CLI](https://aws.amazon.com/cli/) is configured to use AWS credentials for a user that is trusted by the masters role, you should be able to interact with your cluster through `kubectl` (the above example will trust all users in the account). For example:

```console
$ aws eks update-kubeconfig --name cluster-ba7c166b-c4f3-421c-bf8a-6812e4036a33 --role-arn arn:aws:iam::112233445566:role/eks-integ-defaults-Role1ABCC5F0-1EFK2W5ZJD98Y
Added new context arn:aws:eks:eu-west-2:112233445566:cluster/cluster-ba7c166b-c4f3-421c-bf8a-6812e4036a33 to /Users/boom/.kube/config

$ kubectl get nodes # list all nodes
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-147-66.eu-west-2.compute.internal    Ready    <none>   21m   v1.13.7-eks-c57ff8
ip-10-0-169-151.eu-west-2.compute.internal   Ready    <none>   21m   v1.13.7-eks-c57ff8

$ kubectl get all -n kube-system
NAME                           READY   STATUS    RESTARTS   AGE
pod/aws-node-fpmwv             1/1     Running   0          21m
pod/aws-node-m9htf             1/1     Running   0          21m
pod/coredns-5cb4fb54c7-q222j   1/1     Running   0          23m
pod/coredns-5cb4fb54c7-v9nxx   1/1     Running   0          23m
pod/kube-proxy-d4jrh           1/1     Running   0          21m
pod/kube-proxy-q7hh7           1/1     Running   0          21m

NAME               TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
service/kube-dns   ClusterIP   172.20.0.10   <none>        53/UDP,53/TCP   23m

NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/aws-node     2         2         2       2            2           <none>          23m
daemonset.apps/kube-proxy   2         2         2       2            2           <none>          23m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns   2/2     2            2           23m

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-5cb4fb54c7   2         2         2       23m
```

For your convenience, an AWS CloudFormation output will automatically be included in your template and will be printed when running `cdk deploy`.

**NOTE**: if the cluster is configured with `kubectlEnabled: false`, it will be created with the role/user that created the AWS CloudFormation stack. See [Kubectl Support](#kubectl-support) for details.

### Kubernetes Resources

The `KubernetesResource` construct or `cluster.addResource` method can be used to apply Kubernetes resource manifests to this cluster.

The following examples will deploy the [paulbouwer/hello-kubernetes](https://github.com/paulbouwer/hello-kubernetes) service on the cluster:

```python
# cluster: eks.Cluster

app_label = {"app": "hello-kubernetes"}

deployment = {
    "api_version": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "hello-kubernetes"},
    "spec": {
        "replicas": 3,
        "selector": {"match_labels": app_label},
        "template": {
            "metadata": {"labels": app_label},
            "spec": {
                "containers": [{
                    "name": "hello-kubernetes",
                    "image": "paulbouwer/hello-kubernetes:1.5",
                    "ports": [{"container_port": 8080}]
                }]
            }
        }
    }
}

service = {
    "api_version": "v1",
    "kind": "Service",
    "metadata": {"name": "hello-kubernetes"},
    "spec": {
        "type": "LoadBalancer",
        "ports": [{"port": 80, "target_port": 8080}],
        "selector": app_label
    }
}

# option 1: use a construct
eks.KubernetesResource(self, "hello-kub",
    cluster=cluster,
    manifest=[deployment, service]
)

# or, option 2: use `addResource`
cluster.add_resource("hello-kub", service, deployment)
```

Kubernetes resources are implemented as CloudFormation resources in the CDK. This means that if a resource is deleted from your code (or the stack is deleted), the next `cdk deploy` will issue a `kubectl delete` command and the Kubernetes resources will be deleted.

### AWS IAM Mapping

As described in the [Amazon EKS User Guide](https://docs.aws.amazon.com/en_us/eks/latest/userguide/add-user-role.html), you can map AWS IAM users and roles to [Kubernetes Role-based access control (RBAC)](https://kubernetes.io/docs/reference/access-authn-authz/rbac).

The Amazon EKS construct manages the **aws-auth ConfigMap** Kubernetes resource on your behalf and exposes an API through `cluster.awsAuth` for mapping users, roles and accounts.
Furthermore, when auto-scaling capacity is added to the cluster (through `cluster.addCapacity` or `cluster.addAutoScalingGroup`), the IAM instance role of the auto-scaling group will automatically be mapped to RBAC so nodes can connect to the cluster. No manual mapping is required any longer.

> NOTE: `cluster.awsAuth` will throw an error if your cluster is created with `kubectlEnabled: false`.

For example, let's say you want to grant an IAM user administrative privileges on your cluster:

```python
# cluster: eks.Cluster

admin_user = iam.User(self, "Admin")
cluster.aws_auth.add_user_mapping(admin_user, groups=["system:masters"])
```

A convenience method for mapping a role to the `system:masters` group is also available:

```python
# cluster: eks.Cluster
# role: iam.Role

cluster.aws_auth.add_masters_role(role)
```

### Node SSH Access

If you want to be able to SSH into your worker nodes, you must already have an SSH key in the region you're connecting to, pass it in, and be able to connect to the hosts (meaning they must have a public IP and you should be allowed to connect to them on port 22):

```python
asg = cluster.add_capacity("Nodes",
    instance_type=ec2.InstanceType("t2.medium"),
    vpc_subnets=ec2.SubnetSelection(subnet_type=ec2.SubnetType.PUBLIC),
    key_name="my-key-name"
)

# Replace with desired IP
asg.connections.allow_from(ec2.Peer.ipv4("1.2.3.4/32"), ec2.Port.tcp(22))
```

If you want to SSH into nodes in a private subnet, you should set up a bastion host in a public subnet. That setup is recommended, but is unfortunately beyond the scope of this documentation.

### kubectl Support

When you create an Amazon EKS cluster, the IAM entity (user or role, such as a [federated user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers.html)) that creates the cluster is automatically granted `system:masters` permissions in the cluster's RBAC configuration.

In order to allow programmatically defining **Kubernetes resources** in your AWS CDK app and provisioning them through AWS CloudFormation, we will need to assume this "masters" role every time we want to issue `kubectl` operations against your cluster.

At the moment, the [AWS::EKS::Cluster](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-eks-cluster.html) AWS CloudFormation resource does not support this behavior, so in order to support "programmatic kubectl", such as applying manifests and mapping IAM roles from within your CDK application, the Amazon EKS construct library uses a custom resource for provisioning the cluster. This custom resource is executed with an IAM role that we can then use to issue `kubectl` commands.

The default behavior of this library is to use this custom resource in order to retain programmatic control over the cluster. In other words: to allow you to define Kubernetes resources in your CDK code instead of having to manage your Kubernetes applications through a separate system.

One of the implications of this design is that, by default, the user who provisioned the AWS CloudFormation stack (executed `cdk deploy`) will not have administrative privileges on the EKS cluster.

1. Additional resources will be synthesized into your template (the AWS Lambda function, the role and policy).
2. As described in [Interacting with Your Cluster](#interacting-with-your-cluster), if you wish to be able to manually interact with your cluster, you will need to map an IAM role or user to the `system:masters` group.
This can be done either by specifying a `mastersRole` when the cluster is defined, by calling `cluster.awsAuth.addMastersRole`, or by explicitly mapping an IAM role or IAM user to the relevant Kubernetes RBAC groups using `cluster.addRoleMapping` and/or `cluster.addUserMapping`.

If you wish to disable the programmatic kubectl behavior and use the standard AWS::EKS::Cluster resource, you can specify `kubectlEnabled: false` when you define the cluster:

```python
eks.Cluster(self, "cluster",
    kubectl_enabled=False
)
```

**Take care**: a change in this property will cause the cluster to be destroyed and a new cluster to be created.

When kubectl is disabled, you should be aware of the following:

1. When you log in to your cluster, you don't need to specify `--role-arn` as long as you are using the same user that created the cluster.
2. As described in the Amazon EKS User Guide, you will need to manually edit the [aws-auth ConfigMap](https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html) when you add capacity in order to map the IAM instance role to RBAC to allow nodes to join the cluster.
3. Any `eks.Cluster` APIs that depend on programmatic kubectl support will fail with an error: `cluster.addResource`, `cluster.addChart`, `cluster.awsAuth`, `props.mastersRole`.

### Helm Charts

The `HelmChart` construct or `cluster.addChart` method can be used to add Kubernetes resources to this cluster using Helm.

The following example will install the [NGINX Ingress Controller](https://kubernetes.github.io/ingress-nginx/) to your cluster using Helm.

```python
# cluster: eks.Cluster

# option 1: use a construct
eks.HelmChart(self, "NginxIngress",
    cluster=cluster,
    chart="nginx-ingress",
    repository="https://helm.nginx.com/stable",
    namespace="kube-system"
)

# or, option 2: use `addChart`
cluster.add_chart("NginxIngress",
    chart="nginx-ingress",
    repository="https://helm.nginx.com/stable",
    namespace="kube-system"
)
```

Helm charts will be installed and updated using `helm upgrade --install`. This means that if a chart is added to CDK with the same release name, it will try to update the chart in the cluster. The chart will exist as a CloudFormation resource.

Helm charts are implemented as CloudFormation resources in CDK. This means that if the chart is deleted from your code (or the stack is deleted), the next `cdk deploy` will issue a `helm uninstall` command and the Helm chart will be deleted.

When there is no `release` defined, the chart will be installed with a unique name allocated based on the construct path.
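If you prefer a predictable release name instead, a minimal sketch (assuming `release` is accepted as a chart option; the name `my-nginx` is purely illustrative):

```python
# cluster: eks.Cluster

# assumption for illustration: `release` overrides the auto-generated
# release name derived from the construct path
cluster.add_chart("NginxIngress",
    chart="nginx-ingress",
    repository="https://helm.nginx.com/stable",
    namespace="kube-system",
    release="my-nginx"
)
```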
### Roadmap

* [ ] AutoScaling (combine EC2 and Kubernetes scaling)

Requirements

| Name | Value |
| --- | --- |
| aws-cdk.aws-autoscaling | ==1.200.0 |
| aws-cdk.aws-cloudformation | ==1.200.0 |
| aws-cdk.aws-ec2 | ==1.200.0 |
| aws-cdk.aws-iam | ==1.200.0 |
| aws-cdk.aws-lambda | ==1.200.0 |
| aws-cdk.aws-ssm | ==1.200.0 |
| aws-cdk.core | ==1.200.0 |
| constructs | <4.0.0,>=3.3.69 |
| jsii | <2.0.0,>=1.74.0 |
| publication | >=0.0.3 |
| typeguard | ~=2.13.3 |


Required language

| Name | Value |
| --- | --- |
| Python | ~=3.7 |


Installation


Install the whl package aws-cdk.aws-eks-legacy-1.99.0:

    pip install aws-cdk.aws-eks-legacy-1.99.0.whl


Install the tar.gz package aws-cdk.aws-eks-legacy-1.99.0:

    pip install aws-cdk.aws-eks-legacy-1.99.0.tar.gz
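
Installing directly from PyPI (assuming the index is reachable from your environment) should also work for this pinned release:

    pip install aws-cdk.aws-eks-legacy==1.99.0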