This text is a continuation of sorts of the article “A tale of two tools - Pulumi and AWS CDK”, although it will mainly focus on Pulumi and its YAML and Go support.
Pulumi YAML support became generally available just recently, which was announced at Pulumi Cloud Engineering Days 2022. I will use it to rewrite one of the solutions from “A tale of two tools - Pulumi and AWS CDK” in YAML.
I will then use this YAML to generate a Go version of the code automatically and deploy that version as well.
AWS CDK has a somewhat similar feature to Pulumi YAML, but it does not have Pulumi's code conversion capability (YAML to Go in this case). That will be a topic for a separate post; in this one, we will focus on Pulumi.
Introduction to Pulumi YAML
Earlier this year at the PulumiUP virtual conference, Pulumi announced support for YAML as an additional language. This may have come as a surprise to some people, given that Pulumi has very much advocated using regular programming languages to define infrastructure.
This has not really changed. I believe, though, that Pulumi recognized that not every person who needs to work with infrastructure as software will be a skilled developer, and they do not need to be one either.
The programming language support allows for building good abstractions and interfaces that others can consume efficiently. That consumption does not need to happen through a programming language, though. This is where Pulumi YAML comes in.
In Pulumi YAML, you can refer to the same resources as you would use in a regular programming language, both low-level components provided by the cloud provider and higher-level components. These can be official or third-party components, or components you or your organisation has developed.
Pulumi YAML lives in the Pulumi.yaml project file, together with the project configuration. The built-in support is a single YAML file, which reflects the priorities and intentions behind the YAML support - a non-programming-language interface to suitable abstractions.
I think this is a good constraint at this point. Keep it simple, and only make it more complex or capable if customer feedback shows an actual need.
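To give a feel for the format before the full example below, a minimal Pulumi.yaml could look something like this (the project name and the S3 bucket resource here are just illustrative placeholders, not part of the solution in this article):

name: my-yaml-project
runtime: yaml
resources:
  # A plain S3 bucket, referenced by its logical name "bucket"
  bucket:
    type: aws:s3:Bucket
outputs:
  # Outputs can interpolate resource properties with ${...}
  bucketName: ${bucket.id}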
Defining the app solution in YAML
I used the Typescript-based solution using Pulumi Crosswalk for AWS from “A tale of two tools - Pulumi and AWS CDK” as a starting point for my solution in Pulumi YAML.
This was a straightforward process. The syntax is similar, so it was relatively easy to copy and paste, with some tweaks.
The main differences were in defining re-usable constants and in using some pre-defined constant references. For the re-usable constants, I used the project configuration feature and defined three typed configuration settings with default values.
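These defaults can be overridden per stack, either with pulumi config set or in the stack configuration file. As a sketch, assuming a stack named dev, a Pulumi.dev.yaml could override the port like this:

config:
  # Keys are namespaced as <project>:<key>; the project here is ias-pulumi-yaml
  ias-pulumi-yaml:port: 8080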
For the pre-defined constants, I simply had to check which values they mapped to. I had installed the Pulumi YAML extension for Visual Studio Code, which provided type checking, some auto-completion and inline documentation. The extension also flags invalid references, which was quite helpful.
Thanks to the similarities between the Typescript code and YAML, and the support from the Pulumi YAML extension in the editor, it did not take much time to write the YAML version (and it was even faster the second time - more about that later…).
The resulting Pulumi.yaml looks like this:
name: ias-pulumi-yaml
description: A test solution with Pulumi YAML
runtime: yaml
configuration:
  port:
    type: Number
    default: 80
  cpu:
    type: Number
    default: 512
  memory:
    type: Number
    default: 1024
resources:
  vpc:
    type: awsx:ec2:Vpc
    properties:
      numberOfAvailabilityZones: 2
      natGateways:
        strategy: Single
  # An ECS cluster to deploy into
  cluster:
    type: aws:ecs:Cluster
  # An ECR repository for the app image
  repo:
    type: awsx:ecr:Repository
  # Build and publish the image to ECR
  image:
    type: awsx:ecr:Image
    properties:
      repositoryUrl: ${repo.url}
      path: ./my-image
  lbsg:
    type: aws:ec2:SecurityGroup
    properties:
      vpcId: ${vpc.vpcId}
      ingress:
        - fromPort: ${port}
          toPort: ${port}
          protocol: tcp
          cidrBlocks:
            - "0.0.0.0/0"
      egress:
        - fromPort: 0
          toPort: 0
          protocol: "-1"
          cidrBlocks:
            - "0.0.0.0/0"
  # An ALB to serve the container endpoint to the internet
  loadbalancer:
    type: awsx:lb:ApplicationLoadBalancer
    properties:
      subnetIds: ${vpc.publicSubnetIds}
      securityGroups:
        - ${lbsg.id}
  containersg:
    type: aws:ec2:SecurityGroup
    properties:
      vpcId: ${vpc.vpcId}
      ingress:
        - fromPort: ${port}
          toPort: ${port}
          protocol: tcp
          securityGroups:
            - ${lbsg.id}
      egress:
        - fromPort: 0
          toPort: 0
          protocol: "-1"
          cidrBlocks:
            - "0.0.0.0/0"
  # Deploy an ECS Service on Fargate to host the application container
  service:
    type: awsx:ecs:FargateService
    properties:
      cluster: ${cluster.arn}
      taskDefinitionArgs:
        container:
          image: ${image.imageUri}
          cpu: ${cpu}
          memory: ${memory}
          essential: true
          portMappings:
            - containerPort: ${port}
              targetGroup: ${loadbalancer.defaultTargetGroup}
      networkConfiguration:
        subnets: ${vpc.privateSubnetIds}
        securityGroups:
          - ${containersg.id}
      deploymentCircuitBreaker:
        enable: true
        rollback: true
outputs:
  # The URL at which the container's HTTP endpoint will be available
  url: http://${loadbalancer.loadBalancer.dnsName}
The setup deployed right away. This probably would not have been the case without the Pulumi YAML extension, so I am happy that I had it installed.
I think the result is pretty easy to read, and since it can use higher-level components, it is much shorter than the corresponding CloudFormation would be (about 400 lines, if you want to know).
The next task was to create a Go version of the solution.
Converting YAML to Go
The Pulumi CLI has a convert option, which does just that - it converts a YAML-based solution to any of the other target languages. By default, it will use the Pulumi.yaml file in the current directory.
Now, in the back of my head, I was thinking - will it destroy my YAML config, since a Go version of Pulumi.yaml would not have any YAML definitions in it? Should I back up my YAML, or check in the latest changes in Git?
Despite these concerns, I YOLO'ed (You Only Live Once) and ran
pulumi convert --language go --generate-only
I noticed that the Pulumi.yaml in my editor suddenly looked much emptier… Yes, it had overwritten my YAML solution, and I had not backed up the data.
Luckily, the solution was small and fairly simple, so it did not take that long to write the YAML version again. Maybe I will learn my lesson for next time. Or maybe Pulumi will add a safety guard prompt.
The resulting Go code looks like this:
package main

import (
	"fmt"
	"github.com/pulumi/pulumi-aws/sdk/v5/go/aws/ec2"
	"github.com/pulumi/pulumi-aws/sdk/v5/go/aws/ecs"
	"github.com/pulumi/pulumi-aws/sdk/v5/go/aws/lb"
	"github.com/pulumi/pulumi-awsx/sdk/go/awsx/ec2"
	"github.com/pulumi/pulumi-awsx/sdk/go/awsx/ecr"
	"github.com/pulumi/pulumi-awsx/sdk/go/awsx/ecs"
	"github.com/pulumi/pulumi-awsx/sdk/go/awsx/lb"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi/config"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		cfg := config.New(ctx, "")
		port := float64(80)
		if param := cfg.GetFloat64("port"); param != 0 {
			port = param
		}
		cpu := float64(512)
		if param := cfg.GetFloat64("cpu"); param != 0 {
			cpu = param
		}
		memory := float64(1024)
		if param := cfg.GetFloat64("memory"); param != 0 {
			memory = param
		}
		vpc, err := ec2.NewVpc(ctx, "vpc", &ec2.VpcArgs{
			NumberOfAvailabilityZones: 2,
			NatGateways: &ec2.NatGatewayConfigurationArgs{
				Strategy: ec2.NatGatewayStrategySingle,
			},
		})
		if err != nil {
			return err
		}
		cluster, err := ecs.NewCluster(ctx, "cluster", nil)
		if err != nil {
			return err
		}
		repo, err := ecr.NewRepository(ctx, "repo", nil)
		if err != nil {
			return err
		}
		image, err := ecr.NewImage(ctx, "image", &ecr.ImageArgs{
			RepositoryUrl: repo.Url,
			Path:          pulumi.String("./my-image"),
		})
		if err != nil {
			return err
		}
		lbsg, err := ec2.NewSecurityGroup(ctx, "lbsg", &ec2.SecurityGroupArgs{
			VpcId: vpc.VpcId,
			Ingress: ec2.SecurityGroupIngressArray{
				&ec2.SecurityGroupIngressArgs{
					FromPort: pulumi.Float64(port),
					ToPort:   pulumi.Float64(port),
					Protocol: pulumi.String("tcp"),
					CidrBlocks: pulumi.StringArray{
						pulumi.String("0.0.0.0/0"),
					},
				},
			},
			Egress: ec2.SecurityGroupEgressArray{
				&ec2.SecurityGroupEgressArgs{
					FromPort: pulumi.Int(0),
					ToPort:   pulumi.Int(0),
					Protocol: pulumi.String("-1"),
					CidrBlocks: pulumi.StringArray{
						pulumi.String("0.0.0.0/0"),
					},
				},
			},
		})
		if err != nil {
			return err
		}
		loadbalancer, err := lb.NewApplicationLoadBalancer(ctx, "loadbalancer", &lb.ApplicationLoadBalancerArgs{
			SubnetIds: vpc.PublicSubnetIds,
			SecurityGroups: pulumi.StringArray{
				lbsg.ID(),
			},
		})
		if err != nil {
			return err
		}
		containersg, err := ec2.NewSecurityGroup(ctx, "containersg", &ec2.SecurityGroupArgs{
			VpcId: vpc.VpcId,
			Ingress: ec2.SecurityGroupIngressArray{
				&ec2.SecurityGroupIngressArgs{
					FromPort: pulumi.Float64(port),
					ToPort:   pulumi.Float64(port),
					Protocol: pulumi.String("tcp"),
					SecurityGroups: pulumi.StringArray{
						lbsg.ID(),
					},
				},
			},
			Egress: ec2.SecurityGroupEgressArray{
				&ec2.SecurityGroupEgressArgs{
					FromPort: pulumi.Int(0),
					ToPort:   pulumi.Int(0),
					Protocol: pulumi.String("-1"),
					CidrBlocks: pulumi.StringArray{
						pulumi.String("0.0.0.0/0"),
					},
				},
			},
		})
		if err != nil {
			return err
		}
		_, err = ecs.NewFargateService(ctx, "service", &ecs.FargateServiceArgs{
			Cluster: cluster.Arn,
			TaskDefinitionArgs: &ecs.FargateServiceTaskDefinitionArgs{
				Container: &ecs.TaskDefinitionContainerDefinitionArgs{
					Image:     image.ImageUri,
					Cpu:       pulumi.Float64(cpu),
					Memory:    pulumi.Float64(memory),
					Essential: pulumi.Bool(true),
					PortMappings: []ecs.TaskDefinitionPortMappingArgs{
						&ecs.TaskDefinitionPortMappingArgs{
							ContainerPort: pulumi.Float64(port),
							TargetGroup:   loadbalancer.DefaultTargetGroup,
						},
					},
				},
			},
			NetworkConfiguration: &ecs.ServiceNetworkConfigurationArgs{
				Subnets: vpc.PrivateSubnetIds,
				SecurityGroups: pulumi.StringArray{
					containersg.ID(),
				},
			},
			DeploymentCircuitBreaker: &ecs.ServiceDeploymentCircuitBreakerArgs{
				Enable:   pulumi.Bool(true),
				Rollback: pulumi.Bool(true),
			},
		})
		if err != nil {
			return err
		}
		ctx.Export("url", loadbalancer.LoadBalancer.ApplyT(func(loadBalancer *lb.LoadBalancer) (string, error) {
			return fmt.Sprintf("http://%v", loadBalancer.DnsName), nil
		}).(pulumi.StringOutput))
		return nil
	})
}
This looks nice! It did not carry over my comments from the YAML definition though, which would have been nice. Unfortunately, this code does not quite compile with the Pulumi CLI version I used (3.46.0), and I had to make some tweaks:
Use the newest version of the AWSX SDK, which was not included in the generated code.
Change a literal value 2 to pulumi.IntRef(2)
The generated export of the load balancer URL did not provide the expected result
Change references to pulumi.Float64() to pulumi.Int()
The latter is because the type information for YAML configuration only supports Number, not Integer - likely because Pulumi YAML currently supports only the data types available in YAML itself.
Not a perfect conversion, but much simpler than writing the code from scratch, and the tweaks needed were small.
The tweaked version looks like this, almost the same:
package main

import (
	"github.com/pulumi/pulumi-aws/sdk/v5/go/aws/ec2"
	"github.com/pulumi/pulumi-aws/sdk/v5/go/aws/ecs"
	ec2x "github.com/pulumi/pulumi-awsx/sdk/go/awsx/ec2"
	ecrx "github.com/pulumi/pulumi-awsx/sdk/go/awsx/ecr"
	ecsx "github.com/pulumi/pulumi-awsx/sdk/go/awsx/ecs"
	lbx "github.com/pulumi/pulumi-awsx/sdk/go/awsx/lb"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi/config"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		cfg := config.New(ctx, "")
		port := 80
		if param := cfg.GetInt("port"); param != 0 {
			port = param
		}
		cpu := 512
		if param := cfg.GetInt("cpu"); param != 0 {
			cpu = param
		}
		memory := 1024
		if param := cfg.GetInt("memory"); param != 0 {
			memory = param
		}
		vpc, err := ec2x.NewVpc(ctx, "vpc", &ec2x.VpcArgs{
			NumberOfAvailabilityZones: pulumi.IntRef(2),
			NatGateways: &ec2x.NatGatewayConfigurationArgs{
				Strategy: ec2x.NatGatewayStrategySingle,
			},
		})
		if err != nil {
			return err
		}
		cluster, err := ecs.NewCluster(ctx, "cluster", nil)
		if err != nil {
			return err
		}
		repo, err := ecrx.NewRepository(ctx, "repo", nil)
		if err != nil {
			return err
		}
		image, err := ecrx.NewImage(ctx, "image", &ecrx.ImageArgs{
			RepositoryUrl: repo.Url,
			Path:          pulumi.String("./my-image"),
		})
		if err != nil {
			return err
		}
		lbsg, err := ec2.NewSecurityGroup(ctx, "lbsg", &ec2.SecurityGroupArgs{
			VpcId: vpc.VpcId,
			Ingress: ec2.SecurityGroupIngressArray{
				&ec2.SecurityGroupIngressArgs{
					FromPort: pulumi.Int(port),
					ToPort:   pulumi.Int(port),
					Protocol: pulumi.String("tcp"),
					CidrBlocks: pulumi.StringArray{
						pulumi.String("0.0.0.0/0"),
					},
				},
			},
			Egress: ec2.SecurityGroupEgressArray{
				&ec2.SecurityGroupEgressArgs{
					FromPort: pulumi.Int(0),
					ToPort:   pulumi.Int(0),
					Protocol: pulumi.String("-1"),
					CidrBlocks: pulumi.StringArray{
						pulumi.String("0.0.0.0/0"),
					},
				},
			},
		})
		if err != nil {
			return err
		}
		loadbalancer, err := lbx.NewApplicationLoadBalancer(ctx, "loadbalancer", &lbx.ApplicationLoadBalancerArgs{
			SubnetIds: vpc.PublicSubnetIds,
			SecurityGroups: pulumi.StringArray{
				lbsg.ID(),
			},
		})
		if err != nil {
			return err
		}
		containersg, err := ec2.NewSecurityGroup(ctx, "containersg", &ec2.SecurityGroupArgs{
			VpcId: vpc.VpcId,
			Ingress: ec2.SecurityGroupIngressArray{
				&ec2.SecurityGroupIngressArgs{
					FromPort: pulumi.Int(port),
					ToPort:   pulumi.Int(port),
					Protocol: pulumi.String("tcp"),
					SecurityGroups: pulumi.StringArray{
						lbsg.ID(),
					},
				},
			},
			Egress: ec2.SecurityGroupEgressArray{
				&ec2.SecurityGroupEgressArgs{
					FromPort: pulumi.Int(0),
					ToPort:   pulumi.Int(0),
					Protocol: pulumi.String("-1"),
					CidrBlocks: pulumi.StringArray{
						pulumi.String("0.0.0.0/0"),
					},
				},
			},
		})
		if err != nil {
			return err
		}
		_, err = ecsx.NewFargateService(ctx, "service", &ecsx.FargateServiceArgs{
			Cluster: cluster.Arn,
			TaskDefinitionArgs: &ecsx.FargateServiceTaskDefinitionArgs{
				Container: &ecsx.TaskDefinitionContainerDefinitionArgs{
					Image:     image.ImageUri,
					Cpu:       pulumi.Int(cpu),
					Memory:    pulumi.Int(memory),
					Essential: pulumi.Bool(true),
					PortMappings: ecsx.TaskDefinitionPortMappingArray{
						&ecsx.TaskDefinitionPortMappingArgs{
							ContainerPort: pulumi.Int(port),
							TargetGroup:   loadbalancer.DefaultTargetGroup,
						},
					},
				},
			},
			NetworkConfiguration: &ecs.ServiceNetworkConfigurationArgs{
				Subnets: vpc.PrivateSubnetIds,
				SecurityGroups: pulumi.StringArray{
					containersg.ID(),
				},
			},
			DeploymentCircuitBreaker: &ecs.ServiceDeploymentCircuitBreakerArgs{
				Enable:   pulumi.Bool(true),
				Rollback: pulumi.Bool(true),
			},
		})
		if err != nil {
			return err
		}
		ctx.Export("url", pulumi.Sprintf("http://%s", loadbalancer.LoadBalancer.DnsName()))
		return nil
	})
}
This deployed properly. I also tried converting from YAML to Typescript and to Python; neither worked properly right away, and some tweaks were needed there as well.
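Those conversions use the same command with a different --language target, for example pulumi convert --language typescript --generate-only and pulumi convert --language python --generate-only.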
Final notes
I really like both the YAML language option and the pulumi convert command. YAML support is a nice approach if you want to start out with a simpler interface. In combination with the convert command, it can also be a stepping stone towards using programming languages with Pulumi.
The conversion was not perfect out of the box, but it goes a long way. Pulumi YAML only recently became generally available, and Crosswalk for AWS, which I use here, has not yet reached its multi-language 1.0 release, so some glitches are not surprising.
Have you tried these yourself, and what is your experience with these tools?