Writing Code for Lambda Service Components
This tutorial references the code in the aws-connectedcar-dotnet-serverless repository. If you're new to this course, see the introduction for information about setting up your workstation and getting the sample code.
In the next two tutorials we’re going to look at writing code for Lambdas. In this first tutorial we’re going to focus on the service components that are called by the Lambdas in the sample code, and how they’re implemented in a way that maximizes portability. The next tutorial will focus on the event handling code in the Lambdas themselves.
Organizing Service Component Code
As outlined earlier in this section, our goal with service components is to define interfaces that can be called from different request-handling service hosts, such as the Lambdas we’re covering in this course, or API frameworks that run in containers. How we organize the code for these interfaces and components is the first step in achieving this goal. Here’s how this code is organized in the .Net version of the sample code:
First, the code for the service components is in a “core” repository that’s separate from that used for the Lambdas. Second, within this core repository, the Core.Shared project that contains the interfaces and their exported data classes is separate from (and has no dependencies on) the Core.Services project that contains the implementation code for the service components.
Let’s look at these projects to see how their package references enforce the code portability we’re aiming for. Below, you can see the ConnectedCar.Core.Shared.csproj project file from the .Net version of the sample code. As noted above, this shared project contains the interfaces and their exported data classes. Note that there are no references to any AWS libraries for Lambda or DynamoDB:
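As a rough sketch (not the actual file from the repository), a dependency-free shared project file looks something like this; the target framework and layout are placeholders:

```xml
<!-- Illustrative sketch of a shared project file like ConnectedCar.Core.Shared.csproj:
     a plain class library with no AWS package references at all.
     Target framework is a placeholder, not necessarily the sample's. -->
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>net6.0</TargetFramework>
  </PropertyGroup>

  <!-- No PackageReference entries for Lambda or DynamoDB libraries -->

</Project>
```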
Here’s the equivalent file for the ConnectedCar.Core.Services project, which does include references to libraries for the three downstream AWS services as well as the shared project. Note the absence of any references to upstream Lambda libraries:
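Again as a sketch rather than the repository's actual file, the services project pulls in the AWS SDK packages it needs plus a reference to the shared project; the versions are placeholders, and only the two downstream services discussed in this tutorial are named:

```xml
<!-- Illustrative sketch of a services project file like ConnectedCar.Core.Services.csproj:
     AWS SDK packages for the downstream services and a reference to the shared project,
     but no upstream Lambda packages. Versions are placeholders. -->
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>net6.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="AWSSDK.DynamoDBv2" Version="3.7.*" />
    <PackageReference Include="AWSSDK.CognitoIdentityProvider" Version="3.7.*" />
    <!-- ...plus the SDK package for the third downstream service used by the sample -->
  </ItemGroup>

  <ItemGroup>
    <ProjectReference Include="..\ConnectedCar.Core.Shared\ConnectedCar.Core.Shared.csproj" />
  </ItemGroup>

</Project>
```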
Writing Service Components
As we’ve outlined above, the interfaces for the service components mustn’t contain references to anything that would preclude their being called from different request-handling code running on different frameworks. Let’s see what this interface code looks like as a result.
Here’s the interface for the Dealer service, from the ConnectedCar.Core.Shared project, showing no AWS-related library imports:
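The sketch below illustrates the general shape such an interface can take; the namespace and method signatures are illustrative rather than the sample's exact code:

```csharp
// Illustrative sketch of a portable service interface: async methods that deal only
// in the exported Dealer class, with no AWS types anywhere in the signatures.
using System.Threading.Tasks;
using ConnectedCar.Core.Shared.Data;   // assumed namespace for the Dealer class

namespace ConnectedCar.Core.Shared.Services
{
    public interface IDealerService
    {
        Task CreateDealer(Dealer dealer);
        Task<Dealer> GetDealer(string dealerId);
    }
}
```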
Continuing in the same vein, here’s the Dealer class exported by the interface shown above. And once again, note the absence of AWS-related library imports:
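Here's an illustrative sketch of such a data class: a plain C# object with no persistence attributes, where the property names are placeholders rather than the sample's actual fields:

```csharp
// Illustrative sketch of the exported Dealer class: a plain C# object with no
// DynamoDB attributes or other AWS dependencies.
namespace ConnectedCar.Core.Shared.Data
{
    public class Dealer
    {
        public string DealerId { get; set; }
        public string Name { get; set; }
        public string StateCode { get; set; }
    }
}
```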
Now, when we look at the Dealer service implementation code from the ConnectedCar.Core.Services project, we can see that the Dealer data class shown above is translated to and from a DynamoDB-specific DealerItem class, as shown below:
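The sketch below illustrates the pattern; unlike the sample, which delegates the mapping to a translator component, this version inlines the translation for brevity, and the signatures are illustrative rather than the repository's actual code:

```csharp
// Illustrative sketch of a DealerService implementation: the exported Dealer class is
// translated to and from the internal, DynamoDB-specific DealerItem before the table
// is accessed through the SDK's object-persistence context.
using System.Threading.Tasks;
using Amazon.DynamoDBv2.DataModel;
using ConnectedCar.Core.Shared.Data;       // Dealer (assumed namespace)
using ConnectedCar.Core.Shared.Services;   // IDealerService (assumed namespace)

namespace ConnectedCar.Core.Services
{
    public class DealerService : IDealerService
    {
        private readonly IDynamoDBContext _dynamoContext;

        public DealerService(IDynamoDBContext dynamoContext)
        {
            _dynamoContext = dynamoContext;
        }

        public async Task CreateDealer(Dealer dealer)
        {
            // Translate the exported Dealer object into the internal DealerItem
            var item = new DealerItem
            {
                DealerId = dealer.DealerId,
                Name = dealer.Name,
                StateCode = dealer.StateCode
            };
            await _dynamoContext.SaveAsync(item);
        }

        public async Task<Dealer> GetDealer(string dealerId)
        {
            // Load the internal item and translate it back to the exported class
            DealerItem item = await _dynamoContext.LoadAsync<DealerItem>(dealerId);
            return item == null ? null : new Dealer
            {
                DealerId = item.DealerId,
                Name = item.Name,
                StateCode = item.StateCode
            };
        }
    }
}
```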
Here’s what the DealerItem class that’s referenced in the service code above looks like. In contrast to the Dealer class, this is only used internally by the service to access the target DynamoDB table, and of necessity has dependencies on the DynamoDB frameworks:
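Here's an illustrative sketch of such an item class, using the object-persistence attributes from the AWSSDK.DynamoDBv2 package; the table and attribute names are placeholders:

```csharp
// Illustrative sketch of the internal DealerItem class: used only inside the services
// project, and annotated with the DynamoDB object-persistence attributes.
using Amazon.DynamoDBv2.DataModel;

namespace ConnectedCar.Core.Services
{
    [DynamoDBTable("Dealer")]
    public class DealerItem
    {
        [DynamoDBHashKey("dealerId")]
        public string DealerId { get; set; }

        [DynamoDBProperty("name")]
        public string Name { get; set; }

        [DynamoDBProperty("stateCode")]
        public string StateCode { get; set; }
    }
}
```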
Using Dependency Injection
The data service components in the sample code all have dependencies on a service context component that provides access to client interfaces for AWS services as well as access to configuration values. The data services also depend on a translator component that converts data objects to and from the internal data classes that we’ve looked at above.
Not surprisingly, the sample code uses dependency injection to connect these services together at runtime. This is done using the built-in DI framework in .Net, as shown below in the constructor of the BaseFunctions class:
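The sketch below illustrates this wiring with Microsoft.Extensions.DependencyInjection; the context, translator, and orchestrator types registered here are assumptions based on the components described in this tutorial, not the sample's exact registrations:

```csharp
// Illustrative sketch of a BaseFunctions constructor that wires the components
// together with the built-in .NET container.
using System;
using Amazon.DynamoDBv2.DataModel;
using Microsoft.Extensions.DependencyInjection;
using ConnectedCar.Core.Services;          // DealerService (assumed namespace)
using ConnectedCar.Core.Shared.Services;   // IDealerService (assumed namespace)

public abstract class BaseFunctions
{
    protected IServiceProvider ServiceProvider { get; }

    protected BaseFunctions()
    {
        var services = new ServiceCollection();

        // Cloud implementation of the (hypothetical) service context by default
        services.AddSingleton<IServiceContext, ServiceContext>();

        // DynamoDB object-persistence context, obtained through the service context
        services.AddSingleton<IDynamoDBContext>(provider =>
            new DynamoDBContext(provider.GetRequiredService<IServiceContext>().GetDynamoDBClient()));

        // Translator, data services, and orchestrators (names are assumptions)
        services.AddSingleton<ITranslator, Translator>();
        services.AddSingleton<IDealerService, DealerService>();
        services.AddSingleton<ICustomerService, CustomerService>();
        services.AddSingleton<ICustomerOrchestrator, CustomerOrchestrator>();

        ServiceProvider = services.BuildServiceProvider();
    }

    // Convenience accessor used by the Lambda handler methods
    protected T GetService<T>() => ServiceProvider.GetRequiredService<T>();
}
```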
Running Service Components Locally
At runtime, when we host these service components in the cloud, they have access to AWS resources through an assumed execution role. They also have access, through the system runtime, to the environment variables defined in the deployment. But for development, we also want to run these components locally, even though they may still be accessing cloud resources when doing so.
To enable these different run-time scenarios, the context interface in the sample code is injected into the service components to provide access to configuration and to the target AWS resources. When running the services locally, we can inject a local implementation of this interface. The code for this local context is shown below:
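The sketch below illustrates the idea with a hypothetical context interface and a local implementation of it; the configuration file name, keys, and client accessors are placeholders rather than the sample's actual code:

```csharp
// Illustrative sketch of a context abstraction and its local implementation:
// configuration comes from a local JSON file, and the AWS clients are authenticated
// explicitly with access keys when running on a workstation.
using System.Collections.Generic;
using System.IO;
using System.Text.Json;
using Amazon;
using Amazon.CognitoIdentityProvider;
using Amazon.DynamoDBv2;
using Amazon.Runtime;

public interface IServiceContext
{
    string GetValue(string key);
    IAmazonDynamoDB GetDynamoDBClient();
    IAmazonCognitoIdentityProvider GetCognitoClient();
}

public class LocalServiceContext : IServiceContext
{
    private readonly Dictionary<string, string> _config;

    public LocalServiceContext(string configPath = "local.settings.json")
    {
        // Configuration values are read from the local file system
        _config = JsonSerializer.Deserialize<Dictionary<string, string>>(
            File.ReadAllText(configPath));
    }

    public string GetValue(string key) => _config[key];

    public IAmazonDynamoDB GetDynamoDBClient() =>
        new AmazonDynamoDBClient(GetCredentials(), GetRegion());

    public IAmazonCognitoIdentityProvider GetCognitoClient() =>
        new AmazonCognitoIdentityProviderClient(GetCredentials(), GetRegion());

    // Running locally, the clients are authenticated with access keys
    private BasicAWSCredentials GetCredentials() =>
        new BasicAWSCredentials(_config["AccessKeyId"], _config["SecretAccessKey"]);

    private RegionEndpoint GetRegion() =>
        RegionEndpoint.GetBySystemName(_config["Region"]);
}
```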
The context code shown above reads configuration values from the local file system, and gains access to cloud resources by authenticating client interfaces with access keys. When the service components are run within Lambdas or within containerized APIs in the cloud, they can be injected with the cloud implementation of this interface. The cloud version of the service context is shown below:
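And here's a sketch of the corresponding cloud implementation of the same hypothetical interface:

```csharp
// Illustrative sketch of the cloud context: configuration comes from the environment
// variables defined in the deployment, and the clients are constructed without
// explicit credentials so the SDK's default credential chain resolves the
// assumed execution role.
using System;
using Amazon.CognitoIdentityProvider;
using Amazon.DynamoDBv2;

public class ServiceContext : IServiceContext
{
    public string GetValue(string key) =>
        Environment.GetEnvironmentVariable(key);

    public IAmazonDynamoDB GetDynamoDBClient() =>
        new AmazonDynamoDBClient();

    public IAmazonCognitoIdentityProvider GetCognitoClient() =>
        new AmazonCognitoIdentityProviderClient();
}
```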
The cloud version of this service context reads configuration from the system environment, and is able to instantiate client interfaces for services without authentication because these clients assume the service role under which their hosts are running.
Using Orchestrators for Multi-Step Operations
Lastly, you may have noticed the orchestrators in the DI code shown earlier. These are components in the sample code that Lambdas can call to encapsulate operations that involve multiple service calls.
Here’s the “CreateCustomer” method, for example, which makes use of an orchestrator component:
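The sketch below illustrates the shape of such a handler method, assuming an API Gateway proxy integration; the CustomerProvision payload class and the orchestrator interface are assumptions, and the point is simply that the handler parses the request and delegates the multi-step work:

```csharp
// Illustrative sketch of a CreateCustomer handler method that delegates to an
// orchestrator component rather than calling the downstream services itself.
using System.Net;
using System.Text.Json;
using System.Threading.Tasks;
using Amazon.Lambda.APIGatewayEvents;
using Amazon.Lambda.Core;

public class CustomerFunctions : BaseFunctions
{
    public async Task<APIGatewayProxyResponse> CreateCustomer(
        APIGatewayProxyRequest request, ILambdaContext context)
    {
        // Parse the request body into the (hypothetical) provisioning payload
        var payload = JsonSerializer.Deserialize<CustomerProvision>(request.Body);

        // The field mapping and the two service calls happen in the orchestrator
        var orchestrator = GetService<ICustomerOrchestrator>();
        await orchestrator.ProvisionCustomer(payload);

        return new APIGatewayProxyResponse { StatusCode = (int)HttpStatusCode.Created };
    }
}
```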
The orchestrator is used here because the Lambda is really performing two distinct operations. As you can see below, it’s first creating an account for the customer in Cognito, and following that step, it’s saving an item in the Customer table in DynamoDB. The field mapping and multiple service calls could all be coded in the Lambda, but as noted, our goal is to put this kind of logic into service components:
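The sketch below illustrates an orchestrator following this pattern; the interface names, the CustomerProvision payload, and the field mapping are illustrative rather than the sample's actual code:

```csharp
// Illustrative sketch of a customer orchestrator: step one creates the account in the
// Cognito user pool, step two saves the customer through the data service.
using System.Threading.Tasks;
using Amazon.CognitoIdentityProvider;
using Amazon.CognitoIdentityProvider.Model;

public class CustomerOrchestrator : ICustomerOrchestrator
{
    private readonly IAmazonCognitoIdentityProvider _cognito;
    private readonly ICustomerService _customerService;
    private readonly IServiceContext _context;

    public CustomerOrchestrator(IServiceContext context, ICustomerService customerService)
    {
        _context = context;
        _cognito = context.GetCognitoClient();
        _customerService = customerService;
    }

    public async Task ProvisionCustomer(CustomerProvision payload)
    {
        // Step 1: create the customer's account in the Cognito user pool
        await _cognito.AdminCreateUserAsync(new AdminCreateUserRequest
        {
            UserPoolId = _context.GetValue("UserPoolId"),
            Username = payload.Username,
            TemporaryPassword = payload.TemporaryPassword
        });

        // Step 2: map the payload to the exported Customer class and save it
        // to the Customer table through the data service
        await _customerService.CreateCustomer(new Customer
        {
            Username = payload.Username,
            FirstName = payload.FirstName,
            LastName = payload.LastName
        });
    }
}
```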
This tutorial references the code in the aws-connectedcar-java-serverless repository. If you're new to this course, see the introduction for information about setting up your workstation and getting the sample code.
In the next two tutorials we’re going to look at writing code for Lambdas. In this first tutorial we’re going to focus on the service components that are called by the Lambdas in the sample code, and how they’re implemented in a way that maximizes portability. The next tutorial will focus on the event handling code in the Lambdas themselves.
Organizing Service Component Code
As outlined earlier in this section, our goal with service components is to define interfaces that can be called from different request-handling service hosts, such as the Lambdas we’re covering in this course, or API frameworks that run in containers. How we organize the code for these interfaces and components is the first step in achieving this goal. Here’s how this code is organized in the Java version of the sample code:
First, the code for the service components is in a “core” repository that’s separate from that used for the Lambdas. Second, within this core repository, the shared project that contains the interfaces and their exported data classes is separate from (and has no dependencies on) the services project that contains the implementation code for the service components.
Let’s look at these projects to see how their project references enforce the code portability we’re aiming for. Below, you can see the core.shared pom.xml file from the Java sample code. As noted above, this shared project contains the interfaces and their exported data classes. Note that there are no references to any AWS libraries for Lambda or DynamoDB:
Here’s the equivalent file for the core.services project, which does include references to libraries for the three downstream AWS services as well as the core.shared project. Note the absence of any references to upstream Lambda libraries:
Writing Service Components
As we’ve outlined above, the interfaces for the service components mustn’t contain references to anything that would preclude their being called from different request-handlers running on different frameworks. Let’s see what this interface code looks like as a result.
Here’s the interface for the Dealer service, from the core.shared project, showing no AWS-related library imports:
Continuing in the same vein, here’s the Dealer class exported by the interface shown above. And once more, note the absence of AWS-related library imports:
Now, when we look at the Dealer service implementation code from the core.services project, we can see that the Dealer data class shown above is translated to and from a DynamoDB-specific DealerItem class, as shown below:
Here’s what the DealerItem class that’s referenced in the service code above looks like. In contrast to the Dealer class, this is only used internally by the service to access the target DynamoDB table, and of necessity has dependencies on the DynamoDB frameworks:
Using Dependency Injection
The data service components in the sample code all have dependencies on a service context component that provides access to client interfaces for AWS services as well as access to configuration values. The data services also depend on a translator component that converts data objects to and from the internal data classes that we’ve looked at above.
Not surprisingly, the sample code uses dependency injection to connect these services together at runtime. This is done using the Guice DI framework, as shown below in the constructor of the BaseFunction class:
Running Service Components Locally
At runtime, when we host these service components in the cloud, they have access to AWS resources through an assumed execution role. They also have access, through the system runtime, to the environment variables defined in the deployment. But for development, we also want to run these components locally, even though they may still be accessing cloud resources when doing so.
To enable these different run-time scenarios, the context interface in the sample code is injected into the service components to provide access to configuration and to the target AWS resources. When running the services locally, we can inject a local implementation of this interface. The code for this local context is shown below:
The context code shown above reads configuration values from the local file system, and gains access to cloud resources by authenticating client interfaces with access keys. When the service components are run within Lambdas or within containerized APIs in the cloud, they can be injected with the cloud implementation of this interface. The cloud version of the service context is shown below:
The cloud version of this service context reads configuration from the system environment and is able to instantiate client interfaces for services without authentication because these clients assume the service role under which their hosts are running.
Using Orchestrators for Multi-Step Operations
Lastly, you may have noticed the orchestrators in the DI code shown earlier. These are components in the sample code that Lambdas can call to encapsulate operations that involve multiple service calls.
Here’s the “createCustomer” method, for example, which makes use of an orchestrator component:
The orchestrator is used here because the Lambda is really performing two distinct operations. As you can see below, it’s first creating an account for the customer in Cognito, and following that step, it’s saving an item in the Customer table in DynamoDB. The field mapping and multiple service calls could all be coded in the Lambda, but as noted, our goal is to put this kind of logic into service components: