An independent guide to building modern software for serverless and native cloud

Writing Code for DynamoDB Data Access

This tutorial references the code in the aws-connectedcar-dotnet-core repository. If you're new to this course, see the introduction for information about setting up your workstation and getting the sample code.

The next three tutorials cover writing .Net client code for DynamoDB. In this first tutorial, we’ll look at the high-level DynamoDB data access interface, show how to write data-mapping classes and type converters, and see how to instantiate client connections and perform basic read and write operations.

Data Service Basics

Let’s start by looking at the GetDealer method in the DealerService class. This method, shown below, is a good example of basic data retrieval using what’s called the “high-level” client interface (which we’ll explain shortly):
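A condensed sketch of what this method looks like is shown below; the exact code is in the repository, and the helper names used here (ServiceContext, GetConfig, Translate) are illustrative placeholders rather than the repository's actual members:

    public async Task<Dealer> GetDealer(string dealerId)
    {
        // Obtain the high-level DynamoDB context from the injected service context
        IDynamoDBContext dbContext = ServiceContext.GetDbContext();

        // Load the mapped DealerItem by its hash key; the operation config carries
        // the table-name override that's resolved at run time (explained below)
        DealerItem item = await dbContext.LoadAsync<DealerItem>(dealerId, GetConfig("DealerTable"));

        // Translate the table-specific item into the database-agnostic Dealer class
        return (item == null) ? null : Translate(item);
    }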

Basic though it is, this method highlights some key points about how these services are implemented in the sample code.

First, the method returns a database-agnostic Dealer class from the ConnectedCar.Shared project. This follows the portability strategy we outlined previously in the section on Lambdas. Along similar lines, the dbContext variable that connects with DynamoDB in this method is obtained from the IServiceContext interface. This interface has implementations in the sample code that enable both local and cloud runtime scenarios. Lastly, the interactions with DynamoDB within this service use the DealerItem class, which is mapped to the Dealer table. Instances of this class are translated back to those of the Dealer class when returned to clients.

That’s a quick overview. Now let’s dig into the details.

Writing Data Mapping Classes

For .Net and Java, DynamoDB supports two different interface models, one high-level and another low-level. When using the low-level interface, your client code reads and writes individual attributes directly to tables. With the high-level interface your code reads and writes records (or “items”) using mapped data classes. In the sample code for this course you’ll only see the high-level interface, for which there’s a set of mapped data classes that you’ll find in the /src/ConnectedCar.Services/Data/Items folder.

Here’s an example of a mapped data class from the sample code:
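The sketch below is abbreviated, so the line numbers cited in the next paragraph refer to the full file in the repository; the serialized attribute names and the StateCodeEnum property are illustrative assumptions:

    using Amazon.DynamoDBv2.DataModel;

    public class DealerItem : BaseItem
    {
        // Hash key for the Dealer table, with an explicit attribute name
        [DynamoDBHashKey("dealerId")]
        public string DealerId { get; set; }

        // Ordinary non-key attribute
        [DynamoDBProperty("name")]
        public string Name { get; set; }

        // Complex attribute serialized by a custom converter class
        [DynamoDBProperty("address", typeof(AddressConverter))]
        public Address Address { get; set; }

        // Enums map without any special handling
        [DynamoDBProperty("stateCode")]
        public StateCodeEnum StateCode { get; set; }
    }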

As you can see from the code above, no class-level annotations and no special interfaces or base classes are required. This is just a plain old C# class that extends the BaseItem class from the same source folder. The hash key for the table is identified in this class with the DynamoDBHashKey annotation, shown on line 10. (Note that with these annotations, we’re explicitly setting the serialized attribute name and letter-casing to ensure consistent formatting in the table even if the serialization libraries change.) Non-key attributes use the DynamoDBProperty annotation, as you can see for the Name property, shown on line 13. Line 16 shows the use of a converter class, which we’ll cover below. You'll also note, looking at line 20, that enums don't require any special handling.

Now, looking at the parent class for the data services, shown below, you can see how this DealerItem class is mapped to the table in DynamoDB at run time. Note that on line 13 the code is looking up the name of the table from the configuration. As we saw back in the CloudFormation chapter, the names of the deployed tables will include interpolated service and environment names, which are not known to this code at build time. So this call reads the table name from an environment variable created during deployment.
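A simplified sketch of such a parent class is shown below (the cited line 13 refers to the repository file). The names here are illustrative, but overriding the table name through a DynamoDBOperationConfig is a standard way to apply a runtime table name with the high-level interface:

    using Amazon.DynamoDBv2.DataModel;

    public abstract class DataService
    {
        protected IServiceContext ServiceContext { get; }

        protected DataService(IServiceContext serviceContext)
        {
            ServiceContext = serviceContext;
        }

        // Maps an item class to its deployed table at run time by overriding the
        // table name with a value read from configuration (an environment variable
        // whose value includes the interpolated service and environment names)
        protected DynamoDBOperationConfig GetConfig(string tableNameKey)
        {
            return new DynamoDBOperationConfig
            {
                OverrideTableName = ServiceContext.GetConfig(tableNameKey)
            };
        }
    }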

In the earlier data modelling lesson we showed the two global secondary indexes that are associated with the Appointment table. Now, here's a look at the AppointmentItem class to see how the properties are annotated:
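Again, the sketch below is abbreviated (the line numbers cited in the next two paragraphs refer to the full file in the repository), and the index names, attribute names, and property types are illustrative assumptions:

    using Amazon.DynamoDBv2.DataModel;

    public class AppointmentItem : BaseItem
    {
        // Hash key of the Appointment table, and range key of both indexes
        [DynamoDBHashKey("appointmentId")]
        [DynamoDBGlobalSecondaryIndexRangeKey("RegistrationIndex", "TimeslotIndex")]
        public string AppointmentId { get; set; }

        // Non-key attribute in the table, remapped as hash key of the first index
        [DynamoDBGlobalSecondaryIndexHashKey("RegistrationIndex")]
        [DynamoDBProperty("registrationKey")]
        public string RegistrationKey { get; set; }

        // Non-key attribute in the table, remapped as hash key of the second index
        [DynamoDBGlobalSecondaryIndexHashKey("TimeslotIndex")]
        [DynamoDBProperty("timeslotKey")]
        public string TimeslotKey { get; set; }
    }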

To understand what you’re seeing in this code, you need to remember that with global secondary indexes you have the option to "project" some or all of the fields from the source table to the index. In this case, all the fields from the Appointment table are projected to both indexes. By projecting all the fields in this way, this AppointmentItem class can be used to read data from the indexes as well as from the source table. Be aware that extra annotations are required in this mapped data class to support these different cases.

For the case where this class reads or writes data for the originating Appointment table, line 10 provides the regular DynamoDBHashKey annotation for the AppointmentId attribute. The other two attributes for this case are annotated with regular property annotations, shown on lines 15 and 19.

For each of the two indexes, in contrast, one of the two non-hash-key attributes in the table functions as a hash key, while the AppointmentId attribute functions as a range key. This remapping of keys is what enables the indexes to query for different combinations of data. In the code above, this remapping can be seen in the extra annotations used. Lines 14 and 18 show the two remapped hash keys for the cases where the class is used for the indexes. Likewise, line 11 shows the use of the AppointmentId as a range key for the index cases instead of as a hash key for the table case.

Writing Converter Classes

Sometimes you have a table attribute that requires custom serialization, and for these cases you can write a converter class. You declare this class in the annotation, as for the Address attribute of the DealerItem class, shown below on line 17:

Here’s what the code looks like for the AddressConverter class referenced in the annotation above:
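The sketch below shows the general shape of such a converter; the actual AddressConverter in the repository may use a different string format than the JSON serialization assumed here:

    using System.Text.Json;
    using Amazon.DynamoDBv2.DataModel;
    using Amazon.DynamoDBv2.DocumentModel;

    public class AddressConverter : IPropertyConverter
    {
        // Serialize the Address property into the DynamoDBEntry written to the table
        public DynamoDBEntry ToEntry(object value)
        {
            return new Primitive(JsonSerializer.Serialize((Address)value));
        }

        // Deserialize the stored string back into an Address when reading
        public object FromEntry(DynamoDBEntry entry)
        {
            return JsonSerializer.Deserialize<Address>(entry.AsString());
        }
    }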

Converter classes implement the two IPropertyConverter interface methods you see above. The first method converts the attribute to the DynamoDBEntry type that’s required when writing data. The second converts the stored entry back to the attribute’s .Net type when reading data. If you look in the Converters folder in the source code, you'll find examples that serialize date/time values and the TimeslotKey and RegistrationKey compound-key types to custom-formatted strings when stored in DynamoDB.

Providing a Service Context

As covered previously, the sample code uses dependency injection in the DynamoDB data services to help make them as flexible and portable as possible. We want data services that can be run from the command line locally, or run within Lambdas or containers in the cloud. We do this by injecting a service context interface into our data services for which we have both local and cloud implementations.

Here’s the code for the service context interface. As you can see, this interface provides access to configuration values as well as to client interfaces for Cognito, SQS, and DynamoDB:
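A representative sketch of the interface is shown below; the method names are illustrative placeholders, while the return types are the actual AWS SDK for .Net client interfaces:

    using Amazon.CognitoIdentityProvider;
    using Amazon.DynamoDBv2.DataModel;
    using Amazon.SQS;

    public interface IServiceContext
    {
        // Configuration values (table names, user pool ids, queue URLs, and so on)
        string GetConfig(string name);

        // Client interfaces for the AWS services that the data services depend on
        IAmazonCognitoIdentityProvider GetCognitoProvider();
        IAmazonSQS GetSqsClient();
        IDynamoDBContext GetDbContext();
    }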

Here’s a code excerpt from the implementing CloudServiceContext class, showing the configuration that's read from environment variables:
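A sketch of that configuration code might look like the following; the environment variable names are placeholders for the ones created by the CloudFormation templates:

    using System;
    using System.Collections.Generic;

    public class CloudServiceContext // excerpt-style sketch of the IServiceContext implementation
    {
        // Configuration values are injected into the Lambda (or container) as
        // environment variables when the stack is deployed
        private readonly Dictionary<string, string> config = new Dictionary<string, string>
        {
            ["DealerTable"]      = Environment.GetEnvironmentVariable("DealerTable"),
            ["TimeslotTable"]    = Environment.GetEnvironmentVariable("TimeslotTable"),
            ["AppointmentTable"] = Environment.GetEnvironmentVariable("AppointmentTable")
        };

        public string GetConfig(string name) => config[name];
    }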

Here’s the method from the same class that returns the DynamoDB dbContext client. Note that when instantiated by this class, the client runs with access privileges inherited from the Lambda or Container execution roles at runtime. This is why you don’t see any authentication code in this method:
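A minimal sketch of the method is shown below. The parameterless AmazonDynamoDBClient constructor relies on the SDK's default credential chain, which is what picks up the execution role:

    using Amazon.DynamoDBv2;
    using Amazon.DynamoDBv2.DataModel;

    public class CloudServiceContext // sketch continued from above
    {
        private IDynamoDBContext dbContext;

        // Lazily create and cache the high-level context; no credentials are
        // supplied, so the underlying client inherits the execution role
        public IDynamoDBContext GetDbContext()
        {
            if (dbContext == null)
            {
                dbContext = new DynamoDBContext(new AmazonDynamoDBClient());
            }
            return dbContext;
        }
    }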

Performing Read & Write Operations

Switching back to the data services, here’s the EventService class, with an example method that shows how to write an item to a DynamoDB table using the high-level interface:
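A condensed sketch of such a method is shown below (the line numbers cited in the next paragraph refer to the repository file, and Translate and GetConfig are placeholder helpers):

    public async Task CreateEvent(Event evt)
    {
        // Access the cached DynamoDBContext through the service context interface
        IDynamoDBContext dbContext = ServiceContext.GetDbContext();

        // Translate the shared Event class into the mapped EventItem and save it
        EventItem item = Translate(evt);
        await dbContext.SaveAsync(item, GetConfig("EventTable"));
    }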

Line 29 in the code above is where the cached DynamoDBContext is accessed from the service context interface. Line 30 then shows the use of this database context to save the event item. Note that the DynamoDB "save" operation is equivalent to an HTTP PUT: if the keys are not found it performs an insert; otherwise it overwrites the existing item.

The code below shows the corresponding “create” method in the DealerService class. What’s different about this method is that, after saving, it blocks until a consistent read of the new item succeeds. This is a technique you can use if a client application will hand off to another application following this operation and needs to be sure that the newly created item will be visible:
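Here's a sketch of the pattern, again with placeholder helpers (the line numbers cited in the next paragraph refer to the repository file):

    public async Task<string> CreateDealer(Dealer dealer)
    {
        IDynamoDBContext dbContext = ServiceContext.GetDbContext();

        DealerItem item = Translate(dealer);
        item.DealerId = Guid.NewGuid().ToString();   // illustrative key assignment
        await dbContext.SaveAsync(item, GetConfig("DealerTable"));

        // Re-read the item with a consistent read so the method doesn't return
        // until the newly written item is visible to subsequent readers
        var operationConfig = GetConfig("DealerTable");
        operationConfig.ConsistentRead = true;

        await dbContext.LoadAsync<DealerItem>(item.DealerId, operationConfig);

        return item.DealerId;
    }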

In this example, the code on line 34 instantiates the "operationConfig" variable that instructs the item retrieval code in line 37 to perform a consistent read. The effect of this consistent read is to block this method at line 37 until the data write has been fully replicated.

This tutorial references the code in the aws-connectedcar-java-core repository. If you're new to this course, see the introduction for information about setting up your workstation and getting the sample code.

The next three tutorials cover writing Java client code for DynamoDB. In this first tutorial, we’ll look at the high-level DynamoDB data access interface, show how to write data-mapping classes and type converters, and see how to instantiate client connections and perform basic read and write operations.

Data Service Basics

Let’s start by looking at the getDealer method in the DealerService class. This method, shown below, is a good example of basic data retrieval using what’s called the “high-level” client interface (which we’ll explain shortly):
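A condensed sketch of the method is shown below; the exact code is in the repository, getDealerTable() and translate() are illustrative placeholders, and the DynamoDbTable and Key types come from the software.amazon.awssdk.enhanced.dynamodb package:

    public Dealer getDealer(String dealerId) {
        // The DynamoDbTable client maps the DealerItem class to the Dealer table
        DynamoDbTable<DealerItem> table = getDealerTable();

        // Load the item by its partition key
        DealerItem item = table.getItem(Key.builder().partitionValue(dealerId).build());

        // Translate the table-specific item into the database-agnostic Dealer class
        return (item == null) ? null : translate(item);
    }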

Basic though it is, this method highlights some key points about how these services are implemented in the sample code.

First, the method returns a database-agnostic Dealer class from the core.shared project. This follows the portability strategy we outlined previously in the section on Lambdas. Along similar lines, the dbContext variable that connects with DynamoDB in this method is obtained from the IServiceContext interface. This interface has implementations in the sample code that enable both local and cloud runtime scenarios. Lastly, the interactions with DynamoDB within this service use the DealerItem class, which is mapped to the Dealer table. Instances of this class are translated back to those of the Dealer class when returned to clients.

That’s a quick data service overview. Now let’s dig into the details.

Writing Data Mapping Classes

For .Net and Java, DynamoDB supports two different interface models, one high-level and another low-level. When using the low-level interface, your client code reads and writes individual attributes directly to tables. With the high-level interface your code reads and writes records (or “items”) using mapped data classes. In the sample code for this course you’ll only see the high-level interface, for which there’s a set of mapped data classes that you’ll find in the /core/services/data/items folder.

Here’s the DealerItem, which is one of the example mapped data classes:

As you can see from the code above, no class-level annotations and no special interfaces or base classes are required. This is just a plain old Java class that extends the BaseItem class from the same source folder. The hash key for the table is identified in this class with the DynamoDbPartitionKey annotation, shown on line 21. (Note that with these annotations, we’re explicitly setting the serialized attribute name and letter-casing to ensure consistent formatting in the table even if the serialization libraries change.) Non-key attributes use the DynamoDbAttribute annotation, as you can see for the Name property, shown on line 30. Line 40 shows the use of a converter class, which we’ll cover below. You'll also note, looking at line 50, that enums don't require any special handling.

Now, looking at the parent class for the data services, shown below, you can see how this DealerItem class is mapped to the table in DynamoDB at run time. Note that on line 53 the code is looking up the name of the table from the configuration. As we saw back in the CloudFormation chapter, the names of the deployed tables will include interpolated service and environment names, which are not known to this code at build time. So this call reads the table name from an environment variable created during deployment.
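A simplified sketch of such a parent class is shown below (the cited line 53 refers to the repository file). The names are illustrative, and the sketch assumes the item classes can be loaded as enhanced-client table schemas:

    import software.amazon.awssdk.enhanced.dynamodb.DynamoDbEnhancedClient;
    import software.amazon.awssdk.enhanced.dynamodb.DynamoDbTable;
    import software.amazon.awssdk.enhanced.dynamodb.TableSchema;

    public abstract class DataService {

        protected final ServiceContext serviceContext;

        protected DataService(ServiceContext serviceContext) {
            this.serviceContext = serviceContext;
        }

        // Maps an item class to its deployed table at run time, reading the
        // interpolated table name from an environment variable created during deployment
        protected <T> DynamoDbTable<T> getTable(String tableNameKey, Class<T> itemClass) {
            DynamoDbEnhancedClient client = serviceContext.getDynamoDbEnhancedClient();
            return client.table(serviceContext.getConfig(tableNameKey), TableSchema.fromClass(itemClass));
        }
    }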

In the earlier data modelling lesson we showed the two global secondary indexes that are associated with the Appointment table. Now, here's a look at the AppointmentItem class to see how the properties are annotated:

To understand what you’re seeing in this code, you need to remember that with global secondary indexes you have the option to "project" some or all of the fields from the source table to the index. In this case, all the fields from the Appointment table are projected to both indexes. By projecting all the fields in this way, this AppointmentItem class can be used to read data from the indexes as well as from the source table. Be aware that extra annotations are required in this mapped data class to support these different cases.

For the case where this class reads or writes data for the originating Appointment table, line 23 provides the regular DynamoDbPartitionKey annotation for the AppointmentId attribute. The other two attributes for this case are annotated with regular property annotations, shown on lines 33 and 44.

For each of the two indexes, in contrast, one of the two non-hash-key attributes in the table functions as a hash key, while the AppointmentId attribute functions as a range key. This remapping of keys is what enables the indexes to query for different combinations of data. In the code above, this remapping can be seen in the extra annotations used. Lines 34 and 45 show the two remapped hash keys for the cases where the class is used for the indexes. Likewise, line 24 shows the use of the AppointmentId as a range key for the index cases instead of as a hash key for the table case.

Writing Converter Classes

Sometimes you have a table attribute that requires custom serialization, and for these cases you can write a converter class. You declare this class in the annotation, as for the Address attribute of the DealerItem class, shown below on line 40:

Here’s what the code looks like for the AddressConverter class referenced in the annotation above:
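The sketch below shows the general shape of such a converter, with assumed Address fields and an arbitrary delimited-string format. Note that in the SDK for Java 2.x the interface also requires the type() and attributeValueType() descriptor methods alongside the two transform methods:

    import software.amazon.awssdk.enhanced.dynamodb.AttributeConverter;
    import software.amazon.awssdk.enhanced.dynamodb.AttributeValueType;
    import software.amazon.awssdk.enhanced.dynamodb.EnhancedType;
    import software.amazon.awssdk.services.dynamodb.model.AttributeValue;

    public class AddressConverter implements AttributeConverter<Address> {

        // Serialize the Address into the AttributeValue stored in the table
        @Override
        public AttributeValue transformFrom(Address address) {
            String value = address.getStreet() + "|" + address.getCity() + "|" + address.getZipCode();
            return AttributeValue.builder().s(value).build();
        }

        // Deserialize the stored string back into an Address when reading
        @Override
        public Address transformTo(AttributeValue attributeValue) {
            String[] parts = attributeValue.s().split("\\|");
            Address address = new Address();
            address.setStreet(parts[0]);
            address.setCity(parts[1]);
            address.setZipCode(parts[2]);
            return address;
        }

        @Override
        public EnhancedType<Address> type() {
            return EnhancedType.of(Address.class);
        }

        @Override
        public AttributeValueType attributeValueType() {
            return AttributeValueType.S;
        }
    }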

Converter classes implement the AttributeConverter interface methods you see above. The first transform method converts the attribute to the AttributeValue type that’s required when writing data. The second converts the stored value back to the attribute’s Java type when reading data. If you look in the converters folder in the source code, you'll find examples that serialize date/time values and the TimeslotKey and RegistrationKey compound-key types to custom-formatted strings when stored in DynamoDB.

Providing a Service Context

As covered previously, the sample code uses dependency injection in the DynamoDB data services to help make them as flexible and portable as possible. We want data services that can be run from the command line locally, or run within Lambdas or containers in the cloud. We do this by injecting a service context interface into our data services for which we have both local and cloud implementations.

Here’s the code for the service context interface. As you can see, this interface provides access to configuration values as well as to client interfaces for Cognito, SQS, and DynamoDB:
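A representative sketch of the interface is shown below; the interface and method names are illustrative placeholders, while the return types are the actual AWS SDK for Java 2.x clients:

    import software.amazon.awssdk.enhanced.dynamodb.DynamoDbEnhancedClient;
    import software.amazon.awssdk.services.cognitoidentityprovider.CognitoIdentityProviderClient;
    import software.amazon.awssdk.services.sqs.SqsClient;

    public interface ServiceContext {

        // Configuration values (table names, user pool ids, queue URLs, and so on)
        String getConfig(String name);

        // Client interfaces for the AWS services that the data services depend on
        CognitoIdentityProviderClient getCognitoClient();

        SqsClient getSqsClient();

        DynamoDbEnhancedClient getDynamoDbEnhancedClient();
    }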

Here’s a code excerpt from the implementing CloudContext class, showing the configuration that's read from environment variables:
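A sketch of that configuration code might look like the following; the environment variable names are placeholders for the ones created by the CloudFormation templates:

    import java.util.HashMap;
    import java.util.Map;

    public class CloudContext { // excerpt-style sketch of the ServiceContext implementation

        // Configuration values are injected into the Lambda (or container) as
        // environment variables when the stack is deployed
        private final Map<String, String> config = new HashMap<>();

        public CloudContext() {
            config.put("DealerTable", System.getenv("DealerTable"));
            config.put("TimeslotTable", System.getenv("TimeslotTable"));
            config.put("AppointmentTable", System.getenv("AppointmentTable"));
        }

        public String getConfig(String name) {
            return config.get(name);
        }
    }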

Here’s the method from a non-tracing implementation of the above class (i.e. one that doesn’t enable AWS X-Ray tracing) that returns the DynamoDB client. Note that when instantiated by this class, the client runs with access privileges inherited from the Lambda or Container execution roles at runtime. This is why you don’t see any authentication code in this method:
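A minimal sketch of the method is shown below (the enclosing class name is illustrative). DynamoDbClient.create() relies on the SDK's default credential and region resolution, which is what picks up the execution role:

    import software.amazon.awssdk.enhanced.dynamodb.DynamoDbEnhancedClient;
    import software.amazon.awssdk.services.dynamodb.DynamoDbClient;

    public class NonTracingCloudContext extends CloudContext {

        private DynamoDbEnhancedClient enhancedClient;

        // Lazily create and cache the enhanced client; no credentials are
        // supplied, so the underlying client inherits the execution role
        public DynamoDbEnhancedClient getDynamoDbEnhancedClient() {
            if (enhancedClient == null) {
                enhancedClient = DynamoDbEnhancedClient.builder()
                        .dynamoDbClient(DynamoDbClient.create())
                        .build();
            }
            return enhancedClient;
        }
    }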

Performing Read & Write Operations

Switching back to the data services, here’s the EventService class, with an example method that shows how to write an item to a DynamoDB table using the high-level interface:
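A condensed sketch of such a method is shown below (the line numbers cited in the next paragraph refer to the repository file, and getEventTable() and translate() are placeholder helpers):

    public void createEvent(Event event) {
        // The DynamoDbTable client is built by the parent class from the service
        // context (table name plus schema)
        DynamoDbTable<EventItem> table = getEventTable();

        // putItem inserts the item, or overwrites an existing item with the same keys
        table.putItem(translate(event));
    }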

Line 37 in the code above is where the DynamoDbTable client is accessed from the service parent class, which in turn instantiates it using the context interface. Line 42 then shows the use of this table client to save the event item. Note that the DynamoDB "save" operation is equivalent to an HTTP PUT: if the keys are not found it performs an insert; otherwise it overwrites the existing item.

The code below shows the corresponding “create” method in the DealerService class. What’s different about this method is that, after the item is created, it blocks until a consistent read of the new item succeeds. This is a technique you can use if a client application will hand off to another application following this operation and needs to be sure that the newly created item will be visible:
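Here's a sketch of the pattern, again with placeholder helpers (the line numbers cited in the next paragraph refer to the repository file):

    public String createDealer(Dealer dealer) {
        DynamoDbTable<DealerItem> table = getDealerTable();

        DealerItem item = translate(dealer);
        item.setDealerId(UUID.randomUUID().toString());   // illustrative key assignment
        table.putItem(item);

        // Build a "get item" request with the consistent-read flag set, so the
        // method doesn't return until the newly written item is visible to readers
        GetItemEnhancedRequest request = GetItemEnhancedRequest.builder()
                .key(Key.builder().partitionValue(item.getDealerId()).build())
                .consistentRead(true)
                .build();

        table.getItem(request);

        return item.getDealerId();
    }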

In this example, the code on lines 52-55 instantiates a "get item" request that includes a "consistent read" flag, which you can see being set on line 53. The effect of this consistent read is to block this method at line 57 until the data write has been fully replicated.