A Startup Founder’s Guide to Smells in Outsourced App Development

In my past roles as CTO and Chief Architect, I have been involved in scores of code and architecture reviews. One of the services that I offer as part of my CTO-as-a-Service consultancy is performing code and architecture reviews of the apps that my clients are trying to take to market. Some of the code has been developed by in-house staff. But, more often than not, I am called in to review code that has been developed by offshore development shops, the kind of shops that employ reams of “commodity” developers.

One of the reasons why I started CTO as a Service is that people used to come up to me and ask how to turn their ideas into a product. Because of the time constraints of my full-time jobs, I would offer a little bit of advice in exchange for a beer. I would usually point these people to a number of development shops that I know, hoping that the development shop would be able to take the idea and turn it into a proper app. After that initial meeting, I was no longer involved in the development of the app. I didn’t concern myself with the architecture that the development shop came up with, whether they were using a cloud provider, whether the code was clean, etc.

Startup founders who are non-technical usually have a little bit of money to develop the MVP of an application. Ideally, they would like to have a technical co-founder who would develop the app in exchange for equity, but there are a lot more ideas than there are CTOs with available time who will work solely for equity. So, the startup founder will search Upwork for a remote developer or will gamble with an off-shore development shop that has junior-level resources available for $25/hour. The startup founder usually has some wireframes and a written description of what the application is supposed to do. They throw it over the fence to the remote developers and wait with bated breath for the MVP to be delivered at some time in the future.

The problem that I constantly see is that there is nobody technical who is sitting on the side of the startup founder, representing the interests of the founder to the developers. There is nobody who is sketching out the architecture of the application. There is nobody making sure that the setup on AWS or Azure or Heroku will not incur massive cost overruns. There is nobody who is doing code reviews to make sure that the developers know what they are doing. 

This issue is not confined to startup founders. There are well-established small companies that decide, for one reason or another, that they would like to create an app. For example, a venerable law firm that decides it wants to provide legal advice to its clients through a mobile app, or an old-line tutoring company that wants to offer an electronic version of its test-prep methodology. These companies often do not have anyone technical on their side to interface with the remote development shop.


A List of Code and Architecture Smells

You hopefully go to your doctor every year for a check-up. And, sometimes, you take your car into the mechanic for a yearly tune-up where the mechanic will see if the car is in good shape.

Similarly, you can take your app’s code to an experienced IT professional to see if it is well-designed and resistant to bugs. During this process, the IT professional will examine the code for certain “smells”.

A “code smell” can be defined as:

“Smells are certain structures in the code that indicate a violation of fundamental design principles and negatively impact design quality.”

Someone with many years of experience developing systems (such as myself) can take a look at a code base or an architecture, and instinctively detect if there are funny smells around it.

The list of code and architecture smells comes from real-life reviews that I have done over the years. I will be adding to this list as I do more reviews for my clients. The list will never contain any mentions of my clients, my past employers, any specific products, or any development shops.

This list is designed for non-technical startup founders. As such, I will explain each smell and why it is bad for your app.


Code Smells

Too Many Hands in the Code

Someone’s coding style is like their fingerprint. When you give your app to a remote development shop to work on, you have no idea how many different people are going to write the code. Transitioning between different sets of developers takes time, and knowledge transfer can be spotty. Too many hands in the code might mean that the development shop has a lot of turnover, it could mean that they are taking developers and transitioning them to higher-paying projects, or it could mean that they want to get your app done as quickly as possible and are attacking it in a highly parallel fashion.

Lack of Error Checking and Exception Handling

There is nothing worse than having your app crash all of the time. If an app is unreliable, users will be reluctant to use it. Have you ever experienced the “spinner of death” in an application, where the app seems to be stuck? Nothing will drive users away faster than having their computers or phones lock up.

Every function call should be checked for null arguments, bad values, null return values, and other unexpected situations. Edge cases should be tested (e.g., a negative number used where a positive number is expected). A consistent exception-handling policy should be implemented.
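
As a minimal sketch of what this kind of defensive checking looks like, here is an illustrative example in Go, where errors are returned values rather than exceptions. The function and the quote map are hypothetical, not code from any client project.

package quotes

import (
    "errors"
    "fmt"
)

// ErrNotFound is returned when there is no price for a symbol.
var ErrNotFound = errors.New("symbol not found")

// LatestPrice checks every input and intermediate result instead of
// assuming that they are good.
func LatestPrice(prices map[string]float64, symbol string) (float64, error) {
    // Null/empty argument checks.
    if prices == nil || symbol == "" {
        return 0, errors.New("prices and symbol must be provided")
    }
    price, ok := prices[symbol]
    if !ok {
        // Unexpected situation: no quote available for this symbol.
        return 0, ErrNotFound
    }
    // Edge case: a negative price where a positive one is expected.
    if price < 0 {
        return 0, fmt.Errorf("got invalid price %f for %s", price, symbol)
    }
    return price, nil
}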

Lack of Comments and Documentation

During the lifetime of your application, the source code will pass through many hands. Remote development shops can rotate different developers in and out of your code base. Although the lack of comments is not strictly a “code smell”, it will result in a longer ramp-up time for new developers learning your application’s code.

At the very least, there should be comments for every module or class, and there should be comments for every public function and property.
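
As an illustration, in Go a comment on every exported package, type, and function doubles as documentation that tools such as godoc can extract. The names below are hypothetical:

// Package alerts manages price-alert subscriptions and notifies users
// when a target price is breached.
package alerts

// AlertManager checks current prices against subscribed target prices.
type AlertManager struct {
    // QuoteCheckInterval is the number of seconds between price checks.
    QuoteCheckInterval int
}

// CheckForPriceBreaches compares the latest prices against every
// subscription and returns the symbols that crossed their targets.
func (a *AlertManager) CheckForPriceBreaches() []string {
    // ... implementation omitted for brevity ...
    return nil
}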

Comments not only assist developers in learning a codebase; they can also be used to generate system documentation. There are tools that will scan the source code and generate various types of documentation.

If your application has APIs that are meant to be used by other third-party developers, the API documentation can be automatically generated. The API documentation should adhere to the OpenAPI (aka Swagger) specification. Once the API documentation is in OpenAPI format, it can be presented in an easy-to-read format on a web page.

In addition to the comments in the code, all architectural decisions should ideally be memorialized. A Wiki, such as Confluence, is ideal for keeping records of design decisions. You should insist that your remote development shop delivers to you documentation around all major design decisions, and what alternatives were considered and discarded.

Copying of Code

Development companies that are under time pressure to implement apps can find themselves inserting copies of code into multiple places within the application. There are multiple problems associated with copy-pasting. First, if a bug is found in that piece of code, it needs to be fixed in multiple places. Second, the code can possibly “leak” responsibilities.

A good design will establish a firm “division of responsibilities” between different parts of the code. For example, there might be only one place in the code that is responsible for debiting and crediting a customer’s bank account. If you copy that code into different places within an app’s codebase, then the responsibility of debiting and crediting a user’s account will have “leaked” into other parts of the code, making the app more difficult to maintain.

Breaking the Separation of Concerns

The code of an application should be composed of different layers. The typical layers include User Interface, Services, Repository/Persistence, Models, Controllers, Framework, Communications/Messaging, etc. Each layer has a specific responsibility. This rule is called “separation of concerns”.

A developer should not let responsibilities leak from one layer into another. I have seen code where a specific vendor’s user-interface library was referenced in the Persistence Layer. This not only creates a tight binding between the User Interface and the Persistence Layer, but it makes it more difficult to switch to another User Interface toolkit.
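
As a sketch of the idea in Go (the interface and types below are illustrative), the service layer can depend on a small repository interface instead of a concrete database or UI library, so responsibilities cannot leak across the boundary:

package service

// CustomerRepository is the only thing the service layer knows about
// persistence. The concrete implementation (Postgres, DynamoDB, an
// in-memory fake for tests) lives in the repository layer and can be
// swapped without touching this code.
type CustomerRepository interface {
    GetBalance(customerID string) (float64, error)
    SaveBalance(customerID string, balance float64) error
}

// BillingService holds business logic only; it knows nothing about
// SQL, HTTP, or any user-interface toolkit.
type BillingService struct {
    Repo CustomerRepository
}

func (s *BillingService) Debit(customerID string, amount float64) error {
    balance, err := s.Repo.GetBalance(customerID)
    if err != nil {
        return err
    }
    return s.Repo.SaveBalance(customerID, balance-amount)
}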

Another way of doing “separation of concerns” is to divide the application into various microservices. During an architecture review, we might want to consider the effort and the benefits of refactoring the codebase into microservices.

Improper Class Hierarchies

Well-designed code is akin to beautiful poetry. One thing that we look for in a code review is a sensible hierarchy of classes, as well as adherence to well-known object-oriented techniques. Common functionality might be put into a base class, which other classes inherit from.

A common base class for business objects is useful in order to implement common functionality such as validation, flagging if a model has been changed, shared properties (such as ids and audit information), etc.
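
In a language with classes, this would be a shared base class; in Go, the same idea is commonly expressed by embedding a struct that carries the shared fields. A minimal sketch, with illustrative field names:

package models

import "time"

// BaseModel holds the properties that every business object shares:
// an id, audit information, and a dirty flag for change tracking.
type BaseModel struct {
    ID         string
    CreatedAt  time.Time
    ModifiedAt time.Time
    IsDirty    bool
}

// MarkChanged flags the model as modified and stamps the audit time.
func (m *BaseModel) MarkChanged() {
    m.IsDirty = true
    m.ModifiedAt = time.Now()
}

// Customer gets ID, the audit fields, and MarkChanged for free by
// embedding BaseModel.
type Customer struct {
    BaseModel
    Name    string
    Balance float64
}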

Single Points of Failure

One of the most important things that we look for in an architecture evaluation is the single points of failure within an application. If a single service fails, or the connection to an important external service fails, will the entire application be hosed?

Your developers have no control over third-party systems that your application depends on. But you need to see if there are other third-party systems that can provide the same information that your application can use as a secondary source.

If your application is broken up into microservices, then there are various high-availability patterns that your developers can use in order to make your application more resilient to failures. 

Key Person Risk

You want to avoid situations where a single developer on the remote development team is the only person who knows a key technology that your application is built upon. Likewise, any knowledge about “tricky” or “complex” parts of the codebase or the architecture should not lie in the hands of a single person. If it does, then you have “key person risk”. The consequence is that if this person leaves the company, you may not be able to fix errors or make improvements in the app.

It’s important to memorialize all important architectural decisions. A wiki such as Confluence is good for capturing all of the information about the architecture and the development process.

Resume-Oriented Development

Sometimes, the only reason why a developer chooses a certain technology is that they want to put that technology on their resume, even if that technology is not particularly right for the application. The choice of a nascent technology by a single developer can also create key person risk, especially if that developer decides to leave the team.

In particular, I have seen many occurrences of a developer choosing the wrong database technology simply because the developer wants to gain experience with NoSQL databases at the expense of the client.

It is difficult to back an application out of resume-oriented development after the fact. The best time to catch this is at the design stage. An experienced architect will be able to evaluate the various technology choices and identify whether any of them could present a key person risk.

Building When You Should Be Buying

The decision to create your own software from scratch vs buying an existing product is always a difficult one. 

When you incorporate someone else’s product into your architecture, you are beholden to the whims of that company to fix bugs and to release new features that you might need. You might also have the issue of “vendor lock-in”, where it is impossible to move away from a vendor’s product. On the positive side, you save money and effort by having your developers use something that is fully-baked.

During the design phase of a product, an experienced architect can recommend existing third-party products and frameworks that you can use within your application in order to get faster time-to-market without an excessive amount of risk.

One of the advantages of using a cloud platform like AWS or Azure is that there are new platform services coming out all the time, and these services are fully supported by the cloud vendors. Almost every need that an app has is covered by some XXX-as-a-Service offering.

There is also a world of open source software that can be leveraged. There are certain rules to follow when choosing open source frameworks, but that is the topic of another article.

SDLC Smells

The Software Development Lifecycle (SDLC) describes how software is supposed to be developed, from requirements and design through implementation, testing, deployment, and maintenance. In reality, many remote development shops do not follow each and every step of a proper SDLC practice, mainly out of concerns about cost and time.

Lack of Unit Tests and Integration Tests

Developers are changing code all of the time, fixing bugs and adding new features. As they are creating new capabilities for your app, you have to feel comfortable that the existing code will continue to work. 

This is where unit testing and integration testing come in. A unit test exercises a specific piece of functionality in the code. Ideally, every function within a module (or class) has an associated unit test, and every code path in the application is exercised by the test suite. The percentage of all code paths that have unit tests is called “code coverage”. An application should aim for 100% code coverage, but unless your project is developed using Test Driven Development (TDD), that percentage often falls short.

It’s extremely important to not only test the “happy path”, but to test that the code will not break when it encounters bad data. This means that you need to write unit tests that pass bad data into functions. You also need to be able to cause the code to generate an exception and write tests to make sure that the code is generating these exceptions when encountering error conditions.
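
A table-driven test in Go is a convenient way to cover the happy path and the bad-data paths together. This is a self-contained sketch; the Debit function is hypothetical and is shown inline only to keep the example complete:

package billing

import (
    "errors"
    "testing"
)

// Debit is the function under test. In a real codebase it would live
// in the production code, not in the test file.
func Debit(balance, amount float64) (float64, error) {
    if amount <= 0 {
        return balance, errors.New("amount must be positive")
    }
    if amount > balance {
        return balance, errors.New("insufficient funds")
    }
    return balance - amount, nil
}

func TestDebit(t *testing.T) {
    tests := []struct {
        name    string
        balance float64
        amount  float64
        want    float64
        wantErr bool
    }{
        {"happy path", 100, 25, 75, false},
        {"negative amount", 100, -5, 100, true}, // edge case: bad data
        {"overdraft", 100, 500, 100, true},      // error condition
    }
    for _, tc := range tests {
        got, err := Debit(tc.balance, tc.amount)
        if (err != nil) != tc.wantErr || got != tc.want {
            t.Errorf("%s: Debit(%v, %v) = %v, %v", tc.name, tc.balance, tc.amount, got, err)
        }
    }
}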

Of course, you need to make sure that the code actually has error checking and exception handling; its absence is a huge code smell in itself.

An integration test will test the interaction between multiple modules. For example, a function that is supposed to debit a bank account should result in a decrease in the customer’s “available balance” column in the database. So, not only do you need to make sure that the “DebitCustomer()” function is tested, but you need to read the database to make sure that the available amount has changed.

A regression test compares the results of two separate runs of the test suite to make sure that the values produced by the latest run are not different (or differ only within a certain tolerance) from the results of the previous run. In addition to comparing actual values that are produced by the app, we can also test the execution time of various parts of the app. We want to make sure that a new version of the code is not noticeably slower than the previous version. If the app starts to crawl, then users might get frustrated and abandon it.

There are tests that can be written to test what happens to your system when it is under heavy load, which is what will happen when Oprah Winfrey mentions your app in an interview. These are called soak tests and stress tests.

No Continuous Integration

Continuous Integration is an important part of the Software Development Life Cycle. When a developer checks code into a source code repository (like GitHub or Bitbucket), all of the unit tests are run automatically. A failing unit or integration test identifies problems immediately, before those problems seep deep into the app and find their way into the production version.

You might have several developers working on your app. For example, in many development shops, you would have one developer working on the backend (the database and the server), you would have one developer working on the front-end (maybe a website), you might have one developer working on the iPhone version of the app, and finally, you may have another developer working on the Android version.

All of these developers would have the code “checked out” from the source code repository, maybe in a separate branch. When it comes time to release a new version of the app, all of the developers would have to merge their code back into the main codebase. The longer a developer has a branch checked out, the more prone the app is to errors when the developers check in their code. If things don’t go smoothly, as they often don’t, it can result in what is known as “integration hell”. This costs you time and money.

Check-ins and integration should happen frequently, at least once per day, if not more often. The mantra is “Check in early, check in often”.

Some remote development shops do not use continuous integration. Why is this? Writing unit tests and integration tests is tedious and costly. Development shops make money by churning out as many apps as possible in a given timeframe. When you do not have testing and continuous integration set up, you incur “technical debt”. And, sadly, tech debt always comes back to bite you.


Data Access Smells

No Caching

In order to access and store your app’s data, you have to make a call into a database. However, database calls are relatively expensive. There is the time it takes for the data to be transferred over the network. Certain database operations can take a relatively long time to execute. And there can be “deadlocks” that occur when multiple callers try to access the same data.

One of the common architectural smells is the absence of caching. There is no better performance-killer than hammering a database with a lot of operations, especially writes.

Good application design will employ “caching”. With caching, data is stored in memory within the application. The cache is checked for the data your app needs, and only if that data is not found will the database be accessed.
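
Here is a sketch of the check-the-cache-first pattern in Go. The map-based cache and the loadFromDatabase callback are illustrative; a production cache would also handle expiry and size limits.

package caching

import "sync"

// PriceCache keeps recently fetched prices in memory so that repeated
// lookups do not hit the database.
type PriceCache struct {
    mu     sync.RWMutex
    prices map[string]float64
}

func NewPriceCache() *PriceCache {
    return &PriceCache{prices: make(map[string]float64)}
}

// Get returns the cached price if present; only on a cache miss does
// it fall through to the database and remember the result.
func (c *PriceCache) Get(symbol string, loadFromDatabase func(string) (float64, error)) (float64, error) {
    c.mu.RLock()
    price, ok := c.prices[symbol]
    c.mu.RUnlock()
    if ok {
        return price, nil
    }

    price, err := loadFromDatabase(symbol)
    if err != nil {
        return 0, err
    }

    c.mu.Lock()
    c.prices[symbol] = price
    c.mu.Unlock()
    return price, nil
}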

I can’t tell you how many code reviews I have done where I have detected an absence of caching. And the introduction of even a small in-memory cache has resulted in dramatic improvements in the performance of certain parts of an app.

Multiple Avenues To Update a Database

When we use caching, it’s important to keep the cache in sync with the database. Imagine if the cache contains a value that is the customer’s available balance, and some other app goes directly to the database and updates available balance in the database. The cache will not know about this update, and as a consequence, the original app might have a “stale” value for the customer’s balance.

During an architecture review, it is important to look for all code, services, and applications that can change a database, and if possible, we need to force all database access to go through a single gateway. This way, we can ensure that any caches that the app maintains will be totally in sync with the values in the database.

If this is impossible to do, then there should be a mechanism where database-update events are broadcast to various services in the app, so that these services know that they need to refresh part or all of their cache.
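
As a sketch of that broadcast mechanism (in a real deployment the events would more likely travel over a message broker such as Redis pub/sub or SNS rather than in-process Go channels, so treat this as illustrative only):

package events

// InvalidationEvent says that a key changed in the database and any
// cached copy of it should be refreshed or evicted.
type InvalidationEvent struct {
    Key string
}

// Broadcaster fans database-update events out to every service that
// keeps a cache. Subscribers range over their channel and refresh the
// affected entries.
type Broadcaster struct {
    subscribers []chan InvalidationEvent
}

func (b *Broadcaster) Subscribe() <-chan InvalidationEvent {
    ch := make(chan InvalidationEvent, 16)
    b.subscribers = append(b.subscribers, ch)
    return ch
}

func (b *Broadcaster) Publish(ev InvalidationEvent) {
    for _, ch := range b.subscribers {
        ch <- ev
    }
}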

Bad Use of ORMs

An Object Relational Mapping framework (ORM for short) is a way for developers to translate between high-level objects and a relational database. If a developer writes the app in an object-oriented language like JavaScript, C#, or Java, it can be tedious to deconstruct an object and store each field in the appropriate tables in a database. An ORM will translate that object into SQL calls to the database.

The problem sometimes with ORMs is that the SQL code that is generated can be slow. This has been a long-running complaint with ORMs.

Careful examination should be made of the database interaction that is controlled by the ORM. There are various tools for databases like SQL Server that will monitor the calls to the database and may suggest improvements. You should consider moving frequently-used ORM-generated calls into SQL stored procedures, and calling those stored procedures directly.

Data Type Mismatches Between Database and Code

There are applications that store certain data as a non-optimal data type within the database. For example, dates and currency values should not be stored as text fields. Doing so makes it easy to store incorrectly-formatted values inside of the database. Storing dates and numbers as text strings also makes it more difficult to do arithmetic on the values. The developer first has to convert the text field into the correct data type, hope that the conversion succeeds, perform the arithmetic on the new value, and then convert the result back into a text field. These needless conversions are tedious and error-prone.

Special care also needs to be taken to make sure that the code will handle “nullable columns” correctly. These are values that are optional within the database. The code should never assume that a nullable column contains a valid value.
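
Go’s database/sql package makes that optionality explicit: scanning a nullable column into sql.NullFloat64 forces the code to check the Valid flag before using the value. The table and column below are hypothetical:

package repository

import (
    "database/sql"
    "log"
)

// printDiscount reads a nullable column and refuses to use the value
// unless the database actually returned one.
func printDiscount(db *sql.DB, customerID string) {
    var discount sql.NullFloat64
    err := db.QueryRow(
        "SELECT discount FROM customers WHERE id = $1", customerID).Scan(&discount)
    if err != nil {
        log.Printf("query failed: %v", err)
        return
    }
    if !discount.Valid {
        log.Printf("customer %s has no discount set", customerID)
        return
    }
    log.Printf("customer %s has a discount of %.2f", customerID, discount.Float64)
}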

Out-of-Sync Views

If there are multiple people working on the same record, and one person makes a change, the other person will not see that change until a fresh call to the database is made. That can lead to situations where there is stale data on somebody’s screen.

Many web applications use real-time messaging to notify the user interface that something in the model has changed. For ASP.NET MVC, there are frameworks like SignalR that will help implement real-time notifications between modules of an application.


Security Smells

Storing PII in Plaintext

Personally Identifiable Information (PII) should never be stored in plaintext. I have seen cases where configuration files contain sensitive password information that can easily be compromised by a hacker. A full scan of the codebase should be made to ensure that no passwords are stored in plaintext anywhere.

Test data should never include any PII. I have seen cases where a test database contains social security numbers and passwords in plaintext form.

Passwords and PII should never be stored in a source code repository like GitHub. It is common to store passwords in the .env file in Node.js applications. Be sure that the .env file is never checked into a repo.

If you operate in Europe, your app has to be GDPR compliant. If it is not, that can mean big fines and possibly the shutting down of the app. So take PII very seriously.

No Authentication in the Public API

Exposing an API layer for your app is a great way of encouraging third-party developers to create new extensions for your app, thereby making your app even more powerful. But, you do not want to make your app the “Wild West”, with unfettered access to your platform.

Make sure that APIs that your app exposes have proper authentication. You may want to make some APIs truly free and public, but anything that writes data to your database or changes the state of your application should have proper authentication around it.
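
A minimal sketch of what that looks like as middleware with Go’s net/http package. The header name and the simple token comparison are placeholders; a real app would validate a signed token such as a JWT or an OAuth access token.

package api

import "net/http"

// requireAuth wraps a handler and rejects any request that does not
// carry a valid credential. Read-only public endpoints can skip this
// wrapper; anything that writes data should not.
func requireAuth(validToken string, next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        token := r.Header.Get("Authorization")
        if token != "Bearer "+validToken {
            http.Error(w, "unauthorized", http.StatusUnauthorized)
            return
        }
        next.ServeHTTP(w, r)
    })
}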


Performance Smells

Frequent Polling of Services

An application can have connections to various internal and external services that contain data that is critical to the app. For example, your real-estate app might be connected to an external Multiple Listing Service (MLS) server that contains information about new homes that have come on the market. What your application may do is connect to the MLS service every few minutes, download data, see if any of that data has changed since the last time the app connected, and notify the user that something changed.

If there are multiple internal and external services that our app has to connect to and “poll” for changes, then our application can take a performance hit, especially if there are a lot of services to poll.

A much better way of getting data updates is to let the external services “push” data to your application. Your app basically subscribes to updates, sits back, and lets all of the services push events when something interesting happens.

Applications should attempt to migrate from the “pull” model to a more modern “push” model of updates. This is usually accomplished by using webhooks, which are HTTP-based POST calls to a URL when an interesting event occurs.
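
On the receiving side, a webhook is simply an HTTP endpoint that the external service POSTs to whenever something changes. Here is a hedged Go sketch, with an invented payload and URL path:

package main

import (
    "encoding/json"
    "log"
    "net/http"
)

// listingEvent is whatever the external service sends when a new home
// comes on the market; the fields here are placeholders.
type listingEvent struct {
    ListingID string  `json:"listingId"`
    Price     float64 `json:"price"`
}

func main() {
    // The external service is configured to POST to this URL, so the
    // app no longer has to poll for changes.
    http.HandleFunc("/webhooks/mls", func(w http.ResponseWriter, r *http.Request) {
        var ev listingEvent
        if err := json.NewDecoder(r.Body).Decode(&ev); err != nil {
            http.Error(w, "bad payload", http.StatusBadRequest)
            return
        }
        log.Printf("new listing %s at %.2f", ev.ListingID, ev.Price)
        w.WriteHeader(http.StatusOK)
    })
    log.Fatal(http.ListenAndServe(":8080", nil))
}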

No Provision for Scaling

My favorite thing to say to startup founders is: “What happens if Oprah mentions your product? What happens if you have 10,000 people hitting your app servers at the same time? Can your app and infrastructure handle the load, or will your app crash and burn?”

A good architecture will let an app scale up seamlessly. Cloud providers like AWS, Microsoft Azure, and Google Cloud Platform provide services that will automatically scale your application up when you need it, and back down during slow times.

An architecture review, along with proper soak and stress testing, will make a startup founder more comfortable that their application will be able to handle the “Oprah Moment”.


The list of various smells that this article contains is only a small portion of the smells and anti-patterns that a trained senior technologist will look for when evaluating the architecture and codebase of applications. CTO as a Service has over 30 years of experience writing systems, leading development teams, and doing architecture reviews. Please consider CTO as a Service to give your applications a health-check from time to time.

Marc Adler

CTO as a Service

July 2019

Writing a Redis Module on the Mac

You can extend the functionality of your Redis 4.x installation by writing custom modules in C using the Redis Module SDK. Since Redis 4.x is only available on Unix-based systems, you need to write your Redis modules on a Unix-like system such as MacOS and use compilers like gcc. (Redis for Windows is only supported up until Redis 3.2.) Your Redis module must be a Unix shared library. This shared library can be loaded into Redis when Redis is first started or can be loaded dynamically into an already-running instance of Redis.

I have attempted to document the process of writing a Redis module using gcc and using Visual Studio Code as my development environment. The example shown below comes right out of the Redis Module SDK.

Note that the Redis Module SDK is still under development. For example, it does not yet have an API that supports SET-based functions.

Prerequisites

Make sure that the Gnu gcc compiler is installed on the Mac. Open up a terminal and just enter the command

gcc

Open Microsoft’s Visual Studio Code. It’s helpful to install the official Microsoft C/C++ extension. 

Download the Source

Clone the Git repo for the Redis Module SDK. The main Github site is here. In a Terminal window, navigate to the directory where you want the Git repo to be downloaded to. Then enter the command

git clone https://github.com/RedisLabs/RedisModulesSDK.git

Modify the Source

After the source code is downloaded, edit the file rmutil/sds.h and change line 82 to

#define SDS_HDR_VAR(T,s) struct sdshdr##T *sh = (struct sdshdr##T*)((s)-(sizeof(struct sdshdr##T)));

(Change the “void*” to “struct sdshdr##T*” in order to silence the Mac’s gcc compiler)

Build the Source and the Example Module

In the Terminal, go to the root directory of the Redis Module SDK, and just enter the command

make

This will build the library (librmutil.a) that you need to link your custom modules with. It also builds the example that comes with the Redis Module SDK, producing the shared library (module.so) that is the custom module you will load into Redis.

Using Visual Studio Code

Run Visual Studio Code. Open the main directory that the Module SDK is in. We need to create JSON-based configuration files that tell Visual Studio Code how to build the application and how to run/debug the application. These configuration files go into the .vscode subdirectory under your project.

The tasks.json file will tell Visual Studio Code how to run the make command.

To run the example, you need to launch the command

/usr/local/bin/redis-4.0.6/bin/redis-server --loadmodule ./module.so

launch.json

{
 "version": "0.2.0",
 "configurations": [
   {
     "name": "(lldb) Launch",
     "type": "cppdbg",
     "request": "launch",
     "program": "/usr/local/bin/redis-4.0.6/bin/redis-server",
     "args": ["--loadmodule", "./module.so"],
     "stopAtEntry": false,
     "cwd": "${workspaceFolder}",
     "environment": [],
     "externalConsole": true,
     "MIMode": "lldb"
   }
 ]
}

tasks.json

{
 "version": "0.1.0",
 "command": "make",
 "isShellCommand": true,
 "tasks": [
     {
         "taskName": "Makefile",

         // Make this the default build command.
         "isBuildCommand": true,

         // Show the output window only if unrecognized errors occur.
         "showOutput": "always",

         // No args
         "args": ["all"],

         // Use the standard less compilation problem matcher.
         "problemMatcher": {
             "owner": "cpp",
             "fileLocation": ["relative", "${workspaceRoot}"],
             "pattern": {
                 "regexp": "^(.*):(\\d+):(\\d+):\\s+(warning|error):\\s+(.*)$",
                 "file": 1,
                 "line": 2,
                 "column": 3,
                 "severity": 4,
                 "message": 5
             }
         }
     }
 ]
}

Running the Module

In Visual Studio Code, run the debugger. This will launch a copy of Redis with your new module loaded. You can put breakpoints into your module’s code and watch Redis execute the module.

While the debugger is running a copy of Redis, open up a Terminal and run the redis-cli program. In redis-cli, enter the commands:

127.0.0.1:9979> EXAMPLE.HGETSET foo bar baz
(nil)
127.0.0.1:9979> EXAMPLE.HGETSET foo bar vaz
"baz"
127.0.0.1:9979> EXAMPLE.PARSE SUM 5 2
(integer) 7
127.0.0.1:9979> EXAMPLE.PARSE PROD 5 2
(integer) 10
127.0.0.1:9979> EXAMPLE.TEST
PASS

Creating a Slackbot on AWS using Golang – Part 3 – AWS Lambda Functions

Marc Adler

CTO as a Service

In the previous article of the series, we created a Quote Alerter for Slack and AWS. The Quote Alerter will notify a user on Slack if the price of a stock went above or below a certain target price. The Golang-based code runs on AWS and uses a Postgres database on RDS in order to store all of the alert subscriptions and the list of current stock prices.

In this article, we will migrate the quote-checking logic to an AWS Lambda function.

(There is an article on the CTO as a Service blog that discusses using Lambda with Visual Studio Code and C#/.NET. That article has some good intro material on Lambda functions on AWS, so you are encouraged to glance over it if you have any basic questions about Lambda.)

Why migrate the Slack Stock Bot to Lambda functions? Mainly for illustrative purposes for this series of articles. In reality, there might be some relatively time-consuming business logic that you might want to take out of the main code path of an application and run asynchronously with a Lambda function. In the domain of equities and quotes, you might want to have a separate serverless function that will compute some Greeks and either store those values in a database, or enrich our Slack notification messages with those Greek values (like the delta and gamma).

In the migration path that we are going to undertake, we will just start off with a simple Go-based “Hello World” lambda, and then slowly drag in the parts of the Slack Stock Bot that we need in order to implement the alert mechanism.

The source code for this project can be found here:

https://github.com/magmasystems/SlackStockSlashCommand

https://github.com/magmasystems/SlackStockSlashCommand-Lambda

Overview of the Migration

  1. Create a new Lambda Function using the AWS Lambda dashboard
  2. Create a new CloudWatch trigger that will cause the new Lambda Function to run
  3. Create a very simple Go-based Lambda using the Go/AWS SDK, and test it out
  4. Change the existing Slack Stock Bot code so that we can import packages from it easily
  5. Change the code of the new Lambda Function so that it replaces the ticker-based price breach checking
  6. Deploy the new Lambda function
  7. Test the function by manually firing the CloudWatch event

Creating a New Lambda Function on AWS

The first step in the process is to create a new Lambda Function by using the AWS Lambda dashboard.

After clicking on the Create Function button, you will be presented with a form that you need to fill out with the information about your new function.

We call our new function priceBreachChecker. We make sure that the function uses the Go runtime, and we use an execution role that we have previously set up. The execution role determines which AWS services the Lambda function can access.

After creating the lambda function, we need to specify what kind of events will trigger the execution of the function.

Creating the CloudWatch Trigger

If you recall, the current Slack Stock Bot code creates an application-based ticker that checks for price breaches at certain intervals. This uses Go's time.Ticker. At every tick, the code calls the quote service to retrieve the current prices for all of the stock symbols that have alerts on them. It then runs some SQL that asks the Postgres database which current prices have breached the price targets that were set up.

In order to simulate this ticker, we will use AWS CloudWatch events. You can set up CloudWatch to call a Lambda Function at regular intervals or on a cron-based schedule (e.g., every weekday at 12:00 PM and at 5:00 PM).

Back in the Lambda dashboard, add a new trigger. Choose CloudWatch Events from the list of triggers on the left side of the page.

Now we need to set up the interval at which the CloudWatch trigger will fire.

Click on the Add button. For the Rule Type, choose Schedule Expression, and use rate(60 minutes) as the expression. This will tell CloudWatch to fire the event every hour.

Click on the Add button. You will get confirmation that the new trigger has been added to the Lambda Function.

Before we look at the CloudWatch dashboard, notice that there is a way that you can upload a ZIP file of your Go-based Lambda code. We will not be using this. Instead, we will be using the AWS CLI from within Visual Studio Code to deploy our code.

Changing the CloudWatch Trigger

Let’s look at the CloudWatch dashboard in order to verify that we have a trigger. On the left side of the page, find the Events / Rules menu item and click on it.

If you click on the name of the rule, you can see some further details.

Under the Actions button, choose Edit. You will see that there is a rule that controls the interval that the event will be fired. If you want to change the interval at which the Price Breach Checker will run, then adjust this interval.

You can also have this rule trigger additional Lambda functions. Let’s say that we have a separate price-fetching Lambda function for every different quote service we support. We can have this CloudWatch rule trigger each of the separate Lambda functions. If you want to do this, choose a new Lambda function, and click on the Add Target button.

Creating a Simple Go-based Lambda

The main docs on Lambda and Go can be found here:

https://docs.aws.amazon.com/lambda/latest/dg/go-programming-model.html

We need to download and install the AWS Lambda SDK for Go

go get github.com/aws/aws-lambda-go/lambda

Now let’s get busy with Visual Studio Code. We are going to create a new folder for our new Lambda function.

Creating Tasks for Visual Studio Code

We can create a list of tasks that Visual Studio Code will run to do the build and deploy of the Lambda function. In Visual Studio Code, go to Terminal / Configure Tasks, and edit the tasks.json file.

My tasks.json file looks like this:

{
   // https://code.visualstudio.com/docs/editor/tasks-appendix
   "version": "2.0.0",
   "tasks": [
       {
           "label": "Build",
           "type": "shell",
           "command": "go",
           "args": [ "build", "-o", "priceBreachChecker"],
           "options": {
               "env": {
                   "GOOS": "linux",
                   "GOARCH": "amd64"
               }
           },
           "group": {
               "kind": "build",
               "isDefault": true
           }
       },
       {
           "label": "Zip",
           "command": "zip",
           "args": [ "priceBreachChecker.zip", "priceBreachChecker", "appSettings.json"],
           "dependsOn":[ "Build" ]
       },
       {
           "label": "CreateAndDeploy",
           "command": "aws",
           "type": "shell",
           "args": [
               "lambda", "create-function",
               "--function-name", "priceBreachChecker",
               "--region",  "us-east-2",
               "--profile", "default",
               "--role", "arn:aws:iam::901643335044:role/service-role/woof_garden_canary",
               "--handler", "priceBreachChecker",
               "--runtime", "go1.x",
               "--zip-file", "fileb://./priceBreachChecker.zip"
           ],
           "options": {
           },
           "problemMatcher": [],
           "dependsOn":[ "Zip" ]
       },
       {
           "label": "UpdateAndDeploy",
           "command": "aws",
           "type": "shell",
           "args": [
               "lambda", "update-function-code",
               "--function-name", "priceBreachChecker",
               "--region",  "us-east-2",
               "--profile", "default",
               "--zip-file", "fileb://./priceBreachChecker.zip"
           ],
           "options": {
           },
           "problemMatcher": [],
           "dependsOn":[ "Zip" ]
       }
   ]
}

There are four tasks here.

One is the Build task, which compiles the Go code. Notice that we use two special environment variables that tell the Golang compiler about the platform that the code should be generated for.

"GOOS": "linux",
"GOARCH": "amd64"

The AWS server that runs your Lambda function runs Amazon's own version of Linux and expects Go code that is compiled for the amd64 architecture.

The next task will Zip up the compiled executable and the appSettings.json configuration file. AWS requires a ZIP file that contains the assets for your Lambda function. Notice that the Zip task has a dependency on the Build task, so if you run the Zip task, it will automatically do the build as well.

(Note: Instead of using a separate application settings file, you can set environment variables in the Lambda function dashboard, and then read those environment variables.)

The CreateAndDeploy task will not be used here, because we already created the Lambda function using the AWS Lambda dashboard.

The final task is UpdateAndDeploy. This is used to update AWS with new versions of the code. It will upload the ZIP file that was created by the Zip task. We made the UpdateAndDeploy task dependent on the Zip task so that the build, zip, and upload processes can be done with a single command.

Writing a Sample Lambda Function in Go

We will create a simple Go package which will just echo the arguments to the log.

Here is the code:

priceBreachChecker.go

package main

import (
    "context"
    "fmt"
    "log"

    ev "github.com/aws/aws-lambda-go/events"
    "github.com/aws/aws-lambda-go/lambda"
    "github.com/aws/aws-lambda-go/lambdacontext"
)

func main() {
    lambda.Start(priceBreachChecker)
}

func priceBreachChecker(ctx context.Context, event ev.CloudWatchEvent) (int, error) {
    lambdaContext, _ := lambdacontext.FromContext(ctx)
    log.Println(fmt.Sprintf("In priceBreachChecker handler: context is %+v", lambdaContext))
    log.Println(fmt.Sprintf("In priceBreachChecker handler: event is %+v", event))
    return 0, nil
}

Notice the arguments for the priceBreachChecker function. There are several different function signatures for the entry point, and somehow, the AWS Lambda runtime is able to figure out how to marshal the various triggers to the functions. The CloudWatchEvent is the struct that contains all of the information that the CloudWatch trigger generates.

Testing the Lambda

The first step to testing out this simple Lambda function is to build it, zip it up, and deploy it to AWS. To do this, I ran the Zip and the UpdateAndDeploy tasks from within Visual Studio Code.

I went into the CloudWatch Event Rules and temporarily changed the trigger interval to 30 seconds.

Then I went into the CloudWatch logs and waited until the trigger fired. Here is what the log looked like:

Success!!! The two log messages that the function generated can be seen in the CloudWatch log.

(Don’t forget to change the trigger back to 60 minutes, or else your Lambda function will run every 30 seconds)

Packaging the Slack Stock Bot

Most programming environments support the use of packages. In the world of C#, we use NuGet to import third-party packages. In the Node.js world, people use npm, and in the Java world, most developers use Maven.

When we write our new price-checking Lambda function, we would like to import the code from our existing Slack Stock Bot. We have seen that you can import packages from Github using the go get command.

Since our existing code is already up on Github, let’s import it:

go get github.com/magmasystems/SlackStockSlashCommand

Easy enough, right? But look at the various error messages that Go Get gives us. These error messages all look like this:

../../go/src/github.com/magmasystems/SlackStockSlashCommand/stockbot/stockbot.go:11:2: 
       local import "../configuration" in non-local package

What does this mean?

In the file stockbot.go, we have a bunch of imports that look like this:

import config "../configuration"

It seems that Go Get does not like any relative references in the code that it imports. By "relative reference", we mean an import whose path is relative to the importing file's directory. These references usually start with the dot character, like "../" or "./".

So what do we need to do? We need to find all relative references in the import statements in our code and turn them into references to our GitHub repository.

import "github.com/magmasystems/SlackStockSlashCommand/configuration"

You can read more about this issue here.

Now that we have fixed all of these references, and we have checked the code back into Github, we can now run the command

go get github.com/magmasystems/SlackStockSlashCommand

Merging the Slash Command Code in with the Lambda

We will import the parts of the Slack Stock Bot package that we need.

When the Lambda function is loaded, the init() function is called. This is a feature of Go. The init function is the place where you can do one-time initialization.

In the init() function, we read the configuration information (we will need the webhook part of the appSettings), we create the Stockbot (which is the interface to the quote services), and we will create the AlertsManager (which does the checking for the price breaches).

When the Lambda function is triggered, we call the function to check for the price breaches, and for every breach, we notify the user through Slack.

We insert a number of logging statements, just so we can trace the running of the code.

package main

import (
    "context"
    "fmt"
    "log"
    "time"

    ev "github.com/aws/aws-lambda-go/events"
    "github.com/aws/aws-lambda-go/lambda"
    "github.com/aws/aws-lambda-go/lambdacontext"

    "github.com/magmasystems/SlackStockSlashCommand/alerts"
    config "github.com/magmasystems/SlackStockSlashCommand/configuration"
    "github.com/magmasystems/SlackStockSlashCommand/slackmessaging"
    "github.com/magmasystems/SlackStockSlashCommand/stockbot"
)


var theBot *stockbot.Stockbot
var theAlertManager *alerts.AlertManager
var appSettings *config.AppSettings

func init() {
    // Put any one-time initialization code here
    configMgr := new(config.ConfigManager)
    appSettings = configMgr.Config()

    theBot = stockbot.CreateStockbot()
    // defer theBot.Close()

    // Create the AlertManager
    theAlertManager = alerts.CreateAlertManager(theBot)
    // defer theAlertManager.Dispose()
}

func main() {
    lambda.Start(priceBreachChecker)
}

func priceBreachChecker(ctx context.Context, event ev.CloudWatchEvent) (int, error) {
    lambdaContext, _ := lambdacontext.FromContext(ctx)
    log.Printf("In priceBreachChecker handler: context is %+v\n", lambdaContext)
    log.Printf("In priceBreachChecker handler: event is %+v\n", event)

    checkForPriceBreaches()

    return 0, nil
}

// checkForPriceBreaches - checks for price breaches
func checkForPriceBreaches() {
    fmt.Println("checkForPriceBreaches: Checking for price breaches at " + time.Now().String())

    theAlertManager.CheckForPriceBreaches(theBot, func(notification alerts.PriceBreachNotification) {
        log.Println("The notification to Slack is:")
        log.Println(notification)
        outputText := fmt.Sprintf("%s has gone %s the target price of %3.2f. The current price is %3.2f.\n",
            notification.Symbol, notification.Direction, notification.TargetPrice, notification.CurrentPrice)

        slackmessaging.PostSlackNotification(
                               notification.SlackUserName, notification.Channel, outputText, appSettings)
    })

    fmt.Printf("checkForPriceBreaches: Finished checking for price breaches at %s\n", time.Now().String())
}

That’s all we need to do for the new Lambda function. We are ready to deploy and test the code.

Deploying the New Lambda Function

Run the UpdateAndDeploy task from Visual Studio Code. You will see this output:

{
    "FunctionName": "priceBreachChecker", 
    "LastModified": "2019-06-20T13:14:25.885+0000", 
    "RevisionId": "7278be22-93ab-4bee-8c85-b4fea3a9857e", 
    "MemorySize": 512, 
    "Version": "$LATEST", 
    "Role": "arn:aws:iam::XXXXXXXXXXX:role/service-role/woof_garden_canary", 
    "Timeout": 15, 
    "Runtime": "go1.x", 
    "TracingConfig": {
        "Mode": "PassThrough"
    }, 
    "CodeSha256": "GVzIhBYObNJY4+ENZ78Emr081ApWxJPOS3KAD/AMbA4=", 
    "Description": "", 
    "VpcConfig": {
        "SubnetIds": [], 
        "VpcId": "", 
        "SecurityGroupIds": []
    }, 
    "CodeSize": 4848883, 
    "FunctionArn": "arn:aws:lambda:us-east-2:XXXXXXXXXX:function:priceBreachChecker", 
    "Handler": "priceBreachChecker"
}

This confirms that the new version of the code has been uploaded.

Testing the Lambda Function

In the Lambda Console, create a new test event.

Since our priceBreachChecker lambda function reacts to a CloudWatch trigger, we choose an Event Template that mimics a CloudWatch event.

After you click on the Create button to create the event, go back into the Lambda console and click on the Save button.

Now that the test event has been created and saved, click on the Test button in order to manually fire a CloudWatch event. You should see a Slack notification generated in the log.

Success!!! We successfully created a Lambda function that does the alerting on price breaches. And the notifications show up in Slack too.

All of the source code to this article can be found here:

https://github.com/magmasystems/SlackStockSlashCommand-Lambda

As always, comments are welcome.

Possible Enhancements

Currently, the Lambda function runs and just notifies the Slack user when a price breach occurs. We can enhance the code to compute other values, and output those values to other AWS services. In the last article, we talked about the computation of Greeks. We can send those Greek values to an SNS topic, we can store them in DynamoDb, or we can feed them into a Kinesis stream. We can do this by calling other parts of the AWS-Go SDK from within the Lambda function.

Once you have a Lambda function running inside of AWS, the possibilities are many.

About Me

Marc Adler is the founder of CTO as a Service, a consultancy that provides senior-level technical services to companies who are in need of a CTO or Chief Architect on a “pay for what you use” basis. He was formerly the Chief Architect of companies like Citigroup, MetLife, ADP, and Quantifi. He likes to get himself in trouble with his CIOs by insisting on coding.

Creating a Slackbot on AWS using Golang – Part 2 – Price-based Alerting

Introduction

In the previous article, I talked about how to create a Slack Slash Command that would return the current price of a stock. So, you could enter the command /quote MSFT into a Slack message field and it would return the current price of Microsoft stock.

The Golang-based server was first run locally and then migrated to AWS using Elastic Beanstalk.

The article ended with a list of features that I would like to eventually implement in my little Go/AWS/Slack application. This article, and subsequent articles, will focus on developing some of these features.

For this article, I wanted to implement an alerting feature in the Stockbot. With this feature, someone could enter a target price of a stock and be alerted when the current price of the stock went above or fell below the target. Maybe AMZN stock fell below $1000 a share and you want to rush to your financial advisor and buy a share or two?

This new feature requires creating a database which will be used to hold both the alerting subscriptions and the current prices of all of the stocks that all users are interested in. This will let us introduce how to set up a database in AWS and how to talk to that database from a Go application.

A New Branch

Let’s go to our Git repository for a second. We would like to create a feature branch for the new alerting feature. So let’s create a new branch on our local machine.

You can create a new branch through the command line

git checkout -b alerting

or the branch can be created from inside Visual Studio Code:

Business Requirements – Designing the New Slash Command

The requirements of the new command are simple.

A user will tell the Stockbot that they want to be notified asynchronously through Slack whenever the current price of a stock goes above or below a certain price target.

The Stockbot will poll the quote service at regularly-scheduled intervals and will retrieve the current prices of all of the stocks that users want to be alerted on. Whenever a price breaches the alerting price, the Stockbot will send a message to the user.

By default, the user will be notified in Slack by a Direct Message (DM). The user can also choose to be notified through a specific channel. Usually, that channel is a private channel that the user has set up, just for price alerts, but it can also be a public channel.

As far as additional user interactions go, we would also like a way to list all of the alerts that a user has, a way to delete a specific alert, and a way to delete all alerts.

When the user creates an alert, we want to make sure that the symbol is a valid stock. If not, an error should be returned. If the user already has an alert for this symbol, the alert will be updated with the new price target (and possibly with the new direction).

Given these requirements, we can design the new slash command.

/quote-alert [symbol price [below]] [symbol delete] [deleteall] [#channel]

Examples:

/quote-alert - lists all of the alerts you have
/quote-alert HELP - prints a help message
/quote-alert MSFT 130 - sends an alert when Microsoft stock reaches $130
/quote-alert MSFT 130 #myalerts - sends an alert to the #myalerts channel when MSFT stock reaches $130
/quote-alert MSFT 130 BELOW - sends an alert when Microsoft stock goes below $130
/quote-alert MSFT delete - removes the existing alert on MSFT stock that you have subscribed to
/quote-alert deleteall - deletes all alerts that you have

The Alerts Database

Given these requirements, we can design the schema for a database that will hold the subscriptions. The database can also hold current prices.

Amazon’s RDS service gives the developer a choice of several different databases to create. For this exercise, let’s choose Postgres since it is one of the databases available on the RDS Free Tier.

Every alert should have at least the following properties:

  • A unique id
  • The id of the Slack user that created the alert
  • The symbol of the stock that the user wants to monitor
  • The target price of the stock
  • The “direction” of the check (above or below the price)
  • The Slack channel that the user wants to be notified in
    • If the channel is empty, then the user should be sent a direct message through Slack
  • An indication that tells us whether this alert has been triggered
    • In case the sending of the alert is slow, we don’t want alerts to pile up

We also would like a simple table that holds the current price for each symbol that has an alert on it.

Let’s look at the SQL that will be used to create the database. Since we will be creating a Postgres database, the SQL below has the Postgres dialect.

create type slackstockbot.direction as enum ('ABOVE', 'BELOW');

alter type slackstockbot.direction owner to magmasystems;

create table slackstockbot.alertsubscription
(
  id serial not null
     constraint alertsubscription_pk
        primary key,
  slackuser varchar(128) not null,
  symbol varchar(16) not null,
  targetprice double precision not null,
  wasnotified boolean default false,
  direction slackstockbot.direction default 'ABOVE'::slackstockbot.direction,
  channel varchar(128) default ''::character varying not null
);

alter table slackstockbot.alertsubscription owner to magmasystems;

create unique index alertsubscription_id_uindex
  on slackstockbot.alertsubscription (id);

create table slackstockbot.stockprice
(
  symbol varchar(32) not null,
  price double precision not null,
  time timestamp
);

alter table slackstockbot.stockprice owner to magmasystems;

create index stockprice_symbol_index
  on slackstockbot.stockprice (symbol);

In addition to the two tables shown above, we may want to think of having a table with administrative info, such as the time that the last polling was done, the frequency of the polling, and the name of the quote service to pull from. We will leave this for a future exercise.

Finding Price Breaches using SQL

We can join the AlertSubscriptions with the current prices and find all rows that have prices that are either above or below the price target.

SELECT a.slackuser, a.channel, a.symbol, a.targetprice, a.direction, p.price
  FROM slackstockbot.alertsubscription a, slackstockbot.stockprice p
  WHERE a.wasnotified = false AND a.symbol = p.symbol AND p.price > 0 AND
     ( (a.direction = 'ABOVE' AND p.price >= a.targetprice) OR 
       (a.direction = 'BELOW' AND p.price <= a.targetprice) )

Creating the Database in AWS

If you recall from the previous article, we created a development environment for the Slack Stock Bot on Elastic Beanstalk.

If you click on the green box, you will see the dashboard for the SlackStockBot environment.

In the side panel, click on Configuration. Then scroll down until you see a panel for the Database. You will notice that it is empty.

After you click on the Modify link, you will see a list of databases that are associated with this environment. Click the Create Database button.

You will be presented with a list of database engines. If you are looking to save money, make sure that you check the box at the bottom which will only present you with options that are eligible for the RDS Free Tier. We will choose Postgres.

Name the database and pick the authentication credentials.

In the Network and Security section, I like to make the database publicly accessible so that I can administer the database from my local machine using tools like DbVisualizer or DataGrip.

After the database is created, I will use something like DataGrip to create the tables using the SQL that was shown above.

In addition to setting up this Postgres database in AWS, I also set up a local version of the database for local testing. If you recall from the first article, we used localtunnel in order to have Slack interact with a local version of the Slack Stock Bot.

Progress So Far

We designed the API for the new /quote-alert command. We also created a database and created the two tables that will hold the alert subscriptions and the local prices.

The next stage is to create a new Slash Command in Slack and hook it up to the new version of our server. Then we will write the Golang code which implements the AlertManager.

Adding the New Slash Command to Slack

In the previous article, we saw how to add a new Slash Command to Slack. Let’s do the same thing again. We will create the new /quote-alert slash command.

Once the Slash Command has been created, we need to give it permission to send a message to a user directly and to a specific channel. Click on the OAuth & Permissions link on the side panel.

Then pull down the dropdown under Select Permission Scopes and choose the two permissions.

Click on the Save Changes button.

Now that the permissions have been granted to perform certain actions, we need to set up two Webhooks for the communication. First, enable Incoming Webhooks for your app.

In the side panel, click on Incoming Webhooks, and make sure that the webhooks are activated.

Scroll down a bit and create two new webhooks.

At the end of this process, you should have two Webhooks, one for posting to a channel and one for sending a message to a user.

By default, a /quote-alert will send the price alert directly to the user, using the direct-message webhook. If you enter the command

/quote-alert MSFT 130 #myalerts

then the alert will be sent to the #myalerts channel, using the other webhook.
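
As a rough illustration of what the handler has to do with the command text, here is a hedged sketch of parsing the basic create form of /quote-alert. The function name and error message are hypothetical, not the exact code from the repository.

import (
    "errors"
    "strconv"
    "strings"
)

// parseQuoteAlert - a sketch of parsing "/quote-alert SYMBOL PRICE [#channel]".
// An empty channel means "send the alert to the user as a direct message".
func parseQuoteAlert(text string) (symbol string, targetPrice float64, channel string, err error) {
    fields := strings.Fields(text)
    if len(fields) < 2 {
        return "", 0, "", errors.New("usage: /quote-alert SYMBOL PRICE [#channel]")
    }

    symbol = strings.ToUpper(fields[0])
    targetPrice, err = strconv.ParseFloat(fields[1], 64)
    if err != nil {
        return "", 0, "", err
    }

    if len(fields) >= 3 && strings.HasPrefix(fields[2], "#") {
        channel = fields[2]
    }
    return symbol, targetPrice, channel, nil
}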


Modifying the Golang Source

Now that all of the environmental stuff has been set up, we can write some Golang code.

The source code is located here (alerting branch):
https://github.com/magmasystems/SlackStockSlashCommand

In order to access the Postgres database, we use the pq package. You need to install this package from github.com/lib/pq, and then reference it within the application.

go get github.com/lib/pq
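
As a point of reference, the pq driver registers itself with database/sql through a blank import, and the connection is opened with a key/value connection string. The sketch below is a minimal example, not the repository's exact code; the settings would come from the database section of appSettings.json that is described later in this article.

import (
    "database/sql"
    "fmt"

    _ "github.com/lib/pq" // registers the "postgres" driver with database/sql
)

// openDatabase - a minimal sketch of connecting to Postgres (local or RDS) with pq.
func openDatabase(host string, port int, dbname, user, password string, ssl bool) (*sql.DB, error) {
    sslmode := "disable"
    if ssl {
        sslmode = "require"
    }

    connStr := fmt.Sprintf("host=%s port=%d dbname=%s user=%s password=%s sslmode=%s",
        host, port, dbname, user, password, sslmode)

    db, err := sql.Open("postgres", connStr)
    if err != nil {
        return nil, err
    }

    // sql.Open does not actually connect, so Ping to verify the settings.
    if err := db.Ping(); err != nil {
        return nil, err
    }
    return db, nil
}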

Major Changes to the Code

Several things have been added to the version of the Slack Stock Bot that was developed in the previous article. We are not going to cover every change in this article, but at a high level, they include:

  • The introduction of environment-specific configuration files (appSettings[.env].json), plus a configuration manager
  • A logging manager
  • A Slack Messaging package that encapsulates all interactions with Slack
  • An AlertManager that encapsulates all of the price breach alerting logic
  • Integration with Postgres (either local or RDS)

Changes to the Configuration File

A new Database section has been added to the appSettings.json file. This contains the standard database connection information that will be used to connect to Postgres. There are also two new fields for the webhooks that the alerting mechanism will use to send messages back to Slack. Finally, there is the quoteCheckInterval, which is the number of seconds that will elapse between price checks. Bear in mind that the free quote services will limit the number of quotes that you can request per day, so you do not want your price checker running too frequently.

{
   "apiKeys": {
       "quandl": "[Your Quandl API Key]",
       "worldtrading": "[Your World Trading Data API Key]",
       "alphavantage": "[Your AlphaVantage API Key]"
   },
   "driver": "alphavantage",
   "slackSecret": [Your Slack App's Secret Key]",
   "webhook": "https://hooks.slack.com/services/[Your Webhook for Channels]",
   "dmwebhook": "https://hooks.slack.com/services/[Your Webhook for direct messaging]",
   "port": 5000,
   "database": {
       "host": "slackstockbot.XXXXXXXX.us-east-2.rds.amazonaws.com",
       "port": 5432,
       "dbname": "slackstockbot",
       "user":  "[Your database user name]",
       "password": "[Your database password]",
       "SSL": true
   },
   "quoteCheckInterval": 600,
   "disablePriceBreachChecking": false
}
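
The configuration manager reads this file and unmarshals it into a Go struct. The struct below is a sketch of how the settings could be modeled; the field names in the repository's configuration manager may differ.

// AppSettings - a sketch of a struct that appSettings.json could be unmarshaled into.
type AppSettings struct {
    APIKeys                    map[string]string `json:"apiKeys"`
    Driver                     string            `json:"driver"`
    SlackSecret                string            `json:"slackSecret"`
    Webhook                    string            `json:"webhook"`
    DMWebhook                  string            `json:"dmwebhook"`
    Port                       int               `json:"port"`
    Database                   DatabaseSettings  `json:"database"`
    QuoteCheckInterval         int               `json:"quoteCheckInterval"`
    DisablePriceBreachChecking bool              `json:"disablePriceBreachChecking"`
}

// DatabaseSettings - the connection information for the local or RDS Postgres instance.
type DatabaseSettings struct {
    Host     string `json:"host"`
    Port     int    `json:"port"`
    DBName   string `json:"dbname"`
    User     string `json:"user"`
    Password string `json:"password"`
    SSL      bool   `json:"SSL"`
}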

Polling for Price Breaches

In application.go, a Ticker is created using an interval which is set in the appSettings.json configuration file. Every time the ticker elapses, a function is called to check the prices.

// Create a ticker that will continually check for a price breach
if !appSettings.DisablePriceBreachChecking {
    priceBreachCheckingTicker = time.NewTicker(time.Duration(appSettings.QuoteCheckInterval) * time.Second)
    defer priceBreachCheckingTicker.Stop()

    // Every time the ticker elapses, we check for a price breach
    go func() {
        for range priceBreachCheckingTicker.C {
            onPriceBreachTickerElapsed()
        }
    }()
}

The responsibility for the price checks is in the AlertManager. We pass a callback function that the AlertManager calls for every price breach. This callback will create an informative message and will post it to Slack using a webhook.

// onPriceBreachTickerElapsed - This gets called every time the Price Breach Ticker ticks
func onPriceBreachTickerElapsed() {
    theAlertManager.CheckForPriceBreaches(theBot, func(notification alerts.PriceBreachNotification) {
        outputText := fmt.Sprintf("%s has gone %s the target price of %3.2f. The current price is %3.2f.\n",
            notification.Symbol, notification.Direction, notification.TargetPrice, notification.CurrentPrice)
        postSlackNotification(notification, outputText)
    })
}

The check for price breaches works like this:

  • Get a list of all of the stocks that have alerts on them
  • Call the quote service to get the current prices for all of the stocks
  • Save the prices to the database
  • Use SQL to check for price breaches. The SQL code for the check is shown at the start of this article.
  • For each alert that was triggered, set a flag that “logically deletes” the alert so that we do not check again.
    • We can enhance the /quote-alert command so that an alert can be reset
  • Call the passed-in callback function, which is responsible for alerting Slack.

// CheckForPriceBreaches - gets called by the application at periodic intervals to check for price breaches
func (alertManager *AlertManager) CheckForPriceBreaches(stockbot *stockbot.Stockbot, callback func(PriceBreachNotification)) {
    // Get the latest quotes
    prices := alertManager.GetQuotesForAlerts(stockbot)
    if prices == nil {
        return
    }

    // Save the prices to the database
    alertManager.SavePrices(prices)

    // Check for any price breaches
    notifications := alertManager.GetPriceBreaches()

    // Go through all of the price breaches and notify the Slack user
    for _, notification := range notifications {
        // Set the wasNotified field to TRUE on the alert
        alertManager.setWasNotified(notification.SubscriptionID)

        // Do the notification to slack synchronously
        callback(notification)
    }
}
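
The "logical delete" in the loop above can be a one-line UPDATE of the wasnotified column. Here is a minimal sketch, again assuming that the AlertManager holds a *sql.DB in a db field:

// setWasNotified - a sketch of logically deleting an alert so that it does not fire again.
func (alertManager *AlertManager) setWasNotified(subscriptionID int) {
    _, err := alertManager.db.Exec(
        "UPDATE slackstockbot.alertsubscription SET wasnotified = true WHERE id = $1",
        subscriptionID)
    if err != nil {
        log.Printf("setWasNotified: %v\n", err)
    }
}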

A Word About Architecture and Strategy

By this time, you may be wondering why we used a SQL-based query to detect price breaches, especially if we ever intend to support real-time streaming quotes. After all, making calls to the database is costly in terms of performance, latency, and (in the case of RDS) monetary cost.

Wouldn’t we be much better off using some in-memory collections? For example, we could use a map where the keys are the list of stocks that have active alerts, and each value could be a sorted collection of alerts.

One of the reasons that I chose the database-centric way of doing the comparison is simply so that I could introduce a database into this series of articles. I wanted to give the reader exposure to using databases both in Golang and in an Elastic Beanstalk environment.

If we wanted to be architecturally flexible, we could introduce a Strategy Pattern. We could have a strategy for database-based comparisons and a different strategy for memory-based comparisons.

We can implement the Strategy Pattern by using a factory to create the quote comparator, and we can assign the comparator to a field within the AlertManager struct.

type QuoteComparator interface {
    findPriceBreaches(alerts AlertMap, currentQuotes []QuoteInfo)
}

type AlertManager struct {
    . . .
    quoteComparator QuoteComparator
    . . .
}

func createAlertManager() {
    . . .
    alertManager.quoteComparator, _ = quoteComparatorFactory("memory")
    . . .
}

func quoteComparatorFactory(strategy string) (comparator QuoteComparator, errs error) {
    switch strategy {
    case "database":
        return &DatabaseQuoteComparator{}, nil
    case "memory":
        return &MemoryQuoteComparator{}, nil
    default:
        return nil, errors.New("the strategy cannot be found")
    }
}
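
For illustration, the in-memory strategy might look something like the sketch below. The shapes of AlertMap and QuoteInfo are assumptions (the article does not define them), so treat this as a sketch rather than the repository's code.

// MemoryQuoteComparator - a sketch of the in-memory strategy. It assumes that
// AlertMap maps a symbol to its active alerts, and that QuoteInfo carries a
// Symbol and a LastPrice.
type MemoryQuoteComparator struct{}

func (comparator *MemoryQuoteComparator) findPriceBreaches(alerts AlertMap, currentQuotes []QuoteInfo) {
    for _, quote := range currentQuotes {
        for _, alert := range alerts[quote.Symbol] {
            breachedAbove := alert.Direction == "ABOVE" && quote.LastPrice >= alert.TargetPrice
            breachedBelow := alert.Direction == "BELOW" && quote.LastPrice <= alert.TargetPrice
            if breachedAbove || breachedBelow {
                // Hand the breach off to whatever notification mechanism the
                // AlertManager uses (a callback, a channel, etc.).
            }
        }
    }
}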

Deploying to Elastic Beanstalk

We need to change the build script (the build.sh file that our Buildfile invokes) so that it fetches the pq library for Postgres, and so that the app’s configuration file ends up in the same directory as the binary. The new build.sh is:

go get github.com/nlopes/slack
go get github.com/lib/pq
go build -o bin/application application.go
cp ./appSettings.json bin

An important thing to note is that, by default, Go applications on Elastic Beanstalk use Port 5000. If you change the port from within the configuration file, then you should also tell Slack that the Stock Bot command uses the new port.

Another thing that we might want to consider is deploying our own Postgres database with Docker instead of using RDS. Elastic Beanstalk fully supports setting up Go applications using Docker. We can leave Docker for a future article.

Testing the new Slash Command

Let’s put in an alert for Johnson and Johnson’s stock.

/quote-alert JNJ 140.0

We can see that the Slack Stock Bot works.

That message looks a lot nicer than just printing out plain old text. Slack allows you to format output in different ways.

attachment := slack.Attachment{
    Color:    "good",
    Fallback: "You successfully posted by Incoming Webhook URL!",
    Text: outputText,
    //Footer:        "slack api",
    //FooterIcon:    "https://platform.slack-edge.com/img/default_application_icon.png",
    Ts: json.Number(strconv.FormatInt(time.Now().Unix(), 10)),
}

msg := slack.WebhookMessage{
    Attachments: []slack.Attachment{attachment},
    Username:    slackUserName,
    Channel:     slackChannel,
}
    
slack.PostWebhook(getWebhook(slackChannel, appSettings), &msg)

Slack also supports something called Blocks, which allow more complex formatting and options for the user to interact with your message. Conceivably, we could use Blocks to present a “Buy” or “Sell” button, which would send an order to the user’s financial advisor.

Merging the Alerting Branch Into Master

Now that we are done implementing the alerting feature, we can merge the alerting branch back into the master.

Go to the Github repository.

Click on the green button that is labeled Compare & pull request.

Type in some comments and then click on Create pull request.

You will see that there are no merge conflicts. Click on the Merge pull request button.

Confirm the merge

You will get confirmation that everything was merged successfully. Now you can pull the updated master branch to your local machine.

Summary

In this article, we enhanced the original Slack Stock Bot code so that the user could subscribe to alerts. The alerts were stored in an AWS RDS database, and we used a SQL-based strategy to detect any price breaches. We came up with a new Slash Command called /quote-alert which allows a user to create or delete a price breach alert. Finally, we deployed the new code to Elastic Beanstalk and successfully tested it out.

In the next article, we will make a few more enhancements. One of the things that I am thinking of is making the price comparison into an AWS Lambda function. We can also use the new Slack Messaging package to implement a simple workflow. We should start putting in unit tests, and we can start taking advantage of AWS CodeBuild and CodeDeploy.

Stay tuned for the next article.

Appendix

Trouble Connecting to the Postgres Database from Elastic Beanstalk

If you find that you are having problems connecting the Slack Stock Bot to RDS, go into the EC2 instance that hosts the database and change the Incoming Connection rules.

  1. Go into the RDS dashboard and find your database. Then click on the name of your database.
  2. In the dashboard for your database, go to the Security Group Rules section, and find the Security Group that is associated with Inbound connections. Click on that.
  3. In the Security Group, look at the Inbound tab. Make sure that port 5432 (Postgres) is open to your application.

Creating a Golang-based Slackbot on AWS

Marc Adler

CTO as a Service

Introduction

Since leaving the corporate workforce and starting CTO as a Service, I have been slowly learning some things that have been on my TODO list for a while. Not having full-time management duties frees up your time, and every day, I find that there is so much more to learn. So, as I wind my way down the TODO list, I figured that I would start documenting some of my learnings so that they might be of use to others.

Even though I have been a Chief Architect and CTO for the last 15 years, I have still kept myself very technical, and I still code for pleasure, and occasionally, for my CTO as a Service clients. I am pretty good at C#, Java, C++, and NodeJS/TypeScript. I can also stumble around in Python and Scala.

One of the languages that I have been meaning to teach myself is Golang. I kept hearing that Go is a great language for writing distributed systems, and I certainly have written my fair share of distributed systems. I started life way back when as a C programmer, and with Golang, I feel that I have come full-circle. The nice thing about Golang is the support for writing multi-threaded applications.

I always like to write something useful when I learn about new technologies. I have been spending an increasing amount of time in Slack, and I come from the world of finance. So I figured that I could combine the two for my first application in Go.

The source code for this project can be found here:

https://github.com/magmasystems/SlackStockSlashCommand

Outline of the Steps We Will Take

  1. Create a console-based Go program that gets the price of a stock
  2. Make the application run in a web server
  3. Run the application using a local tunnel
  4. Change the code so it uses the Go-based Slack API to support a Slack Slash Command
  5. Create a new Slack application that has a Slash Command
  6. Point the new Slack application to the application that is running on the local server
  7. Test the Slash Command from within Slack
  8. Migrate to AWS by creating a new Elastic Beanstalk-based application
  9. Migrate our existing code so that it runs on Elastic Beanstalk
  10. Deploy the code to Elastic Beanstalk
  11. Change our Slack application so that it points to the new Elastic Beanstalk server
  12. Test the Slash Command again from within Slack

First Steps

The first thing that I wanted to do was just to write a simple Go program that retrieved the price of MSFT stock and printed it out on the terminal. Easy enough, right? Just a simple HTTP GET request to the website of a quote provider.

It used to be as simple as making a call to the Yahoo Finance API. However, Yahoo deprecated their API, so I had to do a search for other quote providers who had up-to-date quote data that you could access for free. I did a search on Quora and found this discussion. I decided to try three quote providers: Quandl, AlphaVantage and World Trading Data.

In order to be flexible in choosing a specific quote provider, I implemented a driver factory in the code. I also put the authentication information for each quote provider within the application’s configuration file.

I used Visual Studio Code as my IDE for this project. VSC has extensions that support Golang and provides a very light way to just dive right in and write Golang code.

The code below shows the simple main loop. You are prompted to type the name of a symbol, and then the quote provider is called to retrieve the price of the stock.

package main

import (
   "bufio"
   "encoding/json"
   "errors"
   "fmt"
   "io/ioutil"
   "log"
   "os"
   "strings"

   av "./alphavantageprovider"
   quandl "./quandlprovider"
   q "./quoteproviders"
   wtd "./worldtradingdata"
)

var quoteProvider q.QuoteProvider

func main() {
   appSettings := getConfig()
   driver := appSettings.Driver
   apiKey := appSettings.APIKeys[driver]

   quoteProvider, _ = quoteProviderFactory(driver, apiKey)

   scanner := bufio.NewScanner(os.Stdin)
   print("Enter the symbol: ")
   for scanner.Scan() {
       symbol := scanner.Text()
       if len(symbol) == 0 {
           break
       }
       price := quote(symbol)
       fmt.Println(price)
       print("Enter the symbol: ")
   }
}

The quoteProviderFactory method simply returns the driver whose name was specified in the appSettings.json file.

// quoteProviderFactory - a factory that creates a quote provider
func quoteProviderFactory(providerName string, apiKey string) (q.QuoteProvider, error) {
   var provider q.QuoteProvider

   switch strings.ToLower(providerName) {
   case "alphavantage":
       provider = av.CreateQuoteProvider(apiKey)
   case "worldtradingdata":
       provider = wtd.CreateQuoteProvider(apiKey)
   case "quandl":
       provider = quandl.CreateQuoteProvider(apiKey)
   default:
       return nil, errors.New("the Quote Provider cannot be found")
   }

   return provider, nil
}

In the C# world, I would probably have put the full .NET type name of the driver within the config file, and used Activator.CreateInstance() to instantiate the driver. I don’t like having to explicitly reference the namespace of the individual drivers in Golang. I just have to get used to the fact that Golang does not have the same “power” as C#.

The Quote Provider

The quote provider package just provides a simple way of requesting the prices of a stock. We basically do the following steps:

  1. Format a URL for the specific quote service. That URL contains the name of the stock.
  2. Make an HTTP GET call to the quote service’s API.
  3. Marshal the returned payload into a Golang struct.
  4. Return the value of the field in the struct that has the stock’s current price.

All of the quote providers “inherit” from a “base class” called BaseQuoteProvider. I use quotes around the terms “inherit” and “base class” because Golang has no concept of classes and inheritance. Golang uses composition instead of inheritance.
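
For a concrete picture of the composition, here is a sketch of what the embedded BaseQuoteProvider might contain. The PrepareURL and FetchJSONResponse helpers are the ones used by the concrete providers below; the actual implementation in the repository may differ.

import (
    "io/ioutil"
    "net/http"
    "strings"
)

// BaseQuoteProvider - a sketch of the shared struct that each concrete provider embeds.
type BaseQuoteProvider struct {
    APIKey string
}

// PrepareURL - substitutes the symbol and the API key into a provider's URL template.
func (provider BaseQuoteProvider) PrepareURL(urlTemplate string, symbol string) string {
    url := strings.Replace(urlTemplate, "{symbol}", symbol, 1)
    return strings.Replace(url, "{apiKey}", provider.APIKey, 1)
}

// FetchJSONResponse - performs the HTTP GET and returns the raw JSON payload.
func (provider BaseQuoteProvider) FetchJSONResponse(url string) ([]byte, error) {
    resp, err := http.Get(url)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()
    return ioutil.ReadAll(resp.Body)
}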

const quoteURL = "https://www.alphavantage.co/query?function=GLOBAL_QUOTE&symbol={symbol}&apikey={apiKey}"

// AVQuoteProvider - gets quotes from the provider
type AVQuoteProvider struct {
   qp.BaseQuoteProvider
}

// CreateQuoteProvider - creates a new quote provider
func CreateQuoteProvider(apiKey string) qp.QuoteProvider {
   quoteProvider := new(AVQuoteProvider)
   quoteProvider.APIKey = apiKey
   return quoteProvider
}

// FetchQuote - gets a quote
func (provider AVQuoteProvider) FetchQuote(symbol string) float32 {
   url := provider.PrepareURL(quoteURL, symbol)
   payload, err := provider.FetchJSONResponse(url)

   if err == nil {
       data := new(quoteData)
       json.Unmarshal(payload, &data)
       //fmt.Println(data)

       f, _ := strconv.ParseFloat(data.GlobalQuote.Price, 32)
       return float32(f)
   }

   return 0
}

Now that everything was working, it was time to start thinking about integrating the quote provider with Slack.

Integrating the Quote Provider with Slack

Since I spend so much time within Slack, I would like the ability to manually check the current price of a stock from within a Slack channel. I want to issue a Slack Slash Command like “/quote symbol”, and have Slack print out the name of the stock and its current price.

Note: I called this project a Stock “bot”, but in Slack vernacular, a bot and a slash command are two different things. A bot is like a Slack user, and it has full access to the Slack message stream.

There are many enhancements that can be made to this “quote server”, such as having the most recent prices delivered at a regularly-scheduled interval, or alerting a user when a stock crosses some sort of limit. But, for now, I want to keep things very simple and just be able to see the price of a single stock when I want it.

This article was helpful in outlining the steps that you need to take in order to create a new Slack application that supports Slash Commands.

The Slack API and Golang

The first thing that I needed to do was to find a Golang version of the Slack API. There seems to be one Github project that is popular among Go developers. This package can be found here:

https://github.com/nlopes/slack

The Golang/Slack API has some structs and methods that marshal HTTP requests to and from Slack. All that I needed to do was read an HTTP GET request that comes from Slack, parse the request into a SlashCommand object, call the QuoteProvider to retrieve the price of the stock and return that data back to Slack. A fairly simple enhancement.

We have to import the Golang/Slack package. In C# you would use NuGet, and in Java, you might use Maven. In Golang, you need to download the package to your local machine. In a terminal, run the command:

go get github.com/nlopes/slack

This command will download the package and put it into a directory that is in your GOPATH. On my MacBook, it is placed in the directory ~/go/pkg/darwin_amd64/github.com/nlopes/slack

Inside the program, you can import directly from a URL.

import "github.com/nlopes/slack"

In the code, we will start a web server to process the requests from Slack. We need to first retrieve the signing secret that Slack gives us when we create a new Slack app. More on that below.

We have an HTTP request handler. The request is marshaled into a SlashCommand object, and the quote provider is called. The data is formatted and returned to Slack.

// Get the signing secret from the config
signingSecret := appSettings.SlackSecret
if signingSecret == "" {
   log.Fatal("The signing secret is not in the appSettings.json file")
}

// The HTTP request handler
http.HandleFunc("/quote", func(w http.ResponseWriter, r *http.Request) {
   slashCommand, err := processIncomingRequest(r, w, signingSecret)
   if err != nil {
       return
   }

    // See which slash command the message contains
   switch slashCommand.Command {
   case "/quote":
       getQuotes(slashCommand, w)

   default:
       // Unknown command
       w.WriteHeader(http.StatusInternalServerError)
       return
   }
})

The incoming request is first verified against the signing secret, just to make sure that there are no man-in-the-middle attacks. The new SlashCommand object is returned to the caller.

func processIncomingRequest(r *http.Request, w http.ResponseWriter, signingSecret string) (slashCommand slack.SlashCommand, errs error) {
   verifier, err := slack.NewSecretsVerifier(r.Header, signingSecret)
   if err != nil {
       w.WriteHeader(http.StatusInternalServerError)
       return slashCommand, err
   }

   r.Body = ioutil.NopCloser(io.TeeReader(r.Body, &verifier))
   slashCommand, err = slack.SlashCommandParse(r)
   if err != nil {
       w.WriteHeader(http.StatusInternalServerError)
       return slashCommand, err
   }

   // Verify that the request came from Slack
   if err = verifier.Ensure(); err != nil {
       w.WriteHeader(http.StatusUnauthorized)
       return slashCommand, err
   }

   return slashCommand, nil
}

The getQuotes() function parses the slash command in order to get the multiple stock symbols. We call the quote provider as a Goroutine, and wait on a channel for the quote provider to retrieve all of the quotes.

We format the symbols and the prices into a single text block, and we create a SlackMsg that will contain the response. We then send the JSON-encoded message back to Slack.

func getQuotes(slashCommand slack.SlashCommand, w http.ResponseWriter) {
   outputText := ""

   symbols := strings.Split(slashCommand.Text, ",")
   go func() {
       theBot.QuoteAsync(symbols)
   }()

   select {
   case quotes := <-theBot.QuoteReceived:
       for _, q := range quotes {
           outputText += fmt.Sprintf("%s: %3.2f\n", strings.ToUpper(q.Symbol), q.LastPrice)
       }
       // Create an output message for Slack and turn it into Json
       outputPayload := &slack.Msg{Text: outputText}
       bytes, err := json.Marshal(outputPayload)

       // Was there a problem marshalling?
       if err != nil {
           w.WriteHeader(http.StatusInternalServerError)
           return
       }
       // Send the output back to Slack
       w.Header().Set("Content-Type", "application/json")
       w.Write(bytes)

   case <-time.After(3 * time.Second):
       w.WriteHeader(http.StatusInternalServerError)
   }
}

As you can see, all we did to integrate the quote provider with Slack is to read the request from Slack, get the prices, marshal the data into a response that Slack can understand, and send the response back to Slack.

Now we have to create a new Slack application and a Slash Command and hook our code up to Slack.

Hooking up Slack to the SlashCommand

The first thing we need to do is to tell Slack how to access the quote server. But first, we will talk about local development for testing.

Local Tunnel

For a first step, I want to have my quote server run on a local web server on my laptop. But how will Slack know how to “reach in” and communicate with my local web server? The answer is to use a local tunnel proxy.

There are a few frameworks for establishing a local tunnel between Slack and your laptop, such as ngrok and localtunnel.

For now, I am going to use localtunnel. To install it, go into a Terminal and run the command

npm install -g localtunnel

Running the Stockbot Locally

Start Stockbot normally.

go run application.go

Then launch localtunnel.

$ lt --port 5000 --subdomain slackstockbot
your url is: https://slackstockbot.localtunnel.me

As you can see, Stockbot can now be accessed at https://slackstockbot.localtunnel.me, which localtunnel forwards to port 5000 on the laptop. Remember this URL, because we need to tell Slack the address to which it should direct the /quote command.

Creating a New Slack Application

The first thing to do is to create your own private Slack workspace in which you can experiment. I created a new Slack workspace for my CTO as a Service consultancy. This new workspace can be found at ctoasaservice.slack.com.

The Slack API homepage allows you to create a new Slack app.

Slack will automatically assign you various secret codes that you will use for authorization and verification purposes.

Then you will choose the box that says “Create a Slash Command”. You will be presented with another form in which you will specify the syntax of the new Slash Command, along with the Request URL (remember I told you to remember that localtunnel URL).

The /quote command will take a comma-separated list of strings, where each string is the symbol of a stock.

After saving the form, Slack will confirm that it knows about the new Slash Command.

Now we can set up OAuth and permissions so that Slack can finish connecting your Slack workspace to the new app.

Click on the Install App to Workspace button.

You will get your OAuth token. Also, input the redirect URL.

When all of this is done, Slack will ask you to install the app within your Slack workspace.

Click on the Install button. When this is done, Slack will show you that the Stockbot app is now installed within your workspace.

Testing the Stockbot

Run the Stockbot app and run the localtunnel

go run application.go
lt --port 5000 --subdomain slackstockbot

Now go into the Slack workspace, and in the message area, type a slash. Slack will begin to show you the list of slash commands that are available. As you type more, Slack will further filter the list of available commands. Finally, when you type /quote, Slack will show you the Stockbot command.

Type /quote AMZN. Slack should then come back with the current price of Amazon’s stock.

Success !!!!!!

Next Steps – Moving to the Cloud

Now that we have everything running on a local web server, the next step is to move it to an external host. For that, we will use Amazon Web Services (AWS). There is a service on AWS called Elastic Beanstalk that makes the process of setting up a web application very simple. There are a few small files that we will need to add to our application in order to have it work properly within Elastic Beanstalk.

In order to move the Stockbot to Elastic Beanstalk, I am going to take a slight detour. I will set up the Elastic Beanstalk web server, download the sample Go-based application code that Elastic Beanstalk generates, merge the Stockbot code into the generated code, and then deploy the merged code up to Elastic Beanstalk.

An Outline of What We Will Do

  1. Create the Elastic Beanstalk-based Go application for the Slack Stock Bot. This application will come with some sample code that Elastic Beanstalk generates.
  2. Set up a directory on our local machine for our Slack Stock Bot source code, and initialize that directory.
  3. Set up SSH
  4. Copy the generated sample code to our local directory so that we can have a jump-start.
  5. Modify the generated code so that it implements all of the logic in our Slack Stock Bot.
  6. Deploy the new code to the Elastic Beanstalk environment

Setting up Elastic Beanstalk on AWS

Prerequisites

  • Download the Elastic Beanstalk command line utility to your local computer.
  • On AWS, create a new key pair, and call it aws-eb.
    • After the key pairs are created, the files aws-eb and aws-eb.pub should be located in your ~/.ssh directory.

Create the new ElasticBeanstalk Application

Go to ElasticBeanstalk. We are going to create a new Go application called Slackstockbot.

I chose the option to create a sample application, just so I have some AWS config files that I know will work.

After clicking on the Create New Application link, we will see this

Create the New ElasticBeanstalk Environment

An application can have multiple environments. For example, one environment might be “production” and another environment might be “development”.

In the dashboard, find the Actions button and choose the Create New Environment menu. Then create a new Web Server Environment.

After you click on the Select button, you will get a dialog that lets you configure the new environment.

After you click on the Create button, ElasticBeanstalk will start to create a new environment. This takes a few minutes.

When the new environment has been created, you can see it in the ElasticBeanstalk dashboard.

If you navigate to this new site using Chrome, you will see the following website:

Getting the Source Code Ready

Create a new directory that will hold the source code of the new Slackstockbot. I created a new directory in ~/Projects/SlackStockBot.

We need to initialize the source code directory for ElasticBeanstalk to use. We created SlackStockBot in the us-east-2 region.

Run the command

eb init

You will see a list of AWS regions. After we choose the proper region, you should see SlackStockBot come up in the list of available applications.

Setting up SSH on the New Environment

We will eventually need to SSH into the new EC2 instance that is associated with the new environment. From the same directory that you used above, enter the command:

eb ssh --setup slackstockbot-dev

SSH into the New Server

Run the command

eb ssh slackstockbot-dev

You should see something like this:

The newly-deployed Go app will be in the /var/app/current directory.

To find out the IP address of the new instance, you can go into the EC2 dashboard and find the machine that was just created for the new instance.

The Go Source Code to the Sample Website

When we first set up the EB application, we chose to have sample code generated for us. The sample code is shown below.

A log file is set up, and the HTTP server listens on port 5000 for GET and POST requests. If a GET / request is received, it serves up index.html. If a POST / request is received, the body of the post is logged. If a POST /scheduled request is received, then some info from the request headers is logged.

package main

import (
  "io/ioutil"
  "log"
  "net/http"
  "os"
)

func main() {
  port := os.Getenv("PORT")
  if port == "" {
    port = "5000"
  }

  f, _ := os.Create("/var/log/golang/golang-server.log")
  defer f.Close()
  log.SetOutput(f)

  const indexPage = "public/index.html"
  http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
    if r.Method == "POST" {
      if buf, err := ioutil.ReadAll(r.Body); err == nil {
        log.Printf("Received message: %s\n", string(buf))
      }
    } else {
      log.Printf("Serving %s to %s...\n", indexPage, r.RemoteAddr)
      http.ServeFile(w, r, indexPage)
    }
  })

  http.HandleFunc("/scheduled", func(w http.ResponseWriter, r *http.Request) {
    if r.Method == "POST" {
      log.Printf("Received task %s scheduled at %s\n",
        r.Header.Get("X-Aws-Sqsd-Taskname"), r.Header.Get("X-Aws-Sqsd-Scheduled-At"))
    }
  })

  log.Printf("Listening on port %s\n\n", port)
  http.ListenAndServe(":"+port, nil)
}

We can change this source code so that the logic for the Slack Stock Bot is in there.

In the source directory is a Procfile. It’s just a single line:

web: bin/application

It specifies the name and path of the program to start. In this case, the compiled Go file named application should be run.

There is also a Buildfile that tells ElasticBeanstalk how to build your application. In this case, it’s just a single-line file:

build: go build -o bin/application application.go

Copying Files To and From the New EC2 Machine

Now we can use scp to recursively copy all of the files from the EC2 machine to the current directory on our local machine:

scp -r -i ~/.ssh/slackstockbot-dev.pem ec2-user@3.13.171.203:/var/app/current/* .

Notice that the key is in a file called slackstockbot-dev.pem. When I first set up SSH on the new server, it created a private key file called aws-eb (without an extension), because there was already a file called aws-eb.pem in the ~/.ssh directory. I copied aws-eb to a file named slackstockbot-dev.pem because it’s a more descriptive name.

Note that we can also use FileZilla instead of using scp.

Modify the Source Code

Now that we have downloaded the Elastic Beanstalk-generated source code to our laptop, we need to merge the Slack Stock Bot code that we already wrote with the code that Elastic Beanstalk expects. Luckily, there isn’t too much to do.

There are a bunch of configuration and build files that were generated that Elastic Beanstalk needs. One is a directory called .elasticbeanstalk that contains some configuration files that Elastic Beanstalk needs. One is a file named Buildfile that tells Elastic Beanstalk how to build the source code that you deploy. The final file is named Procfile, and it tells Elastic Beanstalk how to run your Go application.

I want to mention how we need to change the Buildfile so that it does what we need.

The Slack Stock Bot is written in Go, and in order to interact with Slack, we use a package that is found on Github. This package is found at github.com/nlopes/slack. In order to import this package, we have the following line in our application.go file:

import "github.com/nlopes/slack"

Before Elastic Beanstalk builds the code, it has to install this package locally. We usually do this by issuing the command:

go get github.com/nlopes/slack

We need to make this command part of the build process. So, we will create a small shell file named build.sh that has the commands in it that will build the Stock Bot.

build.sh

go get github.com/nlopes/slack
go build -o bin/application application.go
cp ./appSettings.json bin

We also need to change the Buildfile to this:

Buildfile

build: build.sh

Deploying a New Version of the Application

We have to ZIP up the source of the application. Run the ZIP command from within the project’s root directory and zip up everything in it, so that the Buildfile, Procfile, and source files sit at the top level of the archive. (Important: do not zip the project folder itself from its parent directory, or everything will be nested inside a subfolder.)

We can upload and deploy the new code from the Elastic Beanstalk console.

When you click on the Deploy button, you will see Elastic Beanstalk stop the environment, upload and build the new code, and restart the environment with the new code deployed in the /var/app/current directory.

If you see that the application did not start up properly, you will have to examine the log files. In the event that the github.com/nlopes/slack package did not download correctly, you may need to SSH into the server and pull it down yourself using the command go get github.com/nlopes/slack.

Pointing to the new URL

If you recall from above, our Slack Stock Bot still points to our local web server through the local tunnel. Now that we are being hosted on AWS, Slack needs to know about this new location.

You need to go back into the Slack API website and change the URL of the Slack Stock Bot so that it points to the new Elastic Beanstalk environment.

Click on SlackStockBot

Click on Add features and functionality

Click on Slash Commands

Click on /quote and enter the URL of the Elastic Beanstalk environment:

http://slackstockbot-dev.us-east-2.elasticbeanstalk.com/quote

The Future

We have accomplished our mission, which was to write a first Golang application, integrate it with a Slack SlashCommand, and run the server on AWS.

There are some enhancements which I would like to make in the future.

  1. Make this available to other Slack workspaces besides my own. See item 2 below on why this is not feasible right now. (Hint – we are in danger of exhausting the quota of free quotes very quickly)
  2. Free unlimited real-time quotes. The three quote services all have limits around the number of quotes that you can request. Ideally, I would like to use a quote service that provides an unlimited number of quotes for free. Maybe if you are from Bloomberg or Reuters and you are reading this, how about giving me access to free quotes in exchange for attribution 🙂
  3. Alerting. I would like to have a user input a stock symbol and a target price, and be alerted through Slack when the stock reaches that target. This means using a database and storing a list of users, their webhooks, the symbols and the target prices. We could check stocks against their targets on a daily basis, or we can schedule the checks on a more frequent basis. It would also be ideal if the quote service provided alerts and could call into our server when a stock hits the target.
  4. More advanced analytics. We can deliver more information about the stock other than its current price.
  5. Graphs and better formatting. We can use Slack’s Blocks to provide a richer user interface.
  6. Trading. Wouldn’t it be cool to hook up an interface from Slack to your broker? Of course, there are all sorts of compliance and legal issues, but nevertheless, we can dream.
  7. Serverless. We can easily transform the quote-retrieval process into a lambda function.

About Me

Marc Adler is the founder of CTO as a Service, a consultancy that provides senior-level technical services to companies who are in need of a CTO or Chief Architect on a “pay for what you use” basis. He was formerly the Chief Architect of companies like Citigroup, MetLife, ADP and Quantifi. He likes to get himself in trouble with his CIOs by insisting on coding.

Onboarding Senior Developers – Keys to Success

I have hired a bunch of senior developers for some of my clients as part of my CTO-as-a-Service consultancy. I have also hired many senior developers in my past lives in large corporations.

I think that the main thing that I need to ensure is that the new developer does not experience a sense of regret and frustration when they walk in the door. I remember the things that have frustrated me in the past, and I make sure that these situations are not repeated with the new developer.

Here is what I try to have set up on the day that they join:

1) A new laptop with the kind of power that a heavy-duty developer needs.

2) All accounts have been set up. Nothing more frustrating than having the developer sit around for a few days, waiting for access to email and Github.

3) All of the software has been licensed and (maybe) pre-installed. This includes all third-party frameworks and tools that require subscriptions.

4) Up-to-date Wiki and Jira (or whatever project-management software the company is using). Make sure that the architecture and system documentation is up to date.

5) Clear tasks defined for the first few weeks. Maybe there is a small feature that the app needs right away? Give it to the new dev to get them warmed up to the codebase.

6) All HR and Payroll-related items are done. If the person needs a company credit card, the card (or the application for the card) is waiting for them.

7) Proper introductions to the senior team. Does everyone know that the senior developer is joining? Do they already know how the senior developer aligns to the success of the company? If the senior developer is aligned to a certain business unit, do the people in that business unit know what the senior developer will be working on?

8) Make sure that there are people around to answer questions. Especially if the codebase has tricky parts that are difficult to understand. Make sure that all important architecture decisions have been memorialized on the Wiki.

There is nothing more satisfying to the new developer than hearing someone say “XYZ really hit the ground running, and has made an immediate impact”. Do everything you can to make sure that the new developer gets to hear that sentiment expressed by your senior staff.

Getting Started with Visual Studio Code, .NET Core, and AWS Lambda

The way that I have usually written C# .NET Core applications is to use Visual Studio 2017/2019 on my Windows laptop. Since my primary laptop is now a Mid-2012 MacBook Air, I have been using Visual Studio for Mac as my primary IDE for writing C# apps, mainly using Xamarin. I have been a big fan of Visual Studio Code for writing Node and Python apps, but I never tried to write a .NET Core app that targeted AWS Lambda. This article details the steps that I took in order to write my first C#-based Lambda function on my MacBook and deploy it to AWS.

One of CTO-as-a-Service’s clients has a synchronous function that is used to generate Microsoft Word documents from data that is stored in an SQL Server database. Currently, the document generation process runs on a single HP DL360 server. When many people need to generate documents at the same time, the performance of the server degrades so severely that it impacts the company.

As part of the migration to AWS that I am doing, I wanted to take this document-generation process and move it to a Lambda function on AWS. This way, when the company has to generate a lot of documents at month-end, we can kick off a separate Lambda function for each document. Each generated document will then be stored on S3, and a notification will be broadcast on SNS.

I wanted to refactor the code as a Lambda function, but I wanted to do so using Visual Studio Code on my lightweight MacBook instead of using my much-heavier Windows machine. Of course, I could have used my MacBook to remote into my Windows machine, but out of curiosity, I wanted to see if I could do everything on my MacBook.

The first step was to read the AWS documentation on writing Lambda functions using C#. The AWS reference article on .NET development and Lambda functions is here:

https://docs.aws.amazon.com/lambda/latest/dg/dotnet-programming-model.html

Prerequisites on your Computer

Install the AWS CLI

Install the extensions to the dotnet command line. These extensions will let you deploy and invoke a Lambda function from the command line.

dotnet tool install -g Amazon.Lambda.Tools

Install the AWS Lambda Templates extension to the dotnet command line, and ensure that the AWS templates have been installed

dotnet new -i Amazon.Lambda.Templates

Make sure that the new templates have been installed by running this command:

dotnet new -all

In order to generate the code, you need to know which profile you will be using when the Lambda function is deployed and executed. You can find the name of your profile by viewing the file ~/.aws/credentials. The profile should contain your access key, your secret key, and optionally, the region and the output format.

Also, before you start, go into the IAM console on AWS and make sure that the IAM role that you use has policies that will let you access Lambda functions, as well as letting the Lambda functions access certain AWS services (like S3, SNS, Dynamo, etc).

In Visual Studio Code, you should do the following:

  • Install the AWS Toolkit for Visual Studio Code extension
  • Make sure that the various C# extensions have been installed, most notably C# for Visual Studio Code
  • Install the NuGet Package Manager extension

Generate the Project and Code

Generate a simple skeleton project

Open up a Terminal. Create a new directory, and cd to that directory. For example,

mkdir MyFirstLambda

cd MyFirstLambda

We want to generate the skeleton project and code. Run the command:

dotnet new lambda.EmptyFunction --name DocGenerator --profile default --region us-east-1

This will create a directory called ./MyFirstLambda/DocGenerator.

Notice that we are using a profile named default. This should be an entry in the ~/.aws/credentials file.

In Visual Studio Code, open the folder containing your new project. Note that you should open the folder below the new directory you created. In the case above, open the DocGenerator folder, not the MyFirstLambda folder.

When you open this folder in Visual Studio Code, you will be prompted to restore some files. In addition, the .vscode directory might be created for you.

Add build, deploy, and invoke commands to the tasks.json file. (see the Appendix below)

Once the tasks.json file has been set up, you have three commands available to you. Build, Deploy, and Invoke. I just cycle through these commands using Visual Studio Code’s Terminal/Run Task menu. At the end of this cycle, you should have your new Lambda function built, deployed on AWS, and tested.

A simple Lambda function will be generated for you. This function looks like this:

// Function.cs

using Amazon.Lambda.Core;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace DocGenerator
{
  public class Function
  {
      public string FunctionHandler(string input, ILambdaContext context)
      {
          return input?.ToUpper();
      }
  }
}

The entry point is defined in the file named ./src/DocGenerator/aws-lambda-tools-defaults.json

"function-handler" :              
"DocGenerator::DocGenerator.SNSFunction::FunctionHandler"

Once your Lambda function is running, you can use the AWS Explorer panel to view the Lambda.

Adding SNS Support

In Visual Studio Code, go to the Command Palette, and use the NuGet Package Manager:Add Package function to install the Amazon.Lambda.SNSEvents package.

Write the new SNS function handler.

using Amazon.Lambda.Core;
using Amazon.Lambda.SNSEvents;
using Newtonsoft.Json;

namespace DocGenerator
{
  public class Function
  {
      public void SNSMessageFunctionHandler(SNSEvent snsEvent, ILambdaContext context)
      {
          var jsonEvent = JsonConvert.SerializeObject(snsEvent);
          var jsonContext = JsonConvert.SerializeObject(context);

          context.Logger.Log(jsonEvent);
          context.Logger.Log(jsonContext);
          context.Logger.LogLine("-----------------------------------------");
      }
  }
}

In ./src/DocGenerator/aws-lambda-tools-defaults.json, change the function handler:

"function-handler" : "DocGenerator::DocGenerator.Function::SNSMessageFunctionHandler"

Build and deploy the new code.

Testing the Code

Go into the SNS Console and create a new topic. Let’s call it Simple-Lambda-Notification.

In the SNS Console, create a new subscription for this topic. For the protocol, choose AWS Lambda. For the endpoint, choose the DocGenerator function.

In the SNS Console, publish a message on the topic. Then look at the CloudWatch log. You should see the log messages that indicate that the message was received from SNS.

The Lambda Context

The LambdaContext is passed into the handler function and contains information about the environment that the function is operating in. You can use the LambdaContext to perform logging to CloudWatch, to determine who called the function, and to get the unique request id in case you need to notify the caller asynchronously that the function has completed. The LambdaContext looks like this:

{
"FunctionName": "DocGenerator",
"FunctionVersion": "$LATEST",
"LogGroupName": "/aws/lambda/DocGenerator",
"LogStreamName": "2019/04/30/[$LATEST]10dd5bcf08994166b84a1d3189f2f18b",
"MemoryLimitInMB": 256,
"AwsRequestId": "c533b777-e333-45ca-a78e-0b12d63c513d",
"InvokedFunctionArn": "arn:aws:lambda:us-east-1:901643335044:function:DocGenerator",
"RemainingTime": "00:00:27.7060000",
"ClientContext": null,
"Identity": {
"IdentityId": "",
"IdentityPoolId": ""
},
"Logger": {}
}

Adding Support for the API Gateway

As illustrated in the architecture diagram above, the entry point to our Lambda function should be a REST call emanating from the AWS API Gateway.

You need to import the NuGet package named Amazon.Lambda.APIGatewayEvents in order to be able to use the C# classes that support the AWS API Gateway.

Create a new class called APIGatewayFunction. Here is the code:

using Amazon.Lambda.Core;
using Amazon.Lambda.APIGatewayEvents;
using Newtonsoft.Json;
using System.Collections.Generic;
using System.Net;

namespace DocGenerator
{
  public class APIGatewayFunction
  {
      public APIGatewayProxyResponse FunctionHandler(APIGatewayProxyRequest request, ILambdaContext context)
      {
          var jsonEvent = JsonConvert.SerializeObject(request);
          var jsonContext = JsonConvert.SerializeObject(context);

          context.Logger.Log(jsonEvent);
          context.Logger.Log(jsonContext);
          context.Logger.LogLine("-----------------------------------------");

          return this.CreateResponse(request);
      }

      private APIGatewayProxyResponse CreateResponse(APIGatewayProxyRequest request)
      {
          int statusCode = (request != null) ? (int) HttpStatusCode.OK
                                             : (int) HttpStatusCode.InternalServerError;

          PostPayload payload = JsonConvert.DeserializeObject<PostPayload>(
              request?.Body ?? "{\"message\": \"ERROR: No Payload\"}");

          // The response body is just the upper-case version of the string that was passed in
          string body = (payload?.message != null)
                            ? JsonConvert.SerializeObject(payload.message.ToUpper())
                            : string.Empty;

          var response = new APIGatewayProxyResponse
          {
              StatusCode = statusCode,
              Body = body,
              Headers = new Dictionary<string, string>
              {
                  { "Content-Type", "application/json" },
                  { "Access-Control-Allow-Origin", "*" }
              }
          };
 
          return response;
      }
  }

  public class PostPayload
  {
      public string message { get; set;}
  }
}

In ./src/DocGenerator/aws-lambda-tools-defaults.json, change the function handler:

"function-handler" :  "DocGenerator::DocGenerator.APIGatewayFunction::FunctionHandler"

Build and deploy the new code. Now it’s time to create a new API Gateway that will be used to handle the REST integration to the Lambda function.

Create the API Gateway

In the first step, we go into the API Gateway dashboard and create a new REST API called DocGeneratorAPI.


We will create a single resource called Document. Any API calls for this resource should contain /document in the URL path.


We will hook up the API to the new DocGenerator Lambda function that we just created. Notice that we check the Use Lambda Proxy Integration option.


After saving the API, we will just go to the Lambda dashboard for a second to make sure that the API Gateway is a new input source for the Lambda.


Back in the API Gateway dashboard, we want to test the new API. Click on the blue lightning bolt in order to run a test.


We will POST a request that has a simple message body. If successful, we should get a response that has a capitalized version of the message.


We see that the response is indeed the capitalized version.

We can examine the APIGatewayProxyRequest that our function was invoked with.


{
   "Resource": "/document",
   "Path": "/document",
   "HttpMethod": "POST",
   "Headers": null,
   "MultiValueHeaders": null,
   "QueryStringParameters": null,
   "MultiValueQueryStringParameters": null,
   "PathParameters": null,
   "StageVariables": null,
   "Body": "{\n    \"message\": \"This is a document\"\n}",
   "RequestContext": {
       "Path": "/document",
       "AccountId": "XXXXXXXXXXXXXXXX",
       "ResourceId": "ec0wv3",
       "Stage": "test-invoke-stage",
       "RequestId": "88182293-6c2a-11e9-9e20-09edba43f9b6",
       "Identity": {
           "CognitoIdentityPoolId": null,
           "AccountId": "XXXXXXXXXXXXXXXX",
           "CognitoIdentityId": null,
           "Caller": "XXXXXXXXXXXXXXXX",
           "ApiKey": "test-invoke-api-key",
           "SourceIp": "test-invoke-source-ip",
           "CognitoAuthenticationType": null,
           "CognitoAuthenticationProvider": null,
           "UserArn": "arn:aws:iam::XXXXXXXXXXXXXXXX:root",
           "UserAgent": "aws-internal/3 aws-sdk-java/1.11.534 Linux/4.9.137-0.1.ac.218.74.329.metal1.x86_64 OpenJDK_64-Bit_Server_VM/25.202-b08 java/1.8.0_202 vendor/Oracle_Corporation",
           "User": "XXXXXXXXXXXXXXXX"
       },
       "ResourcePath": "/document",
       "HttpMethod": "POST",
       "ApiId": "brpqzm8gdj",
       "ExtendedRequestId": "ZAtsDEw2oAMFu6Q=",
       "ConnectionId": null,
       "ConnectionAt": 0,
       "DomainName": "testPrefix.testDomainName",
       "EventType": null,
       "MessageId": null,
       "RouteKey": null,
       "Authorizer": null
   },
   "IsBase64Encoded": false
}

Enhancements

You might notice that the FunctionHandler in the C# code does not examine the HttpMethod and the Path of the request in order to implement different behaviors. The code assumes that a POST request and a specific payload are being passed in. Of course, the FunctionHandler needs to be made bullet-proof so that it will handle different methods and paths.

Appendix

tasks.json

The tasks.json file is located in the .vscode directory of a project and contains a JSON-formatted list of tasks that Visual Studio Code can invoke.


{
   "version": "2.0.0",
   "tasks": [
       {
           "label": "build",
           "command": "dotnet",
           "type": "process",
           "args": [
               "build",
               "${workspaceFolder}/test/DocGenerator.Tests/DocGenerator.Tests.csproj"
           ],
           "problemMatcher": "$tsc"
       },
       {
           "label": "deploy",
           "command": "dotnet",
           "type": "process",
           "args": [
               "lambda",
               "deploy-function",
               "DocGenerator",
               "--region",
               "us-east-1",
               "--profile",
               "default",
               "--function-role",
               "woof_garden_canary"
           ],
           "options": {
               "cwd": "${workspaceFolder}/src/DocGenerator"
           },
           "problemMatcher": []
       },
       {
           "label": "invoke",
           "command": "dotnet",
           "type": "process",
           "args": [
               "lambda",
               "invoke-function",
               "DocGenerator",
               "--region",
               "us-east-1",
               "--profile",
               "default",
               "--payload",
               "Just Checking If Everything is OK"
           ],
           "problemMatcher": []
       }
   ]
}

Thoughts about the role of Chief Architect

Marc Adler, CTO as a Service

The following observations about the role of Chief Architect come from my four previous positions as Chief Architect. I have been Chief Architect of the Equities Division of Citigroup (350,000 in the company, 30,000 people in Equities), of MetLife (65,000 employees), ADP (65,000 employees), and of a small software vendor named Quantifi (roughly 50 employees). I have a mixture of large company and small company experience that has shaped my perceptions. In addition, I owned my own small software business for roughly ten years, and that experience has also shaped my thinking.

I may be slightly biased in my thinking, but in my opinion, the role of Chief Architect is one of the hardest roles out there, both in terms of defining the role and in terms of what is expected of you. Each company has their own idea of what a Chief Architect’s role is, and some companies don’t even know why they need a Chief Architect.

The role of Chief Architect is an extremely “vertical” one. I always like to say that a Chief Architect has to be able to interact at the CxO level, and be able to talk to the lowest-level coder. The Chief Architect has to be comfortable in executive-level meetings, and must be equally comfortable sitting next to a developer and doing pair-programming or code reviews. Oftentimes, the Chief Architect is called on to explain technology to the CxO-level people. The Chief Architect is often called on to face off with the senior-level technical people at vendors and partners. I have been directly opposite CIOs and CTOs of vendors on many occasions. The Chief Architect should know the business as well as the technology.

In my opinion, there should really be only a single Chief Architect in a division or an enterprise; there should be a single point from which all architectural decisions are made. This jibes with the list of responsibilities that a Chief Architect should have (see the list below). If there are multiple Chief Architects, there is more room for disagreement and for the organization to fall into “analysis paralysis” mode.

Management

Many times, the Chief Architect has to manage a team of architects. At Citigroup and MetLife, I managed multiple groups which fell under the Architecture organization, and these groups included performance optimization engineers, project managers, and business analysts.

Since I have typically managed teams of architects, I have had to slice my architects into different domains. For example, at Citigroup, I had an architect responsible for Cash Trading Systems, an architect responsible for Derivatives Trading, an architect responsible for Risk, and so on. I also had architects devoted to the SDLC, architects devoted to performance optimization, etc. Each architect was responsible for a vertical slice of the entire organization, but the architects had to work horizontally as well.

Since the Chief Architect was regarded as a senior or area manager, the Chief Architect often reported to the CIO or CTO of a division. The Chief Architect would often be the CxO’s “right-hand man” when it came to technology. It was unimaginable that a Chief Architect would report to a development manager, since part of the architect’s responsibility was guiding development practices and doing architecture and code reviews (something that would sometimes put the development manager and architect at odds).

Hands-On Participation

A constant question revolves around how hands-on the Chief Architect should be. I advocate for a hands-on architect for various reasons.

The Chief Architect should not be known as a “yellow pad architect” if the CA is to get the respect of the development organization. The developers want to know that the Chief Architect has walked in their shoes before they accept his advice. A Chief Architect should be able to speak to any CxO or sit down and pair-program with the lowest-level developer.

Since the Chief Architect is also responsible for exploring new technologies and delivering proof-of-concepts, the Chief Architect should be able to code these POCs personally.

Therefore, I strongly advocate that the role of Chief Architect not prohibit the CA from diving into a coding role on occasion.

Roles and Responsibilities

The list below is a union of all of the responsibilities that I have had as Chief Architect.

  • Manage the Architecture Organization
  • Attend meetings with current and potential vendors
    • Be able to face off with the CTO or Chief Architect of a vendor and do technical vetting
  • Evaluate vendors and technology using an Architecture-team-developed scorecard
  • Keep abreast of new technologies that might be beneficial to the organization
    • Run the Innovation Lab
    • Meet with vendors
    • Do proof-of-concepts, often involving coding from me or a member of my architecture team
    • Attend industry conferences in order to see which new technologies are hot, and to see what our competitors are doing.
    • Occasionally give talks or sit on panels at conferences
  • Software development
    • Sometimes, the architecture organization will develop a POC or MVP in order to prove out ideas or to relieve pressure on the normal development organization
  • Give architectural approval for “big ticket” projects
  • Attend executive meetings
    • Often called on to decipher technology or render opinions on technology
  • Liaise directly with senior business stakeholders
    • Find out what kinds of business problems the stakeholders want to solve, and propose solutions based on their needs
  • Guidance
    • Come up with reference architectures
    • Serve as a general technical resource for the development organization
  • Roadmaps
    • Provide roadmaps that show how the organization will move towards a certain future state
  • SDLC process
    • Do Architecture and Code Reviews
    • Attend Sprint kickoff and retrospectives
    • Provide approvals for code to be deployed into production
    • Advise on coding practices
  • Performance Optimization
    • Help the engineering team with the optimization of certain systems
  • Governance
    • Ensure that the organization is using the appropriate technology and software
    • Ensure that the organization is not using End-of-Life software
  • Architecture Review Boards
    • Run or participate in architecture reviews and approvals
  • Socialization of architecture across the enterprise
    • In the instances where I have been the CA of a specific division, I made the effort to find out what the CAs of other divisions were up to

AWS CodeBuild and Access to RDS

One of my clients, whose application runs on AWS, had no Continuous Integration (CI). The code is stored on GitHub, and I had just gotten the developers to write Unit Tests and Integration Tests. There are different tools that you can use for CI, including Jenkins, Travis CI, CircleCI, and more. But since my client is a heavy user of AWS, I wanted to try AWS CodeBuild, as it seems to be tightly integrated with a lot of other AWS PaaS products.

I set up CodeBuild to pull the code from GitHub and to run the Integration Tests using knex mocks. Everything worked smoothly.

The next step was to set up a Postgres/RDS database that was devoted to CI testing, and to switch from using mocks to using a real database.

The problem was that the tests running inside CodeBuild could not access RDS; all inbound traffic from CodeBuild to the RDS instance was blocked by the database’s security group.

The solution that I came up with was as follows:

• Note which AWS region your CodeBuild instance is running in. For example, mine is in us-east-1 (select the build and look at the details to find the region).

• In your browser, go to https://ip-ranges.amazonaws.com/ip-ranges.json, and look for the entry for CODEBUILD in the region mentioned above.

• Note the IP address range (CIDR block) associated with CODEBUILD in your region. For example, the range for my region is 12.345.6.789/28. (Of course, this is a fictitious address.)

• Now go to the RDS AWS Dashboard and find the instance of RDS that you want to access through CodeBuild.

• Find the Security Group that the instance of RDS is using

• Navigate to that Security Group

• Go to the Inbound Rules, and add a new rule for CodeBuild. I added a new TCP rule for 12.345.6.789/28, using port range 0-65535. (If you would rather script this step, see the sketch after this list.)

• Go back to CodeBuild and run the build. CodeBuild should be able to access GitHub (like before), and now it should also be able to reach your private RDS instance.
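
If you would rather automate the security-group step than click through the console, here is a minimal sketch using the AWS SDK for .NET (the AWSSDK.EC2 NuGet package is assumed; the security group ID is a placeholder, and the CIDR block is the fictitious example from above, so substitute your real values):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon;
using Amazon.EC2;
using Amazon.EC2.Model;

public static class CodeBuildIngress
{
    // Adds an inbound rule to the RDS security group that allows the
    // CODEBUILD IP range (taken from ip-ranges.json) to reach the database.
    public static async Task AllowCodeBuildAsync()
    {
        var ec2 = new AmazonEC2Client(RegionEndpoint.USEast1);

        var request = new AuthorizeSecurityGroupIngressRequest
        {
            GroupId = "sg-0123456789abcdef0",   // placeholder: the security group used by the RDS instance
            IpPermissions = new List<IpPermission>
            {
                new IpPermission
                {
                    IpProtocol = "tcp",
                    FromPort = 0,
                    ToPort = 65535,
                    Ipv4Ranges = new List<IpRange>
                    {
                        // placeholder: the CODEBUILD range for your region from ip-ranges.json
                        new IpRange { CidrIp = "12.345.6.789/28", Description = "AWS CodeBuild (us-east-1)" }
                    }
                }
            }
        };

        await ec2.AuthorizeSecurityGroupIngressAsync(request);
    }
}
```

This is just the programmatic equivalent of the console steps above; the build itself does not change.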


The number of questions on Google about CodeBuild users trying to access RDS is large enough that I would think the AWS team would turn this into some kind of point-and-click visual interface.

Distributed Systems Meetup in NYC

This is part of the continuing education process that a good CTO/Chief Architect has to maintain. This is a photo from a regular Meetup that I attend, where we discuss Distributed Systems. At this particular event, we discussed Fault Tolerance in Distributed Systems with our host, Andrew from Venmo.

For people who want to learn about the theory behind distributed systems:

I feel that you can get your Master’s Degree in CompSci solely by going to meetups every week. So much great information is exchanged, and you meet others who are using interesting technology in their day jobs.