
[Inside Apto] How we built Apto Maps

May 10, 2018

Here in Apto’s engineering department, we’re completely focused on creating a product that helps brokers manage their workflows in more efficient ways.

When working with property data, for example, it’s valuable to be able to visualize the relationships of where properties are on a map. So we built Apto Maps.

Here, we’ll give you a peek behind the curtain so you can see how we created this tool and how it can help brokers be more efficient.


What is geocoding?

So how does a property get displayed on a map? It starts with a concept called geocoding.

Geocoding is the process of taking a postal address—like 350 5th Ave, New York, NY 10118 (the Empire State Building)—and converting it to a location on the earth's surface with coordinates (latitude and longitude). The challenge here is that geocoding results are not permanent: addresses and coordinates shift as new construction goes up or the landscape changes.
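To make that concrete, here's a minimal sketch of what a provider lookup can look like. The post doesn't say which geocoding providers Apto uses, so Google's Geocoding API appears below purely as a stand-in, and GOOGLE_API_KEY is a placeholder.

```typescript
// Illustration only: a stand-in provider call (Google's Geocoding API), not
// necessarily one of the providers Apto uses. Assumes Node 18+ for fetch.
interface LatLng {
  lat: number;
  lng: number;
}

async function geocode(address: string): Promise<LatLng | null> {
  const url =
    'https://maps.googleapis.com/maps/api/geocode/json' +
    `?address=${encodeURIComponent(address)}&key=${process.env.GOOGLE_API_KEY}`;

  const response = await fetch(url);
  const body: any = await response.json();

  // "OK" means at least one match; anything else (ZERO_RESULTS,
  // OVER_QUERY_LIMIT, ...) is treated as "no coordinates" here.
  if (body.status !== 'OK') return null;

  const { lat, lng } = body.results[0].geometry.location;
  return { lat, lng };
}

// geocode('350 5th Ave, New York, NY 10118')
//   resolves to roughly { lat: 40.748, lng: -73.985 }
```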

For Apto, this means we rely on third-party services to supply geocoding data, since they constantly track and recalculate these changes for us.

The challenges in bringing that into the product

So we use third-party sources to geocode properties. Now how do we bring that into the product so that Apto customers can see their data on a map?

This turned out to be an interesting problem for a few reasons:

  • The back end of Apto is built on Salesforce. This means every customer is technically on a different database.
  • Searching and filtering areas on a map requires constantly querying for geocodes.
  • The providers we use have daily and concurrency limits on geocoding requests.

This meant we had to build an architecture that satisfied those constraints. To accomplish this, we needed to be able to query geocodes from a central cache that we owned, so that searching properties on a map would be a seamless, fast experience.
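As a rough sketch of what that central cache buys us (the table and column names below are hypothetical; the post doesn't describe the actual schema), a map viewport query against our own Postgres cache might look something like this:

```typescript
// A sketch, not Apto's actual schema: assumes a Postgres table named
// property_geocodes with latitude/longitude columns, queried via node-postgres.
import { Pool } from 'pg';

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

interface CachedProperty {
  salesforce_id: string;
  address: string;
  latitude: number;
  longitude: number;
}

// Return every cached property inside the map's current viewport, so the UI
// never waits on a third-party geocoder while the broker pans and zooms.
// (Ignores the antimeridian edge case for brevity.)
async function propertiesInViewport(
  south: number,
  west: number,
  north: number,
  east: number
): Promise<CachedProperty[]> {
  const { rows } = await pool.query<CachedProperty>(
    `SELECT salesforce_id, address, latitude, longitude
       FROM property_geocodes
      WHERE latitude  BETWEEN $1 AND $2
        AND longitude BETWEEN $3 AND $4`,
    [south, north, west, east]
  );
  return rows;
}
```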

How did we accomplish this?

[Diagram: the Apto Maps geocoding architecture]



If you take a look at the diagram, here’s what you’ll see.

  1. The first piece we tackled was sending property addresses from Salesforce to a datastore we owned, which would eventually store the latitude and longitude as well, allowing us to quickly query for them on a map. We did this by creating a Salesforce trigger on property create, update, and delete operations that sends the updated property location information to our Postgres database, which sits behind a Node.js API.

  2. When our API receives this update, if the address changed or if we don't yet have a latitude or longitude for the property, we send it off to AWS SQS (Simple Queue Service) to wait its turn to be geocoded. The queue acts as a buffer so that we can process a large volume of geocoding requests. Sometimes this queue has grown to 500,000 addresses waiting to be geocoded in a single day! (The whole flow is sketched in code after step 6.)
  3. and 4. Once a message is sitting in the queue, we need to process the street addresses with our providers, and we chose AWS Lambda to accomplish this. With Lambda we have two different functions: the Queue lambda and the Geocode lambda. The Queue lambda's sole purpose is to check whether messages are in the queue and to create a Geocode lambda for every message it can pull out of the queue.

One caveat to the Queue lambda: it will stop creating Geocode lambdas if we have reached our API limits with the provider for the day, which keeps costs low. The Geocode lambda's purpose is to take the address string and send it to a provider, which returns the latitude and longitude to us.

  5. When we get the latitude and longitude, we send it back to our API and store it in our cache.

  6. The Geocode lambda then removes the message from SQS, which effectively marks the geocode as completed.
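Here is a compressed sketch of steps 2 through 6. The queue URL, function name, endpoint path, and payload shape are all assumptions for illustration; the post doesn't publish Apto's actual names or code. It uses the aws-sdk v2 client and the geocode() helper from the earlier provider sketch.

```typescript
// A sketch of the SQS + Lambda flow under stated assumptions (names, payloads,
// and the /properties/:id/geocode endpoint are hypothetical).
import * as AWS from 'aws-sdk';
import { geocode } from './geocode'; // the provider call sketched earlier (hypothetical path)

const sqs = new AWS.SQS();
const lambda = new AWS.Lambda();
const QUEUE_URL = process.env.GEOCODE_QUEUE_URL!;

// Step 2: the API enqueues a property whenever its address changed or its
// coordinates are missing. The queue is just a buffer for the backlog.
export async function enqueueForGeocoding(propertyId: string, address: string) {
  await sqs
    .sendMessage({
      QueueUrl: QUEUE_URL,
      MessageBody: JSON.stringify({ propertyId, address }),
    })
    .promise();
}

// Steps 3 and 4: the Queue lambda checks for waiting messages and fans out one
// Geocode lambda per message, stopping early once the provider's daily limit
// is reached to keep costs low.
export async function queueLambdaHandler() {
  if (await dailyProviderLimitReached()) return;

  const { Messages = [] } = await sqs
    .receiveMessage({ QueueUrl: QUEUE_URL, MaxNumberOfMessages: 10 })
    .promise();

  await Promise.all(
    Messages.map((message) =>
      lambda
        .invoke({
          FunctionName: 'geocode-lambda', // hypothetical function name
          InvocationType: 'Event',        // fire and forget
          Payload: JSON.stringify({
            body: message.Body,
            receiptHandle: message.ReceiptHandle,
          }),
        })
        .promise()
    )
  );
}

// Steps 4-6: the Geocode lambda asks the provider for coordinates, sends them
// back to the API to be cached, then deletes the message to mark it complete.
export async function geocodeLambdaHandler(event: { body: string; receiptHandle: string }) {
  const { propertyId, address } = JSON.parse(event.body);

  const coordinates = await geocode(address);
  if (coordinates) {
    await fetch(`${process.env.API_URL}/properties/${propertyId}/geocode`, {
      method: 'PUT',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(coordinates),
    });
  }

  await sqs
    .deleteMessage({ QueueUrl: QUEUE_URL, ReceiptHandle: event.receiptHandle })
    .promise();
}

// Stand-in for real usage tracking (e.g. a counter stored alongside the cache).
async function dailyProviderLimitReached(): Promise<boolean> {
  return false;
}
```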

What the broker sees

[Screenshot: the Apto Maps view in the product]

  1. The list view, which shows your properties
  2. Filtering logic on your properties
  3. The actual map, which displays the properties using the latitude/longitude from the geocoding process. It also lets you draw polygons or a radius that filter on the latitude/longitude (there's a rough sketch of that filtering after the next paragraph).

With the geocoding process complete, we can now discuss how we actually built the map you see in our product. The numbered pieces above are all part of a library we built for the map using the Angular 2 framework. It sits in a separate GitHub repository from the main application, with its own versioning, build process, and example app, so we can develop it independently of the primary application. When we want to share new features of the map, we bump the library's version in our main application, which uses Webpack to compile our mapping library in with the rest of the app.
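The post doesn't show the filtering code itself, but the radius filter in (3) boils down to a distance check against each property's cached latitude/longitude. A minimal sketch of that idea:

```typescript
// A sketch of radius filtering on the client: great-circle (haversine) distance
// between the drawn circle's center and each geocoded property.
interface MapPoint {
  lat: number;
  lng: number;
}

const EARTH_RADIUS_METERS = 6371000;

// Haversine distance between two lat/lng points, in meters.
function distanceMeters(a: MapPoint, b: MapPoint): number {
  const toRadians = (degrees: number) => (degrees * Math.PI) / 180;
  const dLat = toRadians(b.lat - a.lat);
  const dLng = toRadians(b.lng - a.lng);

  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRadians(a.lat)) * Math.cos(toRadians(b.lat)) * Math.sin(dLng / 2) ** 2;

  return 2 * EARTH_RADIUS_METERS * Math.asin(Math.sqrt(h));
}

// Keep only the properties that fall inside the radius the broker drew.
function withinRadius<T extends MapPoint>(
  properties: T[],
  center: MapPoint,
  radiusMeters: number
): T[] {
  return properties.filter((property) => distanceMeters(center, property) <= radiusMeters);
}
```

Polygon filtering works the same way, with a point-in-polygon test in place of the distance check.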


Lessons learned

One thing I want to highlight: at Apto we have a few different Angular applications and libraries, and we've picked up some lessons on the do's and don'ts along the way. Here are some of the considerations we took into account when developing the mapping library:

1. Separate your data layer from your library.
The application should decide how to fetch its own data and pass that data down to the library. The reason for this is that your library could potentially live in more than one application. At Apto, we had to consider this in case we decided to use the library in the mobile app or the Salesforce application as well. Applications have different requirements for how they fetch and store data, so your library should be agnostic to those decisions and simply provide entry points where data can be loaded in.
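As a small sketch of what that separation can look like in Angular (component and type names here are hypothetical, not Apto's actual code), the library component only accepts data through inputs and reports what it needs through outputs, and never fetches anything itself:

```typescript
// Hypothetical names: the map component receives already-loaded data from the
// host application and emits events instead of owning any data fetching.
import { Component, EventEmitter, Input, Output } from '@angular/core';

export interface MapProperty {
  id: string;
  address: string;
  latitude: number;
  longitude: number;
}

export interface MapBounds {
  north: number;
  south: number;
  east: number;
  west: number;
}

@Component({
  selector: 'apto-map',
  template: `<div class="apto-map"><!-- map rendering omitted --></div>`,
})
export class AptoMapComponent {
  // Data flows in from whichever application hosts the library...
  @Input() properties: MapProperty[] = [];

  // ...and requests flow out as events, so each host decides how to fetch.
  @Output() boundsChanged = new EventEmitter<MapBounds>();
}
```

The web app can answer boundsChanged with a call to its Postgres-backed API, while a mobile or Salesforce host can answer it however suits that platform.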

2. Build the library in your main application and then pull it out.
The reason for this is rapid iteration: revving a library's version for every change is a time-consuming process. If you build the library with the intention of pulling it out from the get-go, you can make the right decisions up front so it is easier to extract from the application later. To handle this, we used Angular's module system to separate out the library portion up front, so it already looked essentially the way it would once we pulled it out as a standalone library.
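For example (again with hypothetical file and module names), keeping the map code behind its own NgModule inside the main app means that pulling it out later is mostly a matter of moving files rather than untangling imports:

```typescript
// Hypothetical module: everything map-related is declared here, and only the
// pieces the host application should touch are exported.
import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';

import { AptoMapComponent } from './apto-map.component'; // e.g. the component sketched above

@NgModule({
  imports: [CommonModule],
  declarations: [AptoMapComponent],
  exports: [AptoMapComponent],
})
export class AptoMapModule {}
```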

3. Create an example application for your library.
When you finally pull your library out of your primary application, you should create an easy environment for testing changes in isolation from the application: a simple implementation of the library that lets you see changes, run tests, and showcase the concepts you would use in every application the library might live in. This is a significantly easier experience than symlinking the library into the main application, because as you build more libraries you end up symlinking several of them into your main application, and that gets messy fast.

For what it's worth, we have also discussed a monorepo strategy with something like Lerna, but we haven't dug into it deeply yet. (Maybe there will be a future blog post titled "How we migrated our nightmare 100 Angular libraries to a monorepo"!)

Thanks for tuning in and a shout out to all the team members who contributed to this project! James Olson (Senior Developer), Josh Haas (Product Manager), Travis Stiles (Lead UX)!

Written by Samuel Toriel (Engineering Manager)

I am a software developer with an avid appreciation for tools and automation. My ideal work environment is one focused on product development, non-siloed workflows, pull request based code reviews, and developers owning DevOps / QA. I believe in creating development environments that help developers get up to speed and working as fast as possible. I like testing my code and using CI/CD for any project I work on. I have worked on the majority of the webstack doing Backend, Frontend, DevOps, and QA. I have worked in environments that were multi-datacenter with long lived infrastructure and I have experienced immutable infrastructure with microservices.
