Android Rollback

We recently released an update to the Android app that, among other things, slightly changed how device responses are recorded. Unfortunately, this caused the app to crash for some of our users when they responded to an alert. We pushed out an update to try to fix the bug, but it hasn’t worked for everyone. Since alert response is one of our core features, we’ve rolled back the Android app until a proper resolution is found. The update we rolled back included a couple of other minor changes besides the response change:

  • The update fixed a bug where opening an alert from the cadpage-style popup would sometimes open the wrong alert’s data; the rollback reintroduces this bug. If you run into it, you can re-open the app from the home screen or switch to a different notification style.
  • The update also polished the map layers interface into a pop-up drawer; with the rollback, it has gone back to a separate screen. This is a cosmetic change and shouldn’t affect anything else.

We’re currently beta testing a version of the app that fixes the response issue. If you experienced crashes when responding to alerts after the latest update (or ever), and would like to help test the fix, contact support.

Android version 1.4.0.0

We recently released a new version of the Android app with a few changes, most of them relating to alerts. We’ve cleaned up the create alert view: by default the only visible fields are ‘Title’, ‘Place’, ‘Address’, and ‘Notes’. The other fields are still accessible, but hidden behind a ‘More’ button.

[Screenshot: the simplified create alert view]

The geocoding behavior of the create alert view has been fixed to match iOS as well. Now when you’re creating an alert from a map point, the app will use Google to look up the address for that place and fill out the ‘Address’, ‘City’, ‘State’, and ‘Country’ fields automatically. The ‘Place’ field will contain the GPS coordinates of the point. Creating an alert from the menu follows the same behavior using the device’s current location, except it leaves the ‘Place’ field blank.
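For the technically curious, the reverse-geocoding step looks roughly like this. This is a minimal sketch against the public Google Geocoding API, not the app’s actual code; the AlertFields shape and the field mapping are assumptions made for illustration.

```typescript
// Sketch of reverse geocoding a tapped map point via the public Google
// Geocoding API. Illustrative only; not the app's actual code.
interface AlertFields {
  place: string;   // gets the raw GPS coordinates, per the post
  address: string;
  city: string;
  state: string;
  country: string;
}

async function fieldsFromMapPoint(lat: number, lng: number, apiKey: string): Promise<AlertFields> {
  const url = `https://maps.googleapis.com/maps/api/geocode/json?latlng=${lat},${lng}&key=${apiKey}`;
  const data = await (await fetch(url)).json();
  const result = data.results?.[0];
  if (!result) throw new Error("no geocoding result for these coordinates");

  // Pull a named component (e.g. 'locality' = city) out of Google's response.
  const component = (type: string): string =>
    result.address_components.find((c: any) => c.types.includes(type))?.long_name ?? "";

  return {
    place: `${lat}, ${lng}`,
    address: `${component("street_number")} ${component("route")}`.trim(),
    city: component("locality"),
    state: component("administrative_area_level_1"),
    country: component("country"),
  };
}
```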

We’ve also added the ability to delete and edit alerts from the app. To edit an alert, just view that alert and go to the ‘Edit Alert’ option in the hamburger menu. Once you’ve made your changes, you can re-send it to everyone who got it the first time. Your device must have the ‘Send Alert’ capability to edit alerts. To delete an alert simply press and hold that alert in the alerts list. A dialog box confirming the deletion will pop up.


Finally, we’ve fixed a bug relating to GCM (so alerting should be more reliable for some users), added Swedish, French, and Spanish translations, and made various other bug fixes and performance improvements.


Active911 – Spillman Interface Released

We’ve released our Spillman interface. It passively queries the Spillman database for new alarms, automatically formats them, and sends them through the Active911 SNPP interface. This initial release can only send to one department and page group. Multiple departments and page groups will be supported in the near future; we’re looking for feedback on the best way to implement that.

The app is a Node.js program that runs in the background and queries the Spillman server for new calls every 3 seconds (this interval is configurable). When it finds a new call with an associated alarm, it fetches that alarm from Spillman, takes certain fields like ‘DescriptionOfAlarm’ and ‘InfoForCall’ (these are configurable), puts them into a message format that Active911 can parse, and sends that message to us through SNPP. For more information, check out the wiki.
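To make the flow concrete, here’s a minimal sketch of that poll/format/send loop. It’s illustrative rather than the shipped code: queryNewCalls and fetchAlarm are hypothetical stand-ins for the real Spillman queries, the host name is made up, and the SNPP exchange uses the basic PAGE/MESS/SEND/QUIT commands from RFC 1861 without checking response codes.

```typescript
import * as net from "net";

// Illustrative sketch of the poll/format/send loop. queryNewCalls and
// fetchAlarm are hypothetical stand-ins for the real Spillman queries.
const POLL_INTERVAL_MS = 3000;                        // configurable, per the post
const FIELDS = ["DescriptionOfAlarm", "InfoForCall"]; // configurable field list

async function pollOnce(): Promise<void> {
  const calls = await queryNewCalls();
  for (const call of calls) {
    if (!call.alarmId) continue;                      // only calls with an alarm
    const alarm = await fetchAlarm(call.alarmId);
    // Build a message Active911 can parse from the configured fields.
    const message = FIELDS.map((f) => alarm[f]).filter(Boolean).join(" / ");
    await sendSnpp("department-pager-id", message);   // assumed recipient id
  }
}

// Bare-bones SNPP send (RFC 1861 commands: PAGE, MESS, SEND, QUIT).
// Ignores server response codes for brevity; real code should check them.
function sendSnpp(pagerId: string, message: string): Promise<void> {
  return new Promise((resolve, reject) => {
    const socket = net.connect(444, "snpp.example.com"); // assumed SNPP host
    const commands = [`PAGE ${pagerId}`, `MESS ${message}`, "SEND", "QUIT"];
    socket.on("connect", () => socket.write(commands.join("\r\n") + "\r\n"));
    socket.on("close", () => resolve());
    socket.on("error", reject);
  });
}

// Hypothetical Spillman query stubs so the sketch compiles on its own.
async function queryNewCalls(): Promise<Array<{ alarmId?: string }>> { return []; }
async function fetchAlarm(id: string): Promise<Record<string, string>> { return {}; }

setInterval(() => pollOnce().catch(console.error), POLL_INTERVAL_MS);
```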

If you have any feedback or would like to start using this app, please contact support at support@active911.com.

On Technical Debt, Code Reuse, and our client APIs

There are several types of debt a business can incur. One is financial debt, which is pretty straightforward and well known – you take out a loan, then you owe money. A less well-known form is technical debt, which is essentially borrowing time from your future self.

Accruing technical debt is much easier than accruing financial debt – all you have to do is sacrifice technical correctness for speed, cost, simplicity, etc. This is much easier to do in programming than in other forms of engineering. If you’re building a bridge, you design it, it gets built, and that’s pretty much it. The bridge needs maintenance, but you don’t continually add features to improve the bridge. This means you don’t really have to worry about future modifications when designing your bridge – you don’t have to design your suspension bridge with the possibility of it being converted to a drawbridge in mind. Furthermore, you can probably make some sacrifices as far as technical correctness goes, but if you make too many your bridge won’t pass inspection.

These things are not true for programming. In programming, once a feature is released it can and probably will be continually modified and improved, often to the point of complete overhaul (at least from the user’s perspective). Features can work just fine for years even if the code implementing them is poorly designed, with problems only arising when it needs to be modified. There’s (usually) no outside inspection of code to ensure it’s technically sound – it’s up to the individual or company to have a good review process (and even a solid review process doesn’t ensure correctness – just look at the 1940 Tacoma Narrows Bridge).

All this means technical debt is incredibly easy to accrue, but what does that actually look like? It depends on the specifics of the project, but a common form of technical debt is code duplication. Duplicated code is bad not only because it’s inelegant and time-consuming to write, but because it makes a project much harder to maintain. If a chunk of code is repeated throughout your project and you need to change it, you have a bunch of places to touch, and it’s not always obvious where they all are. Duplication also makes the code longer and harder to reason about, and it can introduce bugs if you fail to correctly modify every copy. So it’s generally best to design things in such a way that you don’t have to repeat yourself. If you find yourself repeating code often, it’s probably an indication that you need to change your design, or at least put that code in a place where it can be reused.
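Here’s a tiny, contrived sketch of what that looks like and how the refactor fixes it (the validation rule is made up for illustration):

```typescript
// Before: the same validation rule copy-pasted into two handlers. Changing
// the rule means hunting down every copy.
function handleCreateAlert(title: string): void {
  if (title.trim().length === 0 || title.length > 80) throw new Error("bad title");
  // ... create the alert ...
}
function handleEditAlert(title: string): void {
  if (title.trim().length === 0 || title.length > 80) throw new Error("bad title");
  // ... edit the alert ...
}

// After: the rule lives in exactly one reusable place.
function validateTitle(title: string): void {
  if (title.trim().length === 0 || title.length > 80) throw new Error("bad title");
}
function createAlert(title: string): void {
  validateTitle(title);
  // ... create the alert ...
}
function editAlert(title: string): void {
  validateTitle(title);
  // ... edit the alert ...
}
```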

This brings me to our client APIs. Currently we have an API for Android, two for iOS (one for Active911 versions 1.2 and older, which isn’t updated, and one for the newer versions), one for Windows Phone, one for the Webclient, and our open API. This is not ideal: large parts of those APIs are completely identical, and in theory they should all be exactly the same, so a lot of code is duplicated across multiple files. If we want to add a new feature (for example, assignments), we have to write basically the same code about four times to get it into all the clients. Therefore, as part of a larger push to reduce our technical debt in the coming weeks, we will be consolidating all of the APIs into the open API. This won’t be immediately noticeable for users – some bugs might get cleared up – but the real benefit is that we’ll be more stable and able to push out features at a much faster rate than before.

Until next time, thanks for reading!

It’s Programming with Paul!

Hello, I’m the newest developer here at Active911. I started about 3 months ago and since then I’ve more or less taken over Android development from Jon, implemented the calendar interface you’ll see in the PAR board update, and worked to verify an upcoming change in how we manage your data. I’m going to write about that last one today.

But first a little about me. I’m nearly 21 and I’ve been interested in programming since I was about 11 when I started writing games on my mom’s old laptop. I spent a large part of my free time in my teens learning about computers, programming and generally nerdy things. I went to Pacific University for 2 years, but it felt like a waste of time – I wasn’t really learning anything new. I wanted to see how I could fare in the real world so I applied for a job, and here I am.

Right now all of the data lives in a database on one server. Fetching data in this system is really simple; there’s only one server with the data, so all the clients just need to know where that one machine is. Unfortunately, this approach has two big problems, which I’ll call the scaling problem and the single point of failure problem.

As Active911 grows, so does the load on that server – the scaling problem. Making the machine more powerful helps, but you can only add so much RAM and processing power to one machine, and it doesn’t do anything to fix the single point of failure problem.

Right now every single request for data from the database goes to this one server. If it goes down for any reason, every database request will fail – the server is a single point of failure. It doesn’t really matter how beefy the server is if a tree falls on it. We have a bunch of failsafes like backup generators and not having old trees next to our server room that make it extremely unlikely this will happen, but we’d rather have backups in place to limit our downtime even more.

To do this we’re breaking our database up into a bunch of databases – called shards – and distributing them across many servers. Sharding the database goes a long way to solving both the single point of failure problem and the scaling problem: one server going down isn’t the end of the world because we have a bunch, and many servers working together are more powerful than any one server can be. Of course, the problem with this approach is figuring out how all these servers are going to work together.

For the past couple of months Jon’s been doing exactly that. He’s designed and implemented a sharded database called Bard, and an interface to that database called GUA (Grand Unified API). When a client makes a request for a piece of data, it will make that request to a server running the GUA. That server will send the request to Bard, which will figure out which server that piece of data lives on, fetch it, and return it. The GUA takes the data, does some basic sanity checks, and returns the data to the client.
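As a rough illustration of how a request might travel through such a system, here’s a minimal sketch of a GUA-style front end routing a request to the owning shard. This is my own illustration of the general technique, not Bard’s actual implementation; the hash-based routing and every name in it are assumptions.

```typescript
import { createHash } from "crypto";

// Hypothetical sketch of shard routing; this illustrates the general
// technique, not Bard's actual implementation.
const SHARDS = [
  "db-shard-0.example.com",
  "db-shard-1.example.com",
  "db-shard-2.example.com",
];

// Map a record's key to the shard that owns it. Simple hashing is shown for
// brevity; a directory-based lookup would let data live near its users, as
// described in the post.
function shardFor(key: string): string {
  const digest = createHash("sha1").update(key).digest();
  return SHARDS[digest.readUInt32BE(0) % SHARDS.length];
}

// GUA-style handler: route the request to the owning shard, sanity-check the
// result, and return it to the client.
async function getRecord(key: string): Promise<unknown> {
  const host = shardFor(key);
  const record = await queryShard(host, key);
  if (record == null) throw new Error(`no record for ${key} on ${host}`);
  return record;
}

// Stand-in for the real per-shard query (hypothetical signature).
async function queryShard(host: string, key: string): Promise<unknown> {
  return { host, key }; // placeholder
}
```

The important property is that clients never talk to shards directly; they only know about the GUA, which hides where each piece of data actually lives.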

Jon’s designed the system so data can be located near the people who use it. This means that in the future we’ll have a server in (for example) New York that holds data for users in that area, without requiring any special setup on their devices – our service will see where they’re located and automatically store their data in the nearest shard. This will make our service faster, more reliable, and more scalable.

We’re going to roll out the sharded database in the coming months, starting with the upcoming PAR board features. If all goes according to plan you won’t see any changes as a user, but we’ll be providing a much more reliable, efficient service on the backend, even as we continue to grow.