Ryan McGrath is an Entrepreneur, Engineer, Designer, Author, and Teacher. Buzzwords, eat your heart out.

Ramblings, code, and more. Find me around the world for free coffee.

Thu 07 March 2019

Vim, ALE, Docker, and Per-Project Linting

I've been using Vim for a little over ten years now. Up until Vim 8, I'd go so far as to say little changed for me... but Vim 8 actually changed the game in a pretty big way. The introduction of asynchronous jobs that run in the background enables functionality like code linting and completion without blocking the editor, a rather stark contrast to the old days. In fact, I outright didn't bother with code completion and the like prior to Vim 8 - it was never fast enough and just left me too annoyed to care.

Anyway, thanks to this new functionality, we can use projects like ALE to provide smooth linting, autocompletion, fixing, and so on. Setting up ALE and other Vim plugins is a bit outside the scope of this post, but I'd highly recommend it if you haven't tried it. ALE includes support for the Language Server Protocol (LSP), originally developed by Microsoft and now supported by a litany of editors and IDEs. This support has made Vim feel like far less of a black box at points!

ALE and Docker

One thing I ran into when setting ALE up was that various projects have their own rules. For example, a Japanese company I help has some rather peculiar style rules for their Python codebase. I recently converted their infrastructure to a Docker-based architecture, and as a result all Python code executes inside a virtual machine (at least, insofar as macOS/Windows are concerned - Linux users might have a slightly easier time here!).

In most cases, this is not too big of a problem - however, this particular case means that tools like flake8 are running inside the VM, and not in the userspace where you'd be running Vim. In the issues I glanced over, the author of ALE recommends just running Vim over SSH into the VM, which can be an alright solution... albeit a bit clunky, given your setup for Vim runs on your local machine. We really just need a way to communicate between the two layers, right?

This is actually possible with just a bit of extra configuration work. We'll need two things before we can make it work, though:

  • If you haven't already, I recommend setting up your Vim installation so that it supports some kind of local .vimrc setup. I use embear/vim-localvimrc and whitelist the projects I know are safe (there's a rough sketch of that setup just after this list), but you do you.
  • A custom shell script to act as the bridge between Docker and the host environment.
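
For reference, here's a rough sketch of that local .vimrc setup, assuming you manage plugins with vim-plug - the whitelist pattern below is just a placeholder, so point it at wherever your trusted projects actually live:

" In your ~/.vimrc, alongside your other plugins:
Plug 'embear/vim-localvimrc'

" Only auto-source .lvimrc files from projects you trust, and skip the
" confirmation prompt for anything matching the whitelist.
let g:localvimrc_whitelist = '/path/to/trusted/projects/.*'
let g:localvimrc_ask = 0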

The Shell Script

This is much simpler than you'd think! Somewhere in your project, create a shell script with the following contents:

#!/bin/bash
docker-compose exec -T {{ Your docker env name here }} flake8 "$@"

This is inspired by acro5piano's post over on Qiita, but fixed up slightly to work with what I presume are recent changes in Docker and/or ALE. Notably, our command has to specify "-T" to stop Docker from allocating a pseudo-TTY. Save this, mark it as executable, and ensure your Docker environment is running if you want ALE to report errors.
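
Assuming you saved the script as something like bin/docker-flake8.sh (the name and location are entirely up to you), making it executable and smoke-testing it from the host looks roughly like this:

chmod +x bin/docker-flake8.sh

# With the Docker environment up, this should print flake8's findings
# (or nothing, if the file is clean) straight to your host terminal.
docker-compose up -d
./bin/docker-flake8.sh path/to/some_module.py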

(I also figured I'd throw this post up in English, just so the knowledge is a bit more freely available)

Local .lvimrc Configuration

With the shell script in place, we just need to instruct ALE on how to call flake8. If you're using vim-localvimrc, you can throw a .lvimrc in your project root with the following:

let g:ale_python_flake8_executable = '/path/to/flake8/shell/script'
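
Depending on the rest of your ALE configuration, you may also want to pin Python linting to flake8 in that same .lvimrc, so ALE doesn't go hunting for pylint, mypy, and friends on the host - something along these lines:

" Optional: only run flake8 for Python buffers in this project.
let g:ale_linters = {'python': ['flake8']}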

Provided you did all the above correctly, flake8 should now be properly reporting to ALE. You'd need to do this setup per-project, but to be honest I don't find it that annoying, as I find Docker is worth it over the old-school virtualenv solutions. If you know of a better way to do this, I'm all ears!

Mon 18 February 2019

Rust, Cargo, lcrypto/OpenSSL, Mac, and You

If you're trying to compile code on recent versions of macOS, and it tries to link to OpenSSL, you may find yourself driven a bit mad by how odd it all is. The long and short of it is that Apple, in a recent-ish release, removed the headers for their version of OpenSSL, and you need to install a modern version of OpenSSL via Homebrew. This is really straightforward... but won't, in some cases, automatically make your project compile.

This was the case for a project I was working on, which happened to be written in Rust. The resulting errors spewed from Cargo were, after parsing, pretty clear: it was trying and failing to find and link to an OpenSSL installation. This can be confusing to diagnose and fix, since Rust has moved pretty quickly over the years and there's a litany of strange GitHub issue threads devoted to the problem. It crosses over with some macOS-specific quirks, and... yeah.

Thus, I'm going to dump my .bashrc fixes here. Throwing the following in your bash profile, then running a cargo clean + build should get your project compiling.

export OPENSSL_ROOT_DIR=$(brew --prefix openssl)
export OPENSSL_LIB_DIR="$OPENSSL_ROOT_DIR/lib"
export OPENSSL_INCLUDE_DIR="$OPENSSL_ROOT_DIR/include"
export LDFLAGS="-L$OPENSSL_ROOT_DIR/lib"
export CPPFLAGS="-I$OPENSSL_ROOT_DIR/include"
export PKG_CONFIG_PATH="$OPENSSL_ROOT_DIR/lib/pkgconfig"
export LIBRARY_PATH="$LIBRARY_PATH:$OPENSSL_ROOT_DIR/lib/"

Exporting these flags ensures that Rust, Cargo, LLVM and crew correctly grok where to find OpenSSL to link against. Hopefully this helps someone else out there, since this can be annoying to diagnose! Some tweaking may be needed depending on how you have your system configured.
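
For completeness, the round trip looks something like this - adjust the profile filename to whatever your shell actually sources:

# Install the Homebrew OpenSSL if you haven't already, then reload your
# profile and rebuild from a clean slate.
brew install openssl
source ~/.bashrc
cargo clean
cargo build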

Thu 31 January 2019

Dynamic Images in iOS Push Notification Extensions

Some time ago, I read an interesting article from The Guardian about their work with a concept they call "Live Notifications". The general idea is being able to do more with push notifications, turning them into a rich experience with dynamically generated or updated assets. I experimented with this on my own when I wanted a simple way of charting some personal data; I have a server that periodically checks a source, and notifies based on updated findings (yes, this is as generic a description as it can get - it's personal). Rather than generating and storing images on a server that'd only be needed once, couldn't I just dynamically generate them client side?

Turns out, it's not super complicated in the grand scheme of things. Using an iOS Notification Service Extension (or a Notification Content Extension, if you want custom UI as well), it's possible to have a lightweight process running that can act dynamically on received push notifications. This is the key - we'll send a lightweight payload via Apple's Push Notification Service (APNS), and then build and attach an image to the notification before it displays.
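
For the service extension to get a crack at the payload at all, the aps dictionary has to include the mutable-content flag. A trimmed-down example payload might look like the following - the data key and everything in it is just a placeholder for whatever your server actually sends:

{
  "aps": {
    "alert": {
      "title": "Portfolio update",
      "body": "Your chart is ready."
    },
    "mutable-content": 1
  },
  "data": {
    "points": [2, 5, 7, 12, 18, 7, 1]
  }
}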

Limitations

There are, surprisingly, not too many limitations... but there are a few to know about.

  • Usage of portions of UIKit is pretty much impossible - for instance, UIApplication is out, but you can use CoreGraphics et al as necessary.
  • The memory limitations are much smaller and the system is more aggressive in killing your extension if you're not careful, so it's best to keep this efficient. I'd highly recommend sending a default notification with a usable title and text, and then customizing it as necessary when you build the image.
  • If you want to access NSUserDefaults, you'll need to ensure you're using an App Group to communicate between processes properly, as the extension lives separately from your app (there's a short sketch of this after the list).
  • Oh, and if you use Realm, it's a little tricky to read data in extensions (as of writing this, I don't believe it works properly). I've only used this in situations with NSUserDefaults, Core Data, or SQLite. I'm sure there's a method for Realm, but you're on your own for that.
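
As a quick illustration of the App Group point, reading shared defaults from the extension is roughly this - the group identifier and key are placeholders, so use whatever you registered in your entitlements:

// Both the app and the extension read/write through the shared suite.
let shared = UserDefaults(suiteName: "group.com.example.myapp")
let lastValue = shared?.double(forKey: "lastPortfolioValue") ?? 0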

Building the Extension

For this example, we'll assume you have an iOS app that's properly configured for push notifications. If you're unsure of how to do this, there are enough guides around the internet to walk you through it, so run through one of those first. The example below also makes use of the excellent Charts library by Daniel Gindi, so grab that if you need it.
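
One gotcha worth calling out: the extension is its own target, so it needs its own dependency entry. If you happen to use CocoaPods, that's roughly this (the target name is whatever you called your extension):

# Podfile
target 'NotificationService' do
  use_frameworks!
  pod 'Charts'
end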

We'll start with a standard iOS Notification Service Extension, and wire it up to attempt producing an image in the didReceive(...) method. We'll implement three methods, and support throwing errors up the chain to make things easier - it's less than ideal if an extension crashes, because getting it restarted is... unlikely. We'll simply recover "gracefully" from any error, but due to this it's also worth getting things right in testing.

import UIKit
import UserNotifications
import Charts

class NotificationService: UNNotificationServiceExtension {
    var contentHandler: ((UNNotificationContent) -> Void)?
    var bestAttemptContent: UNMutableNotificationContent?

    override func didReceive(_ request: UNNotificationRequest, withContentHandler contentHandler: @escaping (UNNotificationContent) -> Void) {
        self.contentHandler = contentHandler
        bestAttemptContent = (request.content.mutableCopy() as? UNMutableNotificationContent)
        
        if let bestAttemptContent = bestAttemptContent {
            bestAttemptContent.title = "\(bestAttemptContent.title) [modified]"
            
            do {
                try buildChartAttachment(request)
            } catch {
                // Assuming you sent a "good enough" notification by default, this should be
                // safe. We can log here to see what's wrong, though...
                print("Unexpected error building attachment! \(error).")
            }

            contentHandler(bestAttemptContent)
        }
    }
    
    override func serviceExtensionTimeWillExpire() {
        if let contentHandler = contentHandler, let bestAttemptContent = bestAttemptContent {
            contentHandler(bestAttemptContent)
        }
    }

    // The three main methods we'll implement in a moment
    func renderChartImage() -> UIImage? {}
    func storeChartImage(_ image: UIImage?) throws -> URL {}
    func buildChartAttachment(_ request: UNNotificationRequest) throws {}
}

Rendering the Chart

For the sake of example, we'll make a very basic LineChart using bogus data. In a real-world scenario, you'd want your data to fit into the space of a push notification payload (2 KB to 4 KB depending on how it's sent, which is actually a good amount of space). You could also use a different type of chart, if you wanted. The use cases here are pretty cool - imagine if Robinhood allowed you to, say, see a chart at a glance of how your portfolio is doing. Depending on the performance, that chart could change color or appearance to convey more information at a glance.

Granted, you might not want that much information being on a push notification. Maybe you have prying eyes around you, or something - privacy is probably good to consider if you're reading this and looking to implement it as a feature. The chart below has some settings pre-tuned for a "nice enough" display, but you can tinker with it to your liking.

func renderChartImage() -> UIImage? {
    let chartView = LineChartView(frame: CGRect(x: 0, y: 0, width: 320, height: 320))
    chartView.minOffset = 0
    chartView.chartDescription?.enabled = false
    chartView.rightAxis.enabled = false
    chartView.leftAxis.enabled = false
    chartView.xAxis.drawLabelsEnabled = false
    chartView.xAxis.drawAxisLineEnabled = false
    chartView.xAxis.drawGridLinesEnabled = false
    chartView.legend.enabled = false
    chartView.drawGridBackgroundEnabled = true
    chartView.drawBordersEnabled = false
    chartView.setScaleEnabled(false)
    chartView.contentScaleFactor = 2
    chartView.backgroundColor = UIColor.black
    chartView.gridBackgroundColor = UIColor.green

    let dataSet = LineChartDataSet(values: [
        ChartDataEntry(x: 1, y: 2),
        ChartDataEntry(x: 2, y: 5),
        ChartDataEntry(x: 3, y: 7),
        ChartDataEntry(x: 4, y: 12),
        ChartDataEntry(x: 5, y: 18),
        ChartDataEntry(x: 6, y: 7),
        ChartDataEntry(x: 7, y: 1)
    ], label: "")
    
    dataSet.lineWidth = 4
    dataSet.drawCirclesEnabled = false
    dataSet.drawFilledEnabled = true
    dataSet.setColor(UIColor.green)
    dataSet.fillColor = UIColor.green
    
    let data = LineChartData(dataSets: [dataSet])
    data.setDrawValues(false)
    chartView.data = data
    
    return chartView.getChartImage(transparent: false)
}

Note that the size of the chart is hard-coded, and that the scale is manually set. Both are critical for pixel-perfect rendering; the logic could certainly be better (e.g., larger phones really need the scale to be 3), but the general idea is showcased here.

Storing the Image

We now need to attach the image to the notification. We do this using a UNNotificationAttachment, which... requires a URL. Thus, we'll be writing this to the filesystem temporarily. This method attempts to create a temporary directory and write the PNG data from the chart image returned in our prior method.

func storeChartImage(_ image: UIImage?) throws -> URL {
    let directory = URL(fileURLWithPath: NSTemporaryDirectory(), isDirectory: true)
    try FileManager.default.createDirectory(at: directory, withIntermediateDirectories: true, attributes: nil)
    
    let url = directory.appendingPathComponent("tmp.png")
    try image?.pngData()?.write(to: url, options: .atomic)
    
    return url
}

Note that, in my testing, simply writing to the same URL over and over again didn't impact multiple notifications - i.e., you're not overwriting an old image that might be on the screen (presumably because the system moves the attached file into its own store when the attachment is created). I've no idea if this will change in later iOS revisions, though, so keep it in the back of your mind!
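
If you'd rather not rely on that behavior, a more defensive sketch is to swap the filename line in storeChartImage(_:) for a unique name per notification:

// Give each image a unique name so an older attachment can never be
// clobbered, whatever iOS decides to do in later revisions.
let url = directory.appendingPathComponent("\(UUID().uuidString).png")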

Putting it all together

With the image saved and ready, we can attach it to the notification and let the system display it to the user.

func buildChartAttachment(_ request: UNNotificationRequest) throws {
    let chartImage = renderChartImage()
    let url = try storeChartImage(chartImage)
    let attachment = try UNNotificationAttachment(identifier: "", url: url, options: nil)
    bestAttemptContent?.attachments = [attachment]
}

And voila, you now have a dynamically generated chart. No need to worry about rendering images server side, storing and caching them, or anything like that!

Your chart hopefully looks better than this demo image I found laying around from my test runs. :)

...surely there must be a catch...

Yeah, there's a few things to consider here.

  • You're technically pushing the processing requirements to the user's device, but in my testing, this didn't cause significant battery drain over time. If you opt to do this, consider the time interval that you're pushing notifications on.
  • As mentioned before, you should design notifications such that they're sent "good enough", in case a notification extension crashes or is killed by the OS for whatever reason. This means ensuring a good default title, body, and so on are sent.
  • If you use this for financial data, which would not surprise me as the chief interest here, you should consider making this feature "opt-in" rather than "opt-out". Charts can convey a lot more than text at a glance, and people might not want their information displayed quite so prominently.

But with that all said, it's a pretty cool trick! Due credit goes to The Guardian for inspiring me to look into this. If you find issues with the code samples above, feel free to ping me over email or Twitter and let me know!

Wed 02 January 2019

Using a Custom JSONEncoder for Pandas and Numpy

Recently, I had a friend ask me to glance at some data science work he was doing. He was puzzled why his output, upon attempting to send it to a remote server for processing, was crashing the entire thing. The project was using a pretty standard toolset - Pandas, Numpy, and so on. After looking at it for a minute, I realized he was running into a JSON encoding issue regarding certain data types in Pandas and Numpy.

The fix is relatively straightforward, if you know what you're looking for. I didn't see too much concrete info floating around after a cursory search, so I figured I'd throw it here in case some other wayward traveler needs it.

Creating and Using a Custom JSONEncoder

It all comes down to instructing your json.dumps() call to use a custom encoder. If you're familiar with the Django world, you've probably run into this with django.core.serializers.json.DjangoJSONEncoder. We essentially want to coerce Pandas and Numpy-specific types to core Python types, and then JSON generation more or less works. Here's an example of how to do so, with comments to explain what's going on.

import json
from json import JSONEncoder

import numpy

class CustomJSONEncoder(JSONEncoder):
    def default(self, obj_to_encode):
        """Pandas and Numpy have some specific types that we want to ensure
        are coerced to Python types, for JSON generation purposes. This attempts
        to do so where applicable.
        """
        # Pandas dataframes have a to_json() method, so we'll check for that and
        # return it if so.
        if hasattr(obj_to_encode, 'to_json'):
            return obj_to_encode.to_json()

        # Numpy objects report themselves oddly in error logs, but this generic
        # type mostly captures what we're after.
        if isinstance(obj_to_encode, numpy.generic):
            return numpy.asscalar(obj_to_encode)
        
        # ndarray -> list, pretty straightforward.
        if isinstance(obj_to_encode, numpy.ndarray):
            return obj_to_encode.tolist()

        # If none of the above apply, we'll default back to the standard JSON encoding
        # routines and let it work normally.
        return super().default(obj_to_encode)

With that, it's a one-line change to use it as our JSON encoder of choice:

json.dumps({
    'my_pandas_type': pandas_value,
    'my_numpy_type': numpy_value
}, cls=CustomJSONEncoder)

Wrapping Up

Now, returning and serializing Pandas and Numpy-specific data types should "just work". If you're the Django type, you could optionally subclass DjangoJSONEncoder and apply the same approach to get easy serialization of your model instances as well.
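
For illustration, a rough sketch of that Django variant - it just swaps the base class, so you also keep DjangoJSONEncoder's handling of dates, Decimals, UUIDs, and friends (the class name here is arbitrary):

import numpy
from django.core.serializers.json import DjangoJSONEncoder


class PandasAwareJSONEncoder(DjangoJSONEncoder):
    """Same coercion rules as above, layered on Django's encoder."""

    def default(self, obj_to_encode):
        if hasattr(obj_to_encode, 'to_json'):
            return obj_to_encode.to_json()
        if isinstance(obj_to_encode, numpy.generic):
            # .item() is the method equivalent of numpy.asscalar() above.
            return obj_to_encode.item()
        if isinstance(obj_to_encode, numpy.ndarray):
            return obj_to_encode.tolist()
        # Everything else falls through to DjangoJSONEncoder.
        return super().default(obj_to_encode)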

Looking for More?

I've been writing for quite some time! You may want to check out the Archives section for a full list.