Dear Reader,
Let me begin by giving a big thank you to all of you who messaged me following last month’s newsletter with your well wishes and encouragement. After losing a stone and significantly improving my fitness level, I’ve had another blood test which suggests my diabetes could already be in remission. I need to see the same levels in three more tests over the next 9 months, but it’s looking positive (or should that be negative?) so far.
Health aside, I’ve been struggling with motivation this past month. This can be attributed partly to a recent overload of client work, but I think it’s mostly due to waiting. Whilst many people enjoy having something to look forward to, I absolutely hate waiting for anything. This is likely a symptom of the modern age in which instant gratification is around every corner[1], but I find myself in a holding pattern whenever I have something I’m waiting on.
I’m reminded of this passage from Dr Seuss’ Oh, the Places You’ll Go!:
You can get so confused
that you'll start in to race
down long wiggled roads at a break-necking pace
and grind on for miles across weirdish wild space,
headed, I fear, toward a most useless place.
The Waiting Place...
...for people just waiting.
Waiting for a train to go
or a bus to come, or a plane to go
or the mail to come, or the rain to go
or the phone to ring, or the snow to snow
or waiting around for a Yes or No
or waiting for their hair to grow.
Everyone is just waiting.
Waiting for the fish to bite
or waiting for the wind to fly a kite
or waiting around for Friday night
or waiting, perhaps, for their Uncle Jake
or a pot to boil, or a Better Break
or a string of pearls, or a pair of pants
or a wig with curls, or Another Chance.
Everyone is just waiting.
So what am I waiting for? This week it’s the release of The Legend of Zelda: Tears of the Kingdom, a game I have only got more excited about as time has gone by. As I type this I’m impatiently waiting for the review embargo to drop 😂. The week after, I’m away on a Disney Cruise (where I naively expect to sit on a sun lounger and play Tears of the Kingdom) and then a couple of weeks after that is WWDC 2023 where it’s almost certain Apple are going to show off their new VR/AR headset along with the usual updates to all of their platforms.
This last one is the real kicker as WWDC is the time everything gets upended. In the indie development space, the launch of a new version of iOS is when you’re most likely to get a coveted featured spot from Apple or the wider blogosphere, provided you can add some new feature to your app related to the iOS update. The problem with this is that you typically get about 2 months to come up with this idea and incorporate it into your app whilst dealing with the multitude of bugs in the beta software and the lack of concrete documentation. It can be a stressful time! This is further compounded by the fact that iOS 17 may end up breaking something in existing apps, and I don’t just have to fix my own but also my clients’ (who may also want my time to add new features to their apps before the launch date in September).
So, whilst I’m very much looking forward to seeing all the fancy new tech and announcements, I’m also dreading the mad summer rush that will inevitably ensue. It’s no surprise my motivation is a little low when I have a game and a holiday to enjoy in the near future, followed by a week that could completely upend my summer.
The next issue will be on the 7th June, smack bang in the middle of WWDC, so we’ll find out then how busy the summer is going to be!
— Ben
Contents
Hello, Audio Variants! (Music Library Tracker v3)
Debouncing with Search & Add screens
Apple TV to iPhone communication
Recommendations
Roadmap
Hello, Audio Variants!
I’ve been slowly plugging away at Music Library Tracker v2.1 but am not yet at a stage where I have a beta version, let alone a release version, ready to share. That’s partly due to lethargy and feature creep, but also because I’ve now decided[2] this will be v3.0 as I’m planning to redesign some of the core navigational concepts and add a few big features 🎉.
For today, I thought I would detail what is going to be in this update and what issues I’ve encountered in my thinking so far.
Audio Variants
The headline feature is that the app won’t just tell you when tracks are upgraded to Spatial Audio but will also be able to tell you about every “Audio Variant” that Apple provides[3], including:
Dolby Atmos (“an immersive audio experience that surrounds you with sound from all sides, including above”)
Dolby Audio (“a surround sound format that includes Dolby 5.1 and 7.1”)
Hi-Res Lossless Audio (“uses Apple Lossless Audio Codec (ALAC) for bit-for-bit accuracy up to 24-bit/192 kHz”)
Lossless Audio (“uses Apple Lossless Audio Codec (ALAC) for bit-for-bit accuracy up to 24-bit/48 kHz”)
The actual mechanics of how this works are effectively the same as how Spatial Audio is detected. I have a database that collects Apple Music tracks (both from playlists and from the libraries of users who opt-in) and the albums are then checked periodically against the Apple Music API to fetch their metadata including these audio variants. In fact, I’ve been storing the list of variants as tags on each track since I started the database over a year ago so if you’ve used Music Library Tracker already then the new version will be able to tell you instantly which tracks are Lossless or Hi-Res Lossless.
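For the curious, these variants come back as part of the album attributes in the Apple Music API when you request the extended audioVariants attribute. A minimal sketch of the decoding (the model names are my own, not those used by the app):

import Foundation

// Minimal models for an Apple Music API album response; the audioVariants
// array is only returned when the request includes `extend=audioVariants`
struct AlbumsResponse: Codable {
    struct Album: Codable {
        struct Attributes: Codable {
            let name: String
            let audioVariants: [String]? // e.g. ["dolby-atmos", "lossless", "hi-res-lossless"]
        }
        let id: String
        let attributes: Attributes
    }
    let data: [Album]
}

// let response = try JSONDecoder().decode(AlbumsResponse.self, from: data)
// let variants = response.data.first?.attributes.audioVariants ?? []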
The problem with this feature isn’t the technical workings but how I present it within the UI of the app. I can narrow this down to two problems: naming and settings.
At present, the app has two tabs: Activity and Spatial Audio. My initial feeling was that I should replace the Spatial Audio tab with one named “Audio Variants” which would then list the 4 types of audio above (and possibly a 5th for Spatial Audio, which is basically a grouping of both Dolby Atmos and Dolby Audio; that’s how the app classifies it currently) and link you to a list showing all of the audio you have that matches that particular format. My problem with this approach is that “Audio Variants” is a boring name and not descriptive at all. It’s the name Apple uses internally but that’s not necessarily a compelling reason to use it as it certainly isn’t used in anything user-facing.
I’m having a mental block thinking of anything else suitable to name this tab. Advanced Audio? Audio Formats? Spatial & Friends? 😂
If you’ve got an idea, then please do leave a comment!
Another possible option is to simply add an additional tab named “Lossless Audio”[4] and provide some sort of filter within that so you can easily limit by either Hi-Res Lossless, plain old Lossless, or both together. This kind of filter could be extended to the Spatial Audio tab so users can filter between Dolby Atmos, Dolby Audio, or both as well.
After typing this out, I think I’m leaning more towards the second option as I feel that three tabs look better than two, and I like the idea of them being a simple collection of that type of music with filters for those that want just Dolby Atmos or Hi-Res Lossless.
(As a brief aside, there is a common concept in software development known as “Rubber Duck debugging” whereby you explain your problem to an inanimate rubber duck; articulating the problem can sometimes help you solve it. That seems to work for writing newsletters as well!)
The second problem I’ve been pondering for some time is how to organise the settings page and onboarding with these extra audio variants. At present, the settings page has a toggle to enable anything to do with Spatial Audio which, when enabled, reveals push notification settings and an option to create a Spatial Audio playlist; when that option is enabled, you then get the option to re-generate the playlist. I don’t really want to duplicate this 5 times so instead it seems like this needs to be nested within a single option (which brings us back to the “Audio Variants is a naff name” problem).
Again, Rubber Duck debugging may have helped as, while typing this, I now think this too could be broken down into just two options: Spatial Audio and Lossless Audio. Either one would take you to a more detailed settings page where you could choose to:
Enable the feature (i.e. Lossless Audio [off / on])
If enabled, it would then let you decide what qualifies (i.e. it would list “Hi-Res Lossless” and “Lossless” both with ticks next to them but you could untick “Lossless” to only care about “Hi-Res Lossless”)
The playlist and notifications would then use those settings
With this, it would be possible to mirror what the app already does with Spatial Audio, thus making upgrading easier (both Dolby Atmos and Dolby Audio would be enabled by default in this version, but you could choose to disable Dolby Audio if you wanted).
The only problem I see with this is if somebody wanted to have distinct playlists for each audio variant i.e. they want a playlist for Dolby Atmos and a separate playlist for Dolby Audio. I’m not too concerned about this as it isn’t an option currently so I don’t feel duty bound to include that distinction when dealing with Hi-Res Lossless and Lossless; my assumption is that most people only care about everything or only the best quality ones (i.e. they either really care about audio and want just Hi-Res Lossless or they don’t care and want both types lumped together).
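To make that concrete, the settings described above could be modelled with something like this (purely a sketch; none of these names come from the app):

enum AudioVariant: String, Codable {
    case dolbyAtmos, dolbyAudio, hiResLossless, lossless
}

// Each top-level toggle (Spatial Audio / Lossless Audio) owns the set of
// variants that qualify plus its playlist and notification options
struct VariantGroupSettings: Codable {
    var isEnabled = false
    var qualifyingVariants: Set<AudioVariant> = []
    var createPlaylist = false
    var sendNotifications = false
}

struct AppSettings: Codable {
    var spatialAudio = VariantGroupSettings(qualifyingVariants: [.dolbyAtmos, .dolbyAudio])
    var losslessAudio = VariantGroupSettings(qualifyingVariants: [.hiResLossless, .lossless])
}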
I think I may have just solved this by explaining it here but if you have strong feelings on either approach I’d love to hear your feedback!
Search and Redesign
The key thing that customers have been asking me for is a search system so you can find a particular track, album, or artist and view the changes. This should be relatively easy to put into place but it also leads me to want to redesign the way in which metadata is displayed. At the moment it’s just text where a heading gets highlighted if something changed. It’s been this way for over 6 years.
It works, but I’d prefer something a bit more modern. This will most likely be in the form of small tiles for each item complete with an icon. Rather than “Spatial Audio: No”, I’d have a tile for Audio Variants with each one shown as a tag. There would also be a direct link to the track, album, or artist on Apple Music and options to play the track.
Underneath these tiles would then be a historical list of metadata changes.
This is a profound difference as the app was designed as Activity > Date > Tracks > Metadata. The key thing was the change in metadata that happened on each day, so the app was built around that concept. However, I think a better design would be more akin to a music player, where you drill down to tracks through other groupings and then see a single unified page for that track which can list any changes. After all, most people are only interested in the fact that something changed and what that last change was. The new flow could therefore be thought of as Activity > Date > [Albums | Artists >] Tracks > Track.
With this in place, every part of the app can link to the same page regardless of whether they came from Activity, Spatial Audio, Lossless Audio, or Search.
Redesigning Spatial Audio
At the moment the Spatial Audio tab lists all of the tracks grouped by the date in which they were upgraded. Whilst you can sort the tracks within their dates by track title, artist name, album name, or album artist, the list is still only displaying tracks.
This page will be redesigned to add several new features:
Based on the outcome of the discussion with myself a few paragraphs up, there will be a filter to switch between Dolby Atmos, Dolby Audio, or All Tracks.
There will be a search bar so you can quickly filter the list.
Rather than breaking down the list by date, it will just be a list of all of the tracks that are upgraded with no sectioning (the date a track was upgraded is already handled by notifications and the Activity tab so does not need to be replicated here).
The items in the list will be customisable between showing tracks, albums, or artists making it much faster to browse. Tapping an album or artist would then list the tracks. Tapping a track would take you to the new track view showing the last snapshot of metadata with the redesigned tile interface I mentioned earlier rather than just immediately playing the song in the background (which has confused a few users).
All of these changes would also apply to the new Lossless Audio tab.
Other changes
SwiftUI - As I’m redesigning most of the pages in the app, it’s now time to just migrate everything to SwiftUI. I’m already using it for the onboarding and settings pages but I’m comfortable enough with it now that I think it’ll be better for the whole app.
Navigate between days - If you are viewing a particular date then there will be arrows and shortcuts to let you jump forwards and backwards.
Improved sorting - Handling the removal of common prefixes like “the”, “an”, “a”, etc., and using disc / track numbers to arrange albums (see the sketch after this list).
Exports and Shortcuts - I’d like to improve the shortcuts support as there are a few issues with it and the Shortcuts extensions have come a long way since I wrote the initial integration. I’d also like to add CSV exports so you can, for example, easily pull a report of which tracks have been deleted.
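On the improved sorting point, prefix-insensitive ordering can be done by deriving a sort key; a quick sketch of the idea (my illustration, not the app’s actual code):

// Build a sort key that ignores leading articles
func sortKey(for title: String) -> String {
    let prefixes = ["the ", "an ", "a "]
    let lowered = title.lowercased()
    for prefix in prefixes where lowered.hasPrefix(prefix) {
        return String(lowered.dropFirst(prefix.count))
    }
    return lowered
}

// "The Beatles" sorts under B; "A Rush of Blood to the Head" under R
let sortedTitles = ["The Beatles", "Abbey Road", "A Rush of Blood to the Head"]
    .sorted { sortKey(for: $0) < sortKey(for: $1) }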
Phew, I think that’s enough for one update and it will hopefully finish the modern design transition that started with v2. I was originally thinking this update would be done by now but, as I’ve increased the scope somewhat, I think August is a more realistic launch window. The aim would certainly be to have it done and dusted by early July to then give reviewers a few weeks to use the app. This will also give me some breathing room for iOS 17 and adding any new features that may arise from that in a v3.1 update in September.
Debouncing with Search & Add screens
One of my tasks for this month was to complete the main list feature of my upcoming app, Board Game Lists. The basic idea is that people will be able to create lists of the board games they own with the intention of being able to add notes to them and provide a simple export to share with friends when choosing what to play. For example, I might want to curate a list of games suitable for 4 players that I think my neighbours would enjoy.
Whilst the app will have a bit more nuance than that, a good first step is what I’ve come to term “Search & Add”; a system which lets you add an item to a list by performing a text search.
I’ve used this previously in my personal Game Track app which lets me log time spent playing video games:
The user taps on a search (🔍) or plus (➕) icon and is presented with a search screen. They type what they want, press “search”, and after a short loading screen the results are returned. The user can then tap a search result to perform the next action, be that viewing a details page, adding to a list, creating a time log, etc.
This all seems straightforward enough and is something you’ve no doubt seen in countless apps. As with most things, however, the bare minimum isn’t enough in this day and age; one can’t expect their users to have to press a button to search! No, the search should happen automatically as the user types. This adds a couple of wrinkles depending on how your data is retrieved:
If you’re fetching data from the network (as in this case) then you don’t want to send a request after every character is typed as then you’ll potentially end up with 10-20 requests of which only the last one is important. In this case we need to find a way to add a suitable delay to requests to ensure we aren’t sending too many or continuing with requests which are no longer needed.
On the other hand, if you’re fetching data from a local database, then the search may act more like a filter as it is possible to display results in realtime (i.e. as soon as you type a character you could feasibly show results for that search term). This method has its own problems as the interface may flicker wildly if you’re a fast typist while the search results converge on what you’re actually typing (i.e. if I had a local dataset of games, the chance of “G” matching the game Ghostwire: Tokyo is minimal… it’ll probably start to match after the 5th or 6th character). In this scenario you’ll likely want to slow down the filtering so it looks like the content is loading rather than being instantly available.
It is incredibly easy to get a search interface wrong. One of my pet peeves is when network requests are fired indiscriminately and content is loaded regardless of whether further characters have been typed (i.e. seeing search results for “Gh” whilst the search bar shows “Ghostwire”, then a few seconds later the newer results pop in). This is compounded by search results that return images as you end up with two rounds of network latency: the search itself and then the images beginning to load, which may no longer be required if the search progresses further.
So how do we fix all this? The best way is to use something known as debouncing.
Debouncing is removing unwanted input noise from buttons, switches or other user input. Debouncing prevents extra activations or slow functions from triggering too often.
The process is relatively simple. When somebody types a character, we start a timer which will call our search function. If another character is typed, we cancel the timer and start another one. For example, if we set our timer to be 1 second, then a user typing three characters per second (180cpm) would not see any requests until 1 second after they finished typing their query.
There are two problems with this: if we make our delay too short then slower typists may end up with requests after every character, whereas if we make it too long then there will be an unnecessary delay after the search term has been entered. We can mitigate the latter point slightly by ensuring that if a search button or enter key is pressed we cancel all timers and send a request immediately, but the choice of delay will ultimately boil down to personal preference and your audience. If your app is going to be used by an older (or very young) audience then you’ll likely want a longer delay than an app used by teenagers.
It is also important to remember that cancelling the timer is only preventing the next request from firing; it won’t necessarily cancel any network requests that are already active. For example, if we had our 1 second timer and the user entered an extra character after 1.2 seconds then there wouldn’t be a timer to cancel as it’s already fired; instead we need to cancel any network requests.
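To illustrate, here’s a minimal sketch of that timer-based approach (the names are my own, not code from any of the apps mentioned here):

import Foundation

final class SearchDebouncer {
    private var timer: Timer?

    // Called on every keystroke
    func textDidChange(to query: String) {
        timer?.invalidate() // cancels the *pending* search, not one already in flight
        timer = Timer.scheduledTimer(withTimeInterval: 0.8, repeats: false) { [weak self] _ in
            self?.performSearch(for: query)
        }
    }

    // Called when the search button or enter key is pressed
    func searchButtonPressed(with query: String) {
        timer?.invalidate()
        performSearch(for: query) // no delay for an explicit search
    }

    private func performSearch(for query: String) {
        // The network request would fire here; note that a request already
        // in progress still needs cancelling separately (e.g. a URLSessionTask)
    }
}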
This is a lot easier with Swift’s relatively new async/await concurrency as we can effectively merge our timers and network requests into a single task which, when cancelled, will cancel everything:
let client = IGDBAPIClient()
private var searchTask: Task<[RemoteGame], Error>?

func fetchSearchResults(for query: String, debounce: Bool) async throws -> [Game] {
    // Cancel any in-flight search (including its network request)
    searchTask?.cancel()

    guard !query.isEmpty else {
        return []
    }

    searchTask = Task { () -> [RemoteGame] in
        // When debouncing, wait 0.8s; if the task is cancelled during this
        // sleep, Task.sleep throws and the search never fires
        if debounce {
            try await Task.sleep(nanoseconds: 800_000_000)
        }
        let request = IGDBAPI.Search.post(query: query)
        return try await client.send(request, andDecodeTo: [RemoteGame].self)
    }

    guard let result = await searchTask?.result else { throw APIError.unknown }
    return try result.get().map { $0.convertToGame() }
}
When our fetchSearchResults() function is called, we first cancel the previous searchTask if there was one. After sanity checking that there is a search term, we then begin a new task. If we are debouncing (i.e. this has been called from typing rather than pressing “search”) we add a 0.8 second delay. Next we perform our actual API call and JSON decoding before returning the list of game objects for the UI to render.
The key thing here is that instead of using timers that we have to cancel independently of network requests, we instead use Task.sleep() to create a delay within the task itself. If the task is cancelled, then everything is cancelled including any network requests or slow-running JSON decoding tasks.
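For context, here’s roughly how a view model exposing fetchSearchResults(for:debounce:) might be wired up to a SwiftUI search field (an illustrative sketch; the view, view model, and Game properties are hypothetical):

import SwiftUI

struct GameSearchView: View {
    @State private var query = ""
    @State private var results = [Game]()
    let viewModel: SearchViewModel // assumed to own fetchSearchResults(for:debounce:)

    var body: some View {
        List(results, id: \.id) { game in
            Text(game.title)
        }
        .searchable(text: $query)
        .onChange(of: query) { newQuery in
            // Typing: debounce so we don't fire a request per keystroke
            Task { results = (try? await viewModel.fetchSearchResults(for: newQuery, debounce: true)) ?? [] }
        }
        .onSubmit(of: .search) {
            // Explicit search: skip the debounce and fire immediately
            Task { results = (try? await viewModel.fetchSearchResults(for: query, debounce: false)) ?? [] }
        }
    }
}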
With this newer concurrency version complete, I added it not only to my Game Track app but also to my Bookmark app that I talked about in Issue #3:
Previously I had been adding books to my personal database manually but it was becoming a bit of a hassle. The process would generally require me to do a Google Image search for the artwork, upload it to my server, then add the title and artist name manually in my database. There aren’t many APIs for book searches and those that do exist tend to have low quality images (*cough* GoodReads *cough cough*). That’s when I remembered that I had already solved this problem in 2013 with my iTunes Artwork Finder; it can return details of iBooks (or Apple Books as they have been known since 2019) and my artwork adjustments allow me to fetch the original, uncompressed cover art that the publisher provides.
The API request is incredibly simple and doesn’t even require authentication:
GET https://itunes.apple.com/search?term=grave%2Bexpectations&country=gb&entity=ibook&limit=25
This returns all the details of the books including artwork, title, author[5], pricing, genre, descriptions, release date, etc. With this data, I save the URL for the image in my database and then have a CRON job which runs daily to download, resize, and upload the images to my server so I always have a static copy.
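Calling this from Swift needs nothing more than URLSession and a couple of Codable models; a sketch (the field names come from the public iTunes Search API response):

import Foundation

struct ITunesSearchResponse: Codable {
    struct Book: Codable {
        let trackName: String   // the book title (see footnote 5)
        let artistName: String  // the author
        let artworkUrl100: URL? // thumbnail artwork
    }
    let results: [Book]
}

func searchBooks(for term: String) async throws -> [ITunesSearchResponse.Book] {
    var components = URLComponents(string: "https://itunes.apple.com/search")!
    components.queryItems = [
        URLQueryItem(name: "term", value: term),
        URLQueryItem(name: "country", value: "gb"),
        URLQueryItem(name: "entity", value: "ibook"),
        URLQueryItem(name: "limit", value: "25")
    ]
    let (data, _) = try await URLSession.shared.data(from: components.url!)
    return try JSONDecoder().decode(ITunesSearchResponse.self, from: data).results
}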
It may be that there will be some books in the future that are not available on this service, for which I'll need to revert to my old manual input, but it's working well for now. It’s always nice when something you built over a decade ago makes a new task a 2-minute job 😅
Anyway, after 1318 words let’s get back to the topic at hand which was Board Game Lists. This, again, requires a “Search & Add” screen which will perform a search against the BoardGameGeek API and return a list of games; upon selection, the game will be stored in a local Realm database and added to the respective list.
The code for this is pretty much identical to the previous two apps thanks to SwiftUI and its one-liner .searchable() modifier to create the search bar, along with .onChange() and .onSubmit(of: .search) to perform our search with and without debouncing respectively. The problem with this app is that BoardGameGeek has the best data set of board games but undoubtedly one of the worst APIs I’ve ever used.
Why is it so bad? Well, first of all it only serves XML. That’s not necessarily a dealbreaker but it does speak to a very old design. Secondly, some API calls won’t return a result immediately but instead queue one up and return a 202 Accepted status, requiring you to check again in a few seconds for the actual data 🤦🏻‍♂️. Finally, the API will typically require you to make multiple batched calls to get any usable data.
For example, let’s take a look at a simple search for board games that match “Lord of the Rings”:
GET: https://boardgamegeek.com/xmlapi/search?search=Lord%20of%20the%20Rings
Rather than a paginated list of games sorted by popularity, you instead get an alphabetical list of every matching game’s identifier, title, and year of publication; in this case, that’s 261 games[6]. You then need to take that and pipe the output into something like this:
GET: https://boardgamegeek.com/xmlapi/boardgame/175947,154416,207903,235550,166713,170362,154414,135064,154427,164663,166715,235552,194840,218784,176267,175950,252891,154426,218785,252889,175946,189948,231759,154425,154419,166711,202178,175951,194838,164662,175945,235551,189949,218783,203755,181306,207901,164664,189952,160425,161069,163672,227264,194841,207615,102875,349067,357292,357293,357291?stats=1
Yes, you need to make a separate request to get the actual details of each game. Fortunately you can send multiple identifiers at once but this can still be very slow which has led me to chunk the results into groups of 50 that are then fired in parallel (so 6 additional requests in this example for a total of 7 overall). Due to the search being sorted alphabetically, it is necessary to get the details of every single game that is returned before you can filter or sort them appropriately.
This would obviously be a nightmare if you were sending these requests after every keystroke so hopefully this long-winded journey through debouncing now makes a lot more sense 😂.
It was this project that got me to finally knuckle down and learn the newer async/await concurrency model as doing all this via blocks and completion handlers would have been a nightmare. Instead, I can have something like this:
// `cache` is a [String: Int] property tracking retry counts per request
private var cache = [String: Int]()

private func performBoardGameGeekRequest<R: APIRequest, T: Codable>(_ request: R) async throws -> [T] {
    let response = try await client.send(request)
    switch response.httpStatusCode {
    case 202:
        // BoardGameGeek has queued the result; wait and retry, up to 3 times
        os_log("Need to repeat request", log: OSLog.networking, type: .error)
        let count = (cache[request.identifier] ?? 0) + 1
        cache[request.identifier] = count
        if count >= 3 {
            throw APIError.noData
        }
        try await Task.sleep(nanoseconds: 5_000_000_000) // wait 5 seconds
        return try await performBoardGameGeekRequest(request)
    default:
        cache.removeValue(forKey: request.identifier)
    }
    guard let data = response.data else {
        throw APIError.noData
    }
    return await parseBoardGameGeekXML(data)
}
This single function can be used in multiple places. Thanks to the use of generics, I can throw it an API request along with the model I’m expecting back, and it will handle retrying the request if needed (limited to a set number of retries) as well as converting the XML to my own Swift models. At a higher level, there are functions like this:
private func fetchBoardGames<R: APIRequest>(with request: R) async throws -> [BoardGameGeekGame] {
    // The first request returns only the identifiers of every matching game
    let identifiers: [Int] = try await performBoardGameGeekRequest(request)
    os_log("Fetched %d identifiers", log: OSLog.networking, type: .info, identifiers.count)

    // Look up the full details in parallel batches of 50
    let chunkedIdentifiers = identifiers.chunked(into: 50)
    return await withTaskGroup(of: [BoardGameGeekGame].self, body: { group in
        var games = [BoardGameGeekGame]()
        games.reserveCapacity(identifiers.count)
        for ids in chunkedIdentifiers {
            group.addTask {
                let request = BoardGameGeekAPI.Game.get(identifiers: ids)
                return (try? await self.performBoardGameGeekRequest(request)) ?? []
            }
        }
        for await partialGames in group {
            games.append(contentsOf: partialGames)
        }
        return games
    })
}
This will take a request (be it for a search or looking at a specific user’s games) and then fetch just the identifiers before doing a batched lookup on each of those games in parallel and returning them as a usable model that can then be filtered locally.
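One note: chunked(into:) isn’t part of the Swift standard library; it’s a common little extension along these lines (my assumption of the implementation):

extension Array {
    // Split an array into consecutive slices of at most `size` elements
    func chunked(into size: Int) -> [[Element]] {
        stride(from: 0, to: count, by: size).map {
            Array(self[$0..<Swift.min($0 + size, count)])
        }
    }
}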
Once again, the beauty of these async/await functions is that they can hide a complex chain of requests in a single line that can be cancelled simply. Network cancellation is relatively rare (how often do you see UI in an app to cancel loading something?) but it’s amazing to look at something as simple as a search page to find that you may need to cancel multiple requests with every key the user enters.
Unfortunately my journey with the BoardGameGeek API is not yet over as my next task for Board Game Lists is to create the concept of a “collection”. If the user provides a BoardGameGeek username, then the app will keep their online collection in sync with the app so if you add a game to your collection on their website it will be reflected within the collection list in the app; this will then be filterable via a number of smart lists so you can pull out something like “games suitable for 2-4 players with an average playtime of under 2 hours and a rating of over 4 stars”.
At least it will be something different; after adding it to 3 apps this month, I’m a little bored of the “Search & Add” screens now 🤣
Apple TV to iPhone communication
I mentioned earlier the dichotomy of being excited for the new things at WWDC whilst also dreading the amount of work it potentially generates. This is exacerbated at this time of year by looking back and seeing all the new APIs that were released in 2022 that I haven’t made use of.
One of those new features was the DeviceDiscoveryUI framework which allows an Apple TV app to connect and communicate with an iPhone, iPad, or Apple Watch. A good example of this would be how the Apple Watch communicates with the Apple Fitness app.
It’s not necessarily a fair comparison as whilst you might expect them to be the same, the DeviceDiscoveryUI framework has a number of restrictions:
It only works on tvOS (so you can’t communicate between an Apple Watch and an iPad like Apple Fitness can)
It only works on Apple TV 4K (Apple Fitness can work with Apple TV HD)
The tvOS app can only connect to one device at a time (i.e. you couldn’t make a game with this that used two iPhones as controllers)
The tvOS app can only connect to other versions of your app that share the same bundle identifier (and are thus sold with Universal Purchase)
This will not work on either the tvOS or iOS simulators. You must use physical devices.
The UI for the connection setup is also different to Apple Fitness as we will see shortly.
My use case for this technology is a bit convoluted as I was really looking for an excuse to use it rather than the best fit. I have a personal app named Stoutness that I use on my Apple TV every morning to give me a briefing on my day whilst I do my chiropractic stretches. Using shortcuts and various apps on my iPhone, I send a ton of data to my server which the Apple TV app then fetches and uses. The app also communicates directly with some 3rd party APIs such as YouTube and Pocket.
One of the main reasons for the app is to get me to work through my backlogs of games, books, videos, and articles by having the app randomly pick from my various lists and presenting them to me; I then know “out of the 4 books I’m currently reading, I should read x today”[7]. The problem is that later in the day I often forget what the app had decided I should use, a particular problem when it suggests 5 articles for me to read from a backlog of about 200 😬. Whilst I cache this information daily in the Apple TV app, it's a bit of a pain to fire it up just to skip through a few screens and remember what I should be reading. Surely this information would be better on my phone?
The obvious way to do this would be for the server to make the calls to Pocket and YouTube and then store the cache in my database along with the random choices of games and books. An iOS app could then download that in the same way the tvOS app does. This is true, but it’s not as fun as learning a new framework and having my phone connect to the Apple TV to a) send all the data that my shortcuts used to send and b) have the cache sent back in response, ready to be used on iOS.
After a brief look at the docs, I naively assumed this would be done in an hour as it looked vaguely similar to the way in which an iPhone app can talk to an embedded Apple Watch app or a Safari extension via two way messaging. After 4 hours, I finally got something working but it does not feel as solid as I would like…
Apple provide a developer article titled “Connecting a tvOS app to other devices over the local network” that sounds like it should be exactly what we need. It details how we present the connection UI (in both SwiftUI and UIKit), how to listen for the connection on iOS / iPadOS / watchOS, and how to initiate the connection. However, there are two issues with this article.
First of all, most of the code in it doesn’t actually compile or has been written incorrectly. The SwiftUI code references a “device name” variable which isn’t present[8], fails to include the required "fallback" view block (for displaying on unsupported devices like the Apple TV HD), and presents the device picker behind a connect button, failing to notice that the picker itself has its own connect button which sits transparently above the one you just pressed.
For the UIKit code, it references an NWEndpointPickerViewController which doesn’t exist; the correct name is DDDevicePickerViewController.
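Putting those fixes together, the working SwiftUI version looks roughly like this (based on my reading of the documentation; the service name and placeholder views are illustrative):

import SwiftUI
import DeviceDiscoveryUI
import Network

struct ConnectView: View {
    var body: some View {
        // DevicePicker renders its own button from the label below,
        // so there's no need to wrap it in another one
        DevicePicker(.applicationService(name: "stoutness")) { endpoint in
            // Hand the endpoint off to create an NWConnection
            connect(to: endpoint)
        } label: {
            Text("Connect to iPhone")
        } fallback: {
            // Required: shown on unsupported devices such as the Apple TV HD
            Text("This feature requires an Apple TV 4K.")
        } parameters: {
            .applicationService
        }
    }

    private func connect(to endpoint: NWEndpoint) {
        // Create the NWConnection with the same application service parameters…
    }
}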
Once the actual picker is presented, things start to look very promising. You get a fullscreen view that shows your app icon with a privacy string that you define within Info.plist on the left-hand side whilst any applicable devices are listed on the right-hand side.
An important thing to note here is that the devices do not necessarily have your app installed; they are merely devices potentially capable of running your app.
When we initiate a connection to an iPhone, a notification is displayed. The wording can’t be controlled and will be different depending on whether the corresponding app is installed or not.
You seem to have around 30 seconds to accept the connection, otherwise the tvOS interface goes back a step and you need to send a new request. If you do not have the app installed, tapping the notification will take you to the App Store page.
We now come to the second problem in Apple’s documentation:
As soon as the user selects a device, the system passes you an NWEndpoint. Use this endpoint to connect to the selected device. Create an NWConnection, passing it both the endpoint and the parameters that you used to create the device picker view. You can then use this connection to send or receive messages to the connected device.
The emphasis above is mine. This is the extent of the documentation on how to actually use the connection to send and receive messages. It turns out that the connection uses classes from the In-Provider Networking APIs that were introduced in iOS 9 specifically for network extensions. In fact, this is still the case according to the documentation:
These APIs have the following key characteristics:
They aren’t general-purpose APIs; they can only be used in the context of a NetworkExtension provider or hotspot helper.
There is zero documentation on how to use these APIs in the context of Apple TV to iOS / iPadOS / watchOS communication 🤦🏻‍♂️.
In terms of sending messages, there is only one method, aptly named send(content:contentContext:isComplete:completion:). This allows us to send any arbitrary Data such as a JSON-encoded string.
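Sending is straightforward enough; something along these lines works (a sketch, with the payload type left generic):

import Foundation
import Network

// Encode a payload as JSON and send it over the established connection
func send<T: Encodable>(_ payload: T, over connection: NWConnection) throws {
    let data = try JSONEncoder().encode(payload)
    connection.send(content: data, contentContext: .defaultMessage, isComplete: true, completion: .contentProcessed { error in
        if let error {
            // In practice you'd surface this to the UI or retry
            print("Send failed: \(error)")
        }
    })
}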
The real problem is how to receive those messages. There is a method named receiveMessage(completion:) which, based on my work with watchOS and iOS extensions, sounds promising. Apple describes it as “schedules a single receive completion handler for a complete message, as opposed to a range of bytes”. Perfect!
Except it isn’t called, at least not when a message is sent. In a somewhat frustrating act, the messages only appear once the connection is terminated, either because the tvOS app stops or because I cancel the connection. I tried for multiple hours but could not get that handler to fire unless the entire connection was dropped (at which point any messages sent during that time would come through as one single piece of data). I can only assume the messages are being cached locally without being delivered, yet when the connection drops it suddenly decides to unload them 🤷🏻‍♂️.
It turns out you need to use the more complex receive(minimumIncompleteLength:maximumLength:completion:) which requires you to say how big you want batches of data to be. You also need to resubscribe to this handler every time data appears on it. The problem here is that whilst there is a “completion” flag to tell you if the whole message has arrived, this is never true when sending from tvOS, even if you use the corresponding flag on the send method. In the end, I limited the app to 1MB of data at a time as everything I send is well below that. I’ve never run into a problem with only partial data being sent but it is a potential risk to be aware of.
If you were using this for critical data, I’d suggest only sending encoded text and providing your own delimiter to look for, i.e. batch incoming strings together until one ends in a “|||”, at which point you know that was the end of a message from tvOS.
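Here’s a sketch of what that receive-and-reassemble loop might look like (my own illustration rather than code from the app):

import Foundation
import Network

final class MessageReceiver {
    private var buffer = Data()
    private let delimiter = Data("|||".utf8)

    // Read up to 1MB at a time, splitting out any complete delimited
    // messages and re-subscribing for the next chunk of data
    func listen(on connection: NWConnection, onMessage: @escaping (String) -> Void) {
        connection.receive(minimumIncompleteLength: 1, maximumLength: 1_048_576) { [weak self] data, _, _, error in
            guard let self, error == nil else { return }
            if let data { self.buffer.append(data) }
            while let range = self.buffer.range(of: self.delimiter) {
                let messageData = self.buffer.subdata(in: self.buffer.startIndex..<range.lowerBound)
                self.buffer.removeSubrange(self.buffer.startIndex..<range.upperBound)
                if let message = String(data: messageData, encoding: .utf8) {
                    onMessage(message)
                }
            }
            self.listen(on: connection, onMessage: onMessage) // resubscribe
        }
    }
}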
On the positive side, the connection setup and data sending are near instantaneous and the user-facing UI works well. However, as there were already low-level networking solutions for sending data between devices (including non-Apple devices), it’s incredibly odd to me that Apple went to the effort of creating a beautiful device pairing API and UI for both SwiftUI and UIKit but didn’t extend that to the basics of sending data. Local networking is hard. I have no interest in diving into the minutiae of handling UDP packets; I just want to send some basic strings between devices!
Judging by the complete lack of 3rd party apps using this feature or articles detailing how to use this API, I’m going to go out on a limb and say it’s unlikely we’ll see any improvements to this system in tvOS 17. Regardless, I’ve filed a few bug reports in the hope that the documentation can be tidied up a bit. I’ve also written an instructional blog post on the subject and published a demo project on GitHub for those interested in implementing it. Just be aware that this is not the robust solution I was hoping it would be!
Recommendations
Video Games
Dredge - Sail around some islands doing some fishing to upgrade your boat, which allows you to catch more fish, which allows a better boat, which means more fish, and so on. That’s only half of it though. The other half is the story. A Lovecraftian story. Excellent on the Steam Deck.
Laya’s Horizon - From the creators of Alto’s Adventure (probably my favourite mobile game) comes a new 3D adventure in a very similar mould. You glide down a mountain discovering new places to stop, picking up collectibles, and engaging in races and other aerial activities. The exploration is of particular interest to me with the map revealing itself as you descend the 360º area. Well worth a try. Available for free on iOS and Android with a Netflix subscription.
Pineapple on Pizza - This is an odd one. It’s free, it’ll last about 10-15 minutes, but its ending is very entertaining and has a catchy tune. I played it on Steam Deck. My suggestion would be to give it a try without reading anything about it!
Books
Grave Expectations - The debut novel of Alice Bell, host of one of my favourite podcasts on Rock Paper Shotgun (The Electronic Wireless Show). It’s a murder mystery in a big old Knives Out house attempting to be solved by a millennial medium and the teenage ghost that is attached to her. It’s hilarious but also has a great mystery at the heart of it. Highly recommended.
Movies
The Super Mario Bros. Movie - I completely forgot to mention this in the last issue which isn’t a glowing endorsement 😂. If you’re a Mario fan or you have kids then you should watch it (and probably have, twice), otherwise give it a miss. It’s not a kids’ film with mass appeal for both children and adults like The Lego Movie, but instead a relatively safe romp through The Mushroom Kingdom filled with nostalgic music and bright colours.
TV
Jury Duty - A documentary about a US jury. The twist is that everyone – the jury, the guards, the judge, the witnesses, the lawyers – is an actor; except one juror who does not realise the whole thing is fake. Absolutely hilarious. I was worried it would be more cringe than amusing but it’s genuinely very funny. James Marsden gets a special mention for starring as himself. I hope there is a behind-the-scenes of this at some point as there must have been a phenomenal amount of work to pull it off.
Roadmap
The roadmap is my way of committing to what I’m going to do before the next issue:
13th April - 10th May
Complete v2.1 of Music Library Tracker ❌
Finish the main list feature of Board Game Lists ✅
It’s been a very busy month so I’m not surprised I didn’t get v2.1 completed. With my renewed enthusiasm after detailing what is going to go into the rebranded v3 update, I should have something more manageable ready for next time!
11th May - 7th June (Issue #10)
Complete the Lossless Audio integration in Music Library Tracker v3
Get the BoardGameGeek collection sync complete in Board Game Lists
Build something with the new iOS 17 SDK (or RealityOS 👀)
That wraps it up for this issue. I hope you found something of interest and that you’ll be able to recommend the newsletter to your friends, family, and colleagues. You can always comment on this issue or email me directly at ben@bendodson.com.
1. Thanks Amazon Prime (the same-day delivery service, not the video streaming 😂)
2. When I say “now decided” I do literally mean “now”. I was just finishing off writing this section when I realised this really should be a bigger release.
3. OK, it’s not every Audio Variant. I’m pointedly ignoring “Lossy Stereo”.
4. So the tabs would be “Activity”, “Spatial Audio”, “Lossless Audio” or maybe shortened to just “Activity”, “Spatial”, “Lossless”…
5. The API was built for iTunes music before iBooks was bolted on. For that reason, the fields for title and author are amusingly named trackName and artistName respectively 😂
6. BoardGameGeek has details of all the various expansion packs so a large property like Lord of the Rings will undoubtedly have hundreds of results, especially when you get into living card games. A modern API would let you ignore these results but there’s no such luck here!
7. I go into why I do this in an article I wrote about the app back in 2021.
8. I have been unable to divine a way to get the name of the device you are connected to.