Daily coverage of Apple’s WWDC 2019 conference, by John Sundell.

A Swift by Sundell spin-off.

Developer interview: Meghan Kane on Apple’s machine learning strategy, and how it might evolve at WWDC

It’s time for the last pre-WWDC developer interview — a mini-podcast and article series in which we hear from some of my friends from around the Apple developer community about their thoughts, hopes and dreams for WWDC.

For the grand finale, I’m talking to Meghan Kane — who is a prominent machine learning developer, university lecturer, and researcher — to get her take on Apple’s current machine learning strategy, and what kind of improvements we might see at this year’s WWDC.

Listen to the interview

You can also listen in your favorite podcast app, by adding the following RSS feed:

https://wwdcbysundell.com/podcast

Read the interview

John: Machine learning is something that we hear more and more about these days — it almost seems like every feature that Apple announces at their various events needs to be powered by machine learning somehow. What do you think we’ll see this year — and do you think that machine learning will keep expanding into more areas of Apple’s operating systems?

Meghan: So this will be the third year that we’ll be seeing machine learning updates from Apple, with developer tooling that’s very accessible. In 2017 they released Core ML, and last year we had improvements to Core ML, we got Create ML — which is for training — and we got more from Turi Create, so that we can also use it for training.

This year, I’m sure that we’re going to have tons of updates — but there are so many different tools now that we can use for training, and there’s also Core ML for doing predictions — so I think the first thing we’re going to see is some cleanup by consolidating some of these systems, so that they’re a little bit easier to use together. Because there are so many tools that Apple provides now, it can be hard for people to know where to start — and some people sometimes conflate one tool with another — so that’s something that I’m really looking forward to, and I’m looking forward to seeing how they’ll do it.

And then, some of these tools are in Swift, and some of them are in Python. In Swift we have Create ML and Core ML. Using Create ML you can train a network for a limited number of use cases, right in Xcode — and then you get a model exported that you can use with Core ML. That’s pretty seamless, but then Turi Create — which is an awesome tool that you can use for training more networks — that’s in Python right now. While I don’t expect them to change the language that Turi Create is implemented in, I’m interested to see how they’ll integrate it better with the whole ecosystem.
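To make that workflow a bit more concrete, here’s a minimal sketch of training an image classifier with Create ML in a macOS playground and exporting an .mlmodel file for use with Core ML. The folder paths, metadata and model name below are placeholders, so adjust them for your own data:

```swift
import CreateML
import Foundation

// Point this at a directory whose subfolders are named after each label,
// with the training images for that label placed inside.
let trainingData = MLImageClassifier.DataSource.labeledDirectories(
    at: URL(fileURLWithPath: "/path/to/TrainingImages")
)

// Train the classifier. Depending on the size of the dataset this can take
// a while, since Create ML performs transfer learning on top of a built-in
// feature extractor.
let classifier = try MLImageClassifier(trainingData: trainingData)

// Export an .mlmodel file, which can then be dropped into an Xcode project
// and used for on-device predictions through Core ML or Vision.
try classifier.write(
    to: URL(fileURLWithPath: "/path/to/FlowerClassifier.mlmodel"),
    metadata: MLModelMetadata(
        author: "Jane Appleseed",
        shortDescription: "Classifies photos of flowers",
        version: "1.0"
    )
)
```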

We also have Metal Performance Shaders, and that has some really awesome tooling, but it kind of seems like it’s its own thing at the moment. We know that it’s used under the hood, so I’m looking forward to seeing how they’ll tie that together. I also think that we’ll see more updates in Apple’s apps themselves, and probably some improvements to Siri. I’m not sure about other apps, but I’m hoping that they’ll make some more health-related machine learning updates, and improve Photos as well. It’s been getting better and better over the years, but there’s still room for improvement.

John: Yeah, totally. It’s really exciting to see these technologies, like Siri, getting better — and how some of the tools that Apple is using can now also be used by us third party developers as well. So you mentioned some really great tools there — Core ML, Create ML, Turi Create, etc. It seems that Apple wants to make machine learning more accessible, but at the same time many developers still feel that it’s something a bit “foreign”, and somewhat hard to understand. What are some other steps that you’d like to see next week at WWDC in order to make it easier to integrate machine learning into any app?

Meghan: Apple has done a pretty good job of making machine learning more accessible, especially with Create ML last year — which lets you train a neural network in Swift, in Xcode, with very few lines of code. But it’s still very confusing which tool you should use for which purpose, and people have this fear of getting started. I don’t think that the fear is caused by Apple — I think that this is something that people just have about machine learning in general, which is unfortunate. But I think it’s breaking down over time, as people start to realize that machine learning is accessible for basically anybody — you don’t need to have a math background. If you have a basic programming background then you should be able to get started right away.

I would like to see Apple tie these tools together a bit more neatly, so that you know where to start. But I also think that they could do a better job of educating the community about machine learning. They do a good job with Create ML, with getting people through the door — but then there’s definitely a barrier that I’ve seen when teaching in class, and when doing community work — people don’t know how to get over the initial “hump” of training a basic image classifier.

So I think that there’s some work to be done there, and some other companies — such as Google — are, I think, a bit stronger right now with their education initiatives. They’re educating the community, and they have the Google Machine Learning Crash Course, which is excellent. Apple has these resources available in some places, hidden away in WWDC videos — there are some gems teaching you the basics of machine learning, but they’re not surfaced.

John: Right, you have to go spelunking through the slides.

Meghan: Yeah, exactly. I came across some really good educational material by the Metal team in a talk from last year’s WWDC, by Anna Tikhonova, who’s an engineer at Apple — but I don’t know how many people are looking as far as Metal if they’re just getting started with machine learning. But she did an excellent, very concise overview of the theory behind neural networks — that was very accessible, and I think that a lot of people would’ve appreciated it. So it would be cool if they surfaced these materials, so that they’re a bit more clear and up-front for developers to utilize.

John: Yeah, absolutely. It seems like when you use something like TensorFlow from Google, like you mentioned, the learning curve is a bit flatter — while if you use Core ML or Create ML you do get a lot of things that are super easy, like you can just drag-and-drop things into a Swift playground, but then if you want to dive a little bit deeper, the learning curve ramps up really quickly.

Meghan: Yeah, I think so, and I think that’s where the pain points are right now. So I hope that they’re going to target fixing that this WWDC, but we’ll see. It’s also a problem with machine learning moving so quickly in the whole industry — there’s been an explosion of tools from Apple, Google, and others. In general, it’s hard to know which one to use for what purpose — it’s kind of like an “à la carte” model in some ways — so I think that if they turned this into a nicer “menu” or something, one that lets you choose which tool to use and gives you a little bit better intuition for it, that’d be very helpful.

John: Kind of like a “machine learning buffet” instead of à la carte [laughs].

Meghan: [laughs] Exactly.

John: So we’ve been comparing Apple’s and Google’s efforts in terms of machine learning — and one really big difference is that Apple has so far been really pushing on-device machine learning, instead of using cloud-based techniques — like Google, Amazon, and other companies are doing — mostly for privacy reasons. Do you think that they’ll continue sticking to that approach this year, and what are some of the pros and cons that you see with this on-device approach in general?

Meghan: Yeah, Apple has been taking that approach, which I think goes in line with their ethos — with privacy being a main concern that Apple really pushes — since if you’re doing machine learning predictions on-device then it all stays local. I think that they’re probably going to have to do some cloud-based services at some point, if they want to compete with Google — I’m not sure if they’re actually trying to compete with Google, and I guess that’s the main question. But I think that we’ll continue to see more improvements to doing machine learning on the device, and hopefully for a wider range of devices — because not everyone in your user base has the latest iPhone, and it can be difficult to run machine learning models on older devices.

But I’m really up in the air when it comes to which direction they’ll go — because Apple has hired a lot of really strong talent from Google in the past year, including Ian Goodfellow, who is the creator of generative adversarial networks, and who was a Google Brain researcher. They hired him as a director of machine learning for special projects. I’m not sure if that means that they’re going to go in the same direction as Google is going in — I guess we’ll have to wait and see.

John: Yeah, it’s very interesting to see how far they can push this on-device strategy — and even though they’re doing things like differential privacy — there seems to be some very hard limits as to what you can do with this strategy, and it’s going to be interesting to see how they’ll be able to work around those limits.

Meghan: It is quite limited for them to just push on-device machine learning — you can train models right now using some of the Apple toolchains, and try to use them elsewhere — but since they don’t have any cloud hosting part, it’s going to be limited in terms of who they’re going to be targeting. And they’re hiring a lot of researchers, so if they’re trying to target the research community, then they’re going to have to broaden the scope of who they’re targeting and what they’re providing — in order to actually capture that market. So I think there’s going to be some updates to that, I just don’t know to what extent.

John: Yeah, that makes sense. So you’ve been working with machine learning now for many years, and you’re not only working on iOS, and you’re not only using technologies from Apple — so when you’re using other technologies, like Google’s TensorFlow — what are some of the things that you wish Apple would adopt, or be inspired by, from those technologies? Do you think that Apple is behind or ahead in terms of machine learning at the moment?

Meghan: I think that Apple is still a bit behind — but they’ve closed the gap a lot quicker than I thought in the past year, especially since they’ve hired a lot of talent, which I think will result in announcements in the near future. So I already mentioned tying the tools together into a holistic package — or buffet [laughs] — and education. Those are two areas that I really hope to see improvements in, and which I think that Google does a better job with right now.

Another thing that I think TensorFlow does a bit better right now — which might just be because it’s in a later stage than some of Apple’s tools, which are a bit newer — is debugging. TensorFlow has this tool called TensorBoard, which lets you investigate the training of your network. It’s a visual tool, you can use it in the browser, and you can see how your accuracy is improving over time — even down to the parameters that are being trained, how those are working — which is good if you run into issues. Apple doesn’t have anything that’s great for that right now.

Also, datasets. TensorFlow comes with some stock datasets included, and as far as I know that’s not provided by Apple right now — I could be wrong, it could be somewhere — but it would be cool if they’d also provide that for us, so that we could import datasets on demand, like you can in TensorFlow.

You know, the big question is actually what’s going to happen with Swift for machine learning — and Apple’s versus Google’s efforts in that regard. Because Chris Lattner works at Google now, and they had their TensorFlow Dev Summit in March, and there are some really awesome new improvements in Swift for TensorFlow. So they have forked the Swift language, and have added interoperability with Python, so that you can actually do your training in Swift — using TensorFlow — and there are a lot of advantages to this over Python, with compile-time checks and debugging tools.
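As a rough illustration of what that looks like in practice, here’s a tiny sketch using the Swift for TensorFlow toolchain. The APIs were still evolving rapidly at the time of writing, so treat the exact names as illustrative rather than definitive:

```swift
import TensorFlow
import Python // Python interoperability ships with the Swift for TensorFlow toolchain

// Call straight into an existing Python library from Swift.
let np = Python.import("numpy")
let array = np.arange(6).reshape(2, 3)
print(array)

// Strongly typed tensors with automatic differentiation, which gives you
// compile-time checks that plain Python code can't offer.
let x = Tensor<Float>([[1, 2], [3, 4]])
let grad = gradient(at: x) { x in (x * x).sum() }
print(grad) // The gradient of the sum of squared values, i.e. 2 * x
```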

I’m not sure if Apple and Google are going to be working together at any point, but this is going to be a really interesting space to watch — and I think there’s going to be quite a lot of development in Swift for TensorFlow in the near future, so I’m not sure what this means for Apple. But I would secretly — or I guess openly — like for them to work together a bit more.

John: Yeah, that’d be fantastic. Google has already been contributing a lot of changes back to the core Swift compiler, to the main repo, but it would be really great to see a more united effort here — in order to make machine learning easier to use with Swift. That’d make it a lot more accessible to iOS developers, since we already know Swift. It’d also make things more integrated and — you mentioned things like debugging and having stronger types — those are also really important when it comes to lowering the barrier to entry. Because if you have a hard debugging experience, it’s going to feel even more daunting for someone who is a beginner.

Meghan: Right, and both Apple and Google are targeting app developers with Swift — and obviously everyone who’s using the Apple toolchain has done some sort of iOS or Mac development, but may not know that much about machine learning — so they need to make the tooling accessible for beginners.

But Google is also very much targeting researchers, and Apple has hired so many researchers, and some of those people may — even though they’re very competent researchers — have a little bit weaker programming skills. So using a statically typed language like Swift actually really helps the researchers, because it lets them catch errors earlier. So they’re trying to target two different groups in some ways, so it’ll be really interesting to see how they unify this in a way that’s nice for everyone to use — it’s really exciting to watch.

John: Yeah, absolutely, it’s super exciting to watch. I’m still getting into machine learning myself, but I’m really looking forward to seeing what’s going to happen at this year’s WWDC in this area.

Meghan: Yeah, me too. Another huge thing is open source. Everything Google is doing with TensorFlow is open source, and obviously Apple doesn’t have as many open source projects — but Apple has open sourced Turi Create, which is the training platform that’s in Python, and you can use it for a lot of different machine learning tasks. It’s really interesting to see how this is going to go — is Apple going to move more towards open source with machine learning, or are they going to stay on their traditional path of keeping everything a bit more closed?

John: Yeah, that’s super interesting, and especially when it comes to Swift tooling — and the whole Swift world — everything seems to be almost “open source by default”, so it’s really great to see that mindset and strategy leak out into other parts of Apple as well, like machine learning.

Meghan: Yeah, I’m a big fan of that.

Thanks so much to Meghan Kane for her predictions and insights into what Apple might have in store for us tomorrow when it comes to machine learning. You can follow her on Twitter @meghafon (such a great Twitter handle).

This was the last pre-WWDC developer interview, and I hope that you’ve enjoyed this mini-podcast and article series. I’d love to hear your feedback on it — feel free to either contact me, or find me on Twitter @johnsundell.

Thanks for reading/listening! 🚀