What the Google Pixel event means for Developers
It’s that time of year again… The streets lined with logos and stands, smartphones abuzz with notifications, and hordes of people waiting in line for the next latest and greatest. What am I talking about? It’s conference season, of course! With Apple kicking off this year by unveiling its iPhone ‘Ecks’ & 8, swiftly followed by Amazon introducing its latest line of AI-reliant smart home devices, the bar was set. The techies and developers among us were already excited to toy around with our new devices come Christmas. That is, until Google threw its hat into the ring.
For those who missed it, Google’s Pixel event yesterday debuted a number of exciting, developer-friendly gadgets and prospects, raising the bar even further. So what’s different this year? Let’s break it down.
Rise of the (semi) sentient devices
Unless you’ve been asleep for the better part of the last year, you’ll have noticed that every big name in the industry has released its own variant of the smart speaker into the wild, with varying degrees of success. Among them were the hugely successful Google Home and Amazon Echo devices, and while both provided plenty of wiggle room for developers to toy around with their code… Google remained the undisputed open source king. While the Amazon Echo/Alexa allowed for some modifications, including the ability to DIY your very own sort-of-Echo, its heavy reliance on the Amazon ecosystem and proprietary software ultimately became its undoing. Then there’s the fact that Amazon’s other Echo/Alexa devices essentially run on a modified and handicapped version of Google’s Android OS.
While all of those may sound like negatives, to the average home user they aren’t deal breakers! To developers, however, they could mean the difference between getting their apps working seamlessly with the Google Home and other (non-Google) Android devices, or building something that works on the Echo and Amazon devices only. This year’s lineup is no different: Amazon has released a slightly more refined version of last year’s Echo devices (still very much baked into its ecosystem), while Google has released two variants of the Home, with better compatibility with Android apps.
Which brings us to our next topic – Apps!
Tl;dr – If you’re a developer, you’re looking at a lot more open source courtesy of Google!
Cross development and the Pixelbook
Aside from dominating the voice search and tablet/smartphone markets, Google has now launched an offensive in the laptop market in the form of the Pixelbook, its first real mainstream attempt at bringing Chrome OS to a laptop. While Chrome OS and Google’s previous attempts at competing in the laptop market have been largely ignored and overlooked, the Pixelbook is looking to win favor with power users like developers with one little trick up its sleeve – Android apps running in a desktop environment. Now, while it’s no Windows or Mac, it does allow developers, especially those developing for the Android platform, to test their apps natively on an operating system built with the same core code as the one running on other Android phones and tablets.
That said, all of that is just hypothetical right now; the Pixelbook’s real ability remains to be tested, and as for the cross-compatibility of apps? Time will tell! For now, to learn more, watch the video below:
Tl;dr – For devs, this not only allows you to test your apps in their native environment but also lets you develop for multiple platforms at once.
Software over Hardware
The Pixel phone, the Pixelbook and the Home line: what do they all have in common? Aside from the trademark weird-and-white aesthetic, the driving force behind all three is AI-centric software. Google’s obsessive focus on software extends even down to its new accessories, like the Google Pixel Buds – Google’s own-brand earphones with translation software built right in, allowing for seamless translation between languages. Taking this further is the machine learning built into the speakers, which self-adjusts volume based on environmental factors, and the Pixel phone’s camera, which has a few AI-assisted tricks, like the recognition of depth and contextual information in the form of Google Lens.
Now, if all that sounds like gibberish to you, here’s a breakdown of the phone too (once again courtesy of The Verge):
Tl;dr – It’s only a matter of time before someone cracks the code to getting their own apps working with the new Buds’ voice/translation features and the contextual analysis of Google Lens!