Waymo Is ‘The Best Thing Since Sliced Bread’
Adela Uchida of CBS affiliate KEYE-TV in Austin reported this week about staff members at the Texas School for the Blind and Visually Impaired “finding freedom” in riding Waymo. The story centers on Marcus Cardwell, a receptionist at the school.
Cardwell staunchly believes Waymo is “the best thing since sliced bread,” as it helps him get around on his own with agency. He said, in part, reclaiming his independence is “absolutely important” because “you don’t ever like to feel like you have to wait on someone else when something’s important.” He added it’s normal for those in the Blind community to perpetually wait at the mercy of public transit, friends, etc.
“The future-looking thing to where you kind of feel that, even if people are busy and there’s not a person available to you, that there may be an autonomous car available,” Cardwell said.
Emily Coleman, the school’s superintendent, expounded on Cardwell’s sentiments.
“Mobility is probably one of the biggest barriers to employment, recreation,” she said. “And so having more ways to access independence is just always a positive change for your community.”
Coleman continued: “They had never gone anywhere by themselves, because you always need a driver,” she said. “You always have to be on a bus, like you’re never alone and in charge of your own independence. And it was the stories about like, wow, that was my first time ever alone in a car.”
The full interview with Cardwell, et al, was posted to KEYE’s YouTube channel.
I’ve said it numerous times, but it bears repeating: Waymo is unequivocally a revolutionary technology for accessibility. While it is obviously important to address problems like Waymo vehicles blocking traffic and becoming better, safer drivers, to unilaterally want to ban them, or dismiss their viability, is shortsighted and reeks of privilege. The fear of riding in a self-driving vehicle is one thing, and entirely understandable, but city leaders—in Austin, San Francisco, or anywhere else Waymo is running—should take a step back and seriously consider what Waymo enables for the disability community. Again, there are issues—if you’re a wheelchair user, for example, you’re excluded from the experience—but for people like Cardwell (and myself!), Waymo is a revelation of the highest order. That shouldn’t be minimized—but it is.
Apple’s Swift Student Challenge Winners Build Apps Amplifying Accessibility
In a Newsroom post published on Thursday, Apple highlighted four winners of its annual Swift Student Challenge for their work building apps with accessibility in mind. The brief profiles come 31 days before this year’s WWDC kicks off on June 8.
“Receiving real-time feedback while giving a presentation. Escaping a flood zone in Accra. Playing the viola, without the physical instrument. Drawing on iPad without worry of tremors. These are just four of the solutions that this year’s Swift Student Challenge Distinguished Winners created with their winning app playgrounds,” Apple wrote in the lede. “The annual Swift Student Challenge invites students from across the globe to bring their ideas to life through original app playgrounds built with Apple’s easy-to-learn Swift coding language. This year’s 350 winning submissions represent 37 countries and regions, and showcase a wide range of technologies.”
One of the students Apple spotlights is 20-year-old Gayatri Goundadkar. She spent her formative years drawing and painting with her grandmother in India, sharing a passion for a centuries-old painting technique called Warli. As her grandmother aged, however, her hands started shaking to the point that she could no longer paint as she loved to do. Alarmed by the loss, Goundadkar embarked on developing an app for the iPad called Steady Hands, which Apple describes as “an app playground that uses Apple Pencil stabilization to support individuals with tremors in creating art.” The app, she said, is targeted at older adults; it uses the PencilKit API in iPadOS to “analyze stroke data and recognize tremors,” distinguishing an intentional stroke from an unintentional mark. “Every drawing is then displayed in a personal 3D museum, because I wanted them to feel like artists, not patients,” Goundadkar said. “When users saw the stabilization working, they felt more confident.”
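Apple’s post doesn’t reveal how Steady Hands actually works, but the stroke-versus-tremor distinction it describes can be illustrated with a toy heuristic. This is a hypothetical sketch in plain Python, not the app’s PencilKit code: smooth each stroke with a moving average, then treat large deviations between the raw and smoothed paths as probable tremor.

```python
import math

def smooth(points, window=5):
    """Stabilize a stroke by replacing each point with the mean of its
    neighbors inside a sliding window, damping high-frequency jitter."""
    out = []
    for i in range(len(points)):
        lo = max(0, i - window // 2)
        hi = min(len(points), i + window // 2 + 1)
        xs = [x for x, _ in points[lo:hi]]
        ys = [y for _, y in points[lo:hi]]
        out.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return out

def jitter_score(points):
    """Mean distance between each raw point and its smoothed counterpart.
    A high score suggests tremor rather than an intentional stroke."""
    dists = [math.hypot(ax - bx, ay - by)
             for (ax, ay), (bx, by) in zip(points, smooth(points))]
    return sum(dists) / len(dists)

# A straight stroke vs. the same stroke with alternating vertical jitter.
steady = [(float(i), 0.0) for i in range(20)]
shaky = [(float(i), 3.0 if i % 2 else -3.0) for i in range(20)]
```

A real implementation would work from PencilKit’s sampled stroke points and be far more sophisticated (velocity, pressure, frequency analysis), but the shape of the idea, separating the intended path from the shake around it, is the same.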
For Apple’s part, the company is proud of the efforts from Goundadkar and the others.
“The breadth of creativity we see in the Swift Student Challenge never ceases to amaze us,” Susan Prescott, Apple’s vice president of worldwide developer relations, said in a statement included with today’s post. “This year’s winners found remarkable ways to harness the power of Apple platforms, Swift, and AI tools to build app playgrounds that are as technically impressive as they are meaningful. We’re incredibly proud to support their journey and can’t wait to see what they create next.”
Goundadkar and the others are invited to Apple Park to watch the keynote.
Sandwich Introduces Hovercraft for Mac
Adam Lisagor of Sandwich has announced a new macOS app called Hovercraft.
The app, $19 for one Mac and $29 for two, uses hand gestures to control one’s slide decks during video presentations. There are keyboard shortcuts too, but the app’s raison d’être is that it “hovers” around you as you speak, easily accessible via gestures.
“Pick Hovercraft as your camera in Zoom or wherever. Reach up, pinch the air, pull the window where you want it,” reads Hovercraft’s pitch on its website. “Your face stays on camera. The slide sits next to it. No screen share. No option-tab. No corner thumbnail.”
I discovered Hovercraft by way of this Jason Snell linked item on Six Colors. I find it really interesting because the gesture-based interaction model portends well for accessibility. Granted, it’s been eons since I last gave a presentation—in person, no less—but that doesn’t undercut the broader point here. Presuming one is able to learn and perform the gestures in the first place, using one’s hands to move through a slide deck and reposition the window can remove a lot of friction: heightened cognitive load, excess clicking, and the like. It can be difficult to cognitively stay on task and manage the “backend” of a presentation whilst also remembering salient talking points, not to mention using a mouse or trackpad or clicker to cycle through the slide deck itself and making sure screen-sharing is behaving itself. For people with certain disabilities, doing all this work in real time may well be the technological equivalent of climbing Everest or Kilimanjaro; thus, Hovercraft could be a lifeline in successfully reaching the summit.
Overall, Hovercraft strikes me as highly evocative of Vision Pro’s gestures, not to mention the head gestures on AirPods Pro. As someone who quickly acclimated to visionOS, I imagine learning Hovercraft wouldn’t be difficult. Lisagor and team ought to be commended not only for the damn clever name, but for the idea of using gestures.
Report: iOS 27 To Allow Pass Creation in Wallet
Mark Gurman reported for Bloomberg earlier this week that Apple is purportedly going to allow people to create their own passes within the Wallet app come iOS 27. The feature lets users “take a QR code and generate a custom pass around it,” he wrote on Monday.
“The capability is designed for situations where, for example, a gym or concert app provides a QR code for entry but doesn’t support the Wallet app,” Gurman said of the feature. “With the new tool, users can import that code and create their own pass.”
He continued: “Users can create a pass from scratch or rely on the iPhone’s camera to take a QR code and turn it into a digital ticket. The feature includes customization tools for styles, images, colors and text fields, allowing users to tailor the information displayed on each pass. Apple is testing three template options: standard, membership and event. Standard, in orange, is a default option for any type of pass, while membership, in blue, is geared toward entering places like gyms. The event pass, in purple, is meant for tickets to games, movies and other occasions.”
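Gurman’s report doesn’t detail how these user-created passes would work under the hood, but Wallet passes today are defined by PassKit’s documented pass.json format, and a QR-based membership pass of the sort he describes roughly corresponds to something like the following sketch. All identifiers and values here are placeholder examples, not anything from Apple’s actual implementation:

```json
{
  "formatVersion": 1,
  "passTypeIdentifier": "pass.com.example.gym",
  "serialNumber": "0001",
  "teamIdentifier": "ABCDE12345",
  "organizationName": "Example Gym",
  "description": "Example Gym membership",
  "barcodes": [
    {
      "format": "PKBarcodeFormatQR",
      "message": "MEMBER-0001",
      "messageEncoding": "iso-8859-1"
    }
  ],
  "generic": {
    "primaryFields": [
      { "key": "member", "label": "Member", "value": "Jane Appleseed" }
    ]
  }
}
```

Today only developers can author and sign such passes; the rumored feature would essentially let users fill in the barcode, colors, and text fields themselves.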
I have quite a bit stored in Apple Wallet: my debit card, health insurance card, even my California ID and US passport, alongside a Starbucks card, a Clipper Card, and the digital key for my house—and yes, I still carry my physical wallet everywhere I go. What piqued my interest about Gurman’s story was whether I could use the “create a pass” feature to build one for my CalFresh card. As it currently stands, I must use my physical card in stores when I want to use my SNAP benefits to get groceries; I can use the card terminal, but it’s a pain in the ass because (a) most I’ve encountered don’t accept the chip on the card; and (b) swiping the magnetic strip sorely tests my hand-eye coordination. It’d be far better, and more accessible, if I could simply tap my iPhone or Apple Watch as I’m using the self-checkout. Not only would this method be convenient and expedient, it’d remove a barrier to independently paying for my food.
There exist apps on the App Store for managing one’s SNAP benefits, which is good for the maintenance aspect—though the most popular one, across several states, is ebtEDGE, and it’s truly terrible. I recently discovered Propel and like it much better for management, finding resources, etc. I need someone to make Flighty for food stamps. But managing one’s benefits is a wholly different task from actually using those benefits in everyday life. In practice, California (and others) should consider making the CalFresh card compatible with Apple Wallet—if not for accessibility’s sake, for modernity’s. For a person with disabilities like me, such a move would make buying groceries in-store more accessible. I don’t know how technically feasible it is, but theoretically anyway, it sounds great that I could maybe use the aforementioned pass-creation feature to circumvent the Luddite bureaucracy and manually add my SNAP card to my phone.
iOS 27, et al, is widely expected to be unveiled by Apple at WWDC next month.
On the Marvelousness of Markdown
Paul Thurrott wrote last month he “may or may not” write a book on Markdown.
“I may or may not write and publish a short e-book about Markdown sometime this year, most likely as part of a monthly focus,” he wrote on his website back on April 5. “But I’ve written small parts of it already, as I do, and I figured it might be interesting for at least some readers. And so here’s an early draft of an introductory chapter that may or may not be called ‘On writing.’ We’ll see.”
As I said on Mastodon, it’s nearly impossible to overstate how meaningful Markdown has been to my journalistic career, as well as my writing in general. I have vivid memories of writing umpteen essays in high school on a school computer running Windows 3.1; I used WordPerfect, and I have even stronger visceral memories of what a pain in the ass it was to format things in rich text. (For context, I was in high school from 1996–2000.) All I knew was a word processor, be it WordPerfect, Microsoft Word, or even Apple’s Pages. Then about a year or two before I pivoted in 2013 to doing tech journalism full-time, I would write about Apple on my own little WordPress blog—fundamentally not dissimilar to what I do here at Curb Cuts, but without the name recognition I’ve earned over my career. I remember not wanting to use rich text, and happily discovered Markdown. I taught myself the syntax, loved its simplicity, and the rest is history. With few exceptions, every single word I’ve written for the last 15 or so years destined for the web has been written in Markdown—albeit much to the chagrin of a precious few editors who’ve emailed me perplexed at the markup of my freelance drafts. In fact, one of my earliest bylines was this June 2013 piece for TidBITS about how Markdown makes writing a more accessible endeavor for me. It was a thrill to see my friend (and Markdown creator) John Gruber link to said story on Daring Fireball and say the piece “made my day.” We would meet in person in 2014 at XOXO in Portland, OR.
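For anyone who has never seen it, the appeal is the sparseness of the syntax: the formatting that once required WordPerfect menu-diving is just punctuation. A few representative bits (the link URL is a placeholder):

```markdown
# On Writing

Markdown turns *emphasis*, **bold**, and [links](https://example.com)
into plain punctuation in a plain text file.

- Lists are just hyphens
- No toolbars, no dialog boxes

> Blockquotes get a single angle bracket.
```

The same file reads cleanly as plain text and converts to HTML for the web, which is precisely why it suits writers who find rich-text formatting a barrier.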
What app do I use nowadays? My favorite is MarkEdit on the Mac, but I like iA Writer too.
Apple Announces Newest Pride Collection
Apple this week announced its latest Pride Collection for Apple Watch. As usual, this year’s Pride Collection features a new band, watch faces, and wallpapers for iPhone.
“The new Pride Edition Sport Loop, available to order today, is woven from a rainbow of 11 colors of nylon yarns,” Apple said in a Newsroom post on Monday. “The intricate weaving blends one color into the next, creating depth and movement across the band. The resulting design is joyful and vibrant, showcasing a full spectrum of colors that reflect the unique identities that shape LGBTQ+ communities worldwide.”
The Pride Edition Sport Loop costs $49 and will be available beginning “later this week,” according to Apple.
The reason I’m covering this story is, in an accessibility context, I love the Sport Loop. Although I have a ton of Sport Bands dating back to the Apple Watch’s 2015 debut, the Sport Loop is without question my all-time favorite band. As someone with lackluster fine-motor skills, the infinite adjustability of the it’s-Velcro-but-don’t-call-it-that fastener makes it so easy to use. In the very early days of Apple Watch, I used to need to ask for help getting my watch on because the Sport Band is somewhat fiddly—which I didn’t like because I was giving away part of my personal agency. With the coming of the Sport Loop, the adjustability and self-adhesive nature of the material helped reclaim said autonomy. It reasserted my own independence as a person with disabilities, and broadly speaking, is a prime example why Apple Watch bands are undoubtedly an integral part of the product’s accessibility story. And from a purely stylistic sense, I’ve always fancied myself a low-key band collector; to wit, every time Apple seasonally refreshes the colors, I’m keen to see the new hues and find out whether there are any I like. In fact, I have my eye on the 46mm Blue Mist Sport Loop. (I also own several Pride Edition bands from prior vintages, which I think are pretty cool too.)
The Pride Edition wallpapers are purportedly coming with the release of iOS 26.5.
Pride Month is next month.
Understanding the iPhone Air Isn’t Difficult
Matt Birchler believes the iPhone Air belongs to the nerds. His blog post, published over the weekend, comes following a report from Hartley Charlton at MacRumors about smartphone makers being “spooked” by the lackluster sales of the iPhone Air—so much so Apple’s rivals are abandoning plans for their own thin-and-light versions.
“We [enthusiasts] are not buying the Air because it represents the best value in the lineup or because it has the best specs you can get in an iPhone,” Birchler wrote. “You buy it because you believe that phones in the future will be thinner and lighter, and this specific device makes some significant sacrifices just to get you that feeling today.”
Birchler’s piece caught my eye because he hits on something I’ve been thinking about for a while regarding the Air. To wit, I deliberately chose to upgrade to the Air last September—the sky blue, 1TB model on AT&T—precisely because it was appreciably thinner and lighter than, say, the iPhone 17 Pro. Intellectually, I understood the Air was not at all a bad phone, yet undeniably a worse one compared to the higher-end Pros; but that makes sense, and it explains why Apple’s product lines are stratified. Of course the most expensive iPhone is gonna be the best one from a technological standpoint. Again, though, for my use as a person with disabilities, the tradeoff for an ostensibly inferior phone was gaining the Air’s hallmark thinness and lightness—which, in everyday, practical use, is an eminently sensible decision given the chunkiness of the 17 Pro. The Air is much easier to hold and carry around in my pocket, and it has a nice big screen to boot. I love my Air, despite still having misgivings over the single camera versus the three on the Pro. The Air is great for accessibility in my opinion, and it’s a great value if you consider it’s greater than the sum of its parts.
The Air’s svelte stature is the reason you choose it over the other iPhones, full stop. That’s why I don’t understand the “I finally get the iPhone Air” trope. What is there to get? The Air is actually a fucking good phone—but you don’t buy it looking for better battery life or, as I mentioned, better cameras. You buy it because it’s almost incomprehensibly thin and light and you want to lean into that attribute. In other words, buy something else if you can’t, or won’t, do so. Indeed, Birchler rightly acknowledges the majority of mainstream buyers can’t comprehend paying more for less for the Air versus something like the base iPhone 17. That’s a reasonable stance, but it reaffirms the notion you pick the Air with the utmost intention of loving its form over its function.
Supreme Court Rules in Favor of Reinstating Access to Abortion Pill via Mail
Ann E. Marimow reports today for The Washington Post that the Supreme Court has ruled access to the abortion pill via mail be reinstated, at least temporarily. The pill, known as mifepristone, had been obtainable only by visiting a doctor’s office in person.
“The Supreme Court on Monday restored nationwide access to a widely used abortion medication in a temporary order that will, for now, allow women to once again obtain the pill mifepristone by mail,” Marimow wrote. “In a brief order, Justice Samuel A. Alito Jr. paused a lower-court ruling from Friday that had prevented abortion providers from prescribing the pills by telemedicine and shipping them to patients, causing confusion for providers and patients. The one-sentence order imposes a pause until at least May 11. He requested that the parties file briefs by Thursday, and then the full court will determine how to proceed.”
Louisiana sued the Food and Drug Administration to restrict access to mifepristone because “the availability of the medication by mail has allowed abortions to continue in the state despite its near-total ban,” according to Marimow. The drug accounts for almost two-thirds of all abortions in the United States, and is administered as part of a two-drug regimen during the first 12 weeks of pregnancy.
This news is obviously first and foremost a women’s health issue—abortion is a form of healthcare—but it’s also very much an accessibility issue in the disability sense. That a woman with disabilities could plausibly get her abortion medication through the mail, much like ordering from Amazon or Uber Eats, is, on its own merit, an eminently more accessible way to obtain it. Maybe getting to a doctor’s office is logistically and/or medically fraught. Maybe her condition(s) means she’s primarily homebound. Maybe there’s a mental health issue that makes leaving one’s house difficult if not impossible. Whatever the reason(s), the salient point is simply that home delivery of medication—mifepristone or not—is, at its core, all about greater accessibility. It should go without saying, especially in modern times, but here we are.
Relatedly, the Center for Reproductive Rights put out a press release last Friday wherein the nonprofit organization shared news on the Fifth Circuit ruling limiting access to mifepristone. The Center noted requests for the abortion pill via telehealth services have “doubled” since Roe v. Wade was overturned in June 2022, adding the now-paused ruling “jeopardizes that lifeline.”
“Telehealth has been the last bridge to care for many seeking abortion, which is precisely why Louisiana officials want it banned,” Nancy Northup, president and chief executive officer of the Center, said in a statement included with the announcement. “This isn’t about science—it’s about making abortion as difficult, expensive, and unreachable as possible. Telehealth has transformed healthcare. Selectively stripping that away from abortion patients is a political blockade.”
Justice Dept. Extends Web Accessibility Deadline
Michelle Diament reported for Disability Scoop last month the Trump administration has kicked the can down the road regarding what was the late April deadline for compliance with web accessibility mandates under Title II of the Americans with Disabilities Act (ADA). The Justice Department issued an interim final rule on the matter that extended the deadline a full year, with the new date set for April 26, 2027.
“The original rule, which the Justice Department finalized in 2024, imposes first-ever technical standards for websites and mobile apps under Title II of the ADA,” Diament wrote on April 20. “The requirements apply to online offerings from state and local government entities ranging from courts to public hospitals, parks, libraries, police, transit agencies, school districts, universities and more.”
According to Diament, school districts and other organizations, which said they were “unprepared to meet the original timeline,” felt immense pressure from the rule. Likewise, the Justice Department noted it “overestimated the capabilities” of said organizations to comply with the law, both practically and technologically. Disability advocates, however, view this give-us-more-time rationale as nothing more than excuses-laden lip service, arguing, according to Diament’s story, “the rule was under consideration for more than a decade before it was finalized and that delaying its implementation will harm the very individuals who it is intended to help.”
“Years of notice have not been enough, and now the department is rewarding inaction with more time,” Maria Town, president and CEO of the American Association of People with Disabilities, said in a statement to Diament. “Every year of delay is another year that a person who is blind cannot apply for the benefits they’re owed, that a person with an intellectual or developmental disability cannot navigate a local agency’s website, that a deaf constituent cannot access critical public safety information.”
In my opinion, both arguments can be true: there are real challenges in complying, and organizations were given ample warning. To comply with the Justice Department’s rule is to spend a not-insignificant amount of money ensuring your technology is up to par. By the same token, edicts like this wouldn’t be necessary if there didn’t exist structural ableism such that school districts and the like viewed accessibility as an ancillary, extraneous thing to be bolted on at a later date—quite literally so in this case. Put another way, the needs of people with disabilities are, by and large, ill-considered unless and until some governmental entity of authority says to consider them—and more often than not, compliance is motivated more by avoiding lawsuits than by engendering empathy and inclusiveness. Look no further for a real-world proof of concept than 99.9% of so-called “accessible” hotel rooms strewn across America. Many, arguably the majority, do just enough such that the bare minimum qualifications are met—again, to comply with the ADA and avoid getting sued.
I interviewed Town back in 2020 to discuss the ADA turning 30, Covid-19, and more.
Joanna Stern’s ‘New Things’
It was reported by Talking Biz News in early February that longtime Wall Street Journal (WSJ) personal technology columnist Joanna Stern was leaving the storied outlet. Stern had been with the WSJ since 2013, coincidentally the same year I started working in media.
“Joanna’s been a brilliant colleague on our tech beat and on our video team, bringing a sharp, unmistakable energy and voice to our coverage,” Emma Tucker, the WSJ’s editor-in-chief, said in a statement on Stern’s departure. “We’ve loved her distinctive take on the industry, and while we’re sad to see her go, we’re delighted that she’ll continue to contribute for us and we wish her the very best as she heads off on a fresh new adventure.”
Life gets busy, so we don’t connect as often as we’d like, but Joanna has not only been a years-long peer of mine in tech journalism—she’s also become a close friend. I’ve been an admirer of her work for so long, and she of mine; we’ve seen each other at many Apple events over the years and have been in briefings together, sitting across the table from one another. We even shared a selfie from an Apple Park golf cart during WWDC a couple years ago. That’s the personal and professional camaraderie, but Joanna’s also long been an advocate of accessibility in tech and the industry-wide efforts to continually make technology ever more empathetic and inclusive to all.
As her friend, I feel compelled to plug Joanna’s new thing: New Things. She has a newsletter, a YouTube channel, a forthcoming book, and even editorial standards. I’ve admittedly been lax on pre-ordering the book for terrible, no-good, very bad reasons, but I have subscribed to Joanna’s channel. I highly recommend doing all three posthaste, and I also highly recommend watching her video (embedded below) wherein she explains why she decided to leave the Journal to strike out independently.
Incidentally, this week marked my 13th anniversary as a journalist—all of it indie. I’m neither as cool nor as well-known as Joanna, and probably never will be, but I’m nonetheless very proud of the trail I’ve blazed in making accessibility a bona fide beat in tech journalism.
A Few Stealth Upgrades to Apple Home
In more news this week, 9to5Mac’s Ryan Christoffel wrote a piece about a few under-the-radar enhancements Apple has made to the Home app during the iOS 26 cycle. The reason I’m covering his story is that a couple of them have strong pertinence to accessibility. (If ever you wonder how I get my story ideas…)
“Apple is expected to launch a wave of new Home products later this year, after iOS 27 brings Siri’s long-awaited overhaul,” Christoffel wrote. “But there are three ways Apple Home has recently gotten better during the iOS 26 cycle too. Here’s what’s new.”
The first feature was new in iOS 26.2, released back in December: one-time setup for multipack accessories. Christoffel describes this as “convenient,” and he’s not wrong, but the accessibility angle is arguably more important. For its part, Apple says the feature “lets you use the same setup code to easily enroll multiple accessories when sold together.” It’s not a Home product, but I immediately think of AirTags here. You can buy a 4-pack of them, but you must set up each one individually. How cool would it be to have a QR code or something on the back of the box that, when scanned with your iPhone, adds all four to your Apple Account and asks how you’d like to divvy them up? To Christoffel’s point, it is convenient and certainly expedient—but it’s also more accessible! To wit, it’s much better to set up the AirTags all at once than to deal with them piecemeal, because doing so reduces some amalgamation of cognitive/motor/visual friction, depending on one’s needs and tolerances. Put another way, this multipack accessory setup functionality makes for a nicer, smoother first-run experience, which ultimately bodes well. As Christoffel notes, the feature applies to Home goods like smart plugs, lightbulbs, motion sensors, and more.
“I’m always a fan of changes that make setups easier,” he said.
The second feature involves Apple Home Key. Christoffel writes about the first product to support Home Key and Ultra Wideband (UWB): the $270 Aqara U400. The great thing about having UWB is, as Christoffel notes, one’s door can lock and unlock “based on your presence alone.” I’ve been lax in writing about it, but regarding Home Key broadly, it’s been a revelation for me. I’ve been using the $349 Level Lock Pro for a few months, and it’s terrific. I still use physical keys, and the Level Lock even includes an optional key fob, but locking and unlocking my front door is eminently more accessible using my iPhone or Apple Watch. Like with the accessory setup, no longer do I have to battle my lackluster hand-eye coordination in order to get into my house, in this case. All I need to do is place my phone or watch near the deadbolt and it just works. I love it.
As a category writ large, the smart home exemplifies the notion that technology can indeed transcend sheer novelty or coolness or convenience. Apple is not, and has not been, explicitly marketing, say, Home Key as a de facto accessibility feature—but it really and truly is. Especially if you get something like the aforementioned U400 with UWB, all a person has to do is get near their front door and it’ll be unlocked for them. Again, Home Key removes literal barriers to entry and empowers people to control their home in an autonomous, dignified manner. That’s not at all trivial for people with disabilities.
‘Ted Lasso’ Gets Season 4 Release Date, Trailer
Ryan Christoffel reported for 9to5Mac earlier this week that the dearly beloved Ted Lasso has received a premiere date, as well as a trailer, for Season 4. The new season premieres August 5 on Apple TV, with new episodes arriving weekly through October 7.
“We believed, and now it’s finally happening: Ted Lasso returns for season 4 this summer, and Apple TV just announced the release date and debuted its first trailer,” Christoffel wrote on Tuesday. “Ted Lasso last aired new episodes in 2023, and its conclusion came with lots of uncertainty about the show’s future. Although it seemed like everyone in the cast, along with viewers, wanted the show to continue, season 3 appeared for a time like it was the end.”
Season 4 sees Lasso return to AFC Richmond to coach a relegated women’s team.
Believe it or not, I’m probably the only person on the planet who hasn’t yet seen Seasons 1–3 in their entirety. I finished the first season, but never got around to finishing the latter two. As part of my ongoing mental health battle, I’ve been watching a lot of television as a form of self-care; I’m revisiting favorites like For All Mankind (also on Apple TV), The Marvelous Mrs. Maisel (on Prime Video), and The Pitt (on HBO Max). I should put Ted Lasso on my list in anticipation of the fourth season. I found Season 1 to be funny and, as a sports fan, I like the soccer angle. Under the circumstances, I find shows I already know and love an easier way to turn my brain off than investing in an all-new series with new characters and new storylines to absorb.
Now if only The Gilded Age (also on HBO Max) would get a return date already.
Apple Vision Pro Assists in ‘Milestone’ Eye Surgery
Eye care company SightMD put out a press release this week in which it announced New York ophthalmologist Dr. Eric Rosenberg has become the “first surgeon in the world” to successfully perform cataract surgery using Apple Vision Pro. The procedure used ScopeXR, a mixed reality platform touted as “a new way to visualize surgery.”
The initial procedure was performed last October, with SightMD saying in the press release “Dr. Rosenberg and his team have performed hundreds of additional cases using the platform, demonstrating both its scalability and real-world clinical impact.” ScopeXR is described as “a spatial computing software platform designed specifically for ophthalmic surgery,” with the company further noting the software platform “[streaming] real-time surgical imaging directly into the surgeon’s headset.”
“What we accomplished in that operating room is something that has never been done before anywhere in the world,” Dr. Eric Rosenberg said in a statement included with the announcement. “This isn’t just about a new device, it’s about reimagining what the operating room of the future looks like. We’ve created a platform that makes surgeons safer, smarter, and more connected.”
He added: “We are now able to bring the world’s best surgeon into any operating room, at any hour, from anywhere on the planet. From residents performing their first cases to surgeons facing unexpected complications, this technology democratizes access to expertise and that will save vision.”
This story caught my attention because I’ve lived with cataracts most of my life, having surgery for them when I was 17. From a technological perspective, it’s good to see Vision Pro increasingly gaining “mainstream” adoption, albeit for esoteric, niche use cases. When you consider something truly life-changing like what Dr. Rosenberg and team have accomplished, it gives productivity on visionOS entirely new meaning—making features like Mac Virtual Display feel comparatively small. For all Apple’s bravado about spatial computing being the future of the industry—and they’re not wrong, per se—the Vision Pro hasn’t been the Trojan horse the company hoped it could be for the mass market. Nonetheless, no one can dispute the Vision Pro is a tour de force of engineering and absolutely is “pulled from the future into the present.” I like mine, the original M2 model, a lot—even if it is used only for watching Apple TV, etc.
Once More Unto the Breach, Touch Bar Edition
John Gruber posted on Daring Fireball this week a link to his appearance on the most recent Vergecast with hosts Nilay Patel and David Pierce. I typically wouldn’t write about it here, but Gruber’s item contained a draw: the much-maligned Touch Bar.
The Vergecast’s episode description asks, somewhat cheekily, whether the Touch Bar’s troubles are attributable to Tim Cook himself, or whether it’s his fault Apple didn’t try hard enough to improve on it. Gruber, in response to the question, says in part “going back to dumb fiddly F-keys with functional icons printed on them was uncharacteristically lazy for Apple.” Apple conceived it, developed it, shipped it, and then… gave up on it.
I was at the October 2016 media event during which the then-redesigned MacBook Pro featured the Touch Bar as its marquee feature. To this day, I vividly remember sitting in the Town Hall audience—incidentally, the one and only Apple event I’ve ever covered from the company’s still-in-use Infinite Loop campus—and being mesmerized by Craig Federighi’s demos of the Touch Bar. I loved how much more accessible it was to, to name just one example, use the Touch Bar to easily and quickly select emoji rather than use Character Viewer on macOS. Likewise, I bought Phil Schiller’s marketing pitch that the Touch Bar’s dynamism was a better (and cooler!) alternative to what Gruber called “dumb, fiddly Function keys.” I can’t recall the last time I used the static F-keys on my Magic Keyboard for anything beyond adjusting volume or screen brightness. Again, the allure of the Touch Bar was I could do those things, and then some, right from the sleek little OLED strip above the keyboard, replete with familiar-looking, iOS-like controls.
(That event opened with an accessibility video featuring my friend Sady Paulson.)
I, along with probably everyone else in that room, rightly believed Apple would iterate on the Touch Bar for years to come—maybe even expand it to the MacBook Air or the Magic Keyboard peripheral itself. Obviously, that never came to pass. The Touch Bar withered on the proverbial vine before Apple discontinued it with the 2021 release of the M1 Pro/Max MacBook Pros. What was ushered in with a bang exited with but a whimper. My understanding over the years has been the software people inside Apple Park more or less fell out of love with the Touch Bar. I’ve never gotten a concrete explanation why, but the enthusiasm evidently was severely, irreparably curbed.
It’s a shame, because hardware was never the Touch Bar’s Achilles heel. It needed improvement software-wise; as I argued numerous times, the Touch Bar would’ve been made eminently better had Apple given it haptic feedback. As it was, one big thing that rankled me about the Touch Bar’s public perception from reviewers and the Apple community was how ostensibly “useless” it was because 99% of people are touch typists. I hate to break it to the able-bodied masses, but not everyone is a touch typist. My low vision, coupled with the partial paralysis in my hands caused by cerebral palsy, makes touch typing nigh impossible. Indeed, I’m part of the 0.1% who must look down at the keyboard in order to type in my hunt-and-peck fashion. In fact, I’m looking down at my keyboard even as I write this very sentence! The salient point is I staunchly believe the Touch Bar got railroaded perceptually, never mind Apple’s culpability in its demise. The Touch Bar left the world with so much unrealized potential, and I’m still salty over how Apple nerds characterize it. Touch Bar Zoom is/was a masterpiece.
If macOS 27 drops support for Intel Macs, how much time does the Touch Bar have left?
Amazon Announces New ‘Adaptive Display’ Feature
I missed the news earlier this month, but Amazon on April 15 announced the all-new Fire TV Stick HD. The $35 device is touted as the company’s “slimmest ever streaming device” and features Fire TV’s redesigned user interface as well as the Alexa+ service.
Amazon’s announcement was made in a blog post written by Isaac Schultz.
“The new Fire TV Stick HD is Amazon’s slimmest streaming device—both smaller in volume and width than previous models,” Schultz wrote. “It’s optimized for Direct Power through a TV’s USB port, so it fits more neatly behind a TV without requiring a separate power adapter.”
He continued: “Fire TV Stick HD also delivers noticeable speed improvements compared to previous HD models—more than 30% faster on average than the last-generation HD stick, which means it turns on and opens apps more quickly. It comes with Wi-Fi 6 and Bluetooth 5.3 support to help ensure a stronger, more reliable connection for customers.”
From an accessibility perspective, the new streaming stick has a cool new feature Amazon calls Adaptive Display. Schultz says the feature, which will arrive in “the coming months,” is “an accessibility feature that makes text, menus, and content easier to see and navigate on screen.” When enabled, Adaptive Display enlarges UI elements such as text and menus while “proportionally scaling content artwork,” Schultz said. Adaptive Display, he added, “[creates] a more balanced browsing experience,” with multiple options available so users can customize their experience.
Pocket Lint’s Craig Donaldson wrote a nice piece this week on Adaptive Display.
“I’ve been in plenty of situations where the text on my TV screen with a Fire TV Stick was simply too small to read, and Adaptive Display aims to fix that by offering multiple size options to enlarge on-screen text, as you can see in the images above,” he reported on Monday. “The first image shows a larger Adaptive Display option, the second a smaller one, and the third the default size.”
At a high level, Adaptive Display strikes me as conceptually similar to the Display Zoom function on iOS. When you first set up an iPhone (or iPad), the system prompts you to choose your desired zoom level. On my iPhone Air, there were three: Normal, Medium, and Large. (Strangely, the Settings app shows only two: Larger Text and Default, which is what I use.) I like having the stock UI layout and then tweaking text sizes on my own in the Accessibility menu. As far as I know, tvOS has no such analogue; Apple would do well to add its own version of Adaptive Display for the Apple TV 4K. As to Amazon, kudos to them for their work here. I know lots of people lament Fire TV as a glorified ad platform, but the truth is Amazon deserves flowers for its work in making Fire TV, and more, as accessible as possible. All these years later, I maintain the Fire TV Cube is a credible, albeit expensive, option for people with disabilities—what with the box’s ability to control one’s home theater setup and change channels in apps like YouTube TV using one’s voice via Alexa. While Fire TV indeed may be riddled with ads, there are a lot of good ideas in there—especially concerning accessibility.
A Look Inside This Year’s Imagine RIT Festival
The Rochester Institute of Technology (RIT) held its annual Imagine RIT: Creativity and Innovation Festival this past Saturday. The event, held every spring since 2008, this year showcased more than 450 exhibits for those in attendance. The goal of the Festival, RIT writes on its website, is to help people “get a glimpse of the creativity and innovation that students, faculty, and staff experience every day.”
“Our goal is to inspire the next generation of problem solvers and spark excitement about science, technology, engineering, and math,” Lisa Stein, RIT’s executive director for events and conferences, said in a statement on the institution’s website.
Of the hundreds of exhibits featured during this week’s Festival, there were a few which marry technology with disability for accessibility’s sake. One of them involves prostheses and helping those with limb differences—a topic I wrote about yet again earlier this month. Ahead of this year’s Festival, I connected with third-year biomedical engineering student Nataly Rosas Franco who, alongside her compatriots Alex McMahon, Emanuel Mongkuier, Kaitlyn So, Marguerite Wascovich, Max Sushynski, and William Brent, two years ago embarked on conceiving and developing an adjustable prosthetic arm for young children. Franco explained in a brief interview conducted over email the septet decided on designing pediatric prosthetics because “we were interested in working on arm prosthetics in general, specifically one that went up to the elbow.” Although there exist “many” arm prosthetics on the market today, there are comparatively few expressly designed for young children—“at least not many that were long-lasting with sophisticated mechanisms,” Franco said.
“We quickly noticed that pediatric prosthetics were usually stiff and bulky since children rapidly outgrow them,” she added. “Or, if they were adjustable they were very rudimentary in movement. We wanted to focus on having adjustable components that could ‘grow’ with the child to provide a cost-effective solution for these patients. While also providing the same refined mechanisms found in adult prosthetics.”
The economics of prostheses are particularly sensitive, considering, as Franco told me, pediatric prosthetics cost anywhere between $5,000 and $50,000, with the top end of that wide range typically reserved for athletic pediatric prosthetics. What’s more, the cost isn’t a one-time expense; indeed, new prosthetics have to be built and bought as children age until they reach adulthood. “By including adjustable parts that can ‘grow’ with the child it can help mitigate these exorbitant costs,” Franco said.
When asked how the group’s prosthetic arm functions, Franco explained it works “by having many adjustable features that can be extended by a caregiver as the child grows,” with areas such as the socket, forearm, and fingers able to expand “through time” in length and width. The expandability matters economically, as most prosthetics are pricier precisely because they’re one-of-one, custom designs. That Franco and team’s prototype is adjustable means costs can be lower because the prosthetic device needn’t be so customized. (The only caveat to this, she said, is if a person required a full arm replacement.) Moreover, Franco said the group’s prosthetic stands out from conventional ones in part because there are “definitely more mechanisms involved.” To wit, her device has capabilities such as individual finger movement, wrist movement, and forearm expansion. In addition, unlike prosthetics from companies such as ExpHand—which similarly touts making prosthetics that “grow,” but whose devices only open and close the hand—Franco boasted their prototype does more, telling me the team implemented EMG, or electromyography, sensors such that wearers can enjoy more nimbleness and a more natural experience—as though they had their limb(s).
“We seek to give children with [limb difference] as normal a life as possible,” she said.
As to the project’s bill of materials, Franco said she and team spent “not more than $1,200” building the prosthetic arm, adding “much of it” is 3D-printed. The internal electronics are themselves inexpensive, with Franco saying the most expensive component is the $33 servomotor. “At most, [the prosthetic] would end up being around the low end of what prosthetics usually cost, but even then it is more cost-effective since it lasts approximately 9 years from ages 7–16 years old,” Franco said.
Elsewhere, Dhaval Mahajan is a human-computer interaction graduate student from India. Mahajan, alongside Sidney Grabosky and Ziming Li, developed smart glasses not because the threesome was excited about connected wearables. They were interested in the category because members of the research team had been working with autistic adults in a vocational training program for a few years. Specifically, they wanted to use virtual reality technology to “simulate workplace scenarios in a controlled setting,” Mahajan said. Job coaches, he explained, have “consistently” raised the question of whether, given AI’s rise in prominence and the popularity of smart glasses, it would be possible to implement the technologies in training and/or on-the-job settings. The need, not the nerdery, sparked Mahajan and team’s work on bringing their idea to life.
“The appeal of a wearable display in this context lies in its ability to address a significant challenge in vocational support. As trainees become more independent, the coaching that helps them succeed during training is often gradually reduced,” Mahajan said to me in a recent interview over email. “The tools that provide this coaching (checklists, printed recipes, and prompts from coaches) can be either socially noticeable or difficult to manage in real time when interacting with customers. A wearable display can offer a more discreet and user-friendly alternative. It provides guidance in a trainee’s line of sight without using their hands, diverting their gaze from the customer, or requiring a coach to be present at the moment of need.”
The glasses’ software is a custom web app built to support the aforementioned needs of the job coaches. Mahajan said the software has three main functions: (1) it takes in speech input in order to process the command/query; (2) it uses an LLM (large language model) to understand context and directions; and (3) it updates the task interface. Furthermore, the software includes a real-time order panel displaying customer requests, with tooltip-like bubbles surfacing suggestions for cues like asking for clarification as well as a step-by-step checklist replete with photos. Broadly, Mahajan described the team’s project as “more of a wearable display than a standalone computer,” based on the XReal Air 2, and noted the software runs via a connected machine, with the glasses projecting images onto the user’s view. (It’s worth noting this method is similar to how the original Apple Watch worked; the paired iPhone did the heavy compute. Apple is purportedly using the same strategy for its still-in-development competitor to Meta’s Ray-Bans.) The team deliberately chose to walk this technical path at this stage of their research and development, Mahajan said.
“It let us iterate on the interface quickly and respond to feedback from autistic adults and their job coaches,” he said. “The same interface could move onto more integrated hardware as that category matures.”
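Mahajan’s three-function description (speech input, LLM interpretation, task-interface update) amounts to a simple loop. The Python below is a deliberately toy sketch of that shape; every function name, and the rule-based stand-in for the LLM, is my own illustration, not the team’s actual code:

```python
# Toy sketch of the speech -> LLM -> interface loop Mahajan describes.
# All names and logic here are hypothetical stand-ins.

def transcribe(audio: bytes) -> str:
    """Stand-in for a real speech-to-text service."""
    return "customer asked for a large coffee, no milk"

def interpret(utterance: str) -> dict:
    """Stand-in for the LLM call mapping an utterance to interface updates."""
    updates = {"orders": [], "cues": []}
    if "coffee" in utterance:
        updates["orders"].append("large coffee (no milk)")
        updates["cues"].append("Confirm the size back to the customer.")
    return updates

def update_interface(state: dict, updates: dict) -> dict:
    """Append orders to the panel, surface a short cue, tick the checklist."""
    state["order_panel"].extend(updates["orders"])
    if updates["cues"]:
        state["cue_bubble"] = updates["cues"][-1]
    if updates["orders"]:
        state["checklist"]["take order"] = True
    return state

state = {"order_panel": [], "cue_bubble": None,
         "checklist": {"greet customer": True, "take order": False}}
state = update_interface(state, interpret(transcribe(b"")))
```

Per Mahajan’s own findings, the payoff is in that last step: one short cue and an auto-ticking checklist, rather than paragraphs of advice.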
The team’s glasses aim to solve two problems, according to Mahajan. First and foremost, they lessen cognitive load. He explained customer service roles require simultaneously juggling multiple skills: product knowledge, customer care, a point-of-sale system, and live conversation. While the individual tasks are eminently learnable, managing them in totality can prove daunting. The glasses help, as Mahajan described it, by “putting the current step and a short cue into the line of sight,” which takes that memory work off the trainee and lets them stay present with the customer. Secondly, the glasses compensate for coach availability. Job coaches, Mahajan told me, are a finite resource; there’s a wide delta between intensive hands-on training with a client and eventually—hopefully—going hands-off and letting them function autonomously. The glasses, then, can serve as a proxy for the human coaches’ direction during the in-between period. But the LLM isn’t coaching well if it delivers umpteen paragraphs of instructions. Indeed, Mahajan said to me that revelation “surprised” him at first, adding the workers and coaches found “paragraphs of advice harder to process mid-conversation.” Better, Mahajan said, to build a simple, well-designed interface which, as he told me, has a checklist that automatically ticks off tasks and a request list populating as a customer is speaking to employees. “The AI is still doing real work, but with context-aware reminders and tips,” Mahajan said.
Mahajan is bullish on smart glasses as a product category, telling me “we’re glad” they’ve entered the mainstream consciousness. He noted any wearable’s biggest obstacle is “whether the wearer is willing to put it on and keep it on” and lauded companies for “[making] these devices lighter, less conspicuous, and more acceptable in public, and accessibility research directly benefits from that work.” People are going to be more inclined to wear a pair of smart glasses which resemble regular glasses—frames that are sleek, svelte, and comfortable, especially at work.
“We’d push for more on the design side,” Mahajan said. “Most mainstream devices are built around general-consumer use cases, capture, translation, and a smart assistant. In contrast, accessibility tends to arrive later, as a feature or partnership. The question I’m more interested in is what it looks like to design a wearable interface with disabled users from day one, for a specific task they’re trying to learn or do. That’s a different kind of product… it’s where some of the most meaningful work in this space still lies.”
Lastly, there’s Alex Baker and the Neurotechnology Exploration Team. The NXT, as it’s colloquially known at RIT, was described by Baker as a club which “allows students from many different majors to collaborate and test the potential of the connection between the human body and technology.” The NXT team, he added, is committed to “advancing neurotechnology through creating accessible and assistive technology.” For his part, Baker found the space “personally interesting” after seeing the tech in 2024, saying he was “amazed” by “the application of neuroscience, the capabilities of the technology as a whole, and the potential of the electrical signals our bodies emit.”
“Technology is constantly evolving, and the applications are becoming increasingly surreal. Our brains, muscles, and entire nervous system are fascinating and prove how complex we truly are,” Baker said. “The combination of these felt like a fantasy or a dream, so being able to work on making it a reality.”
Baker and the NXT team, which, similarly to Franco’s cohort, also works on prostheses for people with limb difference, built a wheelchair controlled by brainwaves. Baker said the team’s goal is to “[create] disability and rehabilitation services as a solution for people who can’t easily operate a wheelchair on their own or struggle with the use of their prosthetics.” The technology is open-source, with the goal there being to provide “easily affordable aids so people can live more independently and comfortably.”
NXT’s wheelchair is powered by EEG, or electroencephalogram, data, which involves non-invasive methods of collecting information. The team utilizes OpenBCI hardware to capture brain signals, with the system built upon a Raspberry Pi and Nvidia’s Jetson Nano. The former is responsible for general system control and communication, while the latter shoulders the burden of “executing our AI model and conducting real-time signal processing,” Baker said. Software-wise, the team uses Python to process the EEG data and communicate with the aforementioned hardware.
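Baker didn’t spell out how the team’s AI model maps EEG to wheelchair commands, so the following is only a generic illustration of one textbook approach: band-power features computed in Python with NumPy. The bands, thresholds, and command names are all assumptions of mine, not NXT’s pipeline:

```python
# Generic band-power illustration; NOT NXT's actual model.
import numpy as np

FS = 250  # sampling rate in Hz (OpenBCI boards commonly sample at 250 Hz)

def band_power(window: np.ndarray, lo: float, hi: float, fs: int = FS) -> float:
    """Mean spectral power of one EEG channel within [lo, hi) Hz."""
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    power = np.abs(np.fft.rfft(window)) ** 2
    mask = (freqs >= lo) & (freqs < hi)
    return float(power[mask].mean())

def classify(window: np.ndarray) -> str:
    """Toy rule: strong alpha (8-13 Hz) reads as relaxed, so stop; otherwise go."""
    alpha = band_power(window, 8, 13)
    beta = band_power(window, 13, 30)
    return "stop" if alpha > beta else "forward"

# One second of synthetic EEG dominated by a 10 Hz (alpha-band) rhythm:
t = np.arange(FS) / FS
alpha_wave = np.sin(2 * np.pi * 10 * t)
command = classify(alpha_wave)  # -> "stop"
```

In a real system, a feature step like this would presumably feed the Jetson-hosted model Baker mentions, with the Raspberry Pi translating the resulting command into motor control.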
* * *
Overall, the three projects I’ve highlighted here share common threads. First, technology can absolutely make people’s lives richer and more accessible—literally so in the case of Franco and team’s cost-efficient prosthetic arm. Unlike so much of assistive technology, the societal views on which are rife with patronization and “gee whiz” platitudes, the students I spoke with for this feature story had clearly identified how technology can empower matter-of-factly. Second, AI can be used for genuine good. That so much of the talk over artificial intelligence centers on dystopian use cases and Skynet-like dangers, not to mention “software brain”—valid concerns, the lot of them—means stuff like what’s coming out of RIT gets a relative pittance of media attention. My fellow newshounds have decided the pitfalls, et al, of AI are worth examining and re-examining—but that myopic focus comes at the cost of (predictably) undervaluing what the technology can do to actually help humankind, especially those in the disability community. The salient point is twofold: (1) good on these RIT students for recognizing accessibility’s importance and acting upon it; and (2) AI coverage needs more stories about said students’ efforts to honest-to-goodness improve lives.
It’s worth noting Rochester is doing big things in technology. Not only is there RIT, there’s the National Technical Institute for the Deaf and students there producing stuff like Sign Speak—and leveraging AI all the while. What I’m saying is, while the Bay Area obviously has Silicon Valley and Oregon has its Silicon Forest, Upstate New York assuredly has technological might all its own.
Next year’s edition of RIT’s Creativity and Innovation Festival is set for April 24, 2027.
Disney to Launch ‘ASL Re-Animated’ Songs
Tara Bennett at Cartoon Brew posted a first look story earlier this week wherein she writes about Disney’s “ASL Re-Animated” project. It takes songs from films such as Encanto, Frozen 2, and Moana 2 and shows them in ASL. The story has interviews with Disney animator and director Hyrum Osmond, as well as Deaf West Theatre artistic director DJ Kurs and sign language reference choreographer Catalene Sacchetti.
The ASL song project was announced last month and premieres Friday, April 27.
“Osmond said he spent years developing ideas for a project like this before finally pitching it to then Walt Disney Animation Studios CCO Jennifer Lee and current president Clark Spencer,” Bennett reported on Monday.
She continued: “A key stipulation was choosing recent films so the team could easily reload the animation assets, which led them to songs from Frozen 2, Moana 2, and Encanto. Osmond said considerable thought went into both the artistic results and ensuring variety among the songs.”
At a high level, Disney’s work here—there are songs on YouTube—strikes me as highly similar to likeminded endeavors like the National Hockey League’s “NHL × ASL” series and HBO Max’s ASL films. The common thread running through these projects, Disney’s included, is they want to make entertainment more inclusive to the Deaf and hard-of-hearing community. Take Disney’s ASL songs, for example. Music is inherently an aural medium, which ostensibly means it’s inaccessible to a Deaf person because they can’t hear. (Yes, deafness is a spectrum, but that’s beside the point I’m emphasizing here.) Ergo, that Disney is making songs accessible to Deaf audiences means music is something they can experience and feel resonance towards. (This is also why adding transcripts to podcasts and podcast apps is so very worthwhile.)
As a CODA, I remember having to quasi-interpret pieces like the National Anthem at the Super Bowl for my football-loving Deaf dad because he never understood the song—or its lyrics. It was quite the challenge as a 10-year-old trying to lean into the TV to hear the music, then pulling back to interpret it for my father. The salient points are twofold: (1) there’s a lot of pressure being the eldest kid to be the in-house interpreter; and (2) modern technology, along with a growing sense of empathy towards disabled people, have made initiatives like Disney’s (and the NHL's and HBO Max's) possible. Suffice it to say, none of this existed during my formative years in the ‘90s. My 14-year-old self in 1995 yearned for something exactly like my 44-year-old self is writing about today.
On Apple, Ascension, and Accessibility
Apple dropped the proverbial bombshell this week: Tim Cook is stepping down as CEO.
Replacing Cook will be John Ternus, the company’s senior vice president of hardware engineering. At 30,000 feet, I’m admittedly keenly interested in the palace intrigue of Cook’s (and the Board’s) decision to tap Ternus for the catbird seat in Apple Park. Why Ternus? What makes him so special? Why not someone else? Beyond someone on Apple’s executive team not wanting the job, I’m simply curious as to what made Ternus the obvious choice, as he was unanimously voted as Cook’s successor by the Board.
Cook gave an answer in the press release, albeit one predictably bereft of intrigue.
“John Ternus has the mind of an engineer, the soul of an innovator, and the heart to lead with integrity and with honor,” Cook said in Monday’s announcement. “He is a visionary whose contributions to Apple over 25 years are already too numerous to count, and he is without question the right person to lead Apple into the future. I could not be more confident in his abilities and his character, and I look forward to working closely with him on this transition and in my new role as executive chairman.”
Ternus is pumped for the new challenges that await him.
“I am profoundly grateful for this opportunity to carry Apple’s mission forward,” he said in a statement of his own. “Having spent almost my entire career at Apple, I have been lucky to have worked under Steve Jobs and to have had Tim Cook as my mentor. It has been a privilege to help shape the products and experiences that have changed so much of how we interact with the world and with one another. I am filled with optimism about what we can achieve in the years to come, and I am so happy to know that the most talented people on earth are here at Apple, determined to be part of something bigger than any one of us. I am humbled to step into this role, and I promise to lead with the values and vision that have come to define this special place for half a century.”
One aspect of the Cook-to-Ternus transition worth watching is accessibility. As I wrote last month, as someone who’s coped with multiple disabilities my entire life and who loves Apple products—never mind my journalistic interest in covering Apple for 13 years running—what remains to be seen with Ternus, even now, is how he plans to steward the company’s commitment to accessibility. I’ve neither met nor interviewed Ternus (yet?), but my understanding is he’s just as bullish on, and just as much of an advocate for, accessibility as Cook and the rest of the company’s executive team. I was heartened to learn in this NYT profile of Ternus that his senior project at Penn involved “[designing] a device that allowed quadriplegics to use head motions to control a mechanical feeding arm.” Nobody does something like that were they not touched personally by disability in some way. It means Ternus has sensibility towards disability.
Global Accessibility Awareness Day (GAAD) is next month. Will it be Cook who posts to social media about Apple’s newest accessibility innovations? Will Ternus say something too as part of the peaceful transfer of power, as it were? Whose voice(s) will be present in the press release when Apple presumably previews the new features coming to iOS 27, et al? Apple is not performative or patronizing when it comes to serving the disability community—as Cook said in one shareholders’ meeting, “when we work on making our devices accessible by the Blind, I don’t consider the bloody ROI”—and, to me, this year’s GAAD would be a great opportunity for Ternus to show his allyship of the disability community. Accessibility is something Cook name-drops often in interviews… it shouldn’t be left to him, whether now or as executive chairman.
Consider too this bit of (good) palace intrigue. My friend Jessie Lorenz, a gold medalist Paralympian who now works at Microsoft on accessibility, wrote on X about Apple’s CEO transition, saying in part Ternus ought to “put us in the roadmap, put us in the room, [and] put us in the org chart.” Her post resonated with me because it pointed to another way Ternus could make his mark: Give Apple a chief accessibility officer. If Microsoft and Canada can have one, there’s no reason Apple couldn’t too. The who is somewhat immaterial, as a hypothetical CAO—as opposed to Apple’s new CHO—would put us, the disability community, more “in the room” and “in the org chart.” A CAO would put a face to a community, not unlike how Craig Federighi is seen within the broader Apple community as the face of Apple software writ large. (Sarah Herrlinger, whom I’ve interviewed numerous times, is Apple’s senior director of global accessibility policy and initiatives. She’s more or less Federighi’s analogue.) At the least, if a chief hardware officer can exist at Apple, so could a chief accessibility officer.
Cook’s relinquishment of the CEO role is, while expected, headline news. While Ternus will be critiqued on ostensibly more pressing metrics, particularly when it comes to product strategy, accessibility is damn important too. It’ll be extremely interesting to see how he approaches a vital part of Apple that is undoubtedly an incubator for innovation.
Google Releases Gemini App for macOS
Google last week released Gemini for Mac. I downloaded it to my desktop machine.
“Access Gemini from any screen on your desktop to clarify a topic, recall a formula, or brainstorm on the fly without opening a tab,” the website reads. “It’s help on demand.”
The news was shared on X by Gemini’s account, as well as Google CEO Sundar Pichai.
From a technical perspective, Gemini on macOS is not lazily built in a web wrapper; it’s a native app, built with Swift. Google’s screenshots show Chrome for Mac, as you’d expect, but I’m one of seemingly few who really and truly prefers Safari to Chrome for web browsing. As for accessibility, I reached out to Google PR with a question about whether Gemini for Mac supports accessibility features. A company spokesperson responded via email “we support accessibility features.” I followed up with an inquiry about exactly what those features are, but haven’t yet heard back as of this writing.
As someone who uses ChatGPT on the Mac virtually every day, I’m glad to have another native chatbot app on my computer(s). I tend to go back and forth between ChatGPT and Gemini, as I have no experience with Anthropic’s Claude. I find OpenAI and Google’s respective chatbots to perform similarly overall; I don’t find myself strongly preferring one over the other beyond presentation and user interface. Both of them have proven more than competent as an assistive technology when doing research and generating bits of CSS/HTML code for Curb Cuts’ design. As I’ve written here before, to have ChatGPT automatically generate CSS after giving it a prompt, for example, is a far more accessible way to moonlight as a wannabe web developer when I can’t (or don’t want to) traverse umpteen webpages for the information I seek. Of course one must be diligent about spotting errors and other hallucinations, but for the most part, both ChatGPT and Gemini work with aplomb. I even keep both in my Dock.
On Netflix Recently ‘Ruining’ Its tvOS App
Chance Miller wrote this week for 9to5Mac that Netflix has “ruined” the user experience of its tvOS app by ditching the stock video player in favor of its own customized one. He notes the switch occurred “a few weeks ago,” adding that frustration is “mounting,” evidenced by subscribers threatening to cancel because the update is so user-hostile.
“Netflix has once again made a controversial change to its Apple TV app. In recent weeks, the company has stopped using the native tvOS 26 video player in favor of a custom player similar to the one it uses on other TV platforms,” Miller wrote on Wednesday. “In practice, this makes the most common interactions more cumbersome and blocks users from using platform-specific Apple TV features.”
He also said deprecating the stock video player, as Netflix (and others) have done, “means you lose access to full playback controls using the Apple TV Remote app on your iPhone.” As Miller notes, you can’t toggle Enhance Dialogue, nor can you rewind and have captions/subtitles automatically appear. There are other examples, but the salient point is users lose a lot of platform niceties when companies decide to roll their own UI designs instead of using the stock controls. John Gruber said on Daring Fireball that he believes the change by Netflix “sucks,” adding “there’s no upside at all [and] nothing is better, much is worse, and a slew of cool platform features are now gone.”
As I quipped on Mastodon, Netflix’s decision is more impactful than merely losing a bunch of cool platform-specific features. The bigger loss is, as ever, about accessibility; from a user perspective, Netflix’s custom video player runs the risk of being inaccessible. As I said, a huge benefit to developers opting for the stock UI elements is, in this case, Apple gives you accessibility “for free”—to wit, the stock video player on Apple TV 4K has already been built to play nicely with, say, VoiceOver or Zoom. The aforementioned risk comes when a developer, whether Netflix or anyone else, decides to roll their own UI—customizability is okay, mind you—without doing the due diligence to ensure the custom player works with VoiceOver, et al. And it’s not just about screen-reading or zooming; indeed, any custom textual elements and buttons need to work just as well with larger text sizes too.
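To make the “for free” point concrete: a minimal SwiftUI sketch of a hypothetical custom playback control (the button, its name, and its behavior are my own illustration, not anything from Netflix’s app) showing the kind of annotation developers must supply by hand once they abandon the stock player, which ships with this work already done.

```swift
import SwiftUI

// Hypothetical custom playback control for illustration only.
// With the stock tvOS video player, VoiceOver labels, focus
// behavior, and Dynamic Type support come built in; a custom
// player must recreate them explicitly, as below.
struct SkipBackButton: View {
    let action: () -> Void

    var body: some View {
        Button(action: action) {
            Image(systemName: "gobackward.10")
        }
        // Without an explicit label, VoiceOver has only an icon
        // to describe; this one line is easy to forget.
        .accessibilityLabel("Skip back")
        .accessibilityHint("Rewinds the video by ten seconds")
    }
}
```

The same obligation extends to every custom slider, caption toggle, and text element in a bespoke player, which is exactly why rolled-your-own UI so often ships less accessible than the stock controls it replaced.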
I covered Netflix’s big redesign of its app last year, and I like it very much. I admittedly haven’t noticed the recent change to the video player, but can say (for whatever it’s worth) Netflix does care a great deal about accessibility. Also last year, the company’s now-former director of product accessibility—and like me, a fellow CODA—Heather Dowdy bylined a blog post in which she reflected upon “a year of progress in accessibility” for the company as part of marking Global Accessibility Awareness Day.