Steven Aquino

NBA’s Los Angeles Lakers to Stream Select Games This Season Formatted for Apple Vision Pro

My good friend Jake Krol at TechRadar reported earlier this week that the NBA’s Los Angeles Lakers will stream select games in Apple Immersive Video, Apple’s format for its Vision Pro mixed-reality headset. The streams will be available in both the NBA and Spectrum SportsNet (the Lakers’ broadcast partner) apps for visionOS early next year.

“It’s not every game, but for those that are streaming—exclusive to the $3,500 Spatial Computer—you’ll get access to views that put you right in the middle of the action,” Krol wrote of the Lakers’ plans for Vision Pro and Apple Immersive Video. “Special cameras that support the format will be set courtside and under each basket to give you perspectives that amp up the immersion. The Lakers’ games will be shot using a special version of Blackmagic Design’s URSA Cine Immersive Live camera.”

Apple and the NBA will announce which games on the schedule will be available in Apple Immersive “later this fall,” specifically sometime next month, according to Krol.

As a diehard sports fan—which includes basketball—I’m extremely excited by this news. For one thing, I do have a Vision Pro I mainly use for entertainment: watching TV shows and movies. For another, Krol’s story strikes me as lending more credence to my notion that Vision Pro can make watching sports more accessible to Blind and low vision people. Granted, this isn’t the same as sitting inside Crypto.com Arena, but by virtue of Apple Immersive, it’s damn near close. Watching live sporting events as someone with a visual disability can suck depending on one’s needs and tolerances, which is why using Apple Immersive to shoot games is so tantalizing. The whole point of Apple Immersive is to make viewers feel like they’re right there even if they’re really not; to use the format to stream sports can elicit those same feelings of… immersion. In an accessibility context, someone like me could feel like I could enjoy the action a lot more.

As Krol notes, Apple has taken a liking to using its products for capturing live sports for streaming. Jason Snell at Six Colors recently reported on iPhone 17 Pro being used to film a Tigers-Red Sox game from Boston’s Fenway Park for Friday Night Baseball on Apple TV+. Similarly, I wrote in April about the NBA’s Sacramento Kings using a tactile display to make games accessible to Blind and low vision fans visiting Golden 1 Center.

Read More
Steven Aquino

Slide Over’s Return Receives A Hero’s Welcome

Jason Snell at Six Colors wrote this week about a big addition coming to iPadOS 26.1, now in beta: Slide Over. Indeed, one of the tentpole features of the OG multitasking system introduced in iOS 9, Slide Over is soon returning to iPadOS. The functionality was removed in iPadOS 26.0, a decision longtime iPad-loving friends of mine such as Harry McCracken at Fast Company lamented. Maybe it’d return someday, he wished.

Return it shall, according to Snell.

“In iPadOS 26.1 beta 2, Slide Over is now an explicit part of the new multi-window multitasking view,” he reported on Monday. “To enable it, open a window and resize it so that the three ‘stoplight’ buttons appear, tap and hold on the green one, and choose Add to Slide Over. Or choose Move to Left (or Right) Slide Over from the Window menu. Or type option-globe, left or right. All of those will work.”

He continued: “When Slide Over is invoked, the current window will be resized and stuck in the corner. You can grab the top of it and slide it off-screen, and it’ll vanish—only to reappear when you swipe your finger from off the side of the screen back on. You can stick the window on either side, and it’ll hang out there, regardless of whether you’re using full-screen windows or have a bunch of windows. You can even resize the Slide Over window when it’s on screen, and it’ll stay that size—unlike the old implementation.”

Notably, Snell writes that Slide Over works only in multi-window mode. Nonetheless, he’s absolutely right when he adds that it’s still possible to use full-screen apps in the new windowed mode. “Nobody’s going to force you to make those windows smaller,” Snell said.

Reading Snell’s story got me pondering my own iPad usage. Just over a year ago, I was gifted a 13” M4 iPad Pro (with 1TB storage and cellular) for my birthday. Apropos of the new multitasking capabilities in iPadOS 26, the biggie iPad is ideally suited as a laptop replacement for travel. I’m no longer as bullish on iPadOS-as-productivity as I once was, mainly because I’ve come to prefer macOS for work nowadays. To be clear, this is not so much a philosophical difference—indeed, iPadOS 26 is terrific if grossly overdue and I maintain the iPad is the most accessible computer Apple’s ever created—as it is my personal preference changing over time. Thus, the iPad has been relegated to content consumption duty on the couch. And much like my iPhone Pro Max fatigue, I’ve discovered the 13” iPad Pro—however thin and light, and however gorgeous its OLED screen—is considerably less conducive to lounging than I’d hoped. With the exception of watching movies and TV shows, the 13” model is awkward to hold for extended periods—a shortcoming best illustrated whenever I rotate it from portrait to landscape orientation.

Given this shift in mentality, I’m excited to hear the M5-powered iPad Pros are purportedly on the way. Presuming the rumors become reality, I’d love to downsize to the 11” iPad Pro so as to better suit my tablet usage. While I’m not ashamed to admit my lack of iPad productivity (I do have the Magic Keyboard as well) is partially due to coping with perpetually living in the throes of severe anxiety and depression—i.e., I don’t touch grass as often as I should—I nonetheless have noticed my penchant for using the iPad as a passive device for relaxation, particularly at nighttime. The truth of the matter is, for this case, the 13” Pro is annoyingly unwieldy all things considered. The 11” iPad Pro gets all the goodness of iPadOS 26—the reimagined multitasking system and everything else—including the much-ballyhooed return of the dearly beloved Slide Over feature.

See also: Don’t miss Federico Viticci’s take for MacStories on Slide Over’s resurrection.

Read More
Steven Aquino

iPhone Air Review: A Thin and Light Thrill Ride

The accessibility story of the all-new iPhone Air can be distilled into one word: hubris.

In the weeks ahead of last month’s unveiling, I was steadfastly dubious that Apple’s purported “iPhone Air”—a name that ultimately proved real—would be accessible to use, largely because its thin-and-light body would make it too hard to carry and hold. I was sure the Air would be inaccessible because, owing to its name, its hallmark physical traits would mean less tactility to comfortably grip. Hardware accessibility matters, after all, and Apple’s latest feat of engineering proves that notion.

As it turns out, I was wrong—kinda. The iPhone Air is, in fact, an extraordinary device.

Although there remains a certain class of people for whom the iPhone Air’s thinness and lightness could be detrimental as a physical object, the $999 phone has, in my week or so of testing, made for the most enlightening and enjoyable review period of any iPhone I’ve ever tested—and I’ve reviewed a lot of them over my 12 years as a journalist. Apple sent me an iPhone Air (a black 1TB model)—alongside the regular iPhone 17 and 17 Pro Max; more on those later—and I chose to use the Air. For one thing, I have neither the time nor the bandwidth to test all three phones and then write three bespoke reviews. For my sanity, I have to pick one and ride with it. For another thing, the Air is, for my unique purview, the most interesting of this year’s crop of iPhones. It’s certainly the coolest model, thanks to its svelte design. My time testing the Air has shown me that, while I wasn’t wrong in my inclination per se, the phone’s ballyhooed thinness and lightness actually are its strongest assets in a disability context.

I won’t bury the lede: The iPhone Air is spectacular and it’s my new daily phone.

How New Hardware Confronts Old Hubris

I’ve been a devout iPhone Max (née Plus) user for over a decade now. In nearly all my past reviews, I’ve pointed out the need for me to make a technological Faustian bargain: to get the biggest, easiest-to-see screen, I must incur the cost of coping with an aircraft carrier in my hands and in my pocket. The better battery life was the proverbial icing on the cake. Every iPhone I’ve used with regularity for the last several years has been a Pro Max of some sort because I truly believed I needed that big screen, ergonomics be damned. In the last year or so, however, I’ve found myself growing weary of my aforementioned deal with the devil. I’ve grown weary of lugging around such a huge object everywhere I go, even if it is just across the house. The Pro Max’s screen, while glorious to behold, fits as gracefully in my hands and pockets as a bull in a china shop.

The iPhone Air, then, strikes me as offering the best of both worlds.

Consider the screen. While smaller than the 6.9” display on my 16 Pro Max, the 6.5” display on the Air is plenty big enough for me. It even does the iPadOS-like trick where, if you rotate the phone into landscape orientation, the UI morphs into a two-column view in apps like Mail and Messages. I mentioned in last year’s review that the Pro Max’s screen size is at the edge of my threshold for comfort; go any larger and Apple risks creeping “seriously close” to iPad mini territory, I said, akin to Icarus flying too close to the sun. By contrast, the Air’s big display feels perfectly suited for its form.

And what form—it’s the Air’s entire selling point. As I wrote earlier, the Air’s thinness and lightness have proven to be its most endearing attributes. With the exception of my one-year sojourn with my beloved blue iPhone XR because it was a blue iPhone, I’ve always chosen the Pro model because (a) I’m a nerd; and (b) I wanted the best cameras. With the iPhone Air, the emotional appeal is, like the XR in 2018, trumping the nerdiness in me. Whenever I pick up the Air, I instantly get an immense feeling of joy and delight and, frankly, boyish wonder—I’m continually awestruck by its design and how thin and light it is. Intellectually, I’m fully aware of the fact the Air is not Apple’s best iPhone. I know the Air lacks the LiDAR sensor needed for the myriad detection modes in the Magnifier app. I know the phone’s single camera system isn’t as good or robust as the Pro’s. But I don’t care—like when I chose the XR over the objectively better XS because I could have a blue phone, I prefer the Air over the Pro for the visceral user experience.

There’s a reason emotion is, psychologically, a key part of advertising: it works.

As a practical matter, I prefer the Air for the obvious reasons of its thinness and lightness. My perspective on the device has done a complete about-face, as I now love how easy it is to hold my phone in my hand and carry it in my pocket. On that note, I disagree with my friend Jason Snell’s opinion that to put a case on the Air is to negate the phone’s reason for being. While it’s true a case inevitably does add bulk and weight, I’ve found the increase to be infinitesimal in my testing. For my purposes, I’ve been ardently pro-case on every iPhone I’ve ever used, more for ergonomics than protection. My fine-motor skills, not to mention my muscle tone, decidedly lack luster and, consequently, I’m prone to accidentally dropping things. The last thing I want to drop is my iPhone, let alone a review unit from Apple. What a case adds is friction and a “tackiness” that helps to better secure the phone. A case makes holding my phone more accessible. I understand Snell’s contention that the 17 or 17 Pro are better choices with a case, but I use one anyway. Apple gave me its official iPhone Air case (in Shadow) and I like it. It gives me—yet again—the best of both: protection and better accessibility.

Cursory Notes On Camera And Battery Life

Speaking of accessories, Apple included the $99 iPhone Air Battery Pack in my proverbial box of goodies. Truth be told, I haven’t opened it; battery life on the Air is considerably worse than on my 16 Pro Max—and I do plan on using the Air’s Battery Pack eventually—but it hasn’t been so bad that I’ve felt anxious about charging. As a remote worker, I spend a lot of time at home, which means my phone is usually sitting on the charger on my desk. I have access to a charger in other parts of the house as well, and battery life has never been so constrained as to be unusable. Where the Air’s battery would be put through the wringer is, for example, when I’m at Apple Park covering WWDC or another event. On those special days, I’d surely bring the Battery Pack with me. Otherwise, the Air’s battery has been fine in the humdrum of my everyday life.

As to the camera, it’s occurred to me during testing that I actually use the ultra-wide on my 16 Pro Max more often than I realize. I admittedly miss it on the Air, but not so much that it’s a dealbreaker that moves me to the 17 or 17 Pro. I’ve been perfectly happy with the image quality of the Air’s “fusion” camera, and as I said previously, care not that it doesn’t have the LiDAR sensor needed for the Magnifier app’s various detection modes.

Cursory Notes On The Other Phones

As I said at the outset, Apple included the 17 (in green) and 17 Pro Max (in orange) along with the black Air. Design-wise, the Pro Max is the complete and utter antithesis of the Air: more industrial and “tool-like” in ways the Air is not. If the Air is a sports car, the Pro Max is a Range Rover. The latter’s relative girth and heft are striking compared to the Air; as such, the Pro Max doesn’t give me the same feelings of giddiness. The Pro Max is more pro than ever before, in form and function. As to the color, the “cosmic orange” is loud and proud. It’s nice, although not to my personal taste. I’d probably pick the blue.

Now, the standard 17 is interesting indeed. Were the Air nonexistent, I’d probably choose the 17 (in blue) as my new phone. Beyond my fatigue over the Pro Max’s size, the standard 17, spec-wise, is damn impressive. And it has the ultra-wide camera. Apple ought to be commended for doing a great job of reaching feature parity across the refreshed iPhone line. I’m increasingly feeling as though I don’t always need the highest-end, tricked-out gadgets. The 17’s value proposition is stratospherically high, and its siren song would be much more seductive were it not for the advent of the Air.

The Bottom Line

Going back to Snell’s own review, he says the iPhone Air is a harbinger of the future—and he’s absolutely right. The iPhone Air is not the best iPhone if we’re talking metrics—although kudos to Apple for putting the A19 Pro chip inside—and it isn’t the best iPhone for most buyers (that’s the 17). But for me, in the present, it’s the best, most accessible iPhone yet because it’s a marvel of engineering—and emotion. Nike may have the “Air Max” trademark cornered, but that’s what the iPhone Air essentially feels like to me: a slimmer Max. I couldn’t be happier… it truly does offer the best of both worlds.

And I can get it in blue to boot.

Read More
Steven Aquino

Disney Announces Redesigned Disney+ App, More

In a press release on Thursday, Disney announced big changes for Hulu and Disney+.

Hulu, Disney says, is poised to “reach worldwide audiences” beginning next week, on October 8, when it becomes what the company is calling “the global general entertainment brand on Disney+.” Disney notes the changes, in strategy and in user interface design, are anticipatory in nature as the “fully integrated unified app experience” is expected to arrive sometime next year. The news comes after it was announced in August the standalone Hulu app would be retired and folded into Disney+.

Most interesting from an accessibility perspective is the overhauled app design. In the announcement, Disney shares screenshots of the new look alongside detailed explanations of what’s changing. At a high level, the tab-oriented design is conceptually identical to what Netflix did to its app insofar as, like Netflix, Disney is touting a “simpler” and “more intuitive” experience, replete with splashier visuals and personalized recommendations powered by “an updated algorithm that learns user preferences over time.” Aesthetically, I think the new design is a real looker; in terms of usability, I like this trend of using a top-anchored tab bar for accessibility. Especially for cognition—which arguably matters most of all when it comes to navigating streaming apps—it can be immensely helpful for many people with intellectual disabilities to know that finding stuff starts at the top. Likewise, the heightened emphasis on nicer poster art and other visuals can make it more accessible for someone to identify, for example, The Simpsons, by seeing a big picture of Homer Simpson’s face in the menu. That the Disney+ app is moving to a more tab-focused design is especially important because, being the media conglomerate it is, Disney owns (and thus wants to promote) content from properties like Marvel, ESPN, Star Wars, and of course, the aforementioned Hulu.

“The new design is more modern and intuitive so users can find and discover the characters and stories they love,” Disney wrote of the forthcoming redesign of the Disney+ app. “This includes a new video display in the Hero carousel and a more dynamic brand row, showcasing the latest titles from each brand. We’ve also updated our content sets to showcase more cinematic poster-style artwork.”

I’ve been enjoying Netflix’s new app. It’s pretty and is more user-friendly to me.

Elsewhere, Disney mentions widgets are set to launch on iOS—the broader redesign, by contrast, is more focused on tvOS—so as to “take users directly into our programming with one click.” Moreover, the company teased the enhancements are “just the beginning,” saying “additional updates [are] planned” in the run-up to the official release of the ballyhooed “unified app experience” coming sometime in 2026.

Read More
Steven Aquino

Colorado Spotlights ‘inclusive tourism experiences’

Last month, the Colorado Office of Economic Development & International Trade (OEDIT) issued a press release in which the state agency announced what it called “a curated, multi-day destination tour and retreat dedicated entirely to accessible travel.”

The event was organized alongside accessible travel company Wheel the World.

According to the OEDIT, the event, held September 9–12 in Denver, “brought together more than 20 accessibility advocates, influencers, journalists and Colorado tourism leaders for a series of adaptive outdoor adventures, culinary and cultural experiences and meaningful conversations about accessibility in travel.” The agency added that, “set against the backdrop of The Mile High City, the multi-day experience showcased Colorado as a destination setting the national standard for inclusive travel,” with activities such as kayaking and more.

“The Polis-Primavera administration is committed to building a Colorado for All,” Lt. Governor Dianne Primavera said in a statement included in the press release. “Colorado is proud to lead the way in making travel inclusive so that everyone, regardless of ability, can experience the beauty, adventure, and culture our state has to offer. This gathering shows what’s possible when we commit to breaking down barriers and ensuring that travel is truly for all.”

The aforementioned “Polis” refers to Jared Polis (D), Colorado’s governor.

Colorado, which boasts it’s “the best state in the nation for people with disabilities” according to the AAA State of Play 2025 Report, said the recent event builds on efforts by the Tourism Office and Wheel the World to “promote and expand accessible travel in the state, including the Accessible Travel Program launched in August 2024.” The program is described as “[seeking] to enhance accessibility in key tourist destinations, ensuring that Colorado continues to be a welcoming place for all travelers.”

“This gathering was about more than exploring Colorado’s natural beauty and engaging culture,” said Alvaro Silberstein, co-founder and chief executive officer of Wheel the World, in his own statement for the release. “It was about connection and progress. The Colorado Tourism Office is confidently showing what’s possible when destinations lead with accessibility and prioritize the experiences of travelers with disabilities.”

As of this writing, Colorado is one state I’ve yet to visit despite some of the closest people in my life living there. (In fact, I have a note in Apple Notes tallying the states I’ve visited; I’m up to 9 thus far.) As I’ve written several times in my extensive coverage of Airbnb’s accessibility efforts, travel and technology are indeed related. In this case, Colorado’s considerable investment in making the state accessible to visitors (and residents) involves technology in the adaptations themselves and, obviously, in the dissemination of the news—the latter of which is crucial because it’s highly likely the majority of people are unaware of ostensibly obscure inclusive travel initiatives.

Colorado’s accessible travel news was preceded by California’s own announcement in early August that it launched its first-ever Accessibility Hub. The state described the website as “a comprehensive online resource designed to empower travelers with disabilities to explore the Golden State with greater ease and confidence [and an] initiative [supporting] both travelers and industry professionals with tools, tips and curated content aimed at making California more inclusive and navigable for all.”

Read More
Steven Aquino

OpenAI Releases New ‘Sora’ Video Creation App

Ryan Christoffel reports for 9to5Mac that OpenAI, maker of ChatGPT, has a new app in the iOS App Store: Sora. The app, as Christoffel describes it, uses “AI for video creation.”

According to Christoffel, Sora’s description says, in part, the software is meant to “turn your ideas into videos and drop yourself into the action,” adding the app has been built with the purpose of being “a new kind of creative app that turns text prompts and images into hyperreal videos with sound using the latest advancements from OpenAI.”

Christoffel notes Sora represents “the first major OpenAI launch” since its latest model, GPT-5, was released to much fanfare last month. Based on some reports I’ve seen online from intrepidly nerdy early adopters, access to Sora currently is restricted to a waitlist. I haven’t yet downloaded it, but I’ll likely be putting my name on the list soon.

From an accessibility perspective, Sora’s utility lies in a sentence in the aforementioned app description: “A single sentence can unfold into a cinematic scene, an anime short, or remix of a friend’s video.” Of course, this methodology follows the lead of OpenAI’s canonical chatbot; to wit, writing a text prompt will prompt the AI to create based on one’s instructions. Although most people are familiar with simple prompts or queries for interacting with ChatGPT—or Gemini or Claude, for that matter—the reality is, the modality is genius for accessibility. In Sora’s case, that the user is supposed to give the app a short descriptor for what to do means video creation—ostensibly reasonably complex and involved depending on creative intent, tools, etc.—suddenly becomes far more accessible because AI is assuming the load and doing the grunt work. As I’ve written many times, Sora very plausibly could do for video creation what, say, ChatGPT does for web searches and other research for school essays and whatnot. Again, the salient point is, used in this manner, AI is a bona fide enabling technology that breaks down barriers to creative processes that one otherwise would be excluded from for myriad reasons. This is neither trivial nor esoteric—and it sure as hell isn’t “lazy.” I wrote about similar concepts in June 2024 when I interviewed Kantrell Betancourt about her then-new book on using Midjourney and seeing AI writ large as an assistive technology.

Read More
Steven Aquino

Gemini’s Arrival on Google TV Portends a More Accessible TV Experience for All Viewers

Google last week announced Gemini is coming to Google TV as one’s “conversational assistant to help you find content to watch on the big screen.” The new feature was detailed in a blog post written by Shalini Govil-Pai, Google’s vice president of Google TV.

“The TV is the heart of the home—the place where we gather, cheer and connect. For years, Google TV has made it easy to find great entertainment, and Google Assistant has helped TVs do more just by voice, from getting recommendations to dimming the lights,” Govil-Pai wrote in the lede. “Today, we’re introducing Gemini for TV. Everything you already do with Google Assistant still works, but Gemini on Google TV goes beyond simple commands and lets you engage in free-flowing conversations with your big screen. Get help finding the perfect show for whatever mood you’re in, brainstorm a family trip or answer complex homework questions. Just say ‘Hey Google’ or press the microphone button on your TV remote to unlock a new world of possibilities.”

Amongst the many examples Govil-Pai gives for using Gemini on the big screen is one involving one of my favorite shows, The Pitt on HBO Max. She notes people can ask Gemini questions such as ‘What’s the new hospital drama everyone’s talking about?’ and follow up with queries like ‘What are the reviews for The Pitt?’ Gemini’s user interface manifests itself as a horizontal search bar at the bottom of Google TV’s home screen.

The news comes after Google TV’s aforementioned home screen received a redesign.

I’ve written copiously about my bullishness over Gemini on my iPhone for accessibility, but it has relevance to televisions too. To Govil-Pai’s point about surfacing content based on vague, open-ended questions, it could be more accessible for someone with certain cognitive conditions to find shows like The Pitt because they needn’t be so precise in asking for what they want. As the artificially intelligent agent, Gemini is capable of inferring that the “new hospital drama everyone’s talking about” quite certainly could be The Pitt. This type of functionality only enriches the accessibility of the Google experience for things like, for instance, controlling one’s smart home setup, as well as Google TV’s content-centric software design—of which I’m already a big fan for accessibility reasons. All told, Gemini vastly improves Google TV’s value proposition, making it a solid choice for one’s home theater if accessibility and ease of use mean more to you than, say, hardware and software performance and ecosystem amenities.

Gemini is now available on TCL’s new high-end QM9K set, and will come to the QM7K, QM8K, and X11K models later this year. In addition, streaming boxes such as Google’s own Google TV Streamer and Walmart’s Onn 4K Pro—which I briefly reviewed back in early August—will receive Gemini in an update “later this year,” according to Govil-Pai.

Read More
Steven Aquino

Target Unveils ‘first-of-its-kind’ Self-Checkout Stations for Better Accessibility, Autonomy

Target earlier this month announced what it describes as a “first-of-its-kind” in-store innovation: accessible self-checkout kiosks. The new technology will roll out in the United States starting with the holiday season and continue through early 2026.

“A father stands beside his daughter at a Target self-checkout. She has low vision, and for the first time, she’s navigating the process on her own. He guides her through each step, offering quiet support as she scans an item. A soft beep sounds, followed by a clear voice reading the total. Her fingers move confidently across the tactile controller, guided by feel and sound rather than sight. The experience feels intuitive and empowering,” Target wrote in its announcement. “Thanks to Target’s new accessible self-checkout, moments like this will soon be possible for more guests across the country. Designed with and for disabled guests and people with disabilities, this solution is the first of its kind in U.S. retail. Rolling out to self-checkout stations nationwide beginning this holiday season and continuing through early 2026, it’s part of Target’s ongoing checkout improvements, reflecting our commitment to creating joyful, guest-first experiences that help all families feel seen, supported and welcome.”

The checkout system features a mix of hardware and software, including Braille and high-contrast icons, a tactile navigation button, and a headphone jack with adjustable volume. Target worked “closely” with the National Federation of the Blind (NFB), which the company said “provided valuable feedback throughout the development, design and testing process,” adding the feedback “directly shaped the technology.”

Target goes on to reveal the father in the father-daughter duo is a Blind man named Steve D, with his daughter being low vision. Notably, Steve works at Target as a user experience accessibility manager, with the company saying both he and his daughter “have spent years navigating stores that weren’t designed for them” while adding Steve worked on building the new accessible systems. The experience is profound, according to Steve.

“Shopping with my daughter and teaching her how to use the self-checkout, that was powerful,” he said in the post. “It’s not just tech. It’s joy, independence and change.”

The NFB is enthusiastic too.

“Target’s new accessible self-checkout experience is unique not only because it is a first in the industry, but because it was designed through collaboration with the blind, incorporating our technical expertise and lived experience,” Mark Riccobono, NFB’s president, said in a statement for Target’s announcement. “The rollout of this innovation further establishes Target as an industry leader in accessibility and a true partner of the blind in our quest for equal access to all aspects of modern life.”

It’s good to hear Target is investing in accessibility; the new checkout kiosks not only foster inclusion, they also instill independence. As an avid Target shopper myself, I typically go to a line with a human cashier, but do use self-checkout at other retailers. Even with low vision, I manage just fine, but do lament smallish text sizes, laggy touchscreens, and barcode scanning. As to the latter, it oftentimes is difficult to find the barcode on the product; this slows me down and stresses me out—especially if people are waiting.

Read More
Steven Aquino

Uber Introduces ‘Senior Mode’ for Older Adults

I received a promotional email from Uber late yesterday afternoon alerting me to the Uber app’s all-new “senior mode.” Intrigued out of journalistic curiosity, I clicked the “Learn More” button in the message, which zipped me to this page on Uber’s website. The header text on the page promises what Uber is calling “easier rides for older adults.”

“Caring for loved ones is a balancing act—that’s why we’ve made it easier to support your parents or grandparents,” Uber writes. “Set them up with a simpler ride experience so they feel confident going anywhere, knowing you’re there to help if needed.”

Amongst the hallmark features of senior mode is a “simplified app experience” replete with larger buttons and text. In addition, Uber boasts senior mode features “a minimal homescreen to help make booking even more straightforward, and only see essential booking details for added clarity.” Importantly, Uber says senior mode has aid-focused tools for caregivers and/or loved ones in the event someone needs help.

“Easily lend a hand if your loved one needs help—you’ll be able to track trips, add their favorite places, and call their drivers,” Uber says. “Plus, they’ll always have access to our on-trip safety features such as being able to call 911 and our 24/7 safety support.”

Uber has partnered with GIA Longevity and GoGoGrandparent in building senior mode.

While it’s perfectly logical to presume accessibility is about a discrete, admittedly esoteric suite of software features for people with disabilities, older adults—senior citizens—are very much in alignment with accessibility’s target demographic. It makes perfect sense because it’s only natural that people will require more help for daily living as they age. Eyesight gets less sharp. Hearing gets less sound. Fine-motor skills get less precise. Hell, many of our nation’s veterans are older people who became disabled whilst defending the country. Thus, Uber’s senior mode clearly is the company’s recognition of the aging process and the costs it incurs. Uber is making its app more accessible to seniors, the byproduct of which is heightened agency and autonomy.

Similarly, Apple’s Assistive Access on iOS and television products such as JubileeTV and LG’s new Easy TV all are technologies adapted—accessibility is nothing if not about adaptation—to be simpler for seniors (and anyone else with intellectual disabilities).

The advent of Uber’s senior mode comes months after my beloved Waymo introduced a feature—teen accounts—for those folks on the polar opposite end of the age spectrum.

Read More
Steven Aquino

Meta to Open Pop-Ups for Ray-Ban Display Glasses

Jay Peters reported for The Verge earlier this week that Meta is planning to open pop-up shops in various locales over the next several weeks as a means to enable potential buyers to check out its recently-announced Ray-Ban glasses with a display. The $799 wearable, which Meta announced last week and which is slated for release on September 30, is the most ambitious of an ever-expanding line of connected eyewear for the company.

“Meta is just about to launch its impressive smart glasses with a display, and to give more people a chance to try them out and see its other smart glasses and VR hardware, the company is going to open new shops starting in October,” Peters wrote Wednesday. “New Meta Lab pop-up shops will be located in Las Vegas and New York, and its Los Angeles Meta Lab location, which opened as a pop-up last year, will return as Meta’s flagship store. Meta’s Bay Area shop in Burlingame, California, remains open, too.”

According to Peters, Meta is opening its pop-ups to the general public in Las Vegas, Los Angeles, and New York City. The Vegas location opens at the Wynn Las Vegas on October 16, with the Melrose Avenue location in LA on October 24. The NYC spot on Fifth Avenue, in Midtown Manhattan, is scheduled to open a little later, on November 13.

Anyone interested in the display-equipped Ray-Bans can book demo time with Meta.

I felt covering Peters’ story was apropos given my feature this week on Lucyd’s glasses. As with Apple Vision Pro early last year, I can’t help but feel a slight tinge of FOMO seeing reporter pals such as Bloomberg’s Mark Gurman share initial impressions of Meta’s Ray-Bans with a display; the reason isn’t jealousy, but rather deep curiosity about accessibility. At a high level, Meta’s new Ray-Bans hold more personal appeal than the Lucyd pair I reviewed precisely because the Ray-Bans have a screen. Moreover, while I’m simultaneously not itching to get a pair because I’m not as invested in Meta’s ecosystem as I am Apple’s, the nerdy journalist in me is nevertheless damn intrigued by what Meta has built—particularly its wristband controller thing. Indeed, it’s moments like this—mainstream media types like Gurman getting hands-on (face-on?) time with Meta’s glasses before writing about it—that underscore my ardent belief that disability should have a more prominent place setting at the proverbial table of mainstream media coverage. At the very least, it should be obvious to the powers-that-be who run newsrooms that disabled people pay attention to the news as much as anyone else.

Read More
Steven Aquino

Apple AI Researcher Inadvertently Amplifies Accessibility in Recent Presentation, Blog Post

Apple earlier this week pushed a new post to its Machine Learning Research blog in which the company shares highlights of its most recent Apple Workshop on Natural Language and Interactive Systems. Apple describes the get-together, held May 15–16, as “bringing together Apple and members of the academic research community for a two-day event focused on recent advances in NLP [natural language processing].”

Apple’s posts on its Machine Learning Research website are nerdy and technical, and, frankly, academic. What grabbed my attention about this particular piece, however, is Kevin Chen’s presentation on what’s called “Reinforcement Learning for Long-Horizon Interactive LLM Agents.” Marcus Mendes highlighted the event for 9to5Mac, describing Chen’s talk as “[showcasing] an agent his team trained on a method called Leave-one-out proximal policy optimization, or LOOP.” The agent, Mendes reported, was trained to perform multi-step tasks based on 24 different scenarios. Chen caveated that a significant limitation of LOOP presently is that it doesn’t yet support multi-turn user interactions.
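
For readers curious what “leave-one-out” means in practice, here is a minimal, hypothetical sketch of the baseline idea as it is generally described in the reinforcement learning literature. To be clear, this is not Apple’s code and the function and variable names are mine; the gist is simply that each rollout of a task is scored against the average reward of its sibling rollouts, so no separate value model is needed.

```python
import numpy as np

def leave_one_out_advantages(rewards):
    """Illustrative leave-one-out baseline (not Apple's implementation).

    rewards: scalar returns from K rollouts of the same task prompt.
    Each rollout's advantage is its own return minus the mean return
    of the other K-1 rollouts, which acts as the baseline.
    """
    rewards = np.asarray(rewards, dtype=float)
    k = rewards.size
    assert k >= 2, "need at least two rollouts per task"
    # Mean of the other rollouts, computed for every rollout at once.
    baselines = (rewards.sum() - rewards) / (k - 1)
    return rewards - baselines

# Hypothetical example: four attempts at the same multi-step task,
# two succeeding (reward 1.0) and two failing (0.0).
print(leave_one_out_advantages([1.0, 0.0, 0.0, 1.0]))  # roughly [0.67, -0.67, -0.67, 0.67]
```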

According to Mendes, Chen employed the following prompt with the agent: “I went on a trip with friends to Maui recently. I have maintained a note of money I owe to others and others owe me from the trip in simple note. Make private Venmo payments or requests accordingly. In the payments/requests, add a note [called] ‘For Maui trip.’”

Chen’s ask is the “nut graf” in terms of accessibility. To wit, it’s highly plausible a disabled person who needs to Venmo their friends cash for a trip may find manually paying each person individually, along with appending Chen’s note, inaccessible for a variety of reasons—reasons which, as I’m often inclined to point out, transcend sheer convenience. Depending on one’s disabilities—cognitive/visual/motor or some combination thereof—it’s easy to see how paying, say, more than one or two people could be tedious. Sure, you could copy-and-paste the note for expediency’s sake, but the fact remains having to manually pay people means traversing the Venmo app far and wide. Even if it is doable, ability-wise, feasibility doesn’t equal ease of use. Ergo, Chen’s reliance upon AI to do the grunt work for him makes paying people back not merely the conscientious, responsible thing to do—it’s infinitely more accessible too.
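
To make the “AI does the grunt work” point concrete, here is a rough, hypothetical sketch of the kind of repetitive work an agent would absorb in Chen’s scenario. None of this is Apple’s or Venmo’s code; the note format, the names, and the send_payment stub are all invented for illustration. But it shows how a single instruction fans out into many small, fiddly steps that a person would otherwise have to perform by hand in the app.

```python
# Hypothetical sketch only: turn a plain-text trip note into a list of
# payment actions. The note format, names, and send_payment stub are
# invented; a real agent would drive the actual app instead of this stand-in.
NOTE = """\
I owe Priya 42.50
Miguel owes me 18.00
I owe Sam 27.25
"""

def parse_note(note):
    """Yield (action, person, amount) tuples from simple 'owe' lines."""
    for line in note.strip().splitlines():
        words = line.split()
        if words[0] == "I":                      # "I owe Priya 42.50"
            yield ("pay", words[2], float(words[3]))
        else:                                    # "Miguel owes me 18.00"
            yield ("request", words[0], float(words[3]))

def send_payment(action, person, amount, memo):
    """Stand-in for the agent's app interaction; here it just prints."""
    print(f"{action} {person} ${amount:.2f} (note: {memo})")

for action, person, amount in parse_note(NOTE):
    send_payment(action, person, amount, memo="For Maui trip")
```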

Again, Chen is explicit that the LOOP technology is imperfect and needs more massaging. Nonetheless, it’s extremely heartening (and downright exciting) to see how AI’s application in this manner has profound potential to make life so much more accessible for those (like yours truly) who are part of the disability community.

Read More
Steven Aquino

Linktree Chief Executive Alex Zaccaria Talks New Features, Empowering Creators, More In Interview

Popular “link in bio” company Linktree this week announced what it describes as “smarter design tools” and more in an effort to “elevate” one’s landing page on the internet. The enhancements were detailed in a blog post published on Wednesday.

Linktree was founded in 2016 and today boasts more than 70 million users.

In the post, Linktree says its mission has always been, and always will be, to “make it simple for anyone, from creators and small businesses to nonprofits and global brands, to connect their audiences to everything they do online.” The company added it has “doubled down” on furthering its mission this year by launching various tools which help so-called “Linkers” earn money by selling digital goods, running affiliate programs, and using sponsored links. The “next step” launching today, Linktree says, is giving people a means to “design a Linktree that feels uniquely and beautifully you.”

“As the platform has grown, one thing has become clear across the entire industry: design can be a barrier,” Linktree wrote. “Many creators and businesses want to look professional online but don’t have the tools, time, or design background to make it happen. With this launch, Linktree is making high-quality design accessible to everyone, helping our community look polished, feel authentic, and stand out.”

Linktree’s headlining feature is powered by—what else?—artificial intelligence. Called “Enhance with AI,” the feature is characterized as “a new feature that provides an instant, personalized design makeover [by analyzing] a profile and suggests tailored updates like refreshed layouts, wallpapers, or color schemes, based on what’s working across top-performing Linktrees.” Linktree also notes Enhance with AI was built to offer recommendations which would help spur discovery and increased engagement. Relatedly, Linktree is using the tech to “restyle” profile images so as to “[allow] users to transform their photo into different artistic styles such as cartoon, sketch, or 3D.”

Elsewhere, Linktree is making link names easier to create by suggesting good ones, as well as enabling deeper integration with popular online design tool Canva so that everyone has access to what Linktree says are “professional-grade design tools.”

In a brief interview conducted via email earlier this week, Linktree co-founder and CEO Alex Zaccaria reiterated the high-level talking points shared in today’s announcement, saying his Melbourne-born company has “grown into a one-stop platform for creators, entrepreneurs, small businesses, and global brands.” Furthermore, he told me the enhancements announced today, as well as throughout 2025 thus far, are a true testament to “the commitment we’ve made to our community of more than 70 million Linkers to keep innovating on their behalf.” Amongst those tens of millions of users are celebrities such as Selena Gomez and wellness entrepreneur Kelsey Rose.

But Linktree has evolved into something more than a mere “link in bio” generator.

“You can design, monetize, and grow your audience all in one place,” Zaccaria said of Linktree’s raison d’être. “I like to think of us as a digital Swiss Army knife for creators and small businesses. Whether you are selling an online course, sharing your podcast, or running a seasonal promotion, Linktree is where it all comes together.”

When asked about the gravity of the day’s news, Zaccaria said it’s about instilling “confidence” in people. Linkers, he said, know their page is important because one’s Linktree oftentimes is “the first impression someone gets.” Linktree’s value proposition, Zaccaria added, lies in the reality “not everyone has the time, budget, or skills to be a designer.” In other words, Linktree makes linking more accessible.

“These new tools give them a way to instantly elevate their profile,” Zaccaria said.

Zaccaria pointed to a Sydney small business selling handmade ceramics. With the advent of Linktree’s tools, the owners are able to “make their Linktree look like a digital storefront in just a few clicks” rather than take time from their livelihood to moonlight as amateur web designers. “That is the kind of impact we are aiming for,” Zaccaria said.

Importantly, Zaccaria stressed Linktree’s use of AI isn’t a sign of the company hopping onto an increasingly crowded bandwagon. On the contrary, he emphasized Linktree isn’t using AI “for the sake of it” simply because it’s the technology du jour right now.

“AI, for us, is about removing friction and giving people a head start… it’s about giving Linkers time back,” Zaccaria said. “A musician doesn’t want to spend half an hour wondering which shade of green will look good with their album cover. They just want to share their new track. AI gets them to that point faster, then they can make it their own.”

From an accessibility perspective, Linktree’s emphasis on ease of use, as well as its embrace of AI as automation, is deeply resonant. Although Zaccaria’s comments convey Linktree’s essence as a conduit for convenience, the truth is the undertones of accessibility are undeniable. To wit, while it’s true the Average Jane or Joe isn’t a web developer, able to wrangle HTML and CSS code to make their sites exude creative intent, it’s also very true not everyone is able to build a website from the ground up. Maybe someone is neurodivergent and easily gets overwhelmed by complex interfaces and instructions on how to write HTML, for instance. Maybe someone has visual and/or fine-motor disabilities and can’t spend a lot of time scanning and clicking without fatigue setting in. Maybe someone’s cognitive abilities are simply not at the level to complete a potentially complex task such as building a blog, let alone write code for it. Especially with assists from AI, Linktree can make that more accessible by lessening much of the cognitive load and assuming the grunt work. At 30,000 feet, Linktree strikes me as similar to Squarespace, the sponsor darling of many podcasts I, and legions of other tech nerds, listen to every week. Instead of self-hosting a blog—which I’ve done before because I’m a nerd, but don’t recommend it—I built this very website on Squarespace precisely because, like Linktree, all the hard work is done for me. Yes, I linked my own domain name and added a few lines of custom CSS code with help from Google Gemini, but for the most part, I don’t want the hassle. To Zaccaria’s prior point, I’m not a web developer either. I’m a journalist, and time spent monkeying around with my website is time better spent doing interviews, testing new products, and writing stories.

In terms of feedback, Zaccaria told me the early response to what’s being rolled out today has been “encouraging” for the team. Enthusiasm is high amongst high-profile users such as Cardi B, who, Zaccaria said, “has already experimented with video backgrounds and heading options—which is great [proof] these updates resonate at the very top of the market.” Similarly, he noted Linktree’s creator partnerships team has heard creators are “excited to finally have more expressive and customizable design options” and are appreciative of how the new functionality is “a refreshing way to make their Linktree feel more uniquely their own.” The vibe check, as they say, is positive.

Looking towards the future, Zaccaria wants Linktree to “keep breaking down barriers.”

“Our mission has always been to help Linkers connect their audiences to everything they do online… I am especially excited about the role creators are playing as the storefronts of the future,” he said of pondering Linktree’s future. “They are driving billions in sales and shaping culture in ways that traditional advertising never could. My hope is that Linktree continues to be the place where that creativity and commerce come together, and that we keep finding ways to make it easier for Linkers to thrive.”

Read More
Steven Aquino

A Look At Lucyd’s Smart Eyewear And Accessibility

Nearly three years ago, back in October 2022, I wrote a piece for my old Forbes column in which I examined Meta’s Ray-Ban Stories and Amazon’s Echo Frames after both companies sent me their respective sunglasses to try out and review. To distill my conclusion from using both: While I found both pairs to be intriguing from an accessibility perspective, they weren’t essential because I more or less treated the fancy-pants glasses like the cheap drugstore dumb sunglasses I’ve used for eons.

I was recently reminded of this old story of mine, and of its sentiments, when I was approached by Lucyd Eyewear about trying out the company’s so-called “audio eyewear” for accessibility’s sake. At 30,000 feet, Lucyd’s conceit is conceptually identical to Amazon’s and Meta’s: glasses (with speakers in the frames) that let you do things like listen to audio content and more. Lucyd asked which pair I’d like to have, and being someone who leans towards casual and sporty fashion, I chose the $199 Reebok Voltage sunglasses. I’ve been using them for the last several weeks.

The glasses, from which I’ve listened to music and podcasts as well as taken a phone call or two, connect to one’s phone via Bluetooth and, like Apple Watch and Vision Pro on iOS, have a companion app. The setup process was straightforward and painless; unlike something like AirPods, for instance, the Lucyd glasses need to be turned on in order to automatically pair after the initial setup. In my testing, I’ve tended to turn the glasses off so as to preserve their battery—only to later be flummoxed for a few seconds when I go to use the audio features because they aren’t working. What I’m saying is, part of the magic of something like AirPods is there is no power button, not even on the charging case. The earbuds just know when to “spring to life.” In that way, AirPods are far more accessible than Meta Ray-Bans or Echo Frames or, yes, Lucyd, because there’s less cognitive load: no remembering “Oh yeah, you gotta turn them on” before using the Lucyd glasses’ marquee features. Otherwise, they’re just… dumb glasses.

I compare the Lucyd glasses to AirPods to illustrate that the glasses mostly replicate AirPods’ core functionality in a different form factor. Instead of being in your ears, they’re on your face. That’s not a complaint. In fact, from an accessibility perspective, there’s a cogent argument to be made that Lucyd’s glasses have appeal to someone who, for example, may have sensory conditions such that they don’t like—or can’t tolerate—objects in their ears. Maybe getting earbuds in and out of one’s ears is too fiddly from a fine-motor standpoint. Maybe, like me, they too often forget their AirPods at home when they leave the house to run errands or whatever. For those people, then, the Lucyd glasses could be perfect. It’d be a win-win situation: they can keep the sun out of their eyes whilst still being able to enjoy audio, take phone calls, and even query ChatGPT. For my usage, it’s been a struggle to remember to turn the glasses on for the audio features, so they’ve more often than not acted like my aforementioned inexpensive sunglasses I mainly wear to keep the sun out of my eyes.

Again, not a complaint. It’s just the nature of my beast.

To complement this mini review, I was offered the opportunity to interview Lucyd’s CEO, Harrison Gross. Gross, who also serves as the company’s lead developer, explained via email he’s been working on smart glasses for 8 years and has filed 70 patents to show for it. Gross said he’s long been “addicted to screens,” with his vision and attention span suffering because of it. He described his personal mission as “[helping] people live more in the moment with wearables and reduce the need for screen time for people to get the information and digital functionality they need.”

“The emergence of smart eyewear [as a category] and voice-based AI computing is the answer to this problem,” Gross told me of the raison d'être for Lucyd. “I am doing what I can to address the problem of excessive screen time in our society to help people get to a ‘new normal,’ where they have seamless access to computational power in a mobile-friendly, hands-free, and heads-up format. That’s where Lucyd comes in!”

Gross firmly believes consumers “100%” want connected eyewear—even if they aren’t explicitly saying so. The popularity of products like Apple Watch and AirPods is proof, with Gross telling me wearable technology is so enamoring largely because of “familiarity and convenience.” Consumers, he said, “are much more likely to adopt smarter versions of products they already use than entirely new modalities” and pointedly said that’s why things like Humane’s failed AI Pin haven’t secured a place in the market. “Consumers are resistant to learning entirely new behaviors,” Gross said.

Gross added the appeal of wearable technology is “quite simple” and expounded further by telling me “if you can add more functionality to a particular form factor, the product becomes more useful to the user.” Ultimately, appeal boils down to two things: convenience and utility. In Lucyd’s case, Gross said, “our product is like headphones and glasses in one, so it obviates the need for both traditional glasses and headphones [and replaces] both devices with one at a price that matches traditional eyewear.”

What of smart glasses then?

“The customer for smart glasses is really people who already wear eyeglasses or sunglasses frequently—or safety glasses, as we have seen huge success with our smart safety frame,” Gross said. “It’s much easier to get a regular glasses wearer to switch to smart glasses than someone who doesn’t wear glasses at all.”

When asked about smart glasses and accessibility, Gross said it’s a “really interesting topic” because he believes smart glasses have a sizable, distinct advantage: glasses are inherently already used “as a medical device to address a whole host of issues.” There exists a wide array of smart glasses that specialize in addressing certain disabilities, he said, and Gross expects the market to “further diversify” as time marches on. He pointed to Lucyd’s own Lyte glasses, which “offers numerous voice-based controls for accessing information and AI from your connected device [and allows] users with difficulty typing to engage more easily with many different digital systems,” as well as Meta’s ever-popular Ray-Bans, which, given their partnership with Be My Eyes, enable “general guidance and object exposition for low vision users,” Gross said.

Feedback-wise, Lucyd has been well-received by customers, according to Gross. He explained to me “many of our customers convert” during live demos of the product, adding the device’s value proposition “becomes immediately obvious to them.” Moreover, Gross said many longtime customers, whom he described as “diehards,” give him and his team lots of varied feedback. These “power users,” as Gross called them, are invaluable because they “get the most out of their frames” and, crucially, give Lucyd inspiration for improvements. “We hear all the time how our technology is life-changing for so many people—especially those who love audio content but are unable to wear headphones due to safety or professional concerns,” Gross said.

As one of Lucyd’s newest users, I think they have a good product. I like the smart functions and the stylistic aspect, but to me, smart glasses have yet to reach their zenith. Particularly for accessibility’s sake, as someone with extremely low vision—Social Security deems me “legally blind” for aid purposes—the ultimate appeal in smart glasses comes with a screen. As I wrote last year in reviewing Apple Vision Pro, the present-day headset form factor is obviously Apple capitulating to the limitations of modern technology’s capabilities. As someone deeply invested in the company’s ecosystem, my dream scenario would be to someday wear a pair of “Apple Vision Glasses” running visionOS. They could help in navigation, object and people detection, and much more. Apple may be working on it, but the current technology isn’t yet ready for the mainstream. Meta seems to think it’s ready, but I wonder about its accessibility.

So, Lucyd. Again, I like the glasses as sunglasses, smarts be damned; I’ve gotten compliments on how good I look wearing them. As someone who already is a heavy user of AirPods, the smarts of Lucyd’s glasses are somewhat stunted for me. They work as advertised, but I’m going on nearly a decade of AirPods life, and old habits undoubtedly die hard. Nevertheless, the experience I’ve had with Lucyd’s glasses has been enlightening not only to satiate my nerdy, journalistic curiosity, but also to get an early glimpse (no pun intended) of what a glasses-forward future could be like for me.

Gross and I are on the same wavelength in that last respect.

“I look forward to [a] future where all eyewear is smart and delivers heads-up functionality to everyone. [It will reduce] our reliance on those pesky screens,” he said.

Read More
Steven Aquino

iOS 26.1 Beta 1 Includes More Live Translation Languages, Apple Music Swipe Gesture, More

After its software release bonanza last Monday, Apple this Monday is onto the next one.

Juli Clover reports today for MacRumors that Apple seeded to developers the first beta of iOS 26.1 (along with its brethren), and the update comes with a few notable features for accessibility. Namely, Live Translation for AirPods is being localized into more languages and Apple Music gets a swipe gesture to change tracks in the mini player.

“AirPods Live Translation works with additional languages in iOS 26.1, including Japanese, Korean, Italian, and Chinese (both Mandarin Traditional and Simplified),” Clover said.

As to the new gesture in Apple Music, I often switch back and forth between views to manually change tracks in the album view when there are only particular songs I want to hear. This new swipe move should, in theory, make that task more accessible because I needn’t jump back and forth anymore if I know certain tracks are clustered together. Generally, though, for my favorite albums—think Linkin Park’s Meteora, for instance—I’ll start with the first track and let the album run from front to back. I’m a completionist that way. Otherwise, the new swipe gesture should prove really handy.

Other changes in iOS 26.1 Beta 1 include the dialer in the Phone app getting its Liquid Glass glow-up, design changes in the Photos app, and more. Based on precedent, Clover posits the public release of iOS 26.1 et al. could come sometime next month.

Steven Aquino

Amazon, Publishers Pushing for Better Accessibility of E-Books, New Report Says

My friend (and Six Colors contributor) Shelly Brisbin links this morning to a story about Kindle books gaining more robust accessibility features for Blind and low vision bookworms. The story comes from Michael Kozlowski of Good eReader, who reported last Friday Amazon is now “prioritizing new accessibility features” for visual disabilities.

“Amazon has been pushing accessibility hard lately, making Kindle books and Kindle e-readers better suited for people with visual disabilities. They have added a new tab to book description pages, called Accessibility,” Kozlowski said of Amazon’s zeal for stronger Kindle accessibility. “It has new Accessibility metadata, including Visual Adjustments, Non-Visual Reading, Conformance, and Navigation.”

“Amazon might be the only company to take accessibility this seriously,” he added.

Kozlowski also said book publishers “have been prioritizing the submission of e-books to Amazon that include accessibility features,” noting an extrinsic motivator is regulation. Indeed, disability-centric laws on the books such as the Americans with Disabilities Act and the recently-enforced European Accessibility Act are “increasing legal pressure to make digital content accessible” by “positioning accessible publishing as both a competitive advantage and a necessity.” One of the biggest publishing houses today, the venerable Simon & Schuster, is a big proponent of better accessibility in the ebook arena; whereas only 60% of its catalog was certified accessible from 2022 to 2024, that number has risen to 100% this year, according to Kozlowski. Simon & Schuster is a prolific publisher, putting out 800 titles each year.

As Brisbin and Kozlowski both mention, ebook accessibility historically has been hit-or-miss. That Amazon (and publishers) are pushing to close this chasm is heartening. I have an old Paperwhite from 2018 and, while I haven’t used it in a while since I prefer Apple Books for accessibility reasons, I only ever used the larger font size on my Kindle. It was great, and I could read fine; I just prefer the bigger and brighter display of an iPad’s screen—e-ink and LCD are two entirely different display technologies—and I find the accessibility features on iPadOS to be far more comprehensive than Amazon’s suite.

Steven Aquino

Waymo Soon Will Go to San Francisco Int’l Airport

A bit of local news: Waymo will soon go to San Francisco International Airport (SFO).

Rya Jetha reported for The San Francisco Standard this week the Alphabet-owned autonomous vehicle company is receiving a permit from airport authorities allowing it to eventually run service to and from SFO. According to Jetha, the service will roll out in three phases: (1) testing vehicles with a safety driver present; (2) offering rides to Waymo and airport employees; and (3) offering rides to the general public. The airport “did not provide a timeline for these phases,” Jetha added.

SFO joins its neighbor, San José Mineta International Airport, in offering Waymo service.

“San Franciscans have anxiously waited for Waymo to make inroads at SFO,” Jetha wrote of the ramifications of this week’s transit news. “In December alone, more than 13,000 people searched for SFO on the Waymo app, and around 700 people installed the app while physically at the airport. A July 2024 survey by Waymo found that 89% of riders in the Bay Area are interested in using the service to get to and from SFO.”

San Francisco mayor Daniel Lurie said in part in a statement shared with Jetha that the city is “expanding safe, reliable, and modern transportation options” vis-a-vis Waymo.

As Jetha mentioned, the Waymo-to-SFO news is welcome given the context that the Bay Area next year is hosting both Super Bowl LX and the FIFA World Cup at Levi’s Stadium in Santa Clara. Having Waymo available as a viable option for getting around the region makes sense—even when we’re not host to such attractions. Indeed, Jetha also noted San Jose Mayor Matt Mahan is hopeful people will choose Waymos instead of rental cars to get around as these big events are going on throughout 2026.

From an accessibility perspective, using Waymo to get to the airport feels like a win all around. There’s a button in the app to open the car’s trunk, so it seemingly would be easy to stash one’s luggage back there to and from SFO. Likewise, that a disabled person could use Waymo to get to the airport makes that part of the travel journey more autonomous and independent for them. Moreover, it saves you from having to fight for an Uber or Lyft, or from asking a friend or family member to drop you off and/or pick you up there.

Steven Aquino

What One Influencer’s Viral Cat Videos Say About Social Media and Its Credibility for Accessibility

It’s perfectly logical to presume cat videos and accessibility have zero correlation.

And yet, there’s a valuable lesson to be learned from the ostensibly mismatched pair.

When I sat down with Amaris Branco, a 23-year-old influencer from Ontario, Canada, back in April to discuss her life and career, the interview felt instructive: I quickly put the pieces together that her seemingly unrelated work as a content creator illustrates how accessibility pervades everyday life, in ways large and small. Given her more than 81,000 Instagram followers, it’d be equally logical to presume Branco is a seasoned, longtime social media maven; the truth is, though, she confided in me that actively creating for her social channels was something she “never really took seriously.” She was more observer than participant, telling me she “loves” social media nowadays but began getting into it only recently, circa 2022 and 2023.

“I randomly started posting on TikTok for fun… just as something to do,” Branco said.

Her original conceit was a silly one: Pets can’t see what their humans do on a daily basis. They’re too small, too low to the ground. It “intrigued” her, for instance, that her beloved cat was unable to see into the microwave or spice cabinet. Branco initially resisted the idea of making a video showing her cat such untraversed terrain, owing to her aforementioned apathy towards social media. Still, the notion gnawed at her… something “kept telling me,” she said, to create content and post it—so she did.

It became a viral sensation that changed Branco’s view of social media—and her life.

“That video took two seconds to make,” Branco said. “That was his [her cat’s] genuine reaction of him looking into the fridge for the first time and the spice cabinet—super raw, super real. That took me two seconds to film, then I posted it that night, not thinking anything of it… I thought, ‘Oh, just another silly video, whatever.’ I woke up the next morning to my video being at 100,000 likes! People were going insane over my cat.” A video that Branco posted on a lark born of curiosity, then shared with whimsy, would spark a trend that eventually raged through TikTok’s algorithm like wildfire.

“[My followers] were like, ‘Oh my god, your cat is so silly looking! Oh my god, your cat is insane. We want to see more videos!’ While people were telling me to make more videos of that series, people were also remaking the series on their own as well,” Branco told me of the response to her video. “They were showing their pets random stuff they’ve never seen before in their house, which was really crazy. People did it with their three iguanas. People were doing [it] with their dogs. People were even doing [it] with their babies. So many people hopped on this trend, and it all just happened like that. So I started posting more and more, and that became a huge series that blew up on my TikTok.”

One of Branco’s videos has 30 million views. She has 136,000 followers on TikTok.

Branco, who does contract work at an agency called Cornelia Creative that specializes in meme advertising, told me the cat who catapulted her to her modicum of celebrity passed away. She’s since made content with her new cat, but admitted the vibes “aren’t the same” as they were with her old one. The popularity of her material led Branco to other opportunities as well, which helped cement her standing in the wide world of creators. “It was a crazy experience in my life,” she said. “I’ve always been so close with my cat, and I never thought [in] a million years we would have started a TikTok trend together. Even now, I’m like, ‘Oh my god, I can’t believe that actually happened!’”

Her advice to aspiring creators? “Start posting the content,” Branco said.

The reason Branco’s story resonated with me, from a disability standpoint anyway, lies in her goal of getting people to “not be afraid to put themselves out there [and] post things they love.” She explained how she felt she was “hiding behind my phone’s screen” for the longest time, scared to be vulnerable by showing others the things that light up her life. She was afraid of what people from her hometown might say, let alone strangers on the internet. Social media can be a cesspool—but it’s a lifeline too.

Branco appreciates how social media has immense power to highlight authenticity.

“The minute I did start posting and started doing things I love was when I started to see all the success and stuff,” she said. “If I could give advice to anybody, it would be ‘Don’t be afraid to shine your light. Don’t be afraid of judgment. And if you want to post your videos, if you want to do that, absolutely do it because it can literally change your life.’ If I didn’t post that first video, I wouldn’t be doing what I’m doing today. It’s crazy how everything [in life] has a domino effect… yeah, if I could give any advice to anybody, it’d be: ‘Don’t hesitate to post if that’s what you want to do. Don’t be afraid of judgment.’”

I’ve written before about how, for all social media’s unsightly warts, people take for granted how it affords people the ability to connect and be social with others from around the world in real time. For people in the disability community who are homebound or otherwise limited in their mobility for health and/or logistical reasons, things like Facebook or Instagram or TikTok are bona fide godsends. It’s certainly reasonable to surmise some percentage of Branco’s followers are disabled—counting yours truly on Instagram—and, silly though it is, a disabled person may find joy in her cat videos or her outfit-of-the-day Reels as a way to be entertained and live vicariously through Branco. What’s more, disabled people such as Shane Burcaw and my friend Haben Girma provide visibility of our community doing things that buck the entrenched societal stereotypes of disabled people and our capabilities. The advent of social media has given the disability community more reach (and more awareness) than anything else in human history—and it’s not hyperbolic to say it like that. While not assistive technology in the classical sense—like, say, Apple’s suite of software—the moral of Branco’s story is simple: social media, no matter how lighthearted or seemingly insipid, has real potential to make a profound impact on genuine human connectedness.

There’s a reason Apple’s iPhone 17 launch event earlier this month was filled to the brim not only with traditional journalists like yours truly, but with tech YouTubers and influencers galore as well. The hands-on area in the Steve Jobs Theater was all hustle and bustle, with media folks using their camera rigs to shoot video or otherwise create content for their audiences. At some point in recent times, someone on Apple’s vaunted PR team rightfully realized folks like Branco, if not exactly her, have sway. These people have pull. They, as the job description not-so-subtly implies, have influence. Of course all the attention is what Apple wants; the salient point is, again, that social media is more than mindless doom-scrolling. For many, it’s an indispensable tool with which people not only consume (and disseminate) news and opinion, but also form lasting relationships which can transcend the digital spaces from which they’re forged.

Indeed, some of the best, closest relationships I have now originated on social media.

There’s a reason “Disability Twitter” and “Tech Twitter” and “NBA Twitter” are so popular online; the adage rings ever true: birds of a feather tend to flock together.

“I love how the internet is a safe space for that stuff where you’re accepted as a person,” Branco said. “Obviously, there’s gonna be a lot of hate too, but for the most part—what I’ve experienced personally—it’s a great community to shine your light and find like-minded people who enjoy the same things. I’ve met so many people who are cat lovers through making cat videos… I’ve met so many cool people in the process.”

Looking towards the future, Branco’s goal is simple: keep sharing! She told me she’s always ruminating over ideas on how to better engage her audience and let them get to know her. Besides cats, she loves thrifting and vlogging. Going to the beach, too. When I asked Branco what she sees in the proverbial crystal ball, she was succinct in her reply.

“My goal is to show up authentically as I have and make more content,” she said.

Steven Aquino

Google Selling New ‘Rope Wristlet’ for Pixel Phones

Andrew Romero reported for 9to5Google earlier this week Google has begun selling a familiar-looking accessory for the Pixel 10 line. The company, he said, is now “getting a little more adventurous in the accessory game” by offering so-called “wrist straps” in different colors on its online store. Google announced the Pixel 10 lineup last month.

“A new entry appeared on the Google Store today as part of the company’s accessory lineup for Pixel. The new ‘Google Rope Wristlet’ is a wrist strap for ‘devices with a case,’” Romero wrote of the new wrist straps. “The strap connects with a spring ring clasp to a shim with a D-ring placed between the case and the back of the phone.”

Romero adds the $7 (!) wristlet works with any Pixel phone except the Fold. Additionally, he notes the strap “appears to work with any phone and case with a USB-C port” while also saying the strap “isn’t the first accessory to use this connection method.”

In the lede, I purposely include the phrase “familiar-looking” because Google’s wristlet is conceptually identical to Apple’s new iPhone crossbody strap. Moreover, it’d be journalistic malpractice not to point out Apple’s is, at $59, more than eight times more expensive than Google’s—but that’s because Apple loves its margins and, I suspect, its strap is nicer than Google’s in terms of fit and finish.

Nonetheless, I think it’s fine Google “copied” Apple with its wristlet. As I wrote last month about Pixelsnap, Google’s MagSafe analogue, the wristlets have accessibility merit for Android users. To wit, Google’s model could make holding one’s Pixel phone more accessible when, for instance, trying to use it whilst holding their cane. Better still, that Google’s wristlet attaches via a simple clasp would seem to be a more accessible method of attachment than Apple’s. I have yet to fully test Apple’s crossbody strap, but my recollection from the hands-on area at last week’s event is the product could be inaccessible to initially attach. Granted, you only do it once in theory—but nevertheless once is one time too many when you cope with lackluster hand-eye coordination and fine-motor skills.

Whether Apple or Google, these straps show how much hardware accessibility matters.

Steven Aquino

The Great Starling Home Hub Has Been Discontinued

The Verge’s Jennifer Pattison Tuohy bore bad news: the Starling Home Hub is dead.

“In a message on its website, Starling said it can no longer manufacture the hub due to ‘rapidly rising costs of doing business for small US-based product companies like us (most significantly, tariffs the US government charges us to obtain the components we need to build our product),’” Tuohy somberly reported on Wednesday.

Indeed, the brief message on the homepage of Starling’s website says the company will continue to provide technical support to its customers (yours truly amongst the lot) “as long as we can.” Starling further notes its goal is to maintain enough inventory of its diminutive $99 box “to be able to honor product warranties for existing customers.”

I’m writing this story in mourning; the Starling Home Hub has played an integral role in my HomeKit-based smart home setup for years. As Tuohy writes, Nest devices have never featured native support for Apple Home, which left a gap for people who prefer both HomeKit and the Apple-like design sensibilities of, say, Nest’s thermostats. I’ve written before about how we have a number of Nest products in our home, albeit older ones like the Nest E and Nest Hello, and they continue to work with aplomb—especially since they’re also “integrated” into Apple Home. I suppose whenever the time comes that Starling’s device ceases functioning, I’ll begrudgingly shift to the Google Home app on iOS, but I’ll sure miss the Starling Home Hub. It was, and continues to be for now, a simple, truly plug-and-play solution that really does make my smart home devices more accessible.

Steven Aquino

HBO Max Announces ‘Superman’ Will Stream in ASL

In a press release published earlier this week, HBO Max announced the summer’s superhero blockbuster, Superman, is slated to “make its global streaming debut” on the service this Friday, September 19. Notably for inclusion and accessibility, the film has an exclusive special stream presenting dialogue in American Sign Language (ASL).

The ASL version of Superman is performed by Deaf interpreter Giovanni Maucere and directed by Leila Hanaumi.

“In his signature style, [director] James Gunn takes on the original superhero in the newly imagined DC universe with a singular blend of epic action, humor and heart, delivering a Superman who’s driven by compassion and an inherent belief in the goodness of humankind,” HBO Max wrote of Superman in its announcement.

Clark Kent/Superman is played by David Corenswet, while Rachel Brosnahan portrays Lois Lane. (Brosnahan resonates with me personally, as she was the central character, Miriam Maisel, in Amazon’s The Marvelous Mrs. Maisel—one of my favorite shows ever.)

That HBO Max is showing Superman in ASL continues on the trail it has blazed with other content I’ve covered before, such as Sinners and The Last of Us. In addition, the Warner Bros. Discovery-owned TNT—Warner is the parent company of HBO Max—has offered its acclaimed “NHL × ASL” broadcasts, the production of which has won Sports Emmys. Last November, I interviewed Brice Christianson, a fellow CODA whose deaf inclusion company PXP partners with the National Hockey League on the telecasts.
