Steven Aquino

Apple Is Making Accessory Pairing More Accessible with ‘AirPods-Like’ Interface in iOS 26.3

Juli Clover reports for MacRumors this week that one of the hallmark features for European Union (EU) users of the still-in-beta iOS 26.3 update is what she describes as an “AirPods-like” pairing user interface for third-party earbuds and headphones.

“The European Commission today praised the interoperability changes that Apple is introducing in iOS 26.3, once again crediting the Digital Markets Act (DMA) with bringing ‘new opportunities’ to European users and developers,” Clover wrote. “The Digital Markets Act requires Apple to provide third-party accessories with the same capabilities and access to device features that Apple’s own products get. In iOS 26.3, EU wearable device makers can now test proximity pairing and improved notifications.”

I could be wrong, but it sounds like Apple’s using its AccessorySetupKit API for this.

The politics of the DMA notwithstanding, it strikes me as a very good thing, accessibility-wise, that people in the EU soon will have access to the one-tap pairing process of AirPods (and Beats). As I’ve said numerous times in the past, that one-tap, almost magical pairing paradigm is more than merely convenient; it’s a de facto accessibility feature. In a vacuum, the “long way” of pairing third-party devices with your iPhone—finding the Bluetooth section of Settings, then finding and tapping on the device—is neither hard nor particularly nerdy. From a disability perspective, however, it can be quite the rigamarole: there’s a lot of tapping and scanning, not to mention cognitive load, involved with launching the Settings app, finding the Bluetooth area, and so on. For people with certain cognitive/motor/visual conditions—or some combination thereof—what’s ostensibly an easy process can be downright daunting… and inaccessible. By contrast, the AirPods method consolidates those steps into a single task; what’s more, what’s great about AirPods in particular is that Apple leverages iCloud to propagate pairing across a user’s constellation of Apple products. It’s an implementation detail that also manifests as a de facto accessibility feature, considering the manual pairing process iOS 26.3 is reportedly addressing. In the end, this week’s news should make disabled people living in the European Union really happy because product pairing is about to become a way more accessible experience.

These benefits aren’t exclusive to Apple. Google’s “Fast Pair” offers the same one-tap pairing on Android.

Steven Aquino

Curb Cuts Has a Dark Mode Now

The headline says it all. Curb Cuts now has a dark mode.

After solving my “IPHONE” and “IOS” problems last week, I resolved to get even more ambitious in improving the website by adding a dark mode for nighttime viewing. As someone whose devices automatically flip to dark mode at sundown, it’s always bugged me how eye-searingly white my default “light” theme is when I check the site at, say, 9:00 at night. Other blogs run by friends, like Stephen Hackett’s 512 Pixels and Federico Viticci’s MacStories, have discrete dark modes and they look very nice—so why shouldn’t Curb Cuts have one too? So yesterday, I decided to spend part of my evening building my own dark mode—all done, of course, with lots of heavy lifting from ChatGPT.

The cool part about Curb Cuts’ new dark mode is that it’s automatic; it triggers based on a user’s UI setting. If your iPhone or iPad or MacBook is in light mode during the day, you’ll get the light theme. At night, the proverbial light switch gets flipped off and you’ll get the dark theme. There remain a few minor tweaks to be made, but I think the new look is awesome (and accessible!) and I’m damn proud of being 95% of the way there.
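
Conceptually, the automatic switching boils down to a single CSS media query keyed to the visitor’s system appearance. Here’s a minimal sketch of the approach—the selectors and colors below are hypothetical stand-ins, as the actual code ChatGPT generated for this site differs:

```css
/* A minimal sketch of automatic dark mode; selectors and colors are
   hypothetical stand-ins, not the real Curb Cuts CSS. */

/* Light theme (the default) */
body {
  background-color: #ffffff;
  color: #1a1a1a;
}

/* Dark theme: applies only when the visitor's device is set to dark
   mode, so the site flips automatically along with the OS at sundown. */
@media (prefers-color-scheme: dark) {
  body {
    background-color: #121212;
    color: #e6e6e6;
  }
  a {
    color: #8ab4f8; /* keep link contrast readable on a dark background */
  }
}
```

Because the media query keys off the user’s own setting, no toggle button—and no JavaScript—is required; the browser does all the work.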

As a practical matter, what I wrote last week is apt here too. I’m decidedly not a web developer, so the lines of CSS code I copied and pasted into the Squarespace CMS are instructions I don’t have the skill to write on my own. That’s where I again leaned heavily on ChatGPT, telling the chatbot what I envisioned for dark mode and having it spit out the code I needed to make my dreams a reality. It took some trial-and-error, but as I said, I’m super happy with the end result despite the need for a bit more polish. I’ll say once more with feeling that code generation is a prime use case for generative AI tools like ChatGPT (or Gemini or whatnot) and, more pertinently, it showcases how chatbots can be assistive technologies by making a bit of relatively advanced web development eminently more accessible to a person with disabilities.

Anyway, I hope you enjoy dark mode. Get in touch with any comments or questions.

Steven Aquino

Southwest Joins Delta, United Airlines in Supporting iOS 26 Boarding Pass Feature

Ryan Christoffel reports today for 9to5Mac that Southwest Airlines has added support for iOS 26’s boarding pass feature in Apple Wallet. Southwest joins fellow industry stalwarts Delta and United in supporting the new functionality for jet-setters.

“Saving boarding passes to Apple Wallet makes it quick and convenient to access those passes right when you need them,” Christoffel wrote on Monday. “And in iOS 26, Apple upgraded the experience with three new features… Live Activities for boarding passes can be shared with a single tap, making it easy for friends or family members to track your flight. And by integrating airport maps and luggage tracking right into the boarding passes, Apple has put more important travel info in one place.”

Besides Live Activities, the other two new Apple Wallet features he mentions are access to airport maps and luggage tracking through the Find My app.

I decided to cover this news partly because, upon reflecting on 2025, it occurred to me I flew absolutely nowhere this year after flying 17 times last year. (I was expecting to fly to places like Detroit and New York City for work-related events, but circumstances at home caused me to cancel those trips.) What’s more, Christoffel’s story is yet one more reminder of not only the utility, but the accessibility, of Apple Wallet. I’ve extolled the virtues of Apple Pay in this regard plenty in the past, but these air travel-centric features can play significant roles in making flying more accessible too. To wit, having one’s digital boarding pass available from the Lock Screen is far more accessible than digging for a printed version. (Not to mention passports and other identification.) Likewise, airport maps could be useful in, say, helping people who are Blind or have low vision quickly and reliably find their gate after passing through the security checkpoint.

As Christoffel notes, the onus falls on airlines to implement support for iOS 26’s boarding pass feature. Beyond Southwest and the others, American Airlines, JetBlue, and Air Canada have all pledged support, though timing remains undisclosed.

A pro tip from me: While Wallet’s flying features are appreciated, I personally adore using Flighty when I’m flying somewhere. It’s truly one of the best apps I’ve ever used.

Steven Aquino

NYC Mayor-Elect Zohran Mamdani Pledges Support for Disabled People in Inclusive Hiring Push

Earlier this month, Christopher Alvarez reported for Able News that New York City (NYC) Mayor-elect Zohran Mamdani has pledged to make disabled New Yorkers part of his administration’s broader inclusive hiring push. Mamdani, an avowed democratic socialist, won the mayoral election in November in a landmark win for progressives.

Alvarez’s interview with Mamdani is the first of an exclusive, multi-part series.

“For disabled New Yorkers, employment barriers start at the first point of entry—the application process,” Alvarez wrote. “Of the almost 986,000 New Yorkers with disabilities, nearly 70% are people of color. Persistent barriers in hiring and wage equity remain key concerns—issues that Mamdani has said he intends to address.”

Mamdani has launched an employment portal that he “encourages” disabled job-seekers to take advantage of. The website has received more than 70,000 applications so far.

Notably, Alvarez mentions in his story a 2024 report published by the NYC Comptroller’s office which found the disability employment rate in the city is “half that” of New Yorkers without disabilities. I interviewed the city’s Comptroller, Brad Lander, in July 2024 about that very report, as well as about disability justice writ large. Lander himself threw his hat into the proverbial ring that was the NYC mayoral race, finishing third behind Mamdani and former New York State governor Andrew Cuomo.

Steven Aquino

Senator Kirsten Gillibrand Calls on Veterans Affairs to Provide More Accessible Technologies

In a press release published on Friday, New York senator Kirsten Gillibrand (D) announced what’s described as a “bipartisan push” for the Department of Veterans Affairs (VA) to make technology more accessible to veterans with disabilities. Gillibrand, the ranking member of the Senate Special Committee on Aging and a member of the Senate Armed Services Committee, is working with U.S. Representative David Valadao (R-CA) in pushing the VA toward “swift action” on greater accessibility for veterans.

“Accessible technology is critical to make sure that veterans with disabilities can get the information and services they need and to make sure that VA employees with disabilities can do their jobs. Roughly one-quarter of veterans have a service-connected disability, and post–9/11 veterans, who [the] VA will serve for decades to come, have a higher rate of service-connected disabilities. Additionally, Section 508 of the Rehabilitation Act of 1973 requires federal technology to be accessible for and usable by people with disabilities,” Senator Gillibrand’s office wrote in its announcement. “Despite this, congressional and independent oversight efforts have consistently found that VA technology does not meet this requirement. A recent VA Office of Inspector General (OIG) report found that, of the 30 critical information and communication technology systems analyzed, 26 were not accessible for people with disabilities. In its report, VA OIG issued four recommendations to improve VA accessibility and encourage the procurement of accessible technology.”

Senator Gillibrand has written a letter to VA leaders wherein she encourages the agency to implement the aforementioned recommendations “as fast as possible” while also asking for details on exactly how the VA plans to approach said implementation.

“Ensuring our veterans have the support, information, and services they need is of the utmost importance—and [the] VA cannot do this unless its technology is accessible to veterans and VA employees with disabilities,” Sen. Gillibrand said in a statement. “VA must train its employees to procure accessible technology and take steps to ensure that its technology remains accessible. I will continue to provide rigorous oversight on this issue to make sure that our veterans get the support that they deserve.”

I’ve covered the VA on a couple of occasions in the recent past, most recently in April 2024 when I interviewed VA executive Chet Frith about assistive technology and his role leading the agency’s 508 Compliance Office. Prior to my conversation with Frith, I sat down in August 2023 with Dewaine Beard, the VA’s principal deputy assistant secretary in the Office of Information and Technology, about his job and what’s in his purview. In addition, I sat down virtually with Illinois senator Tammy Duckworth, herself a disabled vet, to talk about, amongst other topics, the importance of accessibility and assistive tech.

Steven Aquino

Roomba Manufacturer iRobot Declares Bankruptcy

Earlier this month, John Keilman reported for The Wall Street Journal that Roomba maker iRobot has filed for bankruptcy. Despite the bad news, however, the company emphasizes that “its devices will continue to function normally while the company restructures.”

“Massachusetts-based iRobot has struggled financially for years, beset by foreign competition that made cheaper and, in the opinion of some buyers, technologically superior autonomous vacuums,” Keilman wrote. “When a proposed sale to Amazon.com fell through in 2024 because of regulatory concerns, the company’s share price plummeted.”

iRobot was founded in 1990.

Although I’ve never used a Roomba—nor any other robot vacuum—it’s nonetheless easy for me to see how the things could be useful in an accessibility context. To wit, household chores like cleaning aren’t easy for many people with disabilities, myself included, and vacuuming could be untenable for a variety of reasons. Maybe you can’t hold and push the vacuum. Maybe you can’t see dirty spots. Maybe you can’t empty the bag/bin. Whatever the case, investing in something like a Roomba is neither indulgent nor luxurious; on the contrary, it’s downright practical. The ability to use one’s phone to control the robot, not to mention have it return to its dock to relieve itself and recharge, can make vacuuming one’s floors an eminently more accessible task. The tech media at large has a penchant for ascribing frivolity and luxury to robotics, and while there is a kernel of truth to that argument, what the able-bodied masses (predictably) gloss over are the people who might truly benefit from, say, a robot vacuum for accessibility’s sake. Again, a Roomba isn’t exactly an inexpensive device, depending on the model, but the investment can be worth it to someone who is unable to manually vacuum yet wishes to retain some agency and autonomy in the process. That in itself is absolutely a goal worth striving for in this case, clean floors be damned.

Steven Aquino

Gemini Makes Web Development More Accessible

A bit of a meta, inside baseball post here, so bear with my nerdiness.

One part of Curb Cuts’ design that has stuck in my craw from the beginning is how I could never get headlines to properly stylize brand names like “iPhone,” “iPad,” “iOS,” and so on. This website doesn’t have a codified style guide, but I know, as one prime example, I prefer using title case in headlines, whereby every word begins with a capital letter. The problem with that approach, however, rears its ugly head when using Apple product names. My blog’s template likes to capitalize every word—as it should 95% of the time—even the lowercase “i” in iPhone and its brethren. It’s been driving me nuts, but I’ve let it be because, well, at least I can control stylization in my body copy, right? That is, until today, when I got fed up and decided to be more intrepid in fixing the issue and assuaging my slightly obsessive-compulsive, design-centric sensibilities.

Enter Google Gemini. It came to the rescue and proved my salvation.

I explained the problem to Gemini and what I wanted to accomplish. After a good bit of back-and-forth and trial-and-error, Gemini helped me identify the core issue: I needed a handful of CSS and JavaScript code to properly stylize the aforementioned product names. The technical part is cool, but the big win—notably from an accessibility perspective—is Gemini itself. I’ve written about this in the past, but it bears repeating here: having the chatbot do all the grunt work such that all I do is hit ⌘-C and ⌘-V (copy/paste on the Mac) in the “Code Injection” section of this site’s backend, press Save, and watch the magic happen is so much more accessible than manually running umpteen Google searches to find the technological Tylenol I needed to remedy my website’s headache. What’s more, I know only basic CSS/JS; the code Gemini generated for me in 30 seconds’ time is far beyond my aptitude level. But that’s the whole point—my experience this afternoon making these tweaks to Curb Cuts’ layout is a perfect illustration of the power of generative AI to be an assistive technology. Doing the grunt work myself is possible, but it nonetheless comes with the costs of eye strain and fatigue, hand fatigue from typing, and headaches from stress and tiredness. Those aftereffects aren’t trivial—and they’re exacerbated for others who must cope with different and/or more severe disabilities than I do.
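
To give a flavor of what such generated code does, here’s a sketch of the general technique—the class names are hypothetical, as the code Gemini actually produced for this site differs, and a few companion lines of JavaScript (not shown) handle wrapping the brand names in spans:

```css
/* Sketch of the technique, with hypothetical class names — the code
   Gemini generated for this site differs. The template's capitalization
   turns “iPhone” into “IPhone”; wrapping known brand names in a span
   that opts out of the transform preserves the lowercase “i”. */
.blog-title {
  text-transform: capitalize; /* the template's default for headlines */
}
.blog-title .brand-name {
  text-transform: none; /* renders “iPhone,” “iPad,” “iOS” as authored */
}
```

Since `text-transform` is inherited, the inner span’s `none` value overrides the headline’s `capitalize` and leaves the product names exactly as typed.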

Chatbots can be far more than merely convenient conduits for trivial pursuits.

Gemini made web development more accessible—and made my site look better too.

Finally.

Steven Aquino

Instagram for TV Makes Reels More Accessible

Meta-owned Instagram this week announced Instagram for TV. The app is launching first on Amazon’s Fire TV platform (!) and is intended to let users watch Reels, alone or together with friends, on a much larger display than their phone or tablet.

“Today we’re excited to start testing Instagram for TV, bringing reels from your favorite creators to the big screen so you can enjoy them with friends,” Instagram said in its post. “We’ve heard from our community that watching reels together is more fun, and this test is designed to learn which features make that experience work best on TV.”

Instagram says the TV launch is a “test,” adding expansion is planned for the future.

I don’t have a Fire TV device handy to try Instagram for TV, but it nonetheless strikes me as a good move. From an accessibility perspective, even the relatively big screen on, say, an iPhone Air or iPhone Pro Max is decidedly dwarfed by a 55- or 65- or 77-inch TV screen. This is precisely why FaceTime on tvOS is so smart; I haven’t used it yet because I don’t do a ton of videoconferencing, but just knowing I can do it from my massive LG C3 OLED is pretty cool. It’s more accessible to look at a person on a TV than on my comparatively tiny phone screen. Ergo, the same argument applies to Instagram for TV. I quite enjoy watching Reels—especially food-oriented content—and can attest to the fact Reels is a super conduit towards bed rot and thus utterly losing all track of time and space. Bed-rotting whilst watching umpteen Reels is admittedly unhelpful to someone who copes with severe anxiety and depression, but I speak the truth from experience.

Instagram for Fire TV is available to download now.

Steven Aquino

The Disability Angle in ESPN’s New Stuart Scott Film

As I write this, I’m three-quarters of the way into ESPN’s latest 30 for 30 film, which premiered last week. The nearly 90-minute documentary, titled Boo-Yah: A Portrait of Stuart Scott, chronicles Scott’s life, both personally and professionally, as a Black broadcast journalist. Scott, who died of cancer at age 49 in 2015, joined ESPN in 1993 and eventually rose to become the most popular SportsCenter anchor.

ESPN described the film last month in a press release as “[tracing] Stuart’s journey from local television in North Carolina to becoming one of ESPN’s most influential voices. At a time when hip-hop and popular culture was often marginalized in mainstream media and few Black anchors held national prominence, Stuart brought both unapologetically to SportsCenter—blending sharp analysis, pop culture and swagger in a way that spoke directly to a new generation of fans.”

The network continued in its announcement: “As the film recounts, Stuart’s impact extended far beyond the newsroom. He bridged sports and culture, made SportsCenter must-watch television and became a symbol of courage through his public battle with cancer—culminating in his unforgettable ESPYS speech that reminded viewers, ‘You beat cancer by how you live, why you live, and the manner in which you live.’”

I’m covering the documentary for several reasons, not the least of which is that I learned by watching Boo-Yah that Scott had a disability. He coped with a rare visual condition called keratoconus, the effects of which were compounded by an eye injury sustained when a football hit him in the face during a New York Jets mini-camp in 2002. Upon recovering, he wore glasses and, according to the documentary, held his stat sheets super close to his face—I can relate—and struggled to read the teleprompter.

Scott was a mainstay of my sports-watching life; he indeed was my favorite SportsCenter personality. Beyond the disability angle, which I obviously am drawn towards, I feel like there are a lot of professional parallels to Scott’s tenaciousness in getting work (and thus respect) as a journalist from a marginalized community. I of course didn’t know Scott, but I definitely can empathize with his belief that he had to prove himself worthy in an industry where 99.9% of people don’t look like you. Even as I approach my own 13th anniversary this coming May, with all that I’ve accomplished in tech media over the past decade-and-a-half, I continually feel the pressure to prove my worth over and over again—despite what friends and peers tell me about my extensively impressive résumé. Like Scott, I’m a minority in journalism—arguably the minority’s minority group—and constantly feel like, as Scott’s daughters recount at one point in the film, I must “work twice as hard to get half as much.” We’ve seen lots of success, but only after we’ve kicked down doors at every turn to procure our plaudits.

Scott made it to ESPN. Will I ever make it to ABC News or NBC News or The Gray Lady?

As a related aside, the ESPN app on tvOS is delightful—so much so, it’s in my Top Shelf.

Anyway, I highly suggest sitting down to watch Boo-Yah. It’s well worth your time.

Steven Aquino

Inside the Rochester Institute of Technology’s Latest Mission to Center the Deaf Viewpoint

Early last month, Susan Murad wrote for the Rochester Institute of Technology’s website about how researchers at RIT, as the New York-based institution is colloquially known, soon will “use eye-tracking to show how deafness impacts vocabulary knowledge and reading as well as how deaf and hard-of-hearing children, who have historically shown lower than average reading outcomes, develop into highly skilled readers.” The research project is made possible in large part by a $500,000 grant from the venerable National Institutes of Health, or NIH.

According to Murad’s story, RIT’s research is led by Dr. Frances Cooley, an assistant professor in the National Technical Institute for the Deaf’s Department of Liberal Studies. Dr. Cooley, who leads the school’s Reading and Deafness Lab, and her team, Murad reported, are examining “how vocabulary knowledge in American Sign Language supports English reading development” [as well as] “how first-language knowledge shapes second-language reading comprehension and eye-movement control.” The team’s findings will “have important implications for theories of reading development and for educational practices that support bilingual learners,” according to Murad.

Fast-forward to mid-December, and I had the opportunity to sit down virtually with Dr. Cooley to discuss her and her team’s work. She explained the root of her interest in deafness and reading comprehension traces back to an article she came across while doing graduate work that said the average Deaf person reads at a fourth-grade level. Such a sobering statistic bothered Dr. Cooley, she told me, largely because “[it] said to me we’re not doing something in our educational practices to allow deaf students to thrive.” That knowledge motivated her to begin looking into why reading levels amongst Deaf people are so low; she wanted to better understand Deaf people and how exactly they read, along with taking a deep dive into groups of Deaf readers. In particular, Dr. Cooley was keenly interested in who had early access to ASL versus those who didn’t.

“When we look at those who had early access to American Sign Language, we actually see these incredible differences that are beneficial for Deaf readers,” Dr. Cooley said. “They are actually more efficient. They read faster. They skip more words, and this doesn’t actually negatively impact their comprehension. This is particularly interesting because they’re technically second language users of English, and most second language users are going to be less efficient in their second language, but these Deaf readers are even more efficient than a typically hearing monolingual reader.”

She continued: “I really got excited about this strengths-based approach to understanding what a successful Deaf reader does, and I wanted to be able to translate that into educational practices so that all Deaf readers can thrive. I really think moving away from a focus on what people can’t do and transitioning that to what they can do is really beneficial in a bunch of different ways. Eye-tracking—I love to say your eyes are your best way to point your brain at different things—we don’t really have any other way to point our brains at things, so if we’re looking at the eye movements, we can get really fine-grained information about what people are doing when they’re actually reading. I think that’s much more interesting than having someone read a sentence or read a paragraph and answer questions about it, because that involves a whole bunch of other processes like memory, and to me, that’s less interesting. It’s still important, but what people are actually doing as their eyes move across a sentence can tell us so much about the underlying processes of what their brains are actually interested in [when they] successfully extract language from text.”

In a sentence, Dr. Cooley said all this highfalutin eye-tracking tech and subsequent research is meant to “establish how a Deaf child uses their first language ASL skills.”

Asked to expound on her goals, she replied thusly: “I’m looking primarily at Deaf children who had early access to sign language: either they have Deaf, signing parents or they have hearing parents who made an effort to learn sign language early. Then these kids go to bimodal, bilingual schools, so they’re really depending on their ASL skills to learn to read English. I really want to know, from a bilingualism perspective, how that first language access and having a strong first language can benefit the ability for these children to learn a second language, which is English or any other ambient language in a community, by exploiting their first language skills. We see this in hearing populations. We see this all the time. Bilingualism is the norm in most countries around the world, bilingual or multilingualism. If we understand a Deaf child signer as a developing bilingual child, and we think about the aspects of their first language and how that can help them learn their second language more successfully, we’re getting a more appropriate and equitable snapshot of this minority population.”

When asked about the technical component involved with eye-tracking, Dr. Cooley said the device she uses is mounted atop a desk with a laptop behind it such that a child can sit normally and read what’s on screen. The tracker shines a painless, undetectable infrared light at the subject’s eyes, which is reflected and travels back to the computer. The reflected light contains data on where the child’s eyes are positioned while reading—all of it in real time. “Based on what we already know about how readers use information to read, we can then look at Deaf readers in this paradigm,” Dr. Cooley said.

She further noted there exists “a really big body of research” centered on eye movements and reading, adding it’s only been recently, in the last 20–30 years, that Deaf people, especially Deaf signers, have been included in these kinds of studies. The richer inclusion has meant, Dr. Cooley said, researchers have been able to learn a lot more about how everybody, Deaf or not, “[uses] their eyes to extract language from text.”

As someone with low vision who, incidentally, has struggled with eye-tracking on things like Face ID and Apple Vision Pro, I asked Dr. Cooley how nimble her tracker device is. Her answer? Not very. The technology she currently uses assumes what she described as “your most typical eye differences,” emphasizing the tracker works “just fine” with aids like contact lenses and glasses. Beyond that, however, she said the team is “unfortunately” excluding people who have ocular motor conditions (like yours truly)—not out of maliciousness, but out of a desire to “be certain that what these kids are doing with their eyes is reflective of what their brains are trying to do.” Dr. Cooley went on to tell me people with strabismus—commonly called crossed or lazy eye—are excluded because their eyes can’t always point to where their brain wants to focus. This weakness, technologically anyway, is crucial because Dr. Cooley’s tracker relies upon an algorithm to function. She hopes the algorithm improves over time so as to accommodate more types of readers, but that, she said with humility, is beyond her ken. Nonetheless, it is something very important to her that gets addressed as time goes on.

“If we’re not capturing the cognition of every single population of people, I don’t think we’re really capturing cognition—and that includes people with differences in their eye shapes and people with differences in how they use their vision,” Dr. Cooley said. “But at this point, it’s easier to start with the most traditional eye move [and] eye shape because it’s just easier to draw the conclusions we need. But [accommodating visual disabilities] is an important thing to think about. It’s just currently not one of my goals.”

At a more personal level, Dr. Cooley’s ties to deafness and the community are tight. She’s married to a Deaf person and has been a self-described “second language signer” for close to 16 years, telling me she likes to think of herself as being “pretty involved” with the Deaf community. Despite tooting her own horn, though, Dr. Cooley readily acknowledges her “positionality” as a hearing person in a hearing-dominated world. On the eye-tracking project, she explained there are consultants who help the researchers with not only data collection, but also with best practices when working with Deaf children so as to not be “triggering.” This is a key point, Dr. Cooley said, because a lot of Deaf people cope with what she termed “educational trauma,” so RIT’s goal is to avoid said triggers and instead be as “Deaf-friendly” as possible. Still, a significant number of people have reached out to Dr. Cooley and team to express their appreciation for the insights they’re trying to glean from their research.

“There’s a great need for this type of information. I think practitioners need it. There’s a lot of information out there about what is most important for a deaf child,” Dr. Cooley said. “One of the biggest arguments that can be made for an oral approach—avoiding sign language and instead making sure a Deaf child is able to speak and read lips and use hearing devices—one of the biggest arguments for that is they won’t be able to learn to read, or will be far less successful in learning to read if they can’t associate sounds with letters. I think that isn’t actually representative of what most Deaf people can do. If you look at Deaf signers, they have this incredibly rich and robust language; most Deaf people will talk about how they use their signing to help read to their children… they sign along with the book, and so their children are exposed to both print and sign. If we can take advantage of these things, I think we can not only make a Deaf child reader more successful, but also feel a little bit better about themselves and not feel like who they are and how they happen to be born is going to make them unable to do something. I think anybody should be able to do anything, and if our educational practices are not well-researched or not founded in research, we can’t know for sure they’re the best practices. It’s pretty clear, given the wide variability in reading outcomes for a lot of Deaf and hard-of-hearing people, that there’s something we don’t know, or there’s something that some people are doing better than others. We just have to test it and see what’s going on to actually be able to make a difference.”

She added: “All of the conversations I’ve had with people, they’ve all been extremely positive. I think education experts, the people who are actually teaching children in the schools, policy makers, early intervention specialists, everybody wants some type of research that can really be used to show ‘Hey, ASL is not detrimental to your Deaf child, it’s actually going to be beneficial. Here is one of the ways that it’s beneficial.’ I have a lot of people reach out to me asking for these resources and for papers that show American Sign Language is only beneficial for Deaf children learning to read.”

At its core, RIT’s work is ultimately about centering the Deaf point of view.

“I always say, if we actually listened to Deaf adults, a lot of this research might not be necessary,” Dr. Cooley said. “They’ve been telling us for years and years and years that ASL is so incredibly important for so many different reasons, but we need the research. Someone has to do it, and I’m so privileged I get to do it. And I love, love [doing] this work… it makes me excited! It feels like a privilege to be doing what I’m doing.”

Dr. Cooley spoke effusively about being based in Rochester and the city’s sizable Deaf presence. (In fact, this very piece is not my first rodeo with the National Technical Institute for the Deaf; I covered the Sign Speak app in September 2024.) She said it’s typical for those in cognitive science to choose the path of least resistance when it comes to recruiting people to participate in studies like hers. Naturally, the Deaf community is a smaller populace, even in Rochester, so it’s “going to take a little bit more effort” to get folks into the lab. But the payoff is worth it; Dr. Cooley told me her troops have fostered a tight relationship with the Rochester School for the Deaf, a K–12, bimodal and bilingual institution for Deaf and hard-of-hearing students. Because of proximity, both geographic and logistical, Dr. Cooley said her staff actually finds it “not too difficult” to connect with interested parents and others. And Rochester isn’t the end-all, be-all either; Dr. Cooley said her team has similar positive relationships spanning the country, from Texas to Indiana and beyond.

“Because of those relationships, we aren’t nearly as concerned with the data collection as somebody else without those relationships would be,” she said. “It’ll definitely take longer to run this type of research than it would take to run this type of study with hearing children because there are fewer concentrated pockets of these readers.”

Looking towards the future, Dr. Cooley hopes to forge “stronger partnerships” with experts across various disciplines—people who oftentimes exist “in their own little silos.” Without this cross-collaboration, there’s too much navel-gazing and not nearly enough advancing in understanding the world, and the people who inhabit it, better.

“I really hope in the future, we’re able to get to a point where we can directly meet the needs of all children, not just Deaf and hard-of-hearing children—all children who have varied needs in terms of their ability to read and write,” Dr. Cooley said in looking into the proverbial crystal ball. “In the current day and age, if you can’t read and write, your ability in an academic or professional field is going to be pretty limited. I think being able to meet the needs of all of our children so they can be fully functional and fully capable adults is the goal. I really hope my research can start bringing us towards that.”

Steven Aquino

White House Claims ASL Interpreters Would ‘Intrude’ on the President’s Public Image

Meg Kinnard reported last week for The AP that the White House argues using ASL interpreters during press briefings “would severely intrude on the President’s prerogative to control the image he presents to the public.” The Trump administration made said claim in response to a lawsuit seeking to compel it to provide interpreters. Attorneys for the Justice Department added President Trump has “the prerogative to shape his Administration’s image and messaging as he sees fit.”

“Department of Justice attorneys haven’t elaborated on how doing so might hamper the portrayal President Donald Trump seeks to present to the public,” Kinnard wrote on Friday. “But overturning policies encompassing diversity, equity and inclusion have become a hallmark of his second administration, starting with his very first week back in the White House.”

Kinnard continued: “Government attorneys also argued that it provides the hard of hearing or Deaf community with other ways to access the president’s statements, like online transcripts of events, or closed captioning. The administration has also argued that it would be difficult to wrangle such services in the event that Trump spontaneously took questions from the press, rather than at a formal briefing.”

I first covered this story back in July, and the editorializing from then bears repeating here. Like the State Department’s decision to go back to Times New Roman from Calibri in correspondence, the White House’s proclivity to poo-poo the need for sign language interpretation—a defense made that much more laughable because Gallaudet University is virtually down the street—is yet another example of the Trump administration’s extinguishing of any and all diversity and inclusion initiatives. It’s being made abundantly clear the powers-that-be, starting with Trump himself, want America to be White, wealthy, male, and able-bodied. But such rationale is par for the course—not just at 1600 Pennsylvania Avenue, but for society as a whole. The disability community, yours truly included, is always cast away to the margin’s margin, even amongst DEI supporters, because society has internalized the notion that having disabilities is bad and a sign of a “broken” human condition. Down to brass tacks, that’s why accessibility exists: to accommodate traversing a world unbuilt for people like me. Likewise, it’s why disability inclusion is so miserably behind other areas of social justice reporting in journalism; it’s oftentimes seen as too esoteric or niche to devote meaningful resources towards. All things considered, that’s why I always say doing this work and amplifying awareness is a task of Sisyphean proportions most days. We use technology as much as anyone else. We read the news like anyone else. We’re Americans like anyone else in this country… but somehow are thought of as something less than the human beings we obviously are.

Steven Aquino

Apple Says ‘Pluribus’ Is ‘Most-Watched Ever’

Marcus Mendes reported for 9to5Mac this week that Apple TV’s new hit show, Pluribus, has officially become the streaming service’s “most-watched ever.” The news comes shortly after Apple announced Pluribus had become its “biggest drama launch ever.”

“Last month, Apple said that Pluribus had overtaken Severance Season 2 as Apple TV’s most successful drama series debut ever, a landmark that wasn’t completely surprising, given the overall anticipation and expectation over a new Vince Gilligan (Breaking Bad, Better Call Saul) project,” Mendes wrote on Friday. “Now, on the same day that F1: The Movie debuted at the top of Apple TV’s movie rankings, the company confirmed that Pluribus has reached another, even more impressive milestone: it is the most watched show in the service’s history. Busy day.”

As Mendes notes, Apple keeps its viewership cards—and its subscriber numbers—close to the proverbial chest, so it’s difficult to quantify exactly what “most-watched ever” actually means. At any rate, I can personally attest that Apple TV is unquestionably my favorite streaming service—and not solely because of its embrace of earnest disability representation. Like anyone else, I like to be entertained, and Apple TV does it for me with shows like Pluribus and Severance and The Morning Show and For All Mankind. I’m not quite up to speed with Pluribus as of this writing, but can heartily say it and Severance are two of the best damn shows I’ve ever seen in my 44 years of life. What makes them even more enjoyable is, technologically speaking, my 77-inch LG C3 OLED—which came out in 2023, but which I got in early January 2025—is so bright and sharp, with its infinite contrast, that it makes for picture quality that’s not only spectacular but spectacularly accessible in terms of sheer size and, obviously, fidelity. Between my various Apple devices, I’ve grown accustomed to OLED displays for some time now; that said, there’s nothing like experiencing OLED on a screen as large as a television’s. Like Steve Jobs said of the iPhone 4’s Retina display 15 years ago, once you go OLED, it’s hard to go back to a “lesser” (and, yes, less expensive) technology.

Anyway, go watch Pluribus posthaste if you haven’t already. It’s so damn good.

According to Mendes, the show’s first season will run through December 26. Season 2 is currently in development, following Apple’s original two-season commitment.

Steven Aquino

Google Translate Gets Live Translation Enhancements in Latest Update

Abner Li reports for 9to5Google today that Google Translate has been updated to leverage Gemini—including for live translation while using headphones. The Gemini-powered improvements are available in the iOS and Android apps, as well as on the Google Translate website and in Google Search, and are launching first in the United States and India with the ability to translate from English into over 20 languages, such as Chinese and German.

“Google Translate is now leveraging ‘advanced Gemini capabilities’ to ‘improve translations on phrases with more nuanced meanings,’” Li wrote on Friday. “This includes idioms, local expressions, and slang. For example, translating ‘stealing my thunder’ from English to another language will no longer result in a ‘literal word-for-word translation.’ Instead, you get a ‘more natural, accurate translation.’”

(File this under “I Learn Something Every Day”: Google Translate has a web presence.)

As to the real-time translation component, Li says the feature is underpinned by Gemini 2.5 Flash Native Audio and works by pointing one’s phone in the direction of the speaker. He also notes Google says Translate will “preserve the tone, emphasis and cadence of each speaker to create more natural translations and make it easier to follow along with who said what.” Importantly, Li writes the live translation function is launching in beta on Android for now; it’s available in the United States, India, and Mexico in more than 70 languages, with Google further noting the software works with “any pair of headphones.” iOS support and more localization are planned for next year.

“Use cases include conversing in a different language, listening to a speech or lecture when abroad, or watching a TV show/movie in another language,” Li said in describing live translation’s elevator pitch. “In the Google Translate app, make sure headphones are paired and then tap ‘Live translate’ at the bottom. You can specify a language or set the app to ‘Detect’ and then ‘Start.’ The fullscreen interface offers a transcription.”

It doesn’t take an astrophysicist to surmise this makes communication more accessible.

At Thanksgiving dinner a couple weeks ago, one of my family members regaled everyone with stories about his recent trip to Paris. He of course knows I’m a technology reporter, and he excitedly told me he bought a pair of AirPods Pro 3 at the Apple Store before his trip so he could try Apple’s own Live Translation feature, powered by Apple Intelligence. It worked “wonderfully,” he told me, with Parisians’ French translated into English and piped into his earbuds. It seems to me Google’s spin on live translation works similarly, with the unique part (aside from Gemini) being that it isn’t limited to Pixel Buds. At any rate, language translation is a genuinely good use case for AI—and, more pointedly, a good example of accessibility truly being for everyone, regardless of ability, because it breaks through communicative barriers.

Apple announced Live Translation on AirPods at its fall event in September.

Steven Aquino

Report: Refreshed Studio Display Found in Code

Earlier this week, Filipe Esposito reported for Macworld that an internal build of iOS 26 contains references to a looming update to the Studio Display. The finding, which centers on the codename “J527,” corroborates previous reporting by Mark Gurman at Bloomberg.

“References in the code clearly show that this new Studio Display has a variable refresh rate that can go up to 120Hz, just like the ProMotion display on the latest MacBook Pros. The current Studio Display is limited to 60Hz,” Esposito wrote on Wednesday. “Furthermore, the code references a ‘J527’ monitor that also supports both SDR and HDR modes, an upgrade from the current SDR-only model. This is a strong indication that Apple will replace the LCD panel with better technology, such as Mini-LED that can achieve higher brightness levels.”

According to Esposito, other features of the still-in-development second-generation Studio Display include an A19 processor, ProMotion, and much better HDR support.

I’ve written previously about my sore need for a new Mac to replace my outmoded (yet still chugging along) 2019 Retina 4K iMac, a task I’ve put off for a variety of reasons. I really do feel lots of FOMO not running macOS 26 Tahoe, however, and feel bad that life has “dictated” the lowest common denominator—my job not requiring tons of compute power—makes my trusty yet tired iMac “good enough.” As I’ve said before, it sucks to miss out on Apple Silicon amenities like iPhone Mirroring—a feature I haven’t written about much, if at all, but one with serious benefits from an accessibility perspective. All of this is to say I’m very excited at the prospect of a new external monitor I can plug one of my MacBooks into; a laptop’s screen is serviceable while I’m out of the house—narrator: his severe anxiety and depression scoffs at the notion—but if I’m working primarily at my desk, I’d much rather have a bigger screen to accommodate my low vision. So while the Pro Display XDR is forever my white whale monitor, this rumored Studio Display upgrade sounds damn good too—and is arguably the eminently more practical device for my spartan needs.

One way or another, I’m hellbent on making 2026 the Year of Steven’s Desk Makeover.

Apple released the Studio Display in 2022 to complement the all-new Mac Studio.

Steven Aquino

‘Fire TV makes entertainment more accessible’

Late last week, Amazon published a piece on its website touting a few of the accessibility benefits of its Fire TV operating system for people with disabilities. The platform’s assistive technologies, the company said, “represent more than just technology: they’re about creating moments where everyone can enjoy entertainment their way,” adding Fire TV “adapts to your needs rather than the other way around.”

“Picture this: It’s movie night, and everyone’s gathered around the TV. One person is trying to solve the mystery before the detective, another is straining to catch every word of dialogue, and someone else needs their hearing aids to enjoy the show. We’ve all been there—wanting to share entertainment moments together but having different needs to experience these moments best,” Amazon wrote in the introduction. “During a time of year when friends and family are gathering more often, Amazon Fire TV is highlighting how Fire TV is built for how you watch. This initiative celebrates the unique ways we all enjoy entertainment and highlights innovative features that make watching your favorite TV shows and movies more accessible and enjoyable for everyone.”

The meat on the bones of Amazon’s post highlights three features in particular: Dialogue Boost, Dual Audio, and Text Banner. I’ve covered all of these technologies in one way or another several times over the years, and have interviewed Amazon executives such as Peter Korn many times as well. In fact, one of my earliest stories for my old Forbes column was an ode to Fire TV hardware in the Fire TV Cube. My praise holds up today; whatever one thinks of Fire TV’s ad-littered user interface and general design, it’s entirely credible for a disabled person who, for example, has motor and visual disabilities to choose a Fire TV Cube as their set-top box precisely for Fire TV’s accessibility attributes—especially the Cube’s ability to control one’s home theater. To wit, it isn’t trivial that the Cube can switch between HDMI inputs on a TV and even switch on a game console or Blu-ray player. Given the smorgasbord of remotes and whatnot, that someone can ask Alexa to, say, “turn on my PlayStation 5” is worth its weight in gold in terms of accessibility for its hands-free operation. Again, to choose Fire TV (and the Cube) as one’s preferred TV platform because of accessibility is perfectly valid; it’s plausible that accessibility is of greater importance than the subjective “messiness” of Fire TV’s UI and its barrage of advertisements.

You can learn more about Fire TV accessibility (and more) on Amazon’s website.

Steven Aquino

Times New Rubio

This week, The New York Times ran a story, under the shared byline of Michael Crowley and Hamed Aleaziz, reporting on Secretary of State Marco Rubio’s memo to State Department personnel saying the agency’s official typeface would go back to 14-point Times New Roman from Calibri. The Times didn’t include Rubio’s full statement, but John Gruber obtained a copy from a source and helpfully posted a plain text version.

“Secretary of State Marco Rubio waded into the surprisingly fraught politics of typefaces on Tuesday with an order halting the State Department’s official use of Calibri, reversing a 2023 Biden-era directive that Mr. Rubio called a ‘wasteful’ sop to diversity,” Crowley and Aleaziz wrote on Wednesday. “While mostly framed as a matter of clarity and formality in presentation, Mr. Rubio’s directive to all diplomatic posts around the world blamed ‘radical’ diversity, equity, inclusion and accessibility programs for what he said was a misguided and ineffective switch from the serif typeface Times New Roman to sans serif Calibri in official department paperwork.”

The reason I’m covering ostensibly arcane typographical choices is right there in the NYT’s copy: accessibility. The Biden administration’s choice to use Calibri, decreed in 2023 under then-Secretary Antony Blinken, was driven in part by accessibility—Calibri was said to be more readable than Times New Roman. In his piece, Gruber calls bullshit on that notion, saying the motivation was “bogus” and nothing more than a performative, “empty gesture.” He goes on to address Secretary Blinken’s claim, according to a Washington Post report, that the Times New Roman-to-Calibri shift was made because serif fonts like Times New Roman “can introduce accessibility issues for individuals with disabilities who use Optical Character Recognition technology or screen readers [and] can also cause visual recognition issues for individuals with learning disabilities.” Gruber rightly rails against the OCR and screen-reader rationale as more bullshit while also questioning the visual recognition part.

I’m here to tell you the visual recognition part is true, insofar as certain fonts can render text inaccessible to people with certain visual (and cognitive) disabilities. This is because the design of letters, numerals, symbols, and so on can look “weird” and not “normal” to certain people, depending on how one’s brain processes visual information. This matters because bad typography can, for a person with low vision like yours truly, adversely affect the reading experience—both in comprehension and physically. Depending on your needs and tolerances, slogging through a weird font can actually lead to physical discomfort like eye strain and headaches. It’s why, to name just one example, the short-lived ultra-thin variant of Helvetica Neue was so derided in the first few iOS 7 betas back in 2013. It was too thin to be legible, prioritizing aesthetics over functionality. (A cogent argument could be made that the tweaks Apple has made to Liquid Glass, including adding appearance toggles, are giant, flashing neon signs of correction from similarly prioritizing aesthetics over function at the outset.)

As somewhat of a font nerd myself—I agonized over what to use at Curb Cuts when designing the site before settling on Big Shoulders and Coda—I personally find Times New Roman ugly as all hell and not all that legible, but I can see the argument that it’s more buttoned-up than Calibri for official correspondence within the State Department. Typographical nerdery notwithstanding, however, what I take away from Rubio’s directive is simple: he cares not one iota for people with disabilities, just like his boss.

Steven Aquino

Google Gives Pixel Watch 4 Pinch, Flick Gestures

Abner Li reports for 9to5Google today that Google has released what he describes as a “sizable” update for the Pixel Watch 4 that adds one-handed gestures. The newfound functionality is part of Wear OS 6.1, which began rolling out to users late last week.

“Based on Android 16, BP4A.251205.005.W7 is rolling out to the Pixel Watch 2, 3, and 4, including both the Bluetooth/Wi-Fi and LTE models,” Li wrote. “This is officially ‘Wear OS 6.1.’ (There are no updates to the original model, which will remain on Wear OS 5.1.)”

(Leave it to Google to lean into the nerdy and inscrutable version numbering.)

According to Li, the Pixel Watch 4 gains two new gestures, enabled by default: Double Pinch and Wrist Turn. The ability to answer and end calls with Double Pinch is coming “soon,” Google says. For now, Double Pinch has robust capabilities, including “[scrolling] through alerts, instantly send the first Smart Reply, manage your timer and stopwatch, snooze an alarm, play/pause music, or even snap a photo.”

From an accessibility standpoint, it’s reasonable to presume these new gestures will make the Pixel Watch more accessible to users with disabilities. The obvious analogue is, of course, the Apple Watch Series 11. In watchOS, users are able to use gestures like Double Tap and Wrist Flick to do essentially the same things on Apple Watch that Google touts the Pixel Watch now can do. The win for accessibility is simple: for users with certain motor disabilities, being able to control their watch with a one-handed gesture—whether on Apple Watch or Pixel Watch—can make specific actions more accessible. For example, someone needn’t strain their eyes (or their finger) to find and tap the bespoke Answer/End buttons on the screen to accept or end calls. A quick tap or pinch does the trick, which increases efficiency in addition to improving accessibility.

Today’s Pixel Watch software update news comes only a few days after Google announced accessibility enhancements to Android as part of its December Pixel Drop.

Steven Aquino

Onn 4K Pro Gets Gemini Support in Update

Luke Bouma at Cord Cutters News reported last week that Walmart’s Google TV-powered Onn 4K Pro streaming box recently received a software update adding support for Gemini. The news fulfills a promise by Google that Gemini would roll out to more devices by the end of 2025. Google’s own Google TV Streamer got Gemini last month.

“The core of the upgrade centers on an evolved version of Google’s Gemini AI, which has been fine-tuned for more intuitive voice interactions and contextual understanding. Users will notice immediate improvements in voice search, where the AI now processes natural language queries with greater accuracy and speed,” Bouma wrote. “For instance, a simple request like ‘find me comedies from the ‘90s with strong female leads’ yields personalized recommendations drawn from vast libraries across Netflix, Hulu, and YouTube, factoring in viewing history and even real-time mood detection via on-device microphones. This represents a significant leap from the previous Google Assistant integration, which often required more precise phrasing to avoid misfires.”

TCL’s mini-LED QM9K TV was amongst the first devices to get Gemini on Google TV.

Back in early August, I posted a brief review of the aforementioned Onn 4K Pro. Bouma is correct in his assertion that the addition of Gemini further buoys the value proposition of the $50 box; indeed, I wrote over the summer there’s a lot to like about Google TV’s content-centric design, especially its YouTube TV integration. Apple could learn a lot from its peer in adapting ideas to improve tvOS and the corresponding Apple TV 4K. Nonetheless, I also wrote tvOS is infinitely more performant than Google TV on the Onn 4K Pro, as the A15 chip in the “current” model smokes whatever off-the-shelf processor runs the Onn product. That, and the Apple ecosystem amenities, are what ultimately keep me from switching my home theater allegiances.

I dusted off my Onn 4K Pro over the weekend to install the Gemini update—and an accompanying update to the remote!—but was disappointed by my inability to summon Gemini anywhere; all I could use was the stock Google Assistant. At any rate, my brief time revisiting the Onn box reminded me how technically inferior it is compared to my Apple TV. Say what you will about the apples-to-oranges comparison between a $50 box and a $130 box (and I still do chuckle at the Apple TV arguably being laughably over-engineered for its raison d'être), but it’s that performance prowess that, in the end, makes the Apple TV the crème de la crème of streaming devices. Walmart and Apple are decidedly not the same, as the kids say now.

I disagree with Bouma’s contention in his story that the now-with-Gemini Onn 4K Pro makes Walmart a disruptor when it comes to technological innovation—if anything, the retailer is being opportunistic by leaning into the openness of Google TV/Android—but that nonetheless doesn’t take away from the fact that the Onn 4K Pro is a damn nice product for the price. If tvOS were to go away tomorrow, I’d switch to the Onn box without a second thought over Roku or Fire TV. As it stands, the Onn 4K Pro is a nice, good-enough option—made even better with Gemini’s capabilities—for those who may want more than their TV’s built-in operating system yet can’t afford the premium price of the Apple TV 4K.

Steven Aquino

Microsoft Shares ‘year in recap’ for accessibility

Last week, Microsoft marked the International Day of Persons with Disabilities by publishing a blog post in which the company detailed its “year in recap” for Windows accessibility. The piece was written by Akheel Firoz, a product manager at Microsoft.

“The Windows Accessibility team adheres to the disability community’s guiding principle, ‘nothing about us without us,’” Firoz wrote in the introduction to the blog post. “In the spirit of putting people at the center of the design process guided by Microsoft Inclusive Design, working with and getting insights from our advisory boards for the blind, mobility and hard of hearing communities is critical to creating meaningful and effective features that empower every single user.”

Firoz gives Windows’ Fluid Dictation feature top billing in the post, writing it’s “a feature designed to make voice-based text authoring seamless and intuitive for everyone, intelligently corrects grammar, punctuation and spelling in real time as you speak [and] this means your spoken words are instantly transformed into polished, accurate text, reducing the need for tedious manual corrections or complex voice commands.” He goes on to say users are able to leverage Copilot (on supported machines) to ensure custom vocabulary is recognized—all without the need for a network connection. At the heart of the enhancements made to Fluid Dictation is, as Firoz wrote, Microsoft’s desire to enable users to “focus on your ideas, and not the mechanics of text entry by minimizing errors and streamlining corrections when typing with your voice.”

Elsewhere, Firoz details improvements to Voice Access, “more natural and expressive voices” for Magnifier and Narrator, as well as efficient document creation with Narrator.

Although Microsoft, led by chief accessibility officer Jenny Lay-Flurrie, is institutionally committed to advancing accessibility for the disability community, it’s nonetheless worth pointing out the company’s blog post came out just one day before Tom Warren at The Verge reported Microsoft is “quietly walking back its diversity efforts.” Square those how you will, but I personally found the timing interesting if probably coincidental. As someone who has interviewed Lay-Flurrie several times, my strong suspicion is she’d riot were Microsoft to walk back the accessibility efforts it has made.

Steven Aquino

Apple Execs Kate Adams, Lisa Jackson to Depart

The times, they keep a-changin’ for Apple.

Following the news earlier this week that John Giannandrea and Alan Dye would be moving on, Apple on Thursday announced two more members of its leadership group would be leaving in the not-too-distant future. Kate Adams, Apple’s top lawyer, and Lisa Jackson, who leads the company’s environmental and social initiative programs, will both be retiring in 2026. Adams will be replaced by Jennifer Newstead, who previously served as Meta’s chief legal officer and joins Apple next month.

Curb Cuts typically isn’t the place to read hot executive turnover news and analysis, but this week’s moves by Apple warrant exceptions for accessibility’s sake. The exception certainly applies in Jackson’s case, as her purview of social initiatives obviously includes accessibility. In journalistic terms, covering accessibility as I have for close to 13 years is decidedly unglamorous and non-conducive to scoops or “sources said” reporting—although I’ve had my moments in my career. That said, I can dutifully report that my understanding, gleaned from various sources over the course of Jackson’s tenure in Cupertino, is that she has long been an ardent supporter of Apple’s accessibility efforts, both in engineering and in inclusivity.

Moreover, I’ve interacted with Jackson on more than one occasion, before and after media events, to exchange pleasantries and the like. In those conversations, Jackson has herself been emphatic about empowering the disability community and about what technical marvels so many of the accessibility features truly are. While Sarah Herrlinger is Apple’s public “face” when it comes to accessibility (akin to Craig Federighi with software writ large and, externally, to Jenny Lay-Flurrie at Microsoft), Jackson, from everything I’ve been told, is very much an internal champion of the cause as the proverbial sausage is being made.

Apple can be rightly criticized for lots of things—including, yes, in the accessibility space (see: Liquid Glass). But Apple’s work in accessibility is the furthest thing from performative or an empty bromide. Top to bottom, Apple truly does care about this shit.

“Apple is a remarkable company and it has been a true honor to lead such important work here,” Jackson said in a statement for Apple’s press release. “I have been lucky to work with leaders who understand that reducing our environmental impact is not just good for the environment, but good for business, and that we can do well by doing good. And I am incredibly grateful to the teams I’ve had the privilege to lead at Apple, for the innovations they’ve helped create and inspire, and for the advocacy they’ve led on behalf of our users with governments around the world. I have every confidence that Apple will continue to have a profoundly positive impact on the planet and its people.”

COO Sabih Khan will oversee Jackson’s charges following her departure, Apple said.
