A Quick Housekeeping Note
As the headline says, I want to share a brief note regarding Curb Cuts.
It’s been about a year since Forbes let me go from its contributor network and I turned to publishing this blog full-time. I created social media accounts for Curb Cuts with the intention of automatically posting links to stories as soon as I run them. It occurred to me only recently that just one of those accounts, the one on Bluesky, is actually doing so. Thanks to life happenings and other busyness, I never got around to connecting my bots so I don’t have to manually share my hard work all over the internet. But this morning, I hunkered down and resolved to do the work. The site’s bots are powered by the web-based automation service IFTTT (If This Then That), which I hadn’t used in forever.
Now, you can follow Curb Cuts not only on the aforementioned Bluesky, but also on X/Twitter and Mastodon, and each account should automatically post links to stories as they’re published on the site. It’s all done via RSS, of course—the URL for which you also can plug into your favorite feed reader if RSS makes your nerdy world go ‘round.
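For fellow automation nerds, here’s a rough sketch of what a service like IFTTT does behind the scenes: poll the site’s RSS feed, then post anything new. To be clear, this is purely illustrative and not my actual setup; the feed URL and Bluesky credentials are placeholders, and I’m assuming the rss-parser and @atproto/api npm packages.

```javascript
// Illustrative sketch only; IFTTT handles this polling for me.
// The feed URL, handle, and app password below are placeholders.
// Requires: npm install rss-parser @atproto/api
const Parser = require('rss-parser');
const { BskyAgent } = require('@atproto/api');

async function postNewStories() {
  // Fetch and parse the blog's RSS feed.
  const feed = await new Parser().parseURL('https://example.com/blog?format=rss');

  // Log in to Bluesky with an app password (placeholder credentials).
  const agent = new BskyAgent({ service: 'https://bsky.social' });
  await agent.login({
    identifier: 'example.bsky.social',
    password: 'app-password-here',
  });

  // Share the newest item; a real bot would remember which links it
  // already posted so it never shares duplicates.
  const latest = feed.items[0];
  await agent.post({ text: `${latest.title}\n\n${latest.link}` });
}

postNewStories().catch(console.error);
```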
I’m ecstatic to finally be addressing this, because the move theoretically should put my reporting in front of many more eyeballs.
If you have comments, questions, or notice a bug, don’t hesitate to get in touch.
Gemini on Google TV to Gain Ability to Change TV Settings, More by Voice Command
Manisha Priyadarshini reported for Digital Trends last week Google will be giving Gemini on Google TV a so-called “Deep Dives” feature which will “explain complex topics in a more accessible way” without interrupting what users are watching in the moment. Additionally, Gemini will be able to search through users’ Google Photos libraries, as well as generate content using Google’s popular Nano Banana tool.
“Google has announced a set of new Gemini features for Google TV at CES 2026, focused on making on-screen responses more useful and easier to understand from the couch. Google first introduced Gemini on Google TV last year as it moves toward replacing Google Assistant with Gemini on its devices,” Priyadarshini wrote as this year’s CES began on January 5. “Instead of short, text-heavy answers, Gemini on Google TV will lean into richer, more visual responses, including high-resolution images, relevant video context, and live sports information when it makes sense, helping users get clearer answers at a glance without pulling out their phone.”
Most notable for accessibility is that Gemini also is gaining the ability to alter a television’s settings, hands-free. A person need only give the AI a comment like “the screen is too dim” or “the volume is too low” and Gemini will spring into action by adjusting those settings accordingly. This conversational approach, Priyadarshini said, is intended to “make everyday fixes quicker and less frustrating.” It strikes me as a huge win for accessibility as well, insofar as people with cognitive and/or visual conditions (or some combination thereof) plausibly may have a difficult time sifting through Google TV’s Settings tree to, for example, increase the brightness of their TV. What’s more, they may find it overwhelming to choose the “correct” picture mode to suit their visual needs and tolerances. To wit, a Professional or Filmmaker Mode may technically provide a more accurate viewing experience, but brightness is dramatically lower as compensation. By contrast, asking Gemini to tweak brightness or whatnot not only reduces cognitive/motor/visual friction, it showcases AI’s profound potential to be a bona fide assistive technology in a similar vein to how Siri makes HomeKit accessible.
More broadly, Priyadarshini’s story leads me to believe Google is pushing Google TV to function at a similar level as Alexa on Amazon’s Fire TV Cube. The comparisons are strong: Alexa on the Cube can do similar tasks, including changing channels on services like YouTube TV and even changing HDMI inputs on one’s television—all of it hands-free. Again, this is the kinda stuff that makes AI assistants like Gemini shine because they can make life more accessible for people with disabilities. If you’re someone who’s a quadriplegic and thus has no use of their hands, voice control is the way you control your devices, TVs included. Relying on Gemini (or Alexa, for that matter) to change settings and the like is not only practical, it also instills heightened feelings of agency and autonomy in the disabled person because they needn’t rely on someone else to handle these ostensibly menial jobs for them. Because of this, the Fire TV Cube’s ability to control one’s entire home theater setup is especially impressive and, pointedly, a feature Google (and Apple!) ought to adopt on Google TV and tvOS.
As I always say, convenience and accessibility might be the closest cousins—most able-bodied people oftentimes conflate the two—but they are not one and the same.
The Accessibility Story of Apple Creator Studio
This week, Apple announced a software bundle called Apple Creator Studio. The subscription-based service, priced at $12.99 per month or $129 per year, is touted by Apple as “an inspiring collection of the most powerful creative apps.” Apple Creator Studio includes Final Cut Pro, Logic Pro, Pixelmator Pro, Motion, Compressor, and MainStage, alongside the iWork trio of Pages, Keynote, and Numbers. The latter trio is slated to receive “new AI features and premium content,” according to Apple.
Apple Creator Studio launches Wednesday, January 28 on the App Store.
“The apps included with Apple Creator Studio for video editing, music making, creative imaging, and visual productivity give modern creators the features and capabilities they need to experience the joy of editing and tailoring their content while realizing their artistic vision,” Apple wrote in its press release published on Tuesday. “Exciting new intelligent features and premium content build on familiar experiences of Final Cut Pro, Logic Pro, Pixelmator Pro, Keynote, Pages, Numbers, and later Freeform to make Apple Creator Studio an exciting subscription suite to empower creators of all disciplines while protecting their privacy.”
I’m neither a video editor nor a music/podcast producer, and I’m years removed from using Keynote for giving presentations. I’m decidedly not amongst the target demographic for Apple Creator Studio. For a more generalized take on this week’s news, I recommend reading Jason Snell’s story on what makes sense—and what doesn’t. From my purview, however, it’s notable from an accessibility perspective that Apple has embraced the subscription bundle once more. To wit, it strikes me that paying $13/month for professional apps like Final Cut and Logic Pro—the former costs $300 while the latter is $200 individually—is eminently more accessible than paying in full upfront. If you’re a content creator with disabilities, living on a shoestring budget, Apple Creator Studio could be a revelation because suddenly Final Cut Pro, for example, is attainable to you as a budding YouTuber. Even if you only use one or two apps in the present, Apple Creator Studio’s value proposition remains high because you essentially get the others “for free” should you wish to explore them at some point in the future. As I’ve said numerous times in the past, the vast majority of those in the disability community must pinch their pennies; that people now are able to pay month-to-month for what well may be mission-critical software like Final Cut makes the subscription model a de-facto accessibility feature. Apple hardware is premium and undoubtedly expensive, but Apple Creator Studio has the potential to be a sweet chaser after swallowing such a bitter pill, particularly in the long run. I’m focusing on economics, but it’s also true a tool like Final Cut may be preferred by a disabled person because of its tight integration with, to name just one example, VoiceOver on macOS.
Apple services boss Eddy Cue alluded to Apple Creator Studio’s accessibility in a statement included in the company’s announcement (emphasis mine).
“Apple Creator Studio is a great value that enables creators of all types to pursue their craft and grow their skills by providing easy access to the most powerful and intuitive tools for video editing, music making, creative imaging, and visual productivity—all leveled up with advanced intelligent tools to augment and accelerate workflows,” he said. “There’s never been a more flexible and accessible way to get started with such a powerful collection of creative apps for professionals, emerging artists, entrepreneurs, students, and educators to do their best work and explore their creative interests from start to finish.”
There are legions of disabled people who are creating every single day, so accessibility’s strong link to Apple Creator Studio should thus be unsurprising.
Early Impressions of the Pro Display XDR
Late last week, my partner surprised me with a belated and incredibly generous Christmas present: she brought home my white whale, Apple’s Pro Display XDR. It was quite the adventure getting the display (and the Pro Stand) up the three flights of (admittedly short) stairs—one outside, two inside—to our living space and into my “office” area. But we did it, and after leaving the unboxing until the next day out of utter tiredness, I excitedly got to work setting up the Pro Display and accompanying Mac.
I’m writing this piece with the intent of sharing my early impressions of the monitor, but I believe it’s important to share some personal context about why I so badly wanted the Pro Display XDR in the first place. However exorbitantly expensive and ostensibly overkill for my modest computational needs, the reality is the Pro Display XDR possesses the traits I need for a greater experience, accessibility-wise. Its most obvious attribute is, of course, the size; the Pro Display XDR is a 32” 6K screen with mini-LED backlighting. As someone with extremely low vision, that means not only is the screen literally big, colors are brighter and text is sharper too. What’s more, I can fit more windows on screen at once despite preferring to use Stage Manager on macOS. It’s early days yet, but already I can ecstatically report the Pro Display XDR is paying huge dividends in my daily accessibility and productivity—all thanks to its brawn.
It’s fair to ask: what of the Studio Display? It’s no slouch, to be sure, but the truth is (a) it’s smaller than the Pro Display; and (b) it’s considerably less bright (600 nits peak brightness versus 1600). It’s commensurately less expensive as well, but the salient point is, as a practical matter, the Pro Display XDR is markedly more accessible. In the times I’ve noodled around with both monitors in Apple Stores, my eyes have greatly preferred the Pro Display XDR for all the reasons I just mentioned.
There have been rumblings Apple has refreshed versions of the Studio Display and Pro Display XDR in the proverbial pipeline, as the Studio Display was introduced in 2022 and the Pro Display XDR in 2019. However “old” the Pro Display XDR is in technical terms, I honestly cannot think of ways to improve it beyond maybe upgrading the screen technology to OLED. For my purposes as a journalist working in Safari and my text editor of choice, it does what I need it to do in spades. I even love putting my hand around the back once in a while to feel the “cheese grater” ventilation holes; I couldn’t stop smiling looking at how cool it looks back there as I was plugging everything in. It reminds me of when Steve Jobs boasted the back of the original iMac “looked better than the other guys” when introducing the computer in 1998. The computer itself will inevitably change once or twice (more on that in a minute), but I imagine the Pro Display XDR easily taking me into the 2030s given my usage habits.
Now, about my “new” Mac. I use quotation marks there because the machine technically isn’t new at all; it’s a 2023 14” MacBook Pro powered by the M2 Pro chip along with 32GB of RAM and a 2TB SSD, which heretofore sat mostly unused in my office. As you can probably surmise, I leave the laptop permanently attached to my monitor and run it in clamshell mode (lid closed). Like the Pro Display itself, the MacBook is computationally excessive for my spartan needs, but I love the experience of using an Apple silicon Mac full-time. As I suspected, features like iPhone Mirroring have been delightful—and accessible—to use, and I love having access to ChatGPT on the desktop, which is Apple silicon-only. More broadly, another thing I appreciate about this setup is its modularity. To wit, whenever the time comes to upgrade my machine—or check out a review unit, for that matter—it will be far more accessible (and expedient) for me to simply swap one component for another. This stands in contrast to my old Intel iMac, an all-in-one with accessibility merit in its own right, for which I had to physically remove the entire system from my desk in order to set up these new pieces of kit. It’s technically doable for me, but not exactly easy. The Pro Display and its Pro Stand are damn heavy and relatively unwieldy if you’re someone who, like me, has limited strength and range of motion. What I’m saying is, it’s comforting to know next time I can leave the monitor and just switch out the laptop for whatever replaces it.
Finally, a cursory note on my aforementioned, beloved Retina 4K iMac. This July will mark seven years since I got it, and it has served me incredibly well over those years. While it may seem weird to wax romantic about an inanimate object, that iMac—which currently sits on the floor next to my desk, waiting to begin its as-yet-undetermined new journey—saw me through so many highs and lows of my journalistic career. I wrote my 2018 interview with Tim Cook on that machine, arguably the zenith of my interviewing career. I’d love to know how many millions of words I cranked out on that thing from 2019 until 2025. It seems apropos that I migrated the Magic Keyboard and Magic Trackpad to my new setup, because both accessories work as perfectly today as they did when I first got them years ago alongside the iMac. (I’m planning to eventually upgrade to the Touch ID-equipped Magic Keyboard, but this is serviceable for now.) What bugs me about my decommissioned iMac is the fact its perfectly good 4K display is going to waste. Although the computer itself is usable, if frozen in technological amber given its Intel chipset and macOS Sequoia software, the screen alone remains exquisite. It’s unfortunate Apple no longer supports Target Display Mode in macOS; if the company did, my old iMac could have a new lease on life, effectively functioning far into the future as a really nice external monitor for a MacBook or Mac mini, like my Pro Display XDR does now. As it stands today, though, that’s not possible… so my iMac sits dormant, relegated (for now) to the annals of my personal tech history.
Anyway, I don’t mean to brag in saying I now own a $6,000 computer monitor. I’m genuinely humbled by my partner’s generosity and am grateful for the privilege of using the Pro Display XDR. The reason I’m so enthusiastic is precisely because it absolutely makes doing my job (and other things) on my computer a richer, more accessible experience—and when you’re in my shoes, you can’t put a price tag on that.
Apple Is Making Accessory Pairing More Accessible With ‘AirPods-Like’ Interface in iOS 26.3
Juli Clover reports for MacRumors this week one of the hallmark features for European Union (EU) users of the still-in-beta iOS 26.3 update is what she describes as an “AirPods-like” pairing user interface for third-party earbuds and headphones.
“The European Commission today praised the interoperability changes that Apple is introducing in iOS 26.3, once again crediting the Digital Markets Act (DMA) with bringing ‘new opportunities’ to European users and developers,” Clover wrote. “The Digital Markets Act requires Apple to provide third-party accessories with the same capabilities and access to device features that Apple’s own products get. In iOS 26.3, EU wearable device makers can now test proximity pairing and improved notifications.”
I could be wrong, but it sounds like Apple’s using its AccessorySetupKit API for this.
The politics of the DMA notwithstanding, it strikes me as a very good thing, accessibility-wise, that people in the EU soon will have access to the one-tap pairing process of AirPods (and Beats). As I’ve said numerous times in the past, that one-tap, almost magical pairing paradigm is more than sheerly convenient; it’s a de-facto accessibility feature. In a vacuum, the “long way” of pairing third-party devices with your iPhone—finding the Bluetooth section of Settings, then finding and tapping on the device—is neither hard nor particularly nerdy. From a disability perspective, however, it can be quite the rigamarole: there’s a lot of tapping and scanning, not to mention cognitive load, involved with launching the Settings app, finding the Bluetooth area, and so on. For people with certain cognitive/motor/visual conditions—or some combination thereof—what’s ostensibly an easy process can be downright daunting… and inaccessible. By contrast, the AirPods method consolidates those steps into a single task; what’s more, what’s great about AirPods in particular is that Apple leverages iCloud to propagate pairing across a user’s constellation of Apple products. It’s an implementation detail that also manifests as a de-facto accessibility feature, considering the manual pairing process iOS 26.3 is reportedly addressing. In the end, this week’s news should make disabled people living in the European Union really happy because product pairing is about to become a way more accessible experience.
These benefits aren’t exclusive to Apple. Google’s “Fast Pair” does it on Android too.
Curb Cuts Has a Dark Mode Now
The headline says it all. Curb Cuts now has a dark mode.
After solving my “IPHONE” and “IOS” problems last week, I resolved to get even more ambitious in improving the website by adding a dark mode for nighttime viewing. As someone whose devices automatically flip to dark mode at sundown, I’ve always been bugged by how eye-searingly white my default “light” theme is when I check the site at, say, 9:00 at night. Other blogs run by friends, like Stephen Hackett’s 512 Pixels and Federico Viticci’s MacStories, have discrete dark modes and they look very nice, so why shouldn’t Curb Cuts have one too? So yesterday, I decided to spend part of my evening building my own dark mode—all done, of course, with lots of heavy lifting from ChatGPT.
The cool part about Curb Cuts’ new dark mode is it’s automatic; it triggers based on a user’s system appearance setting. If your iPhone or iPad or MacBook is in light mode during the day, you’ll get the light theme. At night, the proverbial light switch gets flipped off and you’ll get the dark theme. There remain a few minor tweaks to be made, but I think the new look is awesome (and accessible!) and I’m damn proud of being 95% of the way there.
As a practical matter, what I wrote last week is apt here too. I’m decidedly not a web developer, so the lines of CSS code I copy-and-pasted into the Squarespace CMS are instructions I don’t have the skill to write on my own. That’s where I again leaned heavily on ChatGPT, telling the chatbot what I envisioned for dark mode and having it automatically spit out the code I needed to make my dreams a reality. It took some trial-and-error, but as I said, I’m super happy with the end result despite the need for a bit more polish. I’ll say once more with feeling that code generation is a prime use case for generative AI tools like ChatGPT (or Gemini or whatnot) and, more pertinently, showcases how chatbots can be assistive technologies by making a feat of relatively advanced web development eminently more accessible to a person with disabilities.
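For those curious about the mechanics, the heart of an automatic dark mode like this is CSS’s prefers-color-scheme media query, which keys off a visitor’s system appearance. Below is a minimal sketch of the general technique; the colors and selectors are placeholders, not the actual code ChatGPT generated for my template.

```css
/* Default (light) palette */
body {
  background-color: #ffffff;
  color: #1a1a1a;
}

/* Applied automatically when the visitor's device is set to dark
   mode; no JavaScript or manual toggle required. */
@media (prefers-color-scheme: dark) {
  body {
    background-color: #121212;
    color: #e6e6e6;
  }

  a {
    color: #8ab4f8; /* softer link color for dark backgrounds */
  }
}
```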
Anyway, I hope you enjoy dark mode. Get in touch with any comments or questions.
Southwest Joins Delta, United Airlines in Supporting iOS 26 Boarding Pass Feature
Ryan Christoffel reports today for 9to5Mac Southwest Airlines has added support for iOS 26’s boarding pass feature in Apple Wallet. Southwest joins fellow industry stalwarts Delta and United in supporting the new functionality for jet-setters.
“Saving boarding passes to Apple Wallet makes it quick and convenient to access those passes right when you need them,” Christoffel wrote on Monday. “And in iOS 26, Apple upgraded the experience with three new features… Live Activities for boarding passes can be shared with a single tap, making it easy for friends or family members to track your flight. And by integrating airport maps and luggage tracking right into the boarding passes, Apple has put more important travel info in one place.”
Besides Live Activities, the other two of the three new features in Apple Wallet he mentions are access to airport maps and luggage tracking through the Find My app.
I decided to cover this news partly because, upon reflecting on 2025, it occurred to me I flew absolutely nowhere this year after flying 17 times last year. (I was expecting to fly to places like Detroit and New York City for work-related events, but circumstances at home caused me to cancel those trips.) What’s more, Christoffel’s story is yet one more reminder of not only the utility, but the accessibility, of Apple Wallet. I’ve extolled the virtues of Apple Pay in this regard plenty in the past, but these air travel-centric features can play significant roles in making flying more accessible too. To wit, having one’s digital boarding pass available from the Lock Screen is far more accessible than digging for a printed version. (Not to mention passports and other identification.) Likewise, airport maps could be useful in, say, helping people who are Blind and low vision quickly and reliably find their gate after passing through the security checkpoint.
As Christoffel notes, the onus falls on airlines to implement support for iOS 26’s boarding pass feature. Beyond Southwest and the others, American Airlines, JetBlue, and Air Canada all have pledged to add support in the future, though none has disclosed when.
A pro tip from me: While Wallet’s flying features are appreciated, I personally adore using Flighty when I’m flying somewhere. It’s truly one of the best apps I’ve ever used.
NYC Mayor-Elect Zohran Mamdani Pledges Support for Disabled People in Inclusive Hiring Push
Earlier this month, Christopher Alvarez reported for Able News New York City (NYC) Mayor-elect Zohran Mamdani has pledged to make disabled New Yorkers part of his administration’s broader inclusive hiring push. Mamdani, an avowed democratic socialist, won the mayoral election in November in a landmark win for progressives.
Alvarez’s interview with Mamdani is the first of an exclusive, multi-part series.
“For disabled New Yorkers, employment barriers start at the first point of entry—the application process,” Alvarez wrote. “Of the almost 986,000 New Yorkers with disabilities, nearly 70% are people of color. Persistent barriers in hiring and wage equity remain key concerns—issues that Mamdani has said he intends to address.”
Mamdani has launched an employment portal that he “encourages” disabled job-seekers to take advantage of. The website has received more than 70,000 applications so far.
Notably, Alvarez mentions in his story that a 2024 report published by the NYC Comptroller’s office found the disability employment rate in the city is “half that” of New Yorkers without disabilities. I interviewed the city’s Comptroller Brad Lander in July 2024 about that very report, as well as about disability justice writ large. Lander also threw his hat into the proverbial ring that was the NYC mayoral race, finishing third in the election behind Mamdani, of course, and former New York State governor Andrew Cuomo.
Senator Kirsten Gillibrand Calls on Veterans Affairs to Provide More Accessible Technologies
In a press release published on Friday, New York senator Kirsten Gillibrand (D) announced what’s described as a “bipartisan push” for the Department of Veterans Affairs (VA) to make technology more accessible to veterans with disabilities. Gillibrand, a ranking member of the Senate Special Committee on Aging and member of the Senate Armed Services Committee, is working with U.S. Representative David Valadao (R-CA) in pushing the VA towards “swift action” in greater accessibility for veterans.
“Accessible technology is critical to make sure that veterans with disabilities can get the information and services they need and to make sure that VA employees with disabilities can do their jobs. Roughly one-quarter of veterans have a service-connected disability, and post–9/11 veterans, who [the] VA will serve for decades to come, have a higher rate of service-connected disabilities. Additionally, Section 508 of the Rehabilitation Act of 1973 requires federal technology to be accessible for and usable by people with disabilities,” Senator Gillibrand’s office wrote in its announcement. “Despite this, congressional and independent oversight efforts have consistently found that VA technology does not meet this requirement. A recent VA Office of Inspector General (OIG) report found that, of the 30 critical information and communication technology systems analyzed, 26 were not accessible for people with disabilities. In its report, VA OIG issued four recommendations to improve VA accessibility and encourage the procurement of accessible technology.”
Senator Gillibrand has written a letter to VA leaders wherein she encourages the agency to implement the aforementioned recommendations “as fast as possible” while also asking for details on exactly how the VA plans to approach said implementation.
“Ensuring our veterans have the support, information, and services they need is of the utmost importance—and [the] VA cannot do this unless its technology is accessible to veterans and VA employees with disabilities,” Sen. Gillibrand said in a statement. “VA must train its employees to procure accessible technology and take steps to ensure that its technology remains accessible. I will continue to provide rigorous oversight on this issue to make sure that our veterans get the support that they deserve.”
I’ve covered the VA on a couple occasions in the recent past, most recently in April 2024 when I interviewed VA executive Chet Frith about assistive technology and his role leading the agency’s 508 Compliance Office. Prior to my conversation with Frith, I sat down in August 2023 with Dewaine Beard, the VA’s principal deputy assistant secretary in the Office of Information and Technology, to discuss his job and what’s in his purview. In addition, I sat down virtually with Illinois senator Tammy Duckworth, herself a disabled vet, to talk about, amongst other topics, the importance of accessibility and assistive tech.
Roomba Manufacturer iRobot Declares Bankruptcy
Earlier this month, John Keilman reported for The Wall Street Journal Roomba maker iRobot filed for bankruptcy. Despite the bad news, however, the company emphasizes “its devices will continue to function normally while the company restructures.”
“Massachusetts-based iRobot has struggled financially for years, beset by foreign competition that made cheaper and, in the opinion of some buyers, technologically superior autonomous vacuums,” Keilman wrote. “When a proposed sale to Amazon.com fell through in 2024 because of regulatory concerns, the company’s share price plummeted.”
iRobot was founded in 1990.
Although I’ve never used a Roomba—nor any other robot vacuum—it’s nonetheless easy for me to see how the things could be useful in an accessibility context. To wit, household chores like cleaning aren’t easy for many people with disabilities, myself included, and vacuuming could be untenable for a variety of reasons. Maybe you can’t hold and push the vacuum. Maybe you can’t see dirty spots. Maybe you can’t empty the bag/bin. Whatever the case, to invest in something like a Roomba is neither indulgent nor luxurious; on the contrary, it’s downright practical. The ability to use one’s phone to control it, not to mention have it return to its dock to relieve itself and recharge, can make vacuuming one’s floors an eminently more accessible task. The tech media at large has a penchant for ascribing frivolity and luxury to robotics, and while there is a kernel of truth to that argument, what the able-bodied masses (predictably) gloss over are the people who might truly benefit from, say, a robot vacuum for accessibility’s sake. Again, a Roomba isn’t exactly an inexpensive device, depending on the model, but the investment can be worth it to someone who is unable to manually vacuum yet wishes to retain some agency and autonomy in the process. That in itself is absolutely a goal worth striving for in this case, clean floors be damned.
Gemini Makes Web Development More Accessible
A bit of a meta, inside baseball post here, so bear with my nerdiness.
One part of Curb Cuts’ design that has stuck in my craw from the beginning is how I could never get headlines to properly stylize brand names like “iPhone,” “iPad,” “iOS,” and so on. This website doesn’t have a codified style guide, but I know, as one prime example, I prefer using title case in headlines, whereby every word begins with a capital letter. The problem with that approach, however, rears its ugly head when using Apple product names. My blog’s template likes to capitalize every word—as it should 95% of the time—even the lowercase “i” in iPhone and its brethren. It’s been driving me nuts, but I’ve let it be because, well, at least I can control stylization in my body copy, right? That is, until today when I got fed up and decided to be more intrepid in fixing the issue and to assuage my slightly obsessive-compulsive, design-centric sensibilities.
Enter Google Gemini. It came to the rescue and proved my salvation.
I explained the problem to Gemini and what I wanted to accomplish. After a good bit of back-and-forth and trial-and-error, Gemini helped me identify the core issue: I needed a handful of lines of CSS and JavaScript to properly stylize the aforementioned product names. The technical part is cool, but the big win—notably from an accessibility perspective—is Gemini itself. I’ve written about this in the past, but it bears repeating here: having the chatbot do all the grunt work such that all I do is hit ⌘-C and ⌘-V (copy/paste on the Mac) into the “Code Injection” section of this site’s backend, press Save, and watch the magic happen is so much more accessible than manually running umpteen Google searches to find the technological Tylenol I needed to remedy my website’s headache. What’s more, I know only basic CSS/JS; the code Gemini generated for me in 30 seconds’ time is far beyond my aptitude level. But that’s the whole point—my experience this afternoon making these tweaks to Curb Cuts’ layout is a perfect illustration of the power of generative AI to be an assistive technology. To do the grunt work myself is possible, but it nonetheless comes with the costs of eye strain and fatigue, hand fatigue from typing, and headaches from stress and tiredness. Those aftereffects aren’t trivial—and they’re exacerbated for others coping with different and/or more severe disabilities than mine.
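To give a flavor of the technique, here is a minimal sketch of the general approach, emphatically not Gemini’s actual output: the headline selector and brand list below are placeholders I made up. The idea is that CSS’s text-transform: capitalize also uppercases the “i” in iPhone, so a small script wraps each brand name in a span that opts out of the transform.

```javascript
// Hypothetical sketch; the selector and brand list are placeholders,
// not my site's real class names or Gemini's generated code.
// Assumes headlines are styled with `text-transform: capitalize`,
// which would otherwise render "iPhone" as "IPhone".
const BRANDS = ['iPhone', 'iPad', 'iPadOS', 'iOS', 'iMac', 'tvOS', 'macOS'];

document.querySelectorAll('h1.entry-title').forEach((heading) => {
  let html = heading.innerHTML;
  for (const brand of BRANDS) {
    // Match the brand as a whole word, then exempt it from the CSS
    // transform so its casing survives exactly as typed.
    const pattern = new RegExp(`\\b${brand}\\b`, 'g');
    html = html.replace(
      pattern,
      `<span style="text-transform: none;">${brand}</span>`
    );
  }
  heading.innerHTML = html;
});
```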
Chatbots can be far more than mere convenient conduits for trivial pursuits.
Gemini made web development more accessible—and made my site look better too.
Finally.
Instagram for TV Makes Reels More Accessible
Meta-owned Instagram this week announced Instagram for TV. The app is launching first on Amazon’s Fire TV platform (!) and is intended to enable users to watch Reels, alone or together with friends, on a much larger display than on one’s phone or tablet.
“Today we’re excited to start testing Instagram for TV, bringing reels from your favorite creators to the big screen so you can enjoy them with friends,” Instagram said in its post. “We’ve heard from our community that watching reels together is more fun, and this test is designed to learn which features make that experience work best on TV.”
Instagram says the TV launch is a “test,” adding expansion is planned for the future.
I don’t have a Fire TV device handy to try Instagram for TV, but it nonetheless strikes me as a good move. From an accessibility perspective, even the relatively big screen on, say, an iPhone Air or iPhone Pro Max is decidedly dwarfed by a 55- or 65- or 77-inch TV screen. This is precisely why FaceTime on tvOS is so smart; I haven’t used it yet because I don’t do a ton of videoconferencing, but just knowing I can do it from my massive LG C3 OLED is pretty cool. It’s more accessible to look at a person on a TV than on my comparatively tiny phone screen. Ergo, the same argument applies to Instagram for TV. I quite enjoy watching Reels—especially for food-oriented content—and can attest to the fact Reels is a super conduit towards bed rot and thus utterly losing all track of time and space. Bed-rotting whilst watching umpteen Reels is admittedly unhelpful to someone who copes with severe anxiety and depression, but I speak the truth from experience.
Instagram for Fire TV is available to download now.
The Disability Angle in ESPN’s New Stuart Scott Film
As I write this, I’m three-quarters into ESPN’s latest 30 for 30 film, which premiered last week. The nearly 90-minute documentary, titled Boo-Yah: A Portrait of Stuart Scott, chronicles Scott’s life, both personal and professional, as a Black broadcast journalist. Scott, who died of cancer at age 49 in 2015, joined ESPN in 1993 and eventually rose to become the most popular SportsCenter anchor.
ESPN described the film last month in a press release as “[tracing] Stuart’s journey from local television in North Carolina to becoming one of ESPN’s most influential voices. At a time when hip-hop and popular culture was often marginalized in mainstream media and few Black anchors held national prominence, Stuart brought both unapologetically to SportsCenter—blending sharp analysis, pop culture and swagger in a way that spoke directly to a new generation of fans.”
The network continued in its announcement: “As the film recounts, Stuart’s impact extended far beyond the newsroom. He bridged sports and culture, made SportsCenter must-watch television and became a symbol of courage through his public battle with cancer—culminating in his unforgettable ESPYS speech that reminded viewers, ‘You beat cancer by how you live, why you live, and the manner in which you live.’”
I’m covering the documentary for several reasons, not the least of which is that I learned by watching Boo-Yah that Scott had a disability. He coped with a rare visual condition called keratoconus, the effects of which were compounded by an eye injury sustained when a football hit him in the face during a New York Jets mini-camp in 2002. Upon recovering, he wore glasses and, according to the documentary, held his stat sheets super close to his face—I can relate—and struggled to read the teleprompter.
Scott was a mainstay of my sports-watching life; he indeed was my favorite SportsCenter personality. Beyond the disability angle, which I obviously am drawn towards, I feel like there are a lot of professional parallels to Scott’s tenaciousness in getting work (and thus respect) as a journalist from a marginalized community. I of course didn’t know Scott, but I definitely can empathize with his belief that he had to prove himself worthy in an industry where 99.9% of people don’t look like you. Even as I approach my own 13-year anniversary this coming May, with all that I’ve accomplished in tech media over the past decade-and-a-half, I continually feel the pressure to prove my worth over and over again—despite what friends and peers tell me about my extensively impressive résumé. Like Scott, I’m a minority in journalism—arguably the minority’s minority group—and constantly feel like, as Scott’s daughters recount at one point in the film, I must “work twice as hard to get half as much.” We’ve seen lots of success, but only after we’ve kicked down doors at every turn to procure our plaudits.
Scott made it to ESPN. Will I ever make it to ABC News or NBC News or The Gray Lady?
As a related aside, the ESPN app on tvOS is delightful—so much so, it’s in my Top Shelf.
Anyway, I highly suggest sitting down to watch Boo-Yah. It’s well worth your time.
Inside the Rochester Institute of Technology’s Latest Mission to Center the Deaf Viewpoint
Early last month, Susan Murad wrote for the Rochester Institute of Technology’s website about how researchers at RIT, as the New York-based institution is colloquially known, soon will “use eye-tracking to show how deafness impacts vocabulary knowledge and reading as well as how deaf and hard-of-hearing children, who have historically shown lower than average reading outcomes, develop into highly skilled readers.” The research project is largely made possible by way of a not-insignificant lift from a $500,000 grant provided by the venerable National Institutes of Health, or NIH.
According to Murad’s story, RIT’s research is led by Dr. Frances Cooley, an assistant professor at the National Technical Institute for the Deaf’s Department of Liberal Studies. Dr. Cooley, who leads the school’s Reading and Deafness Lab, and her team, Murad reported, are examining “how vocabulary knowledge in American Sign Language supports English reading development” [as well as] “how first-language knowledge shapes second-language reading comprehension and eye-movement control.” The team’s findings will “have important implications for theories of reading development and for educational practices that support bilingual learners,” according to Murad.
Fast-forward to mid-December and I had the opportunity to sit down virtually with Dr. Cooley to discuss the work by her and her team. She explained the root of her interest in deafness and reading comprehension traces back to an article she came across while doing graduate work that said the average Deaf person reads at a fourth grade level. Such a sobering statistic bothered Dr. Cooley, she told me, largely because “[it] said to me we’re not doing something in our educational practices to allow deaf students to thrive.” As such, the knowledge motivated her to begin looking into why reading levels amongst Deaf people are so low; she wanted to better understand Deaf people and how exactly they read, along with a deep dive into groups of Deaf readers. In particular, Dr. Cooley was keenly interested in who had early access to ASL versus those who didn’t.
“When we look at those who had early access to American Sign Language, we actually see these incredible differences that are beneficial for Deaf readers,” Dr. Cooley said. “They are actually more efficient. They read faster. They skip more words, and this doesn’t actually negatively impact their comprehension. This is particularly interesting because they’re technically second language users of English, and most second language users are going to be less efficient in their second language, but these Deaf readers are even more efficient than a typically hearing monolingual reader.”
She continued: “I really got excited about this strengths-based approach to understanding what a successful Deaf reader does, and I wanted to be able to translate that into educational practices so that all Deaf readers can thrive. I really think moving away from a focus on what people can’t do and transitioning that to what they can do is really beneficial in a bunch of different ways. Eye-tracking—I love to say your eyes are your best way to point your brain at different things—we don’t really have any other way to point our brains at things, so if we’re looking at the eye movements, we can get really fine-grained information about what people are doing when they’re actually reading. I think that’s much more interesting than having someone read a sentence or read a paragraph and answer questions about it, because that involves a whole bunch of other processes like memory, and to me, that’s less interesting to me. It’s still important, but what people are actually doing as their eyes move across a sentence can tell us so much about the underlying processes of what their brains are actually interested in [when they] successfully extract language from text.”
In a sentence, Dr. Cooley said all this highfalutin eye-tracking tech and subsequent research is meant to “establish how a Deaf child uses their first language ASL skills.”
Asked to expound on her goals, she replied thusly: “I’m looking primarily at Deaf children who had early access to sign language: either they have Deaf, signing parents or they have hearing parents who made an effort to learn sign language early. Then these kids go to bimodal, bilingual schools, so they’re really depending on their ASL skills to learn to read English. I really want to know how, from a bilingualism perspective, how that first language access and having a strong first language can benefit the ability for these children to learn a second language, which is English or any other ambient language in a community, by exploiting their first language skills. We see this in hearing populations. We see this all the time. Bilingualism is the norm in most countries around the world, bilingual or multilingualism. If we understand a Deaf child signer as a developing bilingual child, and we think about the aspects of their first language and how that can help them learn their second language more successfully, we’re getting a more appropriate and equitable snapshot of this minority population.”
When asked about the technical component involved with eye-tracking, Dr. Cooley said the device she uses is mounted atop a desk with a laptop behind it such that a child can sit normally and read what’s on screen. The tracker then shines a painless, undetectable infrared light at the subject’s eyes, which is reflected back to the computer. The reflected light carries data on where the child’s eyes are positioned while reading—all of it in real time. “Based on what we already know about how readers use information to read, we can then look at Deaf readers in this paradigm,” Dr. Cooley said.
She further noted there exists “a really big body of research” centered on eye movements and reading, adding it’s only been recently, in the last 20–30 years, that Deaf people, especially Deaf signers, have been included in these kinds of studies. The richer inclusion meant, Dr. Cooley said, researchers have been able to learn a lot more about how everybody, Deaf or not, “[uses] their eyes to extract language from text.”
As someone with low vision who, incidentally, has struggled with eye-tracking on things like Face ID and Apple Vision Pro, I asked Dr. Cooley how nimble her tracker device is. Her answer? Not very. The technology she currently uses assumes what she described as “your most typical eye differences,” emphasizing the tracker works “just fine” with aids like contact lenses and glasses. Beyond that, however, she said the team is “unfortunately” excluding people who have ocular motor conditions (like yours truly) not out of maliciousness, but out of a desire to “be certain that what these kids are doing with their eyes is reflective of what their brains are trying to do.” Dr. Cooley went on to tell me people with strabismus, colloquially known as lazy eye, are excluded because their eyes can’t always point to where their brain wants to focus. This weakness, technologically anyway, is crucial because Dr. Cooley’s tracker relies upon an algorithm to function. She hopes to improve the algorithm over time so as to accommodate more types of readers, but that, she said with humility, is beyond her ken. Nonetheless, it’s something very important to her that gets addressed as time goes on.
“If we’re not capturing the cognition of every single population of people, I don’t think we’re really capturing cognition—and that includes people with differences in their eye shapes and people with differences in how they use their vision,” Dr. Cooley said. “But at this point, it’s easier to start with the most traditional eye move [and] eye shape because it’s just easier to draw the conclusions we need. But [accommodating visual disabilities] is an important thing to think about. It’s just currently not one of my goals.”
At a more personal level, Dr. Cooley’s ties to deafness and the community are tight. She’s married to a Deaf person and has been a self-described “second language signer” for close to 16 years, telling me she likes to think of herself as being “pretty involved” with the Deaf community. Despite tooting her own horn, though, Dr. Cooley readily acknowledges her “positionality” as a hearing person in a hearing-dominated world. On the eye-tracking project, she explained there are consultants who help the researchers with not only data collection, but also with best practices when working with Deaf children so as to not be “triggering.” This is a key point, Dr. Cooley said, because a lot of Deaf people cope with what she termed “educational trauma,” so RIT’s goal is to avoid said triggers and instead be as “Deaf-friendly” as possible. Still, a significant number of people have reached out to Dr. Cooley and team to express their appreciation for going after the insights they’re trying to glean from their research.
“There’s a great need for this type of information. I think practitioners need it. There’s a lot of information out there about what is most important for a deaf child,” Dr. Cooley said. “One of the biggest arguments that can be made for an oral approach—avoiding sign language and instead making sure a Deaf child is able to speak and read lips and use hearing devices—one of the biggest arguments for that is they won’t be able to learn to read, or will be far less successful in learning to read if they can’t associate sounds with letters. I think that isn’t actually representative of what most Deaf people can do. If you look at Deaf signers, they have this incredibly rich and robust language; most Deaf people will talk about how they use their signing to help read to their children… they sign along with the book, and so their children are exposed to both print and sign. If we can take advantage of these things, I think we can not only make a Deaf child reader more successful, but also feel a little bit better about themselves and not feel like who they are and how they happen to be born is going to make them unable to do something. I think anybody should be able to do anything, and if our educational practices are not well-researched or not founded in research, we can’t know for sure they’re the best practices. It’s pretty clear, given the wide variability in reading outcomes for a lot of Deaf and hard-of-hearing people, that there’s something we don’t know, or there’s something that some people are doing better than others. We just have to test it and see what’s going on to actually be able to make a difference.”
She added: “All of the conversations I’ve had with people, they’ve all been extremely positive. I think education experts, the people who are actually teaching children in the schools, policy makers, early intervention specialists, everybody wants some type of research that can really be used to show ‘Hey, ASL is not detrimental to your Deaf child, it’s actually going to be beneficial. Here is one of the ways that it’s beneficial.’ I have a lot of people reach out to me asking for these resources and asking for papers that show American Sign Language is only beneficial for Deaf children learning to read.”
At its core, RIT’s work is ultimately about centering the Deaf point of view.
“I always say, if we actually listened to Deaf adults, a lot of this research might not be necessary,” Dr. Cooley said. “They’ve been telling us for years and years and years that ASL is so incredibly important for so many different reasons, but we need the research. Someone has to do it, and I’m so privileged I get to do it. And I love, love [doing] this work… it makes me excited! It feels like a privilege to be doing what I’m doing.”
Dr. Cooley spoke effusively about being based in Rochester and the city’s sizable Deaf presence. (In fact, this very piece is not my first rodeo with the National Technical Institute for the Deaf, having covered the Sign Speak app in September 2024.) She said it’s typical for those in cognitive science to choose the path of least resistance when it comes to recruiting people to participate in studies like hers. Naturally, the Deaf community is a smaller populace, even in Rochester, so it’s “going to take a little bit more effort” to get folks into the lab. But the payoff is worth it; Dr. Cooley told me her troops have fostered a tight relationship with the Rochester School for the Deaf, a K–12, bimodal and bilingual institution for Deaf and hard-of-hearing students. Because of proximity, both geographically and logistically, Dr. Cooley said her staff actually finds it “not too difficult” to connect with interested parents and others. And Rochester isn’t the end-all, be-all either; Dr. Cooley said her team has similar positive relationships spanning the country, from Texas to Indiana and beyond.
“Because of those relationships, we aren’t nearly as concerned with the data collection as somebody else without those relationships would be,” she said. “It’ll definitely take longer to run this type of research than it would take to run this type of study with hearing children because there are fewer concentrated pockets of these readers.”
Looking towards the future, Dr. Cooley hopes to forge “stronger partnerships” with experts across various disciplines, people who oftentimes exist in “their own little silos.” Without this cross-collaboration, there’s too much navel-gazing and not nearly enough progress in understanding the world, and the people who inhabit it, better.
“I really hope in the future, we’re able to get to a point where we can directly meet the needs of all children, not just Deaf and hard-of-hearing children—all children who have varied needs in terms of their ability to read and write,” Dr. Cooley said in looking into the proverbial crystal ball. “In the current day and age, if you can’t read and write, your ability in an academic or professional field is going to be pretty limited. I think being able to meet the needs of all of our children so they can be fully functional and fully capable adults is the goal. I really hope my research can start bringing us towards that.”
White House Claims ASL Interpreters Would ‘Intrude’ on the President’s Public Image
Meg Kinnard reported last week for The AP the White House argues that using ASL interpreters during press briefings “would severely intrude on the President’s prerogative to control the image he presents to the public.” The Trump administration made said claim in response to a lawsuit seeking to compel it to provide interpreters. Attorneys for the Justice Department added President Trump has “the prerogative to shape his Administration’s image and messaging as he sees fit.”
“Department of Justice attorneys haven’t elaborated on how doing so might hamper the portrayal President Donald Trump seeks to present to the public,” Kinnard wrote on Friday. “But overturning policies encompassing diversity, equity and inclusion have become a hallmark of his second administration, starting with his very first week back in the White House.”
Kinnard continued: “Government attorneys also argued that it provides the hard of hearing or Deaf community with other ways to access the president’s statements, like online transcripts of events, or closed captioning. The administration has also argued that it would be difficult to wrangle such services in the event that Trump spontaneously took questions from the press, rather than at a formal briefing.”
I first covered this story back in July, the editorializing from which bears repeating here. Like the State Department’s decision to go back to Times New Roman from Calibri in correspondence, the White House’s proclivity to pooh-pooh the need for sign language interpretation—a defense made that much more laughable because Gallaudet University is virtually down the street—is yet another example of the Trump administration’s extinguishing of any and all diversity and inclusion initiatives. It’s being made abundantly clear the powers-that-be, starting with Trump himself, want America to be White, wealthy, male, and able-bodied. But such rationale is par for the course—not just at 1600 Pennsylvania Avenue, but for society as a whole. The disability community, yours truly included, is always cast away to the margin’s margin, even amongst DEI supporters, because society has internalized that having disabilities is bad and a sign of a “broken” human condition. Down to brass tacks, that’s why accessibility exists: to accommodate traversing a world unbuilt for people like me. Likewise, it’s why disability inclusion is so miserably behind other areas of social justice reporting in journalism; it’s oftentimes seen as too esoteric or niche to devote meaningful resources towards. All things considered, that’s why I always say doing this work and amplifying awareness is a task of Sisyphean proportions most days. We use technology as much as anyone else. We read the news like anyone else. We’re Americans like anyone else in this country… but somehow are thought of as something less than the human beings we obviously are.
Apple Says ‘Pluribus’ Is ‘Most-Watched Ever’
Marcus Mendes reported for 9to5Mac this week Apple TV’s new hit show, Pluribus, has officially become the streaming service’s “most-watched ever.” The news comes shortly after Apple announced Pluribus became its “biggest drama launch ever.”
“Last month, Apple said that Pluribus had overtaken Severance Season 2 as Apple TV’s most successful drama series debut ever, a landmark that wasn’t completely surprising, given the overall anticipation and expectation over a new Vince Gilligan (Breaking Bad, Better Call Saul) project,” Mendes wrote on Friday. “Now, on the same day that F1: The Movie debuted at the top of Apple TV’s movie rankings, the company confirmed that Pluribus has reached another, even more impressive milestone: it is the most watched show in the service’s history. Busy day.”
As Mendes notes, Apple keeps its viewership cards—and its subscriber numbers—close to the proverbial chest, so it’s difficult to quantify exactly what “most-watched ever” actually means. At any rate, I can attest personally that Apple TV is unquestionably my favorite streaming service—and not solely because of its embrace of earnest disability representation. Like anyone else, I like to be entertained, and Apple TV does it for me with shows like Pluribus and Severance and The Morning Show and For All Mankind. I’m not quite up to speed with Pluribus as of this writing, but can heartily say it and Severance are two of the best damn shows I’ve ever seen in my 44 years of life. What makes them even more enjoyable, technologically speaking, is my 77” LG C3 OLED—which came out in 2023 but which I got in early January 2025—being so bright and sharp, with infinite contrast, that it makes not only for spectacular picture quality, but for spectacularly accessible picture quality in terms of sheer size and, obviously, fidelity. Between my various Apple devices, I’ve grown accustomed to OLED displays for some time now; that said, there’s nothing like experiencing OLED on a screen as large as a television’s. Like Steve Jobs said of the iPhone 4’s Retina display 15 years ago, once you go OLED, it’s hard to go back to a “lesser” (and, yes, less expensive) technology.
Anyway, go watch Pluribus posthaste if you haven’t already. It’s so damn good.
According to Mendes, the show’s first season will run through December 26. Season 2 is currently in development following Apple’s original commitment to do two seasons.
Google Translate Gets Live Translation Enhancements in Latest Update
Abner Li reports for 9to5Google today Google Translate has been updated such that live translation leverages Gemini—including while using headphones. The upgraded translations are available in the iOS and Android apps, as well as on the Google Translate website and in Google Search, launching first in the United States and India with the ability to translate from English into over 20 languages such as Chinese and German.
“Google Translate is now leveraging ‘advanced Gemini capabilities’ to ‘improve translations on phrases with more nuanced meanings,’” Li wrote on Friday. “This includes idioms, local expressions, and slang. For example, translating ‘stealing my thunder’ from English to another language will no longer result in a ‘literal word-for-word translation.’ Instead, you get a ‘more natural, accurate translation.’”
(File this under “I Learn Something Every Day”: Google Translate has a web presence.)
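For fellow nerds who want to poke at the idiom-aware behavior themselves, below is a minimal sketch using Google’s public google-genai Python SDK. To be loud and clear about my assumptions: this is my own illustration, not the Translate app’s internal pipeline, and the model name is simply a publicly available Gemini model, not necessarily whatever Translate calls under the hood.

# A minimal sketch, assuming Google's public google-genai SDK (pip install google-genai).
# It illustrates idiom-aware translation; it is NOT the Translate app's internals.
from google import genai

client = genai.Client()  # reads the GEMINI_API_KEY environment variable

prompt = (
    "Translate the English idiom 'stop stealing my thunder' into German. "
    "Give a natural, idiomatic equivalent rather than a literal word-for-word "
    "rendering, and briefly explain your choice."
)

# Ask a general-purpose Gemini model for a meaning-preserving translation.
response = client.models.generate_content(
    model="gemini-2.5-flash",  # illustrative model choice
    contents=prompt,
)
print(response.text)

The point isn’t the specific model or prompt; it’s that an instruction-following model can be told explicitly to preserve meaning over literal wording, which is exactly the behavior Google is describing here.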
As to the real-time translation component, Li says the feature is underpinned by Gemini 2.5 Flash Native Audio and works by pointing one’s phone in the direction of the speaker. He also notes Google says Translate will “preserve the tone, emphasis and cadence of each speaker to create more natural translations and make it easier to follow along with who said what.” Importantly, Li writes that the live translation function is launching in beta on Android for now; it’s available in the United States, India, and Mexico in more than 70 languages, with Google further noting the software works with “any pair of headphones.” iOS support and more localization are planned for next year.
“Use cases include conversing in a different language, listening to a speech or lecture when abroad, or watching a TV show/movie in another language,” Li said in describing live translation’s elevator pitch. “In the Google Translate app, make sure headphones are paired and then tap ‘Live translate’ at the bottom. You can specify a language or set the app to ‘Detect’ and then ‘Start.’ The fullscreen interface offers a transcription.”
It doesn’t take an astrophysicist to surmise this makes communication accessible.
At Thanksgiving dinner a couple weeks ago, one of my family members regaled everyone with stories about his recent trip to Paris. He of course knows I’m a technology reporter, and he excitedly told me he bought a pair of AirPods Pro 3 at the Apple Store before his trip so he could try Apple’s own Live Translation feature, powered by Apple Intelligence. I was told it worked “wonderfully,” with the French he heard translated into English and piped into his earbuds. It seems to me Google’s spin on live translation works similarly, with the unique part (aside from Gemini) being that it isn’t limited to Pixel Buds. At any rate, language translation is a genuinely good use case for AI—and, more pointedly, a good example of accessibility truly being for everyone, regardless of ability, because it breaks through communicative barriers.
Apple announced Live Translation on AirPods at its fall event in September.
Report: Refreshed Studio Display Found in Code
Earlier this week, Filipe Esposito reported for Macworld that an internal build of iOS 26 contains references to a looming update to the Studio Display. The finding, tied to the codename “J527,” corroborates previous reporting by Mark Gurman at Bloomberg.
“References in the code clearly show that this new Studio Display has a variable refresh rate that can go up to 120Hz, just like the ProMotion display on the latest MacBook Pros. The current Studio Display is limited to 60Hz,” Esposito wrote on Wednesday. “Furthermore, the code references a ‘J527’ monitor that also supports both SDR and HDR modes, an upgrade from the current SDR-only model. This is a strong indication that Apple will replace the LCD panel with better technology, such as Mini-LED that can achieve higher brightness levels.”
According to Esposito, other features of the still-in-development second-generation Studio Display include an A19 processor, ProMotion, and much better HDR support.
I’ve written previously about my sore need for a new Mac to replace my outmoded (yet still chugging along) 2019 Retina 4K iMac, a task I’ve put off for a variety of reasons. I really do feel lots of FOMO not running macOS 26 Tahoe, however, and feel bad about life “dictating” to me that the lowest common denominator—my job not requiring tons of compute power—makes my trusty yet tired iMac “good enough.” As I’ve said before, it sucks to miss out on Apple Silicon amenities like iPhone Mirroring—a feature I haven’t written about much, if at all, but one which has serious benefits from an accessibility perspective. All of this is to say I’m very excited at the prospect of a new external monitor I can plug one of my MacBooks into; a laptop’s screen is serviceable to me while I’m out of the house—narrator: his severe anxiety and depression scoffs at the notion—but if I’m working primarily at my desk, I’d much rather have a bigger screen to accommodate my low vision. So while the Pro Display XDR is forever my white whale monitor, this rumored Studio Display upgrade sounds damn good too—and is arguably the more practical device for my spartan needs.
One way or another, I’m hellbent on making 2026 the Year of Steven’s Desk Makeover.
Apple released the Studio Display in 2022 to complement the all-new Mac Studio.
‘Fire TV makes entertainment more accessible’
Late last week, Amazon published a piece on its website in which it touts a few of the accessibility benefits of its Fire TV operating system for people with disabilities. The platform’s assistive technologies, the company said, “represent more than just technology: they’re about creating moments where everyone can enjoy entertainment their way,” adding that Fire TV “adapts to your needs rather than the other way around.”
“Picture this: It’s movie night, and everyone’s gathered around the TV. One person is trying to solve the mystery before the detective, another is straining to catch every word of dialogue, and someone else needs their hearing aids to enjoy the show. We’ve all been there—wanting to share entertainment moments together but having different needs to experience these moments best,” Amazon wrote in the introduction. “During a time of year when friends and family are gathering more often, Amazon Fire TV is highlighting how Fire TV is built for how you watch. This initiative celebrates the unique ways we all enjoy entertainment and highlights innovative features that make watching your favorite TV shows and movies more accessible and enjoyable for everyone.”
The meat on the bones of Amazon’s post highlights three features in particular: Dialogue Boost, Dual Audio, and Text Banner. I’ve covered all of these technologies in one way or another several times over the years, and have interviewed Amazon executives such as Peter Korn many times as well. In fact, one of my earliest stories for my old Forbes column was an ode to Fire TV hardware in the Fire TV Cube. My praise holds up today; whatever one thinks of Fire TV’s ad-littered user interface and general design, it’s entirely credible for a disabled person (someone with motor and visual disabilities, for example) to choose a Fire TV Cube as their set-top box precisely for Fire TV’s accessibility attributes—especially the Cube’s ability to control one’s home theater. To wit, it isn’t trivial that the Cube can switch between HDMI inputs on a TV and even switch on a game console or Blu-ray player. Given the smorgasbord of remotes and whatnot, being able to ask Alexa to, say, “turn on my PlayStation 5” is worth its weight in gold, accessibility-wise, for its hands-free operation. Again, to choose Fire TV (and the Cube) as one’s preferred TV platform because of accessibility is perfectly valid; it’s plausible that accessibility is of greater importance than the subjective “messiness” of Fire TV’s UI and its barrage of advertisements.
You can learn more about Fire TV accessibility (and more) on Amazon’s website.
Times New Rubio
This week, The New York Times ran a story, under the shared byline of Michael Crowley and Hamed Aleaziz, reporting on Secretary of State Marco Rubio’s memo to State Department personnel saying the agency’s official typeface would go back to 14-point Times New Roman from Calibri. The Times didn’t include Rubio’s full statement, but John Gruber obtained a copy from a source and helpfully posted a plain-text version.
“Secretary of State Marco Rubio waded into the surprisingly fraught politics of typefaces on Tuesday with an order halting the State Department’s official use of Calibri, reversing a 2023 Biden-era directive that Mr. Rubio called a ‘wasteful’ sop to diversity,” Crowley and Aleaziz wrote on Wednesday. “While mostly framed as a matter of clarity and formality in presentation, Mr. Rubio’s directive to all diplomatic posts around the world blamed ‘radical’ diversity, equity, inclusion and accessibility programs for what he said was a misguided and ineffective switch from the serif typeface Times New Roman to sans serif Calibri in official department paperwork.”
The reason I’m covering ostensibly arcane typographical choices is right there in the NYT’s copy: accessibility. The Biden administration’s choice to use Calibri, decreed in 2023 under then-Secretary Antony Blinken, was driven in part by accessibility—Calibri was said to be more readable than Times New Roman. In his piece, Gruber calls bullshit on that notion, saying the motivation was “bogus” and nothing more than a performative, “empty gesture.” He goes on to address Secretary Blinken’s claim, according to a Washington Post report, that the Times New Roman-to-Calibri shift was made because serif fonts like Times New Roman “can introduce accessibility issues for individuals with disabilities who use Optical Character Recognition technology or screen readers [and] can also cause visual recognition issues for individuals with learning disabilities.” Gruber rightly rails against the OCR and screen-reader rationale as more bullshit while also questioning the visual recognition part.
I’m here to tell you the visual recognition part is true, insofar as certain fonts can render text inaccessible to people with certain visual (and cognitive) disabilities. This is because the design of letters, numerals, symbols, et al, can look “weird” rather than “normal” depending on how one’s brain processes visual information. This matters because bad typography can, for a person with low vision like yours truly, adversely affect the reading experience—both in comprehension and physically. Depending on your needs and tolerances, slogging through a weird font can actually lead to physical discomfort like eye strain and headaches. It’s why, to name just one example, the short-lived ultra-thin variant of Helvetica Neue was so derided in the first few iOS 7 betas back in 2013. It was too thin to be legible, prioritizing aesthetics over functionality. (A cogent argument could be made that the tweaks Apple has made to Liquid Glass, including adding appearance toggles, are giant, flashing neon signs of correction from similarly prioritizing aesthetics over function at the outset.)
As somewhat of a font nerd myself—I agonized over what to use at Curb Cuts when designing the site before settling on Big Shoulders and Coda—I personally find Times New Roman ugly as all hell and not all that legible, but I can see the argument that it’s more buttoned-up than Calibri for official correspondence within the State Department. Typographical nerdery notwithstanding, however, what I take away from Rubio’s directive is simple: he cares not one iota for people with disabilities, just like his boss.