It seems AT&T has announced they want to stop supporting their analog switched network and go to a full internet protocol communications system. I'm looking at a story on GigaOm that starts with a photo of copper refrigeration tubing. This does not give me a lot of confidence in the story. When they call the old landline system "copper pipes" it's just a metaphor. It's really small-gauge wire arranged in twisted pairs, for reasons explained in core curriculum physics classes at any engineering school.
But skipping right ahead past my criticism of journalistic misses, what about the audio?
The old analog system was designed to limit the transmission of audio to a range of 300 Hz to 3400 Hz and it was full duplex. You could both talk simultaneously and hear each other. It made conversations natural, even if you couldn't tell an S from an F because sibilance is above 3400 Hz. We had a phonetic alphabet and we worked around it.
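Just to make the S-versus-F thing concrete, here's a quick numpy sketch. The 1 kHz and 6 kHz tones are stand-ins for vowel and sibilant energy, not real speech, and the brick-wall FFT filter is an idealization of the old network's bandpass. Run both tones through a 300 Hz to 3400 Hz filter and the sibilance simply vanishes:

```python
import numpy as np

fs = 48_000                  # sample rate in Hz
t = np.arange(fs) / fs       # one second of time

vowel = np.sin(2 * np.pi * 1000 * t)     # vowel energy sits low, ~1 kHz
sibilant = np.sin(2 * np.pi * 6000 * t)  # an "s" sound lives up around 6 kHz
signal = vowel + sibilant

# Idealized 300-3400 Hz brick-wall bandpass, done in the frequency domain
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
spectrum[(freqs < 300) | (freqs > 3400)] = 0
filtered = np.fft.irfft(spectrum, len(signal))

def band_rms(x, f_lo, f_hi):
    """RMS spectral magnitude in a frequency band -- how much energy survived."""
    s = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    mask = (f >= f_lo) & (f <= f_hi)
    return np.sqrt(np.mean(np.abs(s[mask]) ** 2))

print(band_rms(filtered, 900, 1100))    # vowel band: plenty of energy left
print(band_rms(filtered, 5900, 6100))   # sibilance band: wiped out
```

That missing top octave-and-a-half is exactly why S and F collapse into each other on a landline, and why the phonetic alphabet existed.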
Back in the '90s when I worked on equipment for the coaxial cable network we had high hopes we would be able to use the higher bandwidth capacity to IMPROVE voice quality on phone calls. AT&T built fiber networks that actually included some circuits that eliminated that 300 Hz to 3400 Hz notch filter effect. When you got put on hold on one of those circuits the recorded message and on-hold music sounded pretty good and I would beam with pride that we were making strides forward in audio quality. Ahh the '90s.
In the late '90s the digital cell phones arrived. And they sounded terrible. I rationalized that they used codecs that were developed for ham radio and are notoriously tuned for the deep male voice. Codecs are the chips that convert analog audio to digital and compress the data to the smallest number of bits that can be converted back and still remotely resemble the original. Coder/decoder = codec, like modulator/demodulator = modem. Back then nobody could understand a woman on a digital ham radio. (I stopped messing with that stuff, so I don't know about now.) Anyway, I figured the coding for female voices would quickly improve because the market would demand it. The digital signal processing people were smart. They could figure it out. There were all kinds of ways to fix this, even if they had to sell different phone models for people with high voices. The pen thing is sexist, but a phone, that's just physics. Make me one, now. Surely the money for research would be quick to appear.
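For the curious, the oldest trick in the telephone codec book is μ-law companding, which is what the digitized phone network standardized in G.711: squash the amplitude scale logarithmically so 8 bits per sample covers both whispers and shouts. This is a sketch of the continuous μ-law formula, not the segmented bit-exact encoding the standard actually specifies:

```python
import numpy as np

MU = 255.0  # mu-law parameter used in North America / Japan

def mulaw_encode(x):
    """Compress samples in [-1, 1] logarithmically, then quantize to 8 bits."""
    compressed = np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)
    return np.round((compressed + 1) / 2 * 255).astype(np.uint8)

def mulaw_decode(code):
    """Undo the quantization, then invert the companding curve."""
    compressed = code.astype(np.float64) / 255 * 2 - 1
    return np.sign(compressed) * np.expm1(np.abs(compressed) * np.log1p(MU)) / MU

t = np.arange(8000) / 8000
voice = 0.5 * np.sin(2 * np.pi * 440 * t)      # a moderate-level test tone
roundtrip = mulaw_decode(mulaw_encode(voice))
print(np.max(np.abs(roundtrip - voice)))       # small error from only 8 bits/sample
```

The point of the logarithmic curve is that quantization steps are fine for quiet sounds and coarse for loud ones, which matches how hearing works. Modern cell codecs go much further (modeling the vocal tract itself), and that vocal-tract modeling is exactly where tuning for one kind of voice sneaks in.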
Well, they never did figure it out. The codecs got worse and worse. When the iPhone first came out, people complained that the voice quality on phone calls was terrible. But the computer part was so good people bought it anyway. I think that's why texting took off the way it did. You can't understand a word anybody says on those things. If you are hard of hearing but too vain to get a hearing aid, like my mother, you wind up with conversations where she says things like, "I can't hear words, only sounds. I can't understand anything you are saying." "Can you speak in a lower pitch? I can't understand." Well dammit! What the hell?! I went to school in the '80s with a passion for making clear, understandable audio. I studied acoustics in the Physics department and psychoacoustics in the Psychology department to understand all the nuances of communication with sound. And then in 1990 they stopped teaching psychoacoustics because the professor retired. Then my Physics professor retired. And I feel like the whole field has died. It's like the whole marketplace has turned its back on sound quality. (Even though Dr. Patronis still updates his textbook.) I sure was never able to get a job working on it.
I want the FCC to demand that whatever changes happen going forward, we get an improvement in audio quality. Technology has advanced, processors are so fast, and bandwidth is so large that we should be able to cancel out audio feedback between microphones and speakers with small physical separation and get back to full duplex. I don't care if you have to put two separate radios in the phone that drain the battery twice as fast, somebody try it. Maybe it only works with other people with the same kind of phone. Call it the Can and String phone and you buy them in pairs and share them with your loved ones. Just try SOMETHING. If my phone is not a usable form of communication, I have to go all the way to my mother's house to fix her heat pump controller instead of suggesting she just turn it off and turn it back on.
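The canceling part isn't magic, either. The textbook approach is an LMS adaptive filter that learns the speaker-to-microphone path and subtracts its own estimate of the echo, which is what lets a speakerphone run full duplex. A toy sketch with a made-up 16-tap "room" response (real acoustic paths are far longer and change as you move):

```python
import numpy as np

rng = np.random.default_rng(0)
N, taps = 20_000, 16
far_end = rng.standard_normal(N)          # the audio the speaker plays out
room = rng.standard_normal(taps) * 0.1    # hypothetical speaker-to-mic echo path
mic = np.convolve(far_end, room)[:N]      # the echo the microphone picks up

# LMS adaptive filter: learn the echo path, subtract the estimated echo
w = np.zeros(taps)    # filter weights, our running estimate of `room`
mu = 0.01             # adaptation step size
err = np.zeros(N)
for n in range(taps, N):
    x = far_end[n - taps + 1:n + 1][::-1]  # most recent far-end samples first
    echo_est = w @ x
    err[n] = mic[n] - echo_est             # residual after cancellation
    w += mu * err[n] * x                   # nudge weights toward the true path

before = np.mean(mic[-1000:] ** 2)   # echo power with no cancellation
after = np.mean(err[-1000:] ** 2)    # residual power after the filter converges
print(before / after)                # echo power knocked down by a large factor
```

In a real phone the hard part is that the "room" keeps changing and the near-end talker's voice is mixed into `mic`, but the processing budget for this has existed for decades.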
I want a phone that trains itself up on my actual voice and tunes the codec to make me sound awesome. When I have to speak on a video I use Garage Band to slow my voice down so it doesn't sound so high. Why can't my phone do that? Put my voice in the range the person I'm talking to can still hear, after the ravages of old age have taken away their high frequencies? Phone designers are going to all this trouble to make it so I can watch streaming TV shows on the damn thing, and that's great, but why not invest in the research to make it actually work as a PHONE?! I never talk to anybody anymore if I can help it. It's just too upsetting.
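And the Garage Band trick is, at its crudest, just resampling. Here's a naive sketch with a pure tone standing in for a voice; note that, like slowing a tape, this stretches the duration along with lowering the pitch, which is literally the "slow my voice down" effect (real pitch shifters preserve duration):

```python
import numpy as np

def pitch_down(signal, factor):
    """Lower pitch by reading the waveform `factor` times more slowly.

    Every frequency drops by `factor`, and the clip gets `factor` times
    longer -- the tape-slowdown effect, done with linear interpolation.
    """
    n_out = int(len(signal) * factor)
    old_idx = np.arange(len(signal))
    new_idx = np.arange(n_out) / factor   # step through the original at 1/factor speed
    return np.interp(new_idx, old_idx, signal)

fs = 8000
t = np.arange(fs) / fs
voice = np.sin(2 * np.pi * 300 * t)   # a 300 Hz stand-in for a high voice
lower = pitch_down(voice, 1.25)       # 300 Hz -> 240 Hz, 25% longer

# Find the dominant frequency of the shifted signal
spec = np.abs(np.fft.rfft(lower))
freqs = np.fft.rfftfreq(len(lower), 1 / fs)
print(freqs[np.argmax(spec)])         # dominant frequency is now ~240 Hz
```

A phone that knew both my voice and the listener's audiogram could do a smarter version of this in real time. The signal processing is not the obstacle.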