I sometimes wonder what the alternate reality where semiconductor advances ended in the eighties would look like.<p>We might have had to manage with just a few MB of RAM and efficient ARM cores running at maybe 30 MHz or so. Would we still get web browsers? How about the rest of the digital transformation?<p>One thing I do know for sure. LLMs would have been impossible.
For me the interesting alternate reality is where CPUs got stuck in the 200-400 MHz range for speed, but somehow continued to become more efficient.<p>It’s kind of the ideal combination in some ways. It’s fast enough to competently run a nice desktop GUI, but not so fast that you can get overly fancy with it. Eventually you’d end up with OSes that look like highly refined versions of System 7.6/Mac OS 8 or Windows 2000, which sounds lovely.
I <i>loved</i> System 7 for its simplicity yet all of the potential it had for individual developers.<p>Hypercard was absolutely dope as an entry-level programming environment.
The Classic Mac OS model in general I think is the best that has been or ever will be in terms of sheer practical user power/control/customization thanks to its extension and control panel based architecture. Sure, it was a security nightmare, but there was practically nothing that couldn’t be achieved by installing some combination of third party extensions.<p>Even modern desktop Linux pales in comparison because although it’s technically possible to change anything imaginable about it, to do a lot of the things that extensions did you’re looking at, at minimum, writing your own DE/compositor/etc., and at worst needing to tweak a whole stack of layers or wade through kernel code. Not really general user accessible.<p>Because extensions were capable of changing anything imaginable and often did so with tiny-niche tweaks and all targeted the same system, any moderately technically capable person could stack extensions (or conversely, disable system-provided ones which implemented a lot of stock functionality) and have a hyper-personalized system without ever writing a line of code or opening a terminal. It was beautiful, even if it was unstable.
> The Classic Mac OS model in general I think is the best that has been or ever will be in terms of sheer practical user power/control/customization<p>A point for discussion is whether image-based systems are the same kind of thing as OSes where system and applications are separate things, but if we include them, Smalltalk-80 is better in that regard. It doesn’t require you to reboot to install a new version of your patch (if you’re very careful, that’s sometimes possible in classic Mac OS, too, but it definitely is harder) and is/has an IDE that fully supports it.<p>Lisp systems and Self also have better support for it, I think.
I’m not too nostalgic for an OS that only had cooperative scheduling. I don’t miss the days of Conflict Catcher, or having to order my extensions correctly.
Illegal instruction? Program accessed a dangling pointer? A bomb message held up your whole computer and you had to restart (unless you had a non-stock debugger attached and could run ExitToShell, but no promises there).
It had major flaws for sure, but also some excellent concepts that I wish could've found a way to survive through to the modern day. Modern operating systems may be stable and secure, but they're also far more complex, inflexible, generic, and inaccessible and don't empower users to anywhere near the extent they could.
> unless you had a non-stock debugger attached and can run ExitToShell<p>You could also directly jump into the ExitToShell code in ROM (G 49F6D8, IIRC). Later versions of Minibug had an “es” command that more or less did the same thing (that direct jump always jumps into the ROM code, “es” would, I think, jump to any patched versions)
> Not really general user accessible.<p>Writing a MacOS classic extension wasn’t exactly easy. Debugging one could be a nightmare.<p>I’m not sure how GTK themes are done now, but they used to be very easy to make.
I sometimes drop my CPU down to the 400-800 MHz range. 400 is rough. 800, not so bad. It runs fine, with something like i3 or sway.<p>If we really got stuck in the hundreds-of-MHz range, I guess we’d see many-core designs coming to consumers earlier. Could have been an interesting world.<p>Although, I think it would mostly be impossible. Or maybe we’re in that universe already. If you are getting efficiency but not speed, you can always add parallelism. One form of parallelism is pipelining. We’re at like 20 pipeline stages nowadays, right? So in the ideal case, if we weren’t able to parallelize in that dimension, we’d be at something like 6 GHz / 20 = 300 MHz. That’s pretty hand-wavy, but maybe it is a fun framing.
The GameBoy Advance could run 2D games (and some 3D demos) on 2 AA batteries for 16 hours.
I wonder if we could get something more efficient with modern tech? It seems research made things faster but more power hungry, and we compensate with better batteries instead. I guess we can, and it’s a design-goal problem; I also do love a screen with a backlight.
> It seems research made things faster but more power hungry<p>No, modern CPUs are far more power efficient for the same compute.<p>The primary power draw in a simple handheld console like that would be the screen and sound.<p>Putting an equivalent MCU on a modern process into that console would make the CPU power consumption so low as to be negligible.
As a consumer product example: e-ink readers. (Of course, it helps as well that the GameBoy had no radios etc...)
Yes; yet... I thought the efficiency per unit of compute has more to do with the nm process shrinking the die than anything else. That, and power use is divided across so many more instructions per second.
Given enough power and space efficiency, you would start putting multiple CPUs together for specialized tasks. Distributed computing could have looked different.
This is what the Mac effectively does now - background tasks run on low-power cores, keeping the fast ones free for the interactive tasks. More specialised ARM processors have 3 or more tiers, and often have cores with different ISAs (32 and 64 bit ones). Current PC architectures are already very distributed - your GPU, NIC/DPU, and NVMe SSD all run their own OSs internally, and most of the time don’t expose any programmability to the main OS. You could, for instance, offload filesystem logic or compression to the NVMe controller, freeing the main CPU from having to run it. Same could be done for a NIC - it could manage remote filesystem mounts and only expose a high-level file interface to the OS.<p>The downside would be we’d have to think about binary compatibility between different platforms from different vendors. Anyway, it’d be really interesting to see what we could do.
This is more or less what we have now. Even a very pedestrian laptop has 8 cores. If 10 years ago you wanted to develop software for today’s laptop, you’d get a 32-gigabyte 8-core machine with a high-end GPU. And a very fast RAID system to get close to an NVMe drive.<p>Computers have been “fast enough” for a very long time now. I recently retired a Mac not because it was too slow but because the OS is no longer getting security patches. While their CPUs haven’t gotten twice as fast for single-threaded code every couple years, cores have become more numerous and extracting performance requires writing code that distributes functionality well across increasingly larger core pools.
This was the Amiga. Custom coprocessors for sound, video, etc.
The Commodore 64 and Ataris had intelligent peripherals. Commodore’s drive knew about the filesystem and could stream the contents of a file to the computer without the computer ever becoming aware of where the files were on the disk. They could also copy data from one disk to another without the computer being involved.<p>Mainframes are also like that - while a PDP-11 would be interrupted every time a user at a terminal pressed a key, IBM systems offloaded that to the terminals, which kept one or more screens in memory and sent the data to another computer, a terminal controller, which would then, and only then, disturb the all-important mainframe with the mundane needs of its users.
The alternative reality I wish we could move to, across the universe, is the one where SGI were the first to build a titanium laptop and became the world’s #1 Unix laptop vendor...
Or if 640k was not only all you'd ever need, it was all we'd ever get.
There's something to this. The 200-400MHz era was roughly where hardware capability and software ambition were in balance — the OS did what you asked, no more.<p>What killed that balance wasn't raw speed, it was cheap RAM. Once you could throw gigabytes at a problem, the incentive to write tight code disappeared. Electron exists because memory is effectively free. An alternate timeline where CPUs got efficient but RAM stayed expensive would be fascinating — you'd probably see something like Plan 9's philosophy win out, with tiny focused processes communicating over clean interfaces instead of monolithic apps loading entire browser engines to show a chat window.<p>The irony is that embedded and mobile development partially lives in that world. The best iOS and Android apps feel exactly like your description — refined, responsive, deliberate. The constraint forces good design.
> What killed that balance wasn't raw speed, it was cheap RAM. Once you could throw gigabytes at a problem, the incentive to write tight code disappeared. Electron exists because memory is effectively free.<p>I dunno if it was cheap RAM or just developer convenience. In one of my recent comments on HN (<a href="https://news.ycombinator.com/item?id=46986999">https://news.ycombinator.com/item?id=46986999</a>) I pointed out the performance difference in my 2001 desktop between a `ls` program written in Java at the time and the one that came with the distro.<p>Had processor speeds <i>not</i> increased at that time, Java would have been relegated to history, along with a lot of other languages that became mainstream and popular (Ruby, C#, Python)[1]. There was simply no way that companies would <i>continue</i> spending 6 - 8 times more on hardware for a specific workload.<p>C++ would have been the enterprise language solution (a new sort of hell!) and languages like Go (Native code with a GC) would have been created sooner.<p>In 1998-2005, computer speeds were increasing so fast there was no incentive to develop new languages. All you had to do was wait a few months for a program to run faster!<p>What we did was trade-off efficiency for developer velocity, and it was a good trade at the time. Since around 2010 performance increases have been dropping, and when faced with stagnant increases in hardware performance, new languages were created to address that (Rust, Zig, Go, Nim, etc).<p>-------------------------------<p>[1] It took two decades of constant work for those high-dev-velocity languages to reach some sort of acceptable performance. Some of them are still orders of magnitude slower.
Lots of good practices! I remember how aggressively iPhoneOS would kill your application when you got close to being out of physical memory, or how you had to quickly serialize state when the user switched apps (no background execution, after all!) And, for better or for worse, it was native code because you couldn’t and still can’t get a “good enough” JITing language.
We had web browsers, kinda, in that we'd call up BBSes and use ANSI for menus and such.<p>My Vic20 could do this, and a C64 easily; really it was just graphics that were wanting.<p>I was sending electronic messages around the world via FidoNet and PunterNet, downloaded software, was on forums, and all of that on BBSes.<p>When I think of the web of old, it's the actual information I love.<p>And a terminal connected to a BBS could be thought of as a text browser, really.<p>I even connected to CompuServe in the early 80s via my C64 through "datapac", a dial gateway via telnet.<p>ANSI was a standard too; it could have evolved further.
Heavy webpages are the main barrier for projects like this. We need something that is just reader view for everything, without the overhead of also being able to do non-reader view. Like w3m or lynx, but with sane formatting, word wrap, etc.
> graphics that were wanting<p>Prodigy established a (limited) graphical online service in 1988.
I think the boring answer is that we waste computing resources simply because if memory and CPU cycles are abundant and cheap, developers don't find it worth their time to optimize nearly as much as they needed to in the 1980s or 1990s.<p>Had we stopped with 1990s tech, I don't think that things would have been fundamentally different. The 1980s would have been more painful, mostly because limited memory just did not allow for particularly sophisticated graphics. So, we'd be stuck with 16-color aesthetics and you probably wouldn't be watching movies or editing photos on your computer. That would mean a blow to social media and e-commerce too.
I'm in the early phases of working on a game that explores that.<p>The backstory is that in the late 2050s, when AI has its hands in everything, humans lose trust in it. There are a few high-profile incidents based on AI decisions, which cause public opinion to change, and an initiative is brought in to ensure important systems run hardware and software that can be trusted and human-reviewed.<p>A 16-bit CPU architecture with no pipelining, speculative execution, etc. is chosen, as it's powerful enough to run such systems, but also simple enough that a human can fully understand the hardware and software.<p>The goal is to make a near-future space exploration MMO. My MacBook Pro can simulate 3000 CPU cores simultaneously, and I have a lot of fun ideas for it. The irony is that I'm using LLMs to build it :D
No, if we had the web it would be more like what gopher was. Or maybe lynx.<p>Edit: oh, I thought you meant if we were stuck in 6502-style stuff. With megabytes of RAM we'd be able to do a lot more. When I was studying, we ran 20 X terminals with NCSA Mosaic on a server with a few CPUs and 128 MB of RAM or so. Graphical browsing would be fine.<p>Only when Java and JavaScript came on the scene did things get unbearably slow. I guess in that scenario most processing would have stayed server-side.
Apart from the transputers mentioned already, there’s <a href="https://greenarrays.com/home/documents/g144apps.php" rel="nofollow">https://greenarrays.com/home/documents/g144apps.php</a><p>Both the hardware and the Forth software.<p>APIs in a B2B style would likely be much more prevalent, with less advertising (yay!) and less money in the internet, so more like the original internet, I guess.<p>GUIs like <a href="https://en.wikipedia.org/wiki/SymbOS" rel="nofollow">https://en.wikipedia.org/wiki/SymbOS</a> and
<a href="https://en.wikipedia.org/wiki/Newton_OS" rel="nofollow">https://en.wikipedia.org/wiki/Newton_OS</a> show that we could have had quality desktops and mobile devices.
I remember using the web on 25 MHz computers. It ran about as fast as it does today with a couple GHz. Our internet was a lot slower back then as well.
> I remember using the web on 25mhz computers. It ran about as fast as it does today with a couple ghz.<p>I know it’s a meme on HN to complain that modern websites are slow, but this is a perfect example of how completely distorted views of the past can get.<p>No, browsing the web in the early 90s was <i>slooow</i>. Even simple web pages took a long time to load. As you said, internet connections were very slow too. I remember visiting pages with photos that would come with a warning about the size of the page, at which point I’d get up and go get a drink or take a break while it loaded. Then scrolling pages with images would feel like the computer was working hard.<p>It’s silly to claim that 90s web browsers ran about as fast as they do today.
> No, browsing the web in the early 90s was slooow. Even simple web pages took a long time to load. As you said, internet connections were very slow too. I remember visiting pages with photos that would come with a warning about the size of the page, at which point I’d get up and go get a drink or take a break while it loaded.<p>At home, when I was on dialup, certainly.<p>At work I did not experience this. Most pages loaded in Netscape navigator in about the same time that most pages load now - a few seconds.<p>> Then scrolling pages with images would feel like the computer was working hard.<p>Well, yes, single-core, single-socket and single-processor meant that the computer could literally only do a single thing at a time, and yet the scrolling experience on most sites was still good enough to be better than the scrolling experience on some current React sites.
Browsing the web was slow because the network was slow. It wasn't really because the desktop computers were slow. I remember our company having just a 64 kbit/s connection to the 'net, even as late as 1997... well, that was pretty good compared to the place where I was contracted to at the time, in Italy: they had 19.2 kbit/s. Really big sites could have something much better, and browsing the internet at their sites was a different experience then, using the same computers.
It's not an accurate recollection at all. In 1990 a couple of us 12 year olds snuck into the university library to use the web to look at the Marathon website. It took 5 minutes to load some trivially-sized gifs and a tiny amount of HTML. They had a pretty decent connection for the day.<p>Web pages took a minute to load, now we're optimising them for instant response.
This is probably me experiencing a simulacra, but with that slow-loading, get-up-and-go-get-a-drink workflow, each page load was more special. It was magical discovering new websites, just like trying out new software by picking something up from those "pegboards" at computer stores.<p>It also was a simpler time; the technology was in people's lives, but as a small side quest to their main lives. It took the form of a bulky desktop in the den or something like that. When you walked away from that beige box, it didn't follow or know about the rest of your life.<p>A life where a Big Mac meal was only $2.99, a Toyota Corolla was $9-15k, houses were ~$100k, and average dev salaries were ~$50k. That was a different life. I don't know why, but I picture this music video that was included on the Windows 95 CD bonus folder when I think of this simulacra: <a href="https://www.youtube.com/watch?v=iqL1BLzn3qc" rel="nofollow">https://www.youtube.com/watch?v=iqL1BLzn3qc</a>
> music video that was included on the Windows 95 cd bonus folder when I think of this simulacra: <a href="https://www.youtube.com/watch?v=iqL1BLzn3qc" rel="nofollow">https://www.youtube.com/watch?v=iqL1BLzn3qc</a><p>When I saw that video in 1995, I understood that something we now know as YouTube would be inevitable as connection speeds improved. Although I thought it'd be like MTV, a way to watch the newest music videos.
No, I think he’s right. I don’t recall the web being any faster today than it was thirty years ago, download speed excepted. The overall experience is about the same, if not worse, IMO.
My claim is that the modern web is bloated.<p>I had T3 connections for most of my browsing, which was faster than the Ethernet of the day - even by today's standards that isn't too bad. I avoided dialup if I could because it was slow. Even ISDN offered okay speeds.
Wirth's law in effect.
Yeah, slow?<p>Try using a 2400 baud modem; that was slow.
I started on 300 baud - but I never accessed the internet with it, so I won't count it in this discussion.
Those things always confuse me. I think the modems sold as "2400 baud" were really 2400 bps, and that 9600 bps modems actually ran at 2400 baud with several bits per symbol? At least 56k modems were 8000 baud.
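The rule of thumb that untangles it: bits per second = symbols per second (baud) x bits carried per symbol. Here's a back-of-the-envelope sketch in Python with the nominal figures for a few standards (the real line coding involves trellis codes and overhead, so treat these as approximations):<p>
    # bit rate = symbol rate (baud) x bits carried per symbol
    standards = [
        ("V.22bis ('2400 baud' modem)", 600, 4),   # -> 2400 bps
        ("V.32 (9600 bps)", 2400, 4),              # -> 9600 bps
        ("V.90 (56k, downstream)", 8000, 7),       # -> 56000 bps
    ]
    for name, baud, bits in standards:
        print(f"{name}: {baud} baud x {bits} bits/symbol = {baud * bits} bps")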
what a glorious time that was! now it's too easy to get stuck looking at the stream of (usually AI generated) crap. I long for the time when the regular screen break was built-in.
It crashed a lot more, the fonts (and screens) were uglier, and Javascript was a lot slower. The good thing was that there was very little Javascript.
> The good thing was that there was very little Javascript.<p>Because all of the complicated client side stuff was in Java applets or Shockwave :(
Pepperidge Farm remembers having to wait 10 minutes for a GameBoy emulator to load to play Pokémon Yellow on school computers…
I cannot recall crashes being a problem.
With Windows 9x, I recall the crashes being manageable, but it was advisable to give the system 15 minutes to settle down after rebooting. Windows would start multiple things at once on startup and it was a bit risky to overstress it.<p>Windows NT 4 seemed OK, but a lot of software didn't run.<p>By the time of Windows 2000 the tradeoff was much better.<p>(Allowing a settle down time remained a good idea, in my experience. Even if Windows 2000 and later were very unlikely to actually crash, the response time would still be dogshit until everything had been given time to settle into a steady state. This problem didn't get properly fixed until pervasive use of SSDs - some time between Windows 7 and Windows 8, maybe? - and even then the fix was just that there was no longer any pressing need to actually fix it.)
I remember Netscape Navigator crashing, taking Solaris down with it. I could only imagine what it was like on Windows 9x. I don't want to imagine what Windows 3.x users endured. Windows 3.x was the OS where people saved early and saved often, since the lack of proper memory protection meant that a bad application (or worse, a bad driver) could BSOD the system at any time.
I remember using the web in the 90s. I often left to make a sandwich while pages loaded.
Try opening Gmail on one of those. Won’t be fun.
> Would we still get web browsers?<p><a href="https://en.wikipedia.org/wiki/PLATO_(computer_system)" rel="nofollow">https://en.wikipedia.org/wiki/PLATO_(computer_system)</a> is from the 1960s, so, technically, it certainly is possible. Whether it would make sense commercially to support a billion users would depend on whether we would stay stuck on prices of the eighties, too.<p>Also, there’s mobile usage. Would it be possible to build a mobile network with thousands of users per km² with tech from the eighties?
> One thing I do know for sure. LLMs would have been impossible.<p>We had ELIZA, and that was enough for people to anthropomorphize their teletype terminals.
I always think the Core 2 Duo was the inflexion point for me. Before that, current software always seemed to struggle on current hardware, but after it, things were generally fine.<p>As much as I like my Apple Silicon Mac, I could do everything I need to on 2008 hardware.
It's remarkable how a modern $50 SBC outperforms the old Core 2 Duo line.
Alongside the increasing power of a single core, that era also saw the adoption of multicore and the move from 32 to 64 bit for the general user, which enabled more than 4GB of memory and let lots of processes co-exist more gracefully.
Transputers. Lots and lots and lots of transputers. (-:
I don't think there's really a credible alternate reality where Moore's law just stops like that when it was in full swing.<p>The ones that "could have happened" IMO are the transistor never being invented, or even mechanical computers becoming much more popular much earlier (there's a book about this alternate reality, The Difference Engine).<p>I don't think transistors being invented was that certain to happen, we could've got better vacuum tubes, or maybe something else.
As someone has brought up, Transputers (an early parallel architecture) were a thing in the 1980s because people thought CPU speed was reaching a plateau. They were kind of right (which is why modern CPUs are multicore) but were a decade or so too early, so transputers failed in the market.
CPU cores are still getting faster, but not at the 1980/90s cadence. We get away with that because the cores have been good enough for a decade - unless you are doing heavy data crunching, the cores will spend most of the time waiting for you to do something. I sometimes produce video and the only times I hear the fans turning on is when I am encoding content. And even then, as long as ffmpeg runs with `nice -n 19`, I continue working normally as if I had the computer all to myself.
If you're on Linux, I'd highly recommend trying `chrt -i 0`. Not quite night-and-day compared to nice 19, but anecdotally it is noticeable, especially if you game.
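For anyone who'd rather do this from a script than remember the flag: a minimal sketch, assuming Linux and Python's standard library, that launches an encode under SCHED_IDLE (roughly what `chrt -i 0` does). The ffmpeg command line is just a placeholder:<p>
    import os
    import subprocess

    def run_idle(cmd):
        """Run cmd under SCHED_IDLE so it only gets CPU time nothing else wants."""
        def make_idle():
            # Same idea as `chrt -i 0`: idle scheduling class, priority 0.
            os.sched_setscheduler(0, os.SCHED_IDLE, os.sched_param(0))
        return subprocess.run(cmd, preexec_fn=make_idle)

    # Placeholder invocation; substitute your real encode job.
    run_idle(["ffmpeg", "-i", "input.mov", "-c:v", "libx264", "output.mp4"])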
When the MC68030 (1986) was introduced, I remember reading how computers probably wouldn't get much faster, because PCB signal integrity would not allow further improvements.<p>People at the time were not actually sure how long the improvements would go on.
We were stuck with 33MHz PCBs for a long time as people kept trying and failing to get 50MHz PCBs to work. Then Intel came out with the 486DX2 which allowed you to run a 50MHz processor with an external 25MHz bus (so a 25MHz PCB) and we started moving forward again, though we did eventually get PCBs to go much faster as well.<p>The Transputers (mentioned in other comments) had already decoupled the core speed from the bus speed and Chuck Moore got a patent for doing this in his second Forth processor[1], which patent trolls later used to extract money from Intel and others (a little of which went to Chuck and allowed him to design a few more generations of Forth processors).<p>[1] <a href="https://en.wikipedia.org/wiki/Ignite_(microprocessor)" rel="nofollow">https://en.wikipedia.org/wiki/Ignite_(microprocessor)</a>
Teletext existed in the 80s and was widely in use, so we'd have some kind of information network.<p>BBSes existed at the same time and if you were into BBSes you were obsessive about it.
You'd probably get much more multiprocessor stuff much earlier. There are probably 2 or 3 really good interfaces to wire an almost arbitrary number of CPUs together and run some software across all of them (AMP, not SMP).
We did have web browsers; I had Internet Explorer on Windows 3.1 with a 33 MHz CPU and 8 MB of RAM.
I still remember the Mosaic from NCSA. Internet in a box.
Probably was "Windows for Workgroups 3.11", as IIRC Windows 3.1 didn't ship with a TCP/IP stack.
This is basically the premise of the Fallout universe. I think in the story it was that the transistor was never invented, though.
And imagine if telecom had topped out around ISDN somewhere, with perhaps OC-3 (155Mbps) for the bleeding-fastest network core links.<p>We'd probably get MP3 but not video to any great or compelling degree. Mostly-text web, perhaps more gopher-like. Client-side stuff would have to be very compact, I wonder if NAPLPS would've taken off.<p>Screen reader software would probably love that timeline.
You are wrong. The Windows 3.11 era used CPUs running at something like 33 MHz, and yet we had TONS of graphical applications, including web browsers, Photoshop, CAD, Excel, and instant messengers.<p>The only thing that killed the web for old computers is JAVASCRIPT.
I don't see how this contradicts any of what they said, unless they've edited their comment.<p>You're right we had graphical apps, but we did also have very little video. CuSeeMe existed - video conferencing would've still been a thing, but with limited resolution due to bandwidth constraints. Video in general was an awful low res mess and would have remained so if most people were limited to ISDN speeds.<p>While there were still images on the web, the amount of graphical flourishes were still heavily bandwidth limited.<p>The bandwidth limit they proposed would be a big deal even <i>if</i> CPU speeds continued to increase (it could only mitigate so much with better compression).
> Only thing that killed web for old computers is JAVASCRIPT.<p>JavaScript is innocent. The people writing humongous apps with it are the ones to blame. And memory footprint. A 16 MB machine wouldn’t be able to hold the icons an average web app uses today.
Not JavaScript. Facebook.
I had a Hayes 9600 bps modem for web surfing.
I remember when I went from a 286 to a 486DX2; the difference was impressive, being able to run a lot of graphical applications smoothly.<p>Ironically, now I'm using an ESP32-S3, 10x more powerful, just to run IoT devices.
It's probably possible to develop analog ADSL chips with 1990 semiconductor tech.
But pretty difficult.
Depends on how pervasive OC-3 would have gotten. A 1080p video stream is only about 7 Mbps today, so a single OC-3 could in principle carry a couple dozen such streams.
You should definitely watch Maniac: <a href="https://en.wikipedia.org/wiki/Maniac_(miniseries)" rel="nofollow">https://en.wikipedia.org/wiki/Maniac_(miniseries)</a>
There are web browsers for 8-bits today, and there were web browsers for e.g. Amigas with 68000 CPUs from 1979 back in the day.
> One thing I do know for sure. LLMs would have been impossible.<p>Maybe they could, as ASICs in some laboratories :)
Honestly, I think we could've pulled a lot of this off earlier if GPU development had invested in GPGPU earlier.<p>I can see it now… a national lab can run ImageNet, but it takes so many nodes of unobtanium 3dfx hardware that you have to wait 24 hours for a run to be scheduled and completed.
I was doing Schematic Capture and Layout on a 486 with <counts voice> one two three four five six seven eight 8 megabytes of RAM ah haha.
>I sometimes wonder what the alternate reality where semiconductor advances ended in the eighties would look like.<p>We would have seen far fewer desktop apps being written using JavaScript frameworks.
tbh we'd probably just have really good Forth programmers instead of LLMs. same vibe, fewer parameters.
> Would we still get web browsers?<p>Yes, just that they would not run millions of lines of JavaScript for some social media tracking algorithm, newsletter signup, GDPR popup, newsletter popup, ad popup, etc. and you'd probably just be presented with the text only and at best a relevant static image or two. The web would be a place to get long-form information, sort of a massive e-book, not a battleground of corporations clamoring for 5 seconds of attention to make $0.05 off each of 500 million people's doom scrolling while on the toilet.<p>Web browsers existed back then, the web in the days of NCSA Mosaic was basically exactly the above
Actually real AI isn’t going to be possible unless we return to this arch. Contemporary stacks are wasting 80% of their energy which we now need for AI. Graphics and videos are not a key or necessary part of most computing workflows.
Well, we wouldn't have ads and tracking.
Prodigy launched online ads in the 1980s. AOL as well.<p>HotWired (Wired's first online venture) sold their first banner ads in 1994.<p>DoubleClick was founded in 1995.<p>Neither was limited to 90s hardware:<p>Web browsers were available for machines like the Amiga, launched in 1985, and today you can find people who have made simple browsers run on 8-bit home computers like the C64.
I actually used an Amiga to browse the web, back in 1994 or 1995. I started with Lynx, but then I switched to a graphical browser (probably Mosaic).<p>This was an Amiga 500 with maxed-out RAM (8 MB) and a hard drive.
If such an alternate reality has internet of any speed above "turtle in a mobility scooter" then there for sure would be ads and tracking.
The young WWW had garish flashing banner ads.