

the word’s not useless, it’s just that its meaning has evolved to encompass pretty much all authoritarianism, rather than just a specific subset of it.
This is what I’m doing. I recently switched from the email service offered by my web host to Zoho Mail. I pay them $12 a year for a couple of gigabytes of storage (which isn’t a whole lot, but it’s enough for me, and I’m cheap).
As someone else says elsewhere, as well as changing the MX records to point at the new server, you need to add SPF, DKIM and DMARC records to your DNS to ensure mail you send is accepted by the receiver’s mail server.
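For illustration, the extra DNS entries end up looking something like this for a hypothetical domain (the include host, DKIM selector and key below are placeholders; your mail provider gives you the exact values to paste in when you add the domain):

```
; hypothetical example values: the real ones come from your mail provider
example.com.                       IN TXT "v=spf1 include:spf.mailprovider.example ~all"
selector1._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<public-key-from-provider>"
_dmarc.example.com.                IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```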
Won’t it get hot as hell in there, all that wood and foam and rubber?
You can have nodes on a mesh network which act as gateways to the internet, but such nodes are going to have to go through an ISP. There’s no other way to connect to the internet at large unfortunately.
this is how the mesh networks that people have mentioned elsewhere in this thread work.
It is theoretically possible to create a purely peer-to-peer network where each individual connects to people nearby, and then any individual can communicate with any other by passing data packets to nearby people on the network, who pass them on in turn until they reach the other person.
You can probably already grasp a few of the issues here - confidentiality is a big one, and reliability is another. But in theory it could work, and the more people who take part in such networks, the more reliable they become.
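As a toy illustration of the idea (this isn’t any particular mesh protocol, and the peers and topology below are made up), here’s a minimal sketch of how a packet could hop from peer to peer until it reaches its destination:

```python
from collections import deque

# Toy mesh: each peer only knows its immediate neighbours.
mesh = {
    "alice": ["bob", "carol"],
    "bob":   ["alice", "dave"],
    "carol": ["alice", "dave"],
    "dave":  ["bob", "carol", "erin"],
    "erin":  ["dave"],
}

def deliver(src, dst):
    """Flood a packet hop by hop; return the path it took, or None."""
    seen = {src}
    queue = deque([[src]])            # each entry is the path taken so far
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path               # packet reached the destination
        for neighbour in mesh[node]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(path + [neighbour])
    return None                       # no route: the mesh is partitioned

print(deliver("alice", "erin"))       # e.g. ['alice', 'bob', 'dave', 'erin']
```

Note that every intermediate peer handles the packet along the way, which is exactly why confidentiality (end-to-end encryption) is such a big deal on a network like this.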
So the burn-in hit on day 534? Or why else would they test for such a weird length of time…
The other approach is not to try to block all non-approved internet sources, but instead to teach your child about the dangers out there and how to handle them.
If a young child becomes addicted to online porn for instance, it’s an indication of deeper issues and it seems to be missing the point to put the blame on network operators for not blocking children effectively enough. I don’t think a healthy well developed child would become addicted to porn in the first place.
That’s the real challenge for parents: they don’t need to be a part-time network über-wizard but rather a stable trustworthy figure for their children to rely on who can guide them through the often difficult journey of growing up.
they’re mammals though, sharing a common ancestor with pigs (who are also renowned for their intelligence)
I stopped paying for YouTube when they started cracking down on free users, and stopped using them pretty much entirely. It was hard though - even though I have Netflix, I always found it easier to find interesting and informative things to watch on YouTube than on Netflix. I’d watch YouTube several times a day, whereas with Netflix I usually spend about 10 or 15 minutes scrolling through their god-awful UI before closing it and finding something else to do.
Just had a look - $6 a month, based in NYC. Definitely better than giving YouTube money, for now at least. They say they have a 50/50 profit sharing model with creators - profit presumably is after salaries (including bonuses?) have been paid, so it’s not clear exactly how much of your subscription does in fact go to the video creators. Still, a better option than YouTube, if only to support competition.
If this were in Europe, someone being made redundant would typically be given several months’ pay, but it’s America, so he probably just got a t-shirt and a cardboard box.
A good tip before you get a laptop is to search its Amazon reviews for “Linux”. Even if you don’t buy it there, you’ll often find one or two Linux users saying how well everything worked, or didn’t.
Intelligence and consciousness are not related in the way you seem to think.
We’ve always known that you can have consciousness without a high level of intelligence (think of children, people with certain types of brain damage), and now for the first time, LLMs show us that you can have intelligence without consciousness.
It’s naive to think that as we continue to develop intelligent machines, suddenly one of them will become conscious once it reaches a particular level of intelligence. Did you suddenly become conscious once you hit the age of 14 or whatever and had finally developed a deep enough understanding of trigonometry or a solid enough grasp of the works of Mark Twain? No of course not, you became conscious at a very early age, when even a basic computer program could outsmart you, and you developed intelligence quite independently.
I’m going to repeat myself as your last paragraph seems to indicate you missed it: I’m *not* of the view that LLMs are capable of AGI, and I think it’s clear to every objective observer with an interest that no LLM has yet reached AGI. All I said is that like cats and rabbits and lizards and birds, LLMs do exhibit some degree of intelligence.
I have been enjoying talking with you, as it’s actually quite refreshing to discuss this with someone who doesn’t confuse consciousness and intelligence, as they are clearly not related. One of the things that LLMs do give us, for the first time, is a system which has intelligence - it has some kind of model of the universe, however primitive, to which it can apply logical rules, yet clearly it has zero consciousness.
You are making some big assumptions though - in particular, when you said an AGI would “have a subjective sense of self” as soon as it can “move, learn, predict, and update”. That’s a huge leap, and it feels a bit to me like you are close to making that schoolboy error of mixing up intelligence and consciousness.
“A member of the public, appreciating that the Maxwell grand jury materials do not contribute anything to public knowledge, might conclude that the Government’s motion for their unsealing was aimed not at ‘transparency’ but at diversion - aimed not at full disclosure but at the illusion of such,” the judge wrote. Another federal judge in Manhattan, Richard Berman, is weighing the Justice Department’s bid to unseal the grand jury records from Epstein’s case. Berman has not yet ruled.
I think current LLMs are already intelligent. I’d also say cats, mice, fish, birds are intelligent - to varying degrees of course.
I’d like to see examples of LLMs paired with sensorimotor systems, if you know of any
If you’re referring to my comment about hobbyist projects, I was just thinking of the sorts of things you’ll find on a search of sites like YouTube, perhaps this one is a good example (but I haven’t watched it as I’m avoiding YouTube). I don’t know if anyone has tried to incorporate a “learning to walk” type of stage into LLM training, but my point is that it would be perfectly possible, if there were reason to think it would give the LLM an edge.
The matter of how intelligent humans are is another question, and relevant because AFAIK when people talk about AGI now, they’re talking about an AI that can do better on average than a typical human at any arbitrary task. It’s not a particularly high bar, we’re not talking about super-intelligence I don’t think.
thanks for this very yummy response. I’m having to read up about the technicalities you’re touching on so bear with me!
According to wiki, the neocortex is only present in mammals, but as I’m sure you’re aware, mammals are not the only creatures to exhibit intelligence. Are you arguing that only mammals are capable of “general intelligence”? I can get on board with what you’re saying as *one way* to develop AGI - work out how brains do it and then copy that - but I don’t think it’s a given that that is the *only* way to AGI, even if we were to agree that only animals with a neocortex can have “general intelligence”. So the fact that a given class of machine architecture does not replicate a neocortex would not, in my mind, make that architecture incapable of ever achieving AGI.
As for your point about the importance of sensorimotor integration, I don’t see that being problematic for any kind of modern computer software - we can easily hook up any number of sensors to a computer, and likewise we can hook the computer up to electric motors, servos and so on. We could easily “install” an LLM inside a robot and allow it to control the robot’s movement based on the sensor data. Hobbyists have done this already, many times, and it would not be hard to add a sensorimotor stage to an LLM’s training.
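To make that concrete, a hobbyist version of the loop might look roughly like this (everything here is hypothetical: `read_sensors`, `set_motors` and `ask_model` are stand-ins for whatever hardware interface and model API you actually have, and a real LLM call would replace the dummy rule):

```python
import time

def read_sensors():
    """Hypothetical hardware interface: return the current sensor readings."""
    return {"distance_cm": 42.0, "bump": False, "battery": 0.87}

def set_motors(left, right):
    """Hypothetical hardware interface: drive the two wheel motors (-1.0..1.0)."""
    print(f"motors: left={left:+.2f} right={right:+.2f}")

def ask_model(observation):
    """Stand-in for a call to an LLM (or any policy) that maps a textual
    observation to an action name. Here it's just a dumb hard-coded rule."""
    if "bump=True" in observation:
        return "turn_left"
    return "forward"

ACTIONS = {
    "forward":   (0.5, 0.5),
    "turn_left": (-0.3, 0.3),
    "stop":      (0.0, 0.0),
}

# The sense -> decide -> act loop: the model only ever sees text,
# but its choices still close the loop through the robot's body.
for _ in range(3):
    readings = read_sensors()
    observation = " ".join(f"{k}={v}" for k, v in readings.items())
    action = ask_model(observation)
    set_motors(*ACTIONS.get(action, ACTIONS["stop"]))
    time.sleep(0.1)
```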
I do like what you’re saying and find it interesting and thought-provoking. It’s just that what you’ve said hasn’t convinced me that LLMs are incapable of ever achieving AGI for those reasons. I’m not of the view that LLMs *are* capable of AGI though, it’s more like something that I don’t personally feel well enough informed upon to have a firm view. It does seem unlikely to me that we’ve currently reached the limits of what LLMs are capable of, but who knows.
because they are non-sensing, stationary, and fundamentally not thinking
I don’t follow, why would a machine need to be able to move or have its own sensors in order to be AGI? And can you define what you mean by “thinking”?
According to the article, the lie Patrick Breyer accuses the current Danish chair of the rotating Council Presidency of telling is that the European Parliament will refuse to extend the current soon-to-expire voluntary scanning regime unless the EU Council first agrees to implement Chat Control: