Comments

gilch

I feel like this has come up before, but I'm not finding the post. You don't need the stick-on mirrors to eliminate the blind spot. I don't know why pointing side mirrors straight back is still so popular, but that's not the only way it's taught. I have since learned to set mine much wider.

This article explains the technique. (See the video.)

In a nutshell: while in the driver's seat, tilt your head to the left until it's almost touching your window, then, from that perspective, point the mirror straight back so you can just see the side of your car. (You might need a similar adjustment for the passenger's side, but those are often already wide-angle.) Now, from your normal position, you can see your former "blind spot". When you need to see straight back in your side mirror (like when backing out), just tilt your head again. Remember that you also have a center mirror. You should be able to see passing cars in your center mirror, then in your side mirror, then in your peripheral vision, without ever turning your head or completely losing sight of them.

gilch
  • It's not enough for a hypothesis to be consistent with the evidence; to count in favor, the evidence must be more consistent with the hypothesis than with its negation. How much more is how strong. (Likelihood ratios.)
  • Knowledge is probabilistic/uncertain (priors) and is updated based on the strength of the evidence. A lot of weak evidence can add up (or multiply, actually, unless you're using logarithms).
  • Your level of knowledge is usually not literally zero, even when uncertainty is very high, and you can start from there. (Upper/Lower bounds, Fermi estimates.) Don't say, "I don't know." You know a little.
  • A hypothesis can be made more ad-hoc to fit the evidence better, but this must lower its prior. (Occam's razor.)
    • The reverse of this also holds. Cutting out burdensome details makes the prior higher. Disjunctive claims get a higher prior, conjunctive claims lower.
    • Solomonoff's Lightsaber is the right way to think about this.
  • More direct evidence can "screen off" indirect evidence. If it's along the same causal chain, you're not allowed to count it twice.
  • Many so-called "logical fallacies" are correct Bayesian inferences.
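The updating rules in the bullets above can be sketched in a few lines of Python. This is a minimal illustration, not from the original comment: priors and evidence are expressed as odds, independent pieces of evidence multiply as likelihood ratios, and (as noted above) the multiplication becomes addition if you work in logarithms. The specific numbers are made up for the example.

```python
import math

def update_odds(prior_odds, likelihood_ratios):
    """Combine independent pieces of evidence by multiplying odds
    by each likelihood ratio in turn."""
    posterior_odds = prior_odds
    for lr in likelihood_ratios:
        posterior_odds *= lr
    return posterior_odds

def odds_to_prob(odds):
    """Convert odds (p / (1-p)) back to a probability."""
    return odds / (1 + odds)

# Prior odds of 1:9 (10% credence), then three weak pieces of evidence,
# each only twice as likely under the hypothesis as under its negation.
prior = 1 / 9
posterior = update_odds(prior, [2, 2, 2])
print(odds_to_prob(posterior))  # ~0.47: a lot of weak evidence adds up

# The same update in log-odds: multiplication becomes addition.
log_posterior = math.log(prior) + sum(math.log(lr) for lr in [2, 2, 2])
assert math.isclose(math.exp(log_posterior), posterior)
```

Note that multiplying likelihood ratios like this is only valid when the pieces of evidence are independent; as the "screening off" bullet says, evidence along the same causal chain can't be counted twice.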
gilch

French, but because my teacher tried to teach all of the days of the week at the same time, they still give me trouble.

They're named as the planets: Sun-day, Moon-day, Mars-day, Mercury-day, Jupiter-day, Venus-day, and Saturn-day.

It's easy to remember when you realize that the English names are just the equivalent Norse gods: Saturday, Sunday and Monday are obvious. Tyr's-day (god of combat, like Mars), Odin's-day (eloquent traveler god, like Mercury), Thor's-day (god of thunder and lightning, like Jupiter), and Freyja's-day (goddess of love, like Venus) are how we get the names Tuesday, Wednesday, Thursday, and Friday.

Answer by gilch

While an institution's reliability and bias can shift over time, I think AP and Reuters currently fit the bill. They report the facts the most reliably of any big-name general news sources I know of, without very much analysis or opinion. Their political leaning is nearly neutral or balanced, but maybe on the left side of the line (Reuters might be slightly less biased than AP, but still on the left side).

The Wall Street Journal is a little bit less reliable on the facts, also centrist, and on the right side of the line due to their business focus. If you read this too, it may help you counterbalance AP's and Reuters' slight left bias without going to the unreliable right-wing extremist sources.

If you want only one source, The Hill is about as nonpartisan as it gets (maybe a bit less reliable on the facts than the WSJ, but still pretty good). They report on both sides of the aisle. Their focus is, in their words, "on the inner workings of Congress and the nexus of politics and business".

[Epistemic status: I looked at the Ad Fontes Media Bias Chart. Exactly how impartial their judgements are, I can't say, but they do seem to try. Media Bias/Fact Check mostly agrees with these judgements, but I don't think they're any more reliable.]

That said, even an "impartial" news source (to the extent there is such a thing) is going to give you a very distorted view of the world due to selection biases and the Overton Window. "Newsworthy" stories are, by their nature, rare occurrences, and will tend to amplify your availability bias. Don't lose sight of base rates. Our World in Data should be worth exploring for that reason. They publish what they think is important rather than what is new.

gilch

Why is Google the biggest search engine even though it wasn't the first? It's because Google has a better signal-to-noise ratio than most search engines. PageRank cut through all the affiliate cruft when other search engines couldn't, and they've only continued to refine their algorithms.
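For a sense of how PageRank cuts through that cruft, here is a toy power-iteration sketch of the core idea (my own illustration with a hypothetical three-page graph; Google's actual implementation has many more refinements). Each page's rank is repeatedly redistributed along its outgoing links, with a damping factor, until the scores settle:

```python
# Toy PageRank via power iteration. `links` maps each node to the list
# of nodes it links to; every node must appear as a key.
def pagerank(links, damping=0.85, iters=50):
    nodes = list(links)
    n = len(nodes)
    rank = {node: 1 / n for node in nodes}
    for _ in range(iters):
        new = {node: (1 - damping) / n for node in nodes}
        for node, outs in links.items():
            if not outs:  # dangling page: spread its rank evenly
                for m in nodes:
                    new[m] += damping * rank[node] / n
            else:  # split this page's rank among the pages it links to
                for m in outs:
                    new[m] += damping * rank[node] / len(outs)
        rank = new
    return rank

# "hub" is linked to by both other pages, so it ends up ranked highest.
graph = {"a": ["hub"], "b": ["hub"], "hub": ["a"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # hub
```

The point of the link-based score is exactly the signal-to-noise argument: a page's rank depends on who links to it, which affiliate cruft can't easily fake.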

But still, haven't you noticed that when Wikipedia comes up in a Google search, you click that first? Even when it's not the top result? I do. Sometimes it's not even the article I'm after, but its external links. And then I think to myself, "Why didn't I just search Wikipedia in the first place?" Why do we do that? Because we expect to find what we're looking for there. We've learned from experience that Wikipedia has a better signal-to-noise ratio than a Google search.

If LessWrong and Wikipedia came up in the first page of a Google search, I'd click LessWrong first. Wouldn't you? Not from any sense of community obligation (I'm a lurker), but because I expect a higher probability of good information here. LessWrong has a better signal-to-noise ratio than Wikipedia.

LessWrong doesn't specialize in recipes or maps. Likewise, there's a lot you can find through Google that's not on Wikipedia (and good luck finding it if Google can't!), but we still choose Wikipedia over Google's top hit when available. What is on LessWrong is insightful, especially in normally noisy areas of inquiry.

gilch

the problem "How do we stop people from building dangerous AIs?" was "research how to build AIs".

Not quite. It was to research how to build friendly AIs. We haven't succeeded yet. What research progress we have made points to the problem being harder than initially thought, and capabilities turned out to be easier than most of us expected as well.

Methods normal people would consider to stop people from building dangerous AIs, like asking governments to make it illegal to build dangerous AIs, were considered gauche.

Considered by whom? Rationalists? The public? The public would not have been so supportive before ChatGPT, because most everybody didn't expect general AI so soon, if they thought about the topic at all. It wasn't an option at the time. Talking about this at all was weird, or at least niche, certainly not something one could reasonably expect politicians to care about. That has changed, but only recently.

I don't particularly disagree with your prescription in the short term, just your history. That said, politics isn't exactly our strong suit.

But even if we get a pause, this only buys us some time. In the long(er) term, I think either the Singularity or some kind of existential catastrophe is inevitable. Those are the attractor states. Our current economic growth isn't sustainable without technological progress to go with it. Without that, we're looking at civilizational collapse. But with that, we're looking at ever widening blast radii for accidents or misuse of more and more powerful technology. Either we get smarter about managing our collective problems, or they will eventually kill us. Friendly AI looked like the way to do that. If we solve that one problem, even without world cooperation, it solves all the others for us. It's probably not the only way, but it's not clear the alternatives are any easier. What would you suggest?

I can think of three alternatives.

First, the most mundane (but perhaps most difficult), would be an adequate world government. This would be an institution that could easily solve climate change, ban nuclear weapons (and wars in general), etc. Even modern stable democracies are mostly not competent enough. Autocracies are an obstacle, and some of them have nukes. We are not on track to get this any time soon, and much of the world is not on board with it, but I think progress in the area of good governance and institution building is worthwhile. Charter cities are among the things I see discussed here.

Second might be intelligence enhancement through brain-computer interfaces. Neuralink exists, but it's early days. So far, it's relatively low bandwidth. Probably enough to restore some sight to the blind and some action to the paralyzed, but not enough to make us any smarter. It might take AI assistance to get to that point any time soon, but current AIs are not able, and future ones will be even more of a risk. This would certainly be of interest to us.

Third would be intelligence enhancement through biotech/eugenics. I think this looks like encouraging the smartest to reproduce more rather than the misguided and inhumane attempts of the past to remove the deplorables from the gene pool. Biotech can speed this up with genetic screening and embryo selection. This seems like the approach most likely to actually work (short of actually solving alignment), but this would still take a generation or two at best. I don't think we can sustain a pause that long. Any enforcement regime would have too many holes to work indefinitely, and civilization is still in danger for the other reasons. Biological enhancement is also something I see discussed on LessWrong.

gilch

Yep. It would take a peculiar near-miss for an unfriendly AI to preserve Nature, but not humanity. Seemed obvious enough to me. Plants and animals are made of atoms it can use for something else.

By the way, I expect the rapidly expanding sphere of Darkness engulfing the Galaxy to happen even if things go well. The stars are enormous repositories of natural resources that happen to be on fire. We should put them out so they don't go to waste.

gilch

Humans instinctively like things like flowers and birdsong because, to our ancestors, they signaled a fertile area with food. We literally depended on Nature for our survival, and despite intensive agriculture, we aren't independent of it yet.

gilch

How did you manage to prompt these? My attempts with Stable Diffusion so far have usually not produced anything suitable.

gilch

If you can gently find out how he handles the internal contradictions (https://en.wikipedia.org/wiki/Internal_consistency_of_the_Bible), you've got a ready-made argument for taking some things figuratively.

If the Utah mention means the Mormons in particular, their standard answer is that the Bible is only correct "as far as it is translated correctly" (that phrasing appears in their extended canon), which is a motte they can always retreat to if one presses them too hard on Biblical correctness generally. However, that doesn't apply to the rest of their canon, so pressure may be more fruitful there. (If it's not the Mormons, the rest of my comment probably isn't relevant either.)

There is of course the "which bible?" question. Irrefutable proof of the veracity of the old testament, if someone had it, wouldn't answer the question of which modern religion incorporating it is "most correct".

The Book of Mormon would at least narrow it down to the LDS movement, although there have been a few small schisms in their relatively short history.

if he does an experiment, replicate that experiment for yourself and share the results. If you get different results, examine why. IMO, attempting in good faith to replicate whatever experiments have convinced him that the world works differently from how he previously thought would be the best steelman for someone framing religion as rationalism.

Disagree with this one. The experiment the Mormon missionaries will insist on is Moroni's Promise: read the Book of Mormon and then pray to God for a spiritual confirmation. The main problem with this experiment should be obvious to any good scientist: no controls. To be fair, one should try the experiment on many other books (holy or otherwise) to see if there are any other hits. Also, a null result is invariably interpreted as failing to do the experiment correctly, because the outcome is guaranteed by God, see, it's right there in the book. The inability to accept a negative outcome is also rather unscientific. And finally, a "spiritual confirmation" will be interpreted for you as coming from (their particular version of) God, rather than as some other kind of human emotional response, which, as we all know, can be achieved in numerous other ways that don't particularly rely on God as an explanation. Make the experiment fair before you agree to play with a stacked deck!
