
TauZero ,

I was apprehensive about EVs, but the first time I rode in one I immediately fell in love with it. I get carsick easily, and the super-smooth ride without the chug-chug-chug of an internal combustion engine made the experience much more pleasant than I expected. I do not use a car, but if I had to buy one, I don't think I could ever stomach an ICE again knowing that this alternative is available.

TauZero ,

I knew that motion sickness is triggered by frequent starts, stops, and turns, but I was not aware of how big a contribution the engine vibration makes until I got to experience a ride without it.

TauZero ,

Given a radiative forcing coefficient of ln(new ppm / old ppm) / ln(2) * 3.7 W/m**2, I have previously calculated that for every 1 kWh of electricity generated from natural gas, an additional 2.2 kWh of heat is dumped into the atmosphere by the greenhouse effect in every year thereafter (for the 1000+ years that the resulting carbon dioxide remains in the air). So while the initial numbers are similar, you have to remember that the heat you generate is a one-time release (it dissipates into space as infrared radiation), but the greenhouse effect sticks around in perpetuity, accumulating from year to year. If you are consuming 1 kW of fossil electricity on average, after 100 years you are still only generating 1.67 kW of heat (1 kW from your devices and 0.67 kW of waste heat from a 60%-efficient power plant), but you also get an extra 220 kW of heat from the accumulated greenhouse gas.
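Here is that arithmetic as a quick Python sketch (my own re-derivation; the 420 ppm baseline and the 5.15e18 kg atmosphere mass are values I'm plugging in, not part of the original calculation):

```python
import math

# Sketch: burn enough methane (891 kJ/mol) for 1 kWh of electricity at 60%
# efficiency, then ask how much extra heat the emitted CO2 traps per year.
N_atm = 5.15e18 / 0.02897          # moles of air in the atmosphere
ppm0 = 420                         # assumed current CO2 concentration
area = 4 * math.pi * 6.371e6**2    # Earth's surface area, m^2

mol_CO2 = 3.6e6 / (891e3 * 0.6)    # mol CH4 burned (= mol CO2 emitted) per kWh
d_ppm = mol_CO2 / N_atm * 1e6      # resulting bump in CO2 concentration

forcing = 3.7 / math.log(2) * math.log1p(d_ppm / ppm0)   # W/m^2
print(forcing * area * 8766 / 1000)   # ~2.2 kWh of trapped heat per year
```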

I have wondered this question myself, and it does appear that the heat from the fossil/nuclear power itself is negligible over the long term compared to the greenhouse effect. At least until you reach a Kardashev type I civilization level and have so many nuclear/fusion reactors that they noticeably raise the global temperature and necessitate special radiators.

TauZero ,

That’s how I found out that my desktop speakers consume power even with the physical button off and the status light dark. The power brick stays warm indefinitely - it feels like a good 20W! I have to unplug that thing now when not in use. A normal power brick will be <1W of course.

TauZero ,

A funny culprit I found during my own investigation was the GFCI bathroom outlet, which draws an impressive 4W. The status light plus whatever trickle current it uses to do its job thus dwarfs the standby power of any other electronic device.

Is there an easy way to generate a list of CMYK color values that will appear identical to the human eye under 589nm light?

I picked up a low pressure sodium lamp and am working on a Halloween demonstration. I’m hoping to make a display that appears one way under normal light, but looks totally different under the monochromatic 589nm sodium vapor light....

TauZero ,

That’s true! Using RGB alone will not be enough to calculate this! Two materials that might appear equally yellow under white sunlight may appear different shades of yellow under sodium light. Technology Connections did a great video about the difference: piped.video/watch?v=uYbdx4I7STg

edit: he starts talking about sodium light in particular at 11:14

TauZero ,

No, physics is never vague. Some problems are currently computationally intractable beyond a specific level of accuracy (like predicting the weather more than 2 weeks out), and some questions we do not know the answers to yet but expect to answer in the future (like why the big bang happened). But there is never an element of the mysterious or spiritual or “man can never know”.

Popular science physics often gets mysterious, but that is a failure of popularization. Like the observer effect in quantum physics, which is often depicted as a literal eyeball watching the photons go through a double slit (?!). This may cause a naive viewer to mistakenly start thinking there is something special or unique about the human eyeball and the human consciousness that physics cannot explain. It gets even worse - one of the most popular double slit videos on youtube, for example, is “Dr. Quantum explains the double slit experiment” (not even gonna link it), which is not only laughably misleading, but upon closer examination was produced by a literal UFO cult, which surreptitiously used the video to funnel in new members.

Or the “delayed choice quantum eraser experiment”, which confounded me for years (“What’s that? I personally can make a choice now that retroactively changes events that have already happened in the past? I have magical time powers?”), until I grew tired of being bamboozled by all its popular science depictions and dug up the original research paper it is based on. Surprise! Having read the paper, I now understand exactly how the experiment works, why the result is sensible and not mysterious at all, and why I don’t have magical powers. Sabine Hossenfelder’s youtube video debunking the delayed-choice quantum eraser was the first, and so far one of only two videos I have seen in the years since, that also used the actual paper. That immediately made me respect her, regardless of all the controversy she has accumulated before or since.

TauZero ,

On the subject of Heisenberg Uncertainty - even there I blame popular science for having misled me! “You can’t know precise position and momentum at once” - sounds great! So mysterious! If you dig a little deeper, you might even get an explanation like: to measure the position of something, you have to bombard it with particles (photons, electrons), and when it’s hit, its velocity changes in a way you cannot know. The smaller that something is, and the harder you bombard it to pin down its position, the more uncertainty you get.

All misleading! It was not until taking an actual physics class, where I learned how to calculate HU, that I realized that not only is HU the result of simple mathematics, but that it also incidentally solves the thousands-of-years-old Zeno paradox almost as a side lemma - a really cool fact that no one had taught me anywhere before!

Basically the wavefunction is the only thing that exists. The function for a single particle is a value assigned to every point in space, the values can be complex numbers, and the Schroedinger equation defines how the values change over time, depending on the nearby values at the present moment. That function is the particle’s position (or rather, its squared absolute magnitude gives the probability of finding the particle at each point) - if it is non-zero at more than one point, we say that the particle is present in two places at once. What is the particle’s velocity? In computer games, each object has a value for a position and a value for a velocity. In quantum mechanics, there is no second value for velocity. The wavefunction is all that exists. To get a number that you can interpret as the velocity, you need to take the Fourier transform of the position function. And you don’t get one number out, you get a spectrum.

In one dimension, what is the Fourier transform of the delta function (a particle with exactly one position)? It is a constant function that is non-zero everywhere! (More precisely, it is a corkscrew in the complex values, where the angle rotates around but the magnitude remains the same.) A particle with one position has every possible momentum at once! What is the Fourier transform of a complex-valued corkscrew? A delta function! Only a particle that is present everywhere at once can be said to have a precise momentum! The chirality of the particle’s corkscrew position function determines whether it is moving to the left or to the right. Zeno could not have known! Even if you look at an infinitesimal instant of time, the arrow’s speed and direction are already well-defined, encoded in that arrow’s instantaneous position function!

If you try to imagine a function that minimizes uncertainty in both position and momentum at once, you end up with a wavepacket - a normal-distribution-shaped peak that is equally, minimally wide in both position and momentum space. If it were any narrower in one, it would be that much wider in the other. The product of those two widths is precisely the minimum possible value in that famous Δx*Δp >= ħ/2 equation. It wasn’t ever about bombardment at all! It was just a mathematical consequence of using Fourier transforms.
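If you want to watch the bound fall out numerically rather than analytically, here is a small numpy toy of exactly that construction (my own sketch, with ħ = 1):

```python
import numpy as np

# Build a Gaussian wavepacket, FFT it into momentum space, and check that
# the product of the two spreads sits right at the hbar/2 bound.
hbar = 1.0
N = 4096
x = np.linspace(-50, 50, N)
dx = x[1] - x[0]
sigma = 2.0

psi = np.exp(-x**2 / (4 * sigma**2))           # position-space Gaussian
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)    # normalize

p = hbar * 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=dx))
dp = p[1] - p[0]
phi = np.fft.fftshift(np.fft.fft(psi))         # momentum-space amplitudes
phi /= np.sqrt(np.sum(np.abs(phi)**2) * dp)    # normalize

def spread(v, density, dv):
    mean = np.sum(v * density) * dv
    return np.sqrt(np.sum((v - mean)**2 * density) * dv)

print(spread(x, np.abs(psi)**2, dx) * spread(p, np.abs(phi)**2, dp))
# -> 0.5000..., i.e. exactly hbar/2: the Gaussian saturates the bound
```

Make the packet narrower in x and the momentum spread balloons, exactly as the Fourier picture demands.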

TauZero ,

Oh yeah for sure, I don’t mean at all to say that all questions have been answered, and even the answers that we do have get more and more vague as you move down the science purity ladder. If all questions were solved, we would be out of a job for one! But I choose to interpret OP’s question as “is there anything unknowable?”. That’s the question relevant to our world right now, and I often disagree with the world view implied by popular science - that the world is full of wonder but also mystery. The mystery is not fundamental, but rather an expression of our individual and collective ignorance. There are even plenty of questions, like the delayed-choice quantum eraser, that have already been solved, and yet they keep popping up as examples of the unknowable, one even sniped me in this very thread (hi there!). Then people say “you do not even know whether eating eggs is good for you” and therefore by implication we shouldn’t listen to scientists at all. In that sense, the proliferation of the idea of mystery has become dangerous. The answer to unanswered questions is not gnosticism, it is as you said “further research” 😄!

TauZero ,

Thank you for your perspective! I found it really informative!

TauZero ,

Have we watched the same Sabine video? The delayed choice quantum eraser has nothing to do with interpretations of quantum mechanics, insofar as every interpretation (Copenhagen, de Broglie-Bohm, Many-Worlds) predicts the same outcome, which is also the one observed. The “solution” to DCQEE is a matter of simple accounting. And every single popular science DCQEE video GETS IT WRONG. The omission is so reckless it borders on malicious IMO.

For example, in that PBS video linked in this very thread, what does the host say at 7:07?

https://mander.xyz/pictrs/image/3c3f75a3-816e-4a7b-91e5-53a57ee5dc69.jpeg

If we only look at photons whose twins end up at detectors C or D, we do see an interference pattern. It looks like the simple act of scrambling the which-way information retroactively [makes the interference pattern appear].

This is NOT WHAT THE PAPER SAYS OR SHOWS! On page 4 it is clear that figure R01 is the joint detection rate between screen and detector C-only! (Screen = D0, C = D1, D = D2, A = D3, B omitted). If you look at photons whose twins end up at detectors C inclusive-OR D, you DO NOT SEE AN INTERFERENCE PATTERN. (The paper doesn’t show that figure, you have to add figures R01 and R02 together yourself, and the peaks of one fill the troughs of the other because they are offset by phase π.) You get only 2 big peaks in total, just like in the standard which-way double slit experiment. The 2 peaks do NOT change retroactively no matter what choices you make! You NEED the information of whether detector C or D got activated to account which group (R01 or R02) to assign your detection event to! Only after you do the accounting can you see the two overlapping interference patterns within the data you already have and which itself does not change. If you consumed your twin photon at detector A or B to record which-way information, you cannot do the accounting! You only get one peak or the other (figure R03).

It’s a very tiny difference - conditioning on detector C alone versus pooling C inclusive-OR D - but in this case it makes ALL the difference. For years I was mystified by the DCQEE and how it supposedly exposes retrocausality, and it turns out every single video simply lied to me.

TauZero ,

It’s not a problem for Copenhagen, if that’s the interpretation you are referring to. Yes, the first photon “collapses” when it strikes the screen, but it still went through both slits. Even in Copenhagen both slit paths are taken at once; the photon doesn’t collapse when it goes through the slit, it collapses later. When the first photon hits the screen and collapses, that doesn’t mean its twin photon collapses too. Where would it even collapse to, one path or the other? Why? The first photon didn’t take only one path! The twin photon is still in flight and still in superposition, taking both paths, and reflecting off both mirrors.

TauZero ,

Ok, I thought about it some more, and I want to make a correction to my description! The twin photon does collapse, but it doesn’t collapse to a single point or a single path. It collapses to a different superposition, a subset of its original wavefunction.

I understand it is an option even under Copenhagen. So in your two-electron example, where you have 1/sqrt(2)(|z+z-> + |z-z+>), when you measure your first electron, what if instead of measuring it along z+ you measure it along z+45°? It collapses into up or down along that axis (let’s say up), and the entangled second electron collapses too, but it doesn’t collapse into z-135°! The second electron collapses into a superposition of (I think) sin(22.5°) |z+> + cos(22.5°) |z->. I.e. when you measure the second electron along z+, you get about 15% up and 85% down. The second electron is correlated with the first, but it is no longer the exact opposite of the first, because the axis you measured the first at was not z+ but inclined to it. Measured along z, it is not a pure state - it is still in superposition.
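The 2x2 linear algebra is small enough to check by machine; here is a throwaway numpy version of that projection (my own sketch, standard spin-up/spin-down column vectors):

```python
import numpy as np

# Take the entangled pair 1/sqrt(2)(|z+ z-> + |z- z+>), find the first
# electron "up" along the axis tilted 45 deg from z, and read off what the
# second electron collapses into.
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
pair = (np.kron(up, down) + np.kron(down, up)) / np.sqrt(2)

theta = np.deg2rad(45)
n_up = np.cos(theta / 2) * up + np.sin(theta / 2) * down  # |up> along z+45 deg

M = pair.reshape(2, 2)   # M[i, j] = amplitude of |i>_first |j>_second
chi = n_up @ M           # second electron's (unnormalized) collapsed state
chi /= np.linalg.norm(chi)
print(np.abs(chi)**2)    # -> [0.146, 0.854]: ~15% z+, ~85% z-
```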

So back to the quantum eraser experiment, when the first photon hits the screen D0 and collapses, say at position 1.5, the twin photon collapses to a sub-superposition of its state, something like 80% path A and 20% path B. It still takes both paths, but in such a manner that if you choose to measure which-path information at detector D3 it will be more strongly correlated with path A, and if you choose to measure the self-interference signal from the mirror at D1 or D2, it will still self-interfere and will be more strongly correlated with detector D1. What do you think?

TauZero ,

They use the Lambda-CDM model, which outputs the rate of expansion of the universe at every moment in the past, present, and future. You measure the amount of light + matter + dark matter + dark energy that your universe has, plug those values into the Friedmann equation, and it spits out the rate.

You can try out an online calculator yourself! It already has those values filled in; all you need to do is enter the z value - the “redshift” - and click generate. So for example, when you hear in the news something like “astronomers took a photo of a galaxy at redshift 3”, you put in 3 for “z”, and you see that the galaxy is 21.1 Gly (billions of light years) away! That’s the “comoving distance”, a convenient way to define distance on cosmic scales that is independent of expansion rate or the speed of light. It’s the same definition of distance that gives you that “46 Gly” value for the size of the observable universe. But the light from that galaxy only took 11.5 Gyr to reach us. The universe was 2.2 Gyr old when the light started. So the light itself only traveled 11.5 Gly of distance, but that distance is 21.1 Gly long right now because it kept expanding behind the photon.
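Under the hood such a calculator is just integrating the Friedmann equation; here is a rough scipy sketch (my own, flat ΛCDM with radiation ignored) that reproduces those z = 3 numbers:

```python
import numpy as np
from scipy.integrate import quad

H0 = 69.6 * 1000 / 3.0857e22     # km/s/Mpc -> 1/s
Om, OL = 0.286, 0.714            # matter and dark energy fractions
c = 2.9979e8                     # m/s
Gly = 9.4607e24                  # meters per billion light years
Gyr = 3.1557e16                  # seconds per billion years

E = lambda z: np.sqrt(Om * (1 + z)**3 + OL)   # H(z)/H0

z = 3.0
D_C = c / H0 * quad(lambda zz: 1 / E(zz), 0, z)[0]                # comoving
t_lb = 1 / H0 * quad(lambda zz: 1 / ((1 + zz) * E(zz)), 0, z)[0]  # lookback

print(D_C / Gly)    # ~21.1 Gly comoving distance
print(t_lb / Gyr)   # ~11.5 Gyr light travel time
```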

TauZero ,

That’s true! There is a kind of incestuous relationship between the cosmic distance measurements and the cosmic model. Astronomers are able to measure parallax only out to about 1000 parsecs, and standard candles like type Ia supernovae out to a hundred megaparsecs. But the universe is much bigger than that. So as I understand it, they end up climbing a kind of cosmic distance ladder, where they plug the measured distances up to 100 Mpc into the ΛCDM model to calculate the best-fit values for the amounts of matter/dark matter and dark energy. Then they plug those values along with the redshift into the model to calculate the distances to ever more distant objects like quasars, the Cosmic Microwave Background, or the age of the universe itself. Then they use observations of those distant objects to plug right back into the model and refine it. So those values - 28.6% matter, 71.4% dark energy, 69.6 km/s/Mpc Hubble constant, 13.7 billion years age of the universe - are not the result of any single observation, but the combination of all observations taken to date. These values have been fluctuating slightly in my lifetime as ever more detailed and innovative observations have flowed in.

Are you an astronomer? Maybe you can help me, I’ve been thinking - how do you even measure the redshift of the CMB? Say we know that CMB is at redshift 1100z and the surface of last scattering is 45.5 GLy comoving distance away. There is no actual way to measure that distance directly, right? Plugging in the redshift into the model calculator is the only way? And how do we know it’s 1100? Is there some radioastronomy spectroscopy way to detect elemental spectral lines in the CMB, or is that too difficult?

If we match the CMB to the blackbody radiation spectrum, we can say that its temperature is 2.726K. Then, if we assume the temperature of the interstellar gas at the moment of recombination was 3000K, we get the 1100z figure. Is that the only way to do it? By using external knowledge of plasma physics to guess at the 3000K value?
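That ratio is the whole calculation, for what it's worth:

```python
# Expansion stretches the blackbody spectrum, so 1 + z = T_emitted / T_observed:
print(3000 / 2.726 - 1)   # ~1100
```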

TauZero ,

And now that they’re further apart, they separate even faster the next second.

That’s a common misconception! Barring the effects of matter and dark energy, the two points do NOT separate faster as they get farther apart; the speed stays the same! The Hubble constant H0 is defined for the present. If you are talking about one second in the future, you have to use the Hubble parameter H, which in an empty universe falls off as 1/t (the Hubble “constant” is just its present-day value). So instead of 70 km/s/Mpc, in your one-second-in-the-future example the Hubble parameter will be 70 * age of the universe / (age of the universe + 1 second) = 69.999…9, and your two test particles will still be moving apart at 70000km/s exactly.

The inclusion of dark energy does mean that the recession velocity of any given distant galaxy increases with time (even though the Hubble parameter itself keeps decreasing, leveling off at a constant floor rather than growing), but that’s not what you meant. Moreover, the recession velocities haven’t always been increasing! They were decreasing for most of the age of the universe! The trend only reversed about 5 billion years ago, when the effects of matter became less dominant than the effects of dark energy. This is why cosmologists were worried about the idea of a Big Crunch for a while - if there had been a bit more matter, the expansion could have slowed down to zero and reversed entirely!
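The crossover epoch falls out of the Friedmann equation in one line - a quick check (flat ΛCDM with the parameter values quoted elsewhere in this thread; my own sketch):

```python
# The expansion stops decelerating when matter's pull drops below dark
# energy's push, i.e. when Om / a**3 = 2 * OL.
Om, OL = 0.286, 0.714
a_flip = (Om / (2 * OL)) ** (1 / 3)
print(a_flip, 1 / a_flip - 1)   # a ~ 0.585, z ~ 0.71: roughly 5-6 Gyr ago
```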

TauZero ,

So if you measured the second electron along z+ and got up, then if you measured the first electron again, this time along z+, it would give down.

Right! So what happens when you have two z+z- entangled electrons, and you measure one along z+45° and then the other along z+0°? What would happen if you measure the second electron along z+45° as well?

TauZero ,

Hmm interesting. I may have been mistaken about the electrons only being entangled in a single direction. I thought that if you prepared a pair of electrons in state 1/sqrt(2) (|z+z-> + |z-z+>) and then measured it in y there would be no correlation, but based on: …stackexchange.com/…/intuition-for-results-of-a-m…
…stackexchange.com/…/what-is-the-quantum-state-of…
if I had done the 90° rotation properly, the math works out such that the electrons would still be entangled in the new y+ basis! There is no way to entangle them in z alone - if they are entangled in z, they are also entangled in x and y. My math skills were 20 years rusty, sorry!

I still think my original proposition - that in the DCQEE under Copenhagen, an observation that collapses one photon collapses the other photon to a sub-superposition - can be salvaged. In the second stackexchange link we are reminded that for a single electron, the superposition state 1/sqrt(2) (|y+> - |y->) is the same as the |z+> state! They describe the same wavefunction psi, expressed in different bases: (y+,y-) vs. (z+,z-). When we take a single electron in superposition 1/sqrt(2) (|z+> + |z->) and measure it in z, and it collapses to, say, z+, we know that it is a pure state in the z basis, but expressed in the y basis it is now a superposition of 1/sqrt(2) (|y+> - |y->)! Indeed, if we measure it now in y, we will get 50% y+ and 50% y-.
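The 50/50 part is convention-proof and takes three lines to verify (my own sketch, using the standard |y±> = (|z+> ± i|z->)/sqrt(2) spinors):

```python
import numpy as np

z_plus = np.array([1, 0], dtype=complex)
y_plus = np.array([1, 1j]) / np.sqrt(2)
y_minus = np.array([1, -1j]) / np.sqrt(2)

# A pure |z+> state, looked at through the y basis:
print(abs(np.vdot(y_plus, z_plus))**2)    # 0.5
print(abs(np.vdot(y_minus, z_plus))**2)   # 0.5
```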

So in DCQEE, when you collapse the first photon into a single position on the screen, the twin photon does collapse, but the state it collapses into is not a single position! It’s some weird agglomeration of them. If you were to take that “pure” state and express it in the position basis, you would get a superposition of, say, 80% path A and 20% path B.

TauZero ,

50% y+ and 50% y- is how all [electrons] start

Yeah, but when you start with a 50% z+ / 50% z- electron, and you measure it and get say z+, it is now 100% z+, right? If you measure it again, you will always get z+. And then you give a bunch of them to your buddy with an identical lab and an identical Stern-Gerlach apparatus, and they say “hey, I measured your electrons that you said were 100% z+, and I’m getting 50% z+ 50% z-”. And you say “dude! your lab is in China! your z+ is my y+! you have to do a coordinate rotation and basis substitution! if you look at my pure electron in your sideways basis, it’s in superposition for you”.

When the first photon hits the screen, the basis is the screen basis. Each position on the screen - 1.4, 1.5, 1.6, etc. - is an eigenvector, and the first photon collapses to one of those eigenvectors. The second photon collapses too, but you are wrongly equating the positions on the screen and the positions on paths A/B as if they were in the same basis. They are not! You were just misled into thinking they are the same basis because they are both named “position”, but they are as different as the z+ axis in America is from the z+ axis in China.

The second photon collapses into the screen-basis eigenvector 1.5, but that 1.5 does not correspond to any single location on path A or path B. If you do the basis substitution from the screen basis into the path basis, you get something like 80% path A and 20% path B (and something weird with the phases too, I bet). Does that sound accurate?

TauZero ,

This is the way! It helps me to imagine what it would look like if the atmosphere consisted of a single nitrogen molecule. You place it on the ground, but the ground has temperature (is warm), so your one molecule gets launched up into the vacuum on a parabolic trajectory at 500 m/s on average. Launched at 45° it would reach 6km up and fall back down; at 90°, 12km up - and that’s just the average. Some molecules get launched faster and higher (following the long tail of the Boltzmann distribution), and hydrogen and helium faster still because they are lighter. A few hydrogen molecules get launched at speeds above 11km/s, which is above Earth’s escape velocity, so they escape and never fall down.

When you have many air molecules, they hit each other on the way up (and down), but because their collisions must be perfectly elastic, mathematically it works out that the overall velocities are preserved. So when your one nitrogen molecule gets launched up but on its way hits another identical molecule, you can think of them equivalently as passing through each other without colliding at all. (Yes, mathematically they can also scatter in some other random directions, but the important part is that your original molecule is equally likely to be boosted further upwards as opposed to impeded.)

The end result is that the majority of the atmosphere stays below 12km, density goes down as you go up (though never quite reaching zero), and hydrogen and helium continuously escape to space to the point that none are left.
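The rough numbers behind that picture, if you want to play with them (my own back-of-envelope; the 288 K ground temperature is an assumption):

```python
import math

k_B = 1.38e-23   # J/K
T = 288.0        # assumed ground temperature, K
g = 9.8          # m/s^2

def mean_speed(molar_mass_kg):
    m = molar_mass_kg / 6.022e23                     # one molecule's mass
    return math.sqrt(8 * k_B * T / (math.pi * m))    # Maxwell-Boltzmann mean

for name, M in [("N2", 0.028), ("H2", 0.002)]:
    v = mean_speed(M)
    apex = v**2 / (2 * g) / 1000                     # straight-up apex, km
    print(name, round(v), "m/s,", round(apex, 1), "km apex")
# N2: ~470 m/s and ~11 km; H2: ~1750 m/s - and the Boltzmann tail tops 11 km/s
```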

TauZero ,

You can’t be attractive if you never reached the food and are now dead.

TauZero ,

The moth still eats a shitton in its larva stage. You can’t cheat physics 😂.

If it were possible for some event to destroy the fabric of spacetime at the speed of light, could we still observe and be safe bc expansion?

Just a thought, if an event happened well beyond the observable universe that caused entire galaxies to be destroyed radiating from a point source event, what would it look like from our perspective and how close could it get on our observable horizon while still being unable to reach us due to expansion of the universe?...

TauZero ,

You are right! People often say “what if the sun blew up” in the context of gravity-speed vs. light-speed thought experiments, but what they really mean is shorthand for “what if the entire sun were somehow deleted in a single instant with no trace”. In reality, “blowing up” the sun is very different from “deleting” it: it leaves the entire mass behind, just spread around more.

There is even a theorem in general relativity that proves that mass-energy cannot simply be deleted, invalidating a whole swath of such thought experiments. I forget what it’s called though.

TauZero ,

The best we can achieve in this thought experiment is to see through a telescope some faraway alien set up a bomb with a countdown timer that will surely blow up at a specific time in the future and destroy the universe, but which we’ll never see count down to zero or explode. If we saw it reach zero it would of course kill us in the same instant as we see it, because by the rules of the thought experiment the explosion travels at the speed of light. But if the alien is far away and the countdown is long enough, the accelerating expansion of the universe due to dark energy will carry it outside of our cosmic event horizon before it explodes.

Using the cosmic comoving distance definition and the cosmology calculator, the last scattering surface of the Cosmic Microwave Background for example is 45.5 GLy away. Its light was emitted 13.7 GY ago (400kY after the Big Bang) at redshift 1100z. I was told that due to accelerating expansion, we will never see galaxies further than 63 GLy away (we don’t see them yet, the matter that we’ll see form them is beyond the CMB sphere for us at present), and if we hopped onto a lightspeed spaceship right now, we can never reach galaxies beyond 17 GLy comoving distance.

So for example if we looked at a galaxy at redshift 3z which is 21 GLy away, and whose light took 11.5 GY to reach us, and saw the alien set up the bomb timer to 11.49 GY, we know that the bomb must have surely exploded by now, but also know that we are safe because it’s far enough away and we’ll never see it explode, even in the infinite future.

Similarly, we can relish the tiny shred of joy in the knowledge that if we did fuck up something really major, like creating a false vacuum bubble in the LHC or whatever, we can never destroy more of the universe than the 17 GLy bubble around us.

In terms of kWh per kWh, by how much does greenhouse CO2 from running an air-conditioner heat up the rest of the Earth?

It is said that ACs are counterproductive in the fight against global warming, in that while they may make the local environment temporarily livable, the greenhouse gases produced while making the electricity needed to operate them heat up the rest of the Earth by much more than the relief from the AC itself. By how much exactly is...

TauZero OP ,

Yes, ideally all AC will be running off solar, but that’s not the case at the moment. My state has thankfully closed its last coal power plant, but also shut down one of its nuclear plants, using gas to replace both. We are now running at 50% gas, 20% nuclear, 20% hydro, and 10% wind/solar. Which is why I wanted to focus on methane in this specific calculation: when deciding “is it OK for me to run the AC now, or is the long-term global heating side-effect too great?”, natural gas is what is relevant to me.

How “great” that is is precisely the question here, and apparently it’s 2.2x. If you are really a stickler for exact real-life electricity production piechart distribution, multiply that by 50% gas and call it 1.1x. That is, for every year that I run my 1kW AC, that’s as if I am airdropping a 1.1kW heater to a random location on Earth that will heat it up at 1.1kW forever. 10 years = 11 random heaters. 8 billion people = 88 billion random heaters. Is that “too great”? I dunno.

Winter heating is its own problem, but at least cold can always be dealt with by more insulation and clothing. Heat can literally make whole areas of Earth unsurvivable without electrical cooling. Would I rather feel more comfortable now or choose to be able to survive without mechanical aids later?

TauZero OP ,

Numpy won’t tell me what ln(74000000000000006.7/74000000000000000).

Ran into exactly this problem for individual calculation 😆. Which is also why I multiplied by 8 billion and divided in the end - make the calculator behave. ln is linear enough around 1±epsilon to allow this.

implies that the radiative forcing from CO2 is much greater than the energy to produce the CO2 in the first place

That’s what I wanted to find out, and it does appear to be exactly that way. It makes sense in retrospect, since the radiative forcing is separate from the energy content of the CO2 itself - the same way a greenhouse gets hot with no energy expended of its own.

TauZero OP ,

Good point! Freon (CFC-12, with 10800x the warming potential of CO2) has thankfully been banned by the Montreal Protocol of 1987, and HCFC-22 (5280x) is being phased out. We are using what now, HFC-32 at 2430x? (Note those warming potentials are defined per kilogram, not per mole.) How much refrigerant does an AC contain, about a mole? I’ve been taught that refrigerant should normally never leak throughout the lifetime of the appliance (technicians are even prohibited from “recharging” refrigerant without identifying and fixing the point of the leak first) and that all gas must be recovered at end-of-life, but we can’t be sure that’s really what happens every time.

In that case leaking 1 mole of HFC-32 would be equivalent to… running the 1kW AC for about 430 hours?


```
1 mol HFC-32 * 52 g/mol = 52 g HFC-32
52 g * 2430 (g CO2 / g HFC-32) = 126 kg CO2 = 2870 mol CO2
2870 (mol CO2) * 1 (mol CH4 / mol CO2) * 891 kJ/mol * 0.6 / 1 kW * (1 h / 3600 s) ≈ 427 h
```
TauZero OP ,

Your skepticism is excessively cautious 😁. You can work around precision limits perfectly fine as long as you are aware they exist. Multiplying your epsilon up and dividing it back out later is a legitimate strategy, since every (differentiable) function is linear on a small enough scale! You can even declare that ln(1+x) ~= x and skip the logarithm calculation entirely. Using some random full-precision calculator I get:


```
ln((74e15+6.7)/74e15) = 0.000000000000000090540540...
```

Compare to the double-precision calculator with workaround:


```
ln((74e15 + 6.7*10e9)/74e15) / 10e9 = 9.0540499...e-17
```

Or even:


```
ln(1+x) ~= x
6.7/74e15 = 9.0540540...e-17
```

You are worried about differences in the final answer of less than 1 part in a million! I try to do my example calculations to 3 significant figures, so that’s not even a blip among the intermediate roundoffs.
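And if you do want the full digits without any rescaling trick, most languages ship a log1p for exactly this case; in Python for example:

```python
import math

math.log(1 + 6.7/74e15)   # -> 0.0: the 1 swallows the tiny epsilon entirely
math.log1p(6.7/74e15)     # -> 9.054054054054054e-17
```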

How do I calculate if a test like this is statistically significant?

I let people rate how much they like different things on a scale of 1-10. How do I actually tell if people like one thing more than another thing if the sample sizes are different? This is not about any real scientific study, more like a personal test :)...

TauZero ,

Your situation reminded me of the way IMDB sorts movies by rating, even though different movies may receive vastly different total numbers of votes. They use something called a credibility formula, which is apparently a Bayesian-statistics way of doing it, unlike the frequentist statistics with p-values and null hypotheses that you are looking for atm.
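The formula itself is tiny; here is a sketch of the version IMDB used to publish for its old Top 250 (R, v are the item's mean rating and vote count; C, m are the global mean and a chosen credibility threshold):

```python
def weighted_rating(R, v, C, m):
    # Shrink the item's mean toward the global mean until it has
    # enough votes to stand on its own.
    return (v / (v + m)) * R + (m / (v + m)) * C

# 20 votes of 9.2 barely budge a 7.0 global mean when m = 100:
print(weighted_rating(R=9.2, v=20, C=7.0, m=100))   # ~7.37
```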

TauZero ,

Wow, you can even see the graffiti in that exact spot in the street view! Right on the rail, you know, where the side wheels go. Never thought I’d need to add “concrete rails create a flat-surface attractive nuisance for graffiti” to the monorail “con” list.

https://mander.xyz/pictrs/image/5225e553-ed8a-495b-916e-5c050af01f50.jpeg

TauZero ,

If the black hole specifically disappeared, it would have no effect on us. The solar system would not even be launched on a 100 million year trajectory out of the galaxy, as galactic rotation is dependent on the masses of stellar and interstellar matter in the disk and dark matter in the halo. The supermassive galactic black holes, despite being supermassive, still only make up a tiny percentage of total galactic mass.

If you want to wow your friends, tell them about false vacuum decay. There could be bubbles of true vacuum expanding towards us in space from multiple directions at lightspeed, with no way of knowing about them, stopping them, or outrunning them. Any point in space could nucleate a new true-vacuum bubble at any time, just like a given uranium atom could decay now, or in 5 billion years, or never. Even spookier: by the principle of quantum immortality, the Earth could have been engulfed by vacuum bubbles many times already, and we are just the one tiny sliver of probability space where by luck alone we survived long enough to talk about it here and now.

Thankfully false vacuum is just an idea and there is currently no evidence that it is real.

TauZero ,

This is correct. FTL communication using any form of quantum entanglement is provably mathematically impossible by the no-communication theorem. Most common sci-fi trope though.

How does signing a post with a PGP key prove that you are actually the person behind the post?

I saw that people on the dark web would sign their posts with a PGP key to prove that their account has not been compromised. I think I understand the concept of how private and public keys work but I must be missing something because I don’t see how it proves anything....

TauZero ,

To help you with the terminology, the names for the two operations are “signing” and “verifying”. That’s it.

What can you do with…

|             | public key | private key |
| ----------- | ---------- | ----------- |
| Encryption: | encrypt    | decrypt     |
| Signature:  | verify     | sign        |

“Signing” is not at all the same as “encrypting” with the keys swapped. It is a separate, specific sequence of mathematical operations you perform to combine two numbers (the private key and the message) to produce a third - the signature. Signing is not called “hashing”. A hash may be involved as part of the signature process, but it is not strictly necessary: hashing makes the “message” number smaller, but the algorithm could sign the full message without hashing it first - it would just take longer to compute. “Hash-verifying” isn’t a thing in this context, you made that name up; just use “verify”.
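If it helps to see the two operations in code, here is a minimal sketch using the Python cryptography package (Ed25519 keys; PGP proper wraps the same idea in its own key and packet formats):

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"this post was really written by me"
signature = private_key.sign(message)   # only the private key can produce this

public_key.verify(signature, message)   # raises InvalidSignature if forged
```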

@dohpaz42 is mad because you messed up your terminology originally, and he thought you were trying to say that you “encrypt” a message with the private key, which is totally backwards and wrong. He didn’t know that in your mind you were talking about “signing” the message. Because honestly, no one could have known that.

TauZero ,

👍

TauZero ,

Probably some tab. Buggy javascript sometimes goes into infinite loops, even DDoSing its own website with 0-timeout requests, with no way to tell immediately other than the phone getting warm. You probably can’t see it now that the tab is closed, and I’m not sure mobile Firefox has access to these features, but on desktop you can see open sockets with sent/received bytes in about:networking, and per-tab/per-addon CPU usage in about:performance, and set up logging for next time. Otherwise there isn’t a convenient chart of per-website data usage hidden somewhere.

TauZero ,

Did you toggle on toolkit.legacyUserProfileCustomizations.stylesheets? Firefox stopped parsing userChrome.css by default several years ago.
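For reference, the same flip in user.js form (standard prefs syntax, goes in your profile folder):

```
user_pref("toolkit.legacyUserProfileCustomizations.stylesheets", true);
```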

What if solving interstellar travel isn't about figuring out faster than light propulsion, but how to extend our own lives?

So I was day dreaming and I caught a thought. What if what we understand about physics is actually all there is to understand? What if you objectively cannot move faster than the speed of light because you can’t do the time traveling things necessary. This would mean that the only way to travel amongst the stars would be to...

TauZero ,

You are behind the times on physics advancements, buddy! Thanks to the recently discovered concept of relativistic time dilation, a 5000 light year trip at the speed of light will take literally 0 seconds of your lifespan. More practically, in a starship that accelerates at 1G to the halfway point, then turns around and decelerates to the destination, you can reach ridiculous distances within a single human lifetime:

| shipboard time | distance | earth time |
| -------------- | -------- | ---------- |
| 1 year | .263 LY | 1.05 Y |
| 2 years | 1.13 LY | 2.37 Y |
| 3 years | 2.82 LY | 4.35 Y |
| 4 years | 5.80 LY | 7.50 Y |
| 5 years | 10.9 LY | 12.7 Y |
| 10 years | 166 LY | 168 Y |
| 15 years | 2199 LY | 2201 Y |
| 20 years | 28.8 kLY | 28.8 kY |
| 25 years | 380 kLY | 380 kY |
| 50 years | 149 GLy | 149 GY |
| 100 years | 22.8 ZLy | 22.8 ZY |

This is the formula to calculate the distance and time:

```
x(τ) = c**2/a [cosh(τ a/c) - 1]
t(τ) = c/a sinh(τ a/c)

a = 9.8 m/s**2
c = 3e8 m/s
```

The formula is hyperbolic, which is why travel distance is not a linear relation of travel time. E.g. given τ = 10 years:

```
x = 3e8**2/9.8 * (cosh(60*60*24*365*10/2 * 9.8/3e8) - 1) * 2 / (3e8 * 60*60*24*365)
  = 166 light years
t = 3e8/9.8 * sinh(60*60*24*365*10/2 * 9.8/3e8) * 2 / (60*60*24*365)
  = 168 years
```
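If you'd rather regenerate the whole table than trust my arithmetic, here is a quick script of the same formulas (my own):

```python
import math

a = 9.8                    # m/s^2, constant 1G proper acceleration
c = 3e8                    # m/s
yr = 60 * 60 * 24 * 365    # seconds per year
ly = c * yr                # meters per light year

for tau in [1, 2, 3, 4, 5, 10, 15, 20, 25, 50, 100]:
    half = tau / 2 * yr    # accelerate half the trip, brake the other half
    x = 2 * c**2 / a * (math.cosh(half * a / c) - 1) / ly
    t = 2 * c / a * math.sinh(half * a / c) / yr
    print(f"{tau:>3} y shipboard: {x:10.3g} ly, {t:10.3g} y earth")
```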
TauZero ,

Oh yeah, it’s like flying the wrong way down the tube of the Large Hadron Collider. The tougher challenge though is, like @MuThyme said, maintaining 1G acceleration. Following the rocket equation, where the required mass ratio grows exponentially with delta-v, a 50-year multi-stage rocket would be bigger than the universe itself, even with some kind of nuclear propulsion 10000 times more efficient than our chemical rockets.
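Back-of-envelope for that claim, using the relativistic rocket equation (mass ratio = exp(a·τ/v_e), with τ the shipboard time under thrust; the 10000x-chemical exhaust velocity is my assumption):

```python
import math

a = 9.8                          # m/s^2
tau = 50 * 365 * 24 * 3600       # 50 shipboard years of continuous 1G
v_e = 4500 * 10000               # exhaust velocity: 10000x a ~4.5 km/s chemical

print(math.exp(a * tau / v_e))   # mass ratio ~1e149: absurdly huge
```

For scale, the ordinary matter in the observable universe is only ~1e53 kg.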

TauZero ,

Check out one of the privacy-focused Firefox forks like LibreWolf, IceCat, or Waterfox. All three disable or rip out Pocket by default, I believe, and reduce the number of situations where they phone home to Mozilla or Google. Memory savings are minimal, though.

TauZero ,

Roche limit is not really relevant here. That’s for orbiting bodies, like a satellite around a Jupiter-like planet whose orbit spirals inward due to tidal forces, and eventually crosses the Roche limit, whereby the moon disintegrates into a cloud of rocks that spreads out and forms a ring. Yes, the hyperbolic orbit of the collision trajectory here is a “type” of orbit, but really the video is about the collision itself. There is not enough time for the planet to meaningfully disintegrate under the neutron star’s gravity. “What’s that? The ground is kinda shaking. Could that be the tidal force from that neutron st-ACK!!!”.

In the video you can see the surface of the Earth bulge out towards the star under its gravity in the last second, but most of the kinetic energy of the explosion is imparted by direct physical interaction (i.e. electromagnetic) between the matter of the earth and the matter of the star, and in particular between the matter of the earth that has already been accelerated and the matter of the earth lying farther out.

Or at least it would be if the impactor really was just a chunk of iron with the density slider cranked up. This fluid simulator can’t imagine anything else of course, but you are right that it remains a question of whether a neutron star or a black hole could impart any kinetic energy onto the greater earth at all. Maybe it will just pass through and leave a circular hole, sweeping the material in front of it onto itself. The tunnel would immediately collapse, and the crust would be messed up from tidal sloshing, but maybe the ball of the earth itself will remain intact.

The hard x-rays are, I believe, a reference to the thermal radiation of infalling matter. Just like a bullet that hits a wall while staying intact is hot to the touch because its kinetic energy got converted into heat, or a meteoroid that hits the Moon creates a flash of light visible from Earth because for a second the cloud of collision debris is as hot as the filament of a lamp, the earth material impacting the surface of the star gets really hot. The impact velocity is at minimum the escape velocity of the star, which is thousands of km/s, which puts the peak of the thermal radiation in the x-ray range or beyond.

TauZero ,

As a quick calculation using the Boltzmann formula:

```
E = 3/2 k_B T
```

Say we imagine that the entire kinetic energy of bulk material from Earth (let’s say iron) impacting the star at 10000km/s is converted into thermal kinetic energy of individual iron atoms (atomic weight 56).

```
1/2 m v**2 = 3/2 k_B T
T = 1/3 m v**2 / k_B
k_B = 1.38e-23 J/K
m = 0.056 kg / 6.02e23
v = 1e7 m/s
T = 1/3 * .056/6.02e23 * 1e7**2 / 1.38e-23

T = 225 GK
```

Looking at a black body temperature chart, 225 gigakelvin corresponds to gamma rays - precisely the range seen in neutron star collisions.
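Wien's displacement law gives the cross-check:

```python
b = 2.898e-3        # Wien constant, m*K
print(b / 225e9)    # ~1.3e-14 m peak wavelength: gamma rays
```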

TauZero ,

What’s a fossil? Is the deer skull I found a fossil? Is the imprint of a chicken bone in wet concrete a fossil once the concrete sets?
