Well as Krisztián pointed out, the distances actually approach infinity as the random selections approach infinity, but I was thinking more within a given range (which was where I made my mistake). But within that finite space, if you plot out 1 million points an equal distance apart from each other in both a 1d and a 3d space, the distance between the two furthest points on the 1d line would be about 1 million units. Whereas in the 3d space, you would only need about 100 units of length on each side of the cube (to create a cube with 1 million points spaced 1 unit apart). So the two points furthest from each other, the opposite corners, would only be about 170 units apart (the cube's diagonal, roughly 100√3), which is tiny compared to the 1d line.
A visual representation on a smaller scale: picture a piece of paper with 64 equally spaced points in a line for a 1d space, then a piece of paper with an 8x8 square of points for the 2d space, then for the 3d space, a 4x4x4 cube of points. Measure the longest edge-to-edge span in each and you get 64, 8, and 4 points respectively (the true corner-to-corner distances in 2d and 3d are a bit longer because of the diagonal, but the shrinking trend is the same). It's kind of like how the more you fold a piece of paper the "smaller" it gets.
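That shrinking span can be sketched numerically. This is just a rough illustration of the 64-point example above (the `max_span` helper is mine, and it computes the corner-to-corner diagonal rather than the edge-to-edge count):

```python
# Sketch of the paper-folding intuition: arrange the same number of
# unit-spaced points as a line, a square, and a cube, and watch the
# farthest corner-to-corner (Euclidean) distance shrink with dimension.
import math

def max_span(n_points: int, dims: int) -> float:
    """Farthest Euclidean distance in a unit-spaced grid of n_points
    arranged as a dims-dimensional hypercube."""
    side = round(n_points ** (1 / dims))   # points along one edge
    edge = side - 1                        # length of one edge in units
    return math.sqrt(dims * edge ** 2)     # diagonal of the hypercube

for d in (1, 2, 3):
    print(d, max_span(64, d))
# 64 points: 63.0 in 1d, ~9.9 in 2d (7*sqrt(2)), ~5.2 in 3d (3*sqrt(3))
```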
But it was all wrong as a way to show differences between infinite groups, because as you approach infinity, picking random points also produces distances between those points that approach infinity. I think it would actually prove the opposite: in my example, infinite spaces of different densities would behave the same as you approach infinity.
The order of growth stuff was correct, though. It's helpful in programming because it lets us figure out whether an algorithm is useful for very large inputs. If the order of growth is too great, very large inputs would simply take too long, or too much memory, for that algorithm to handle.

## About Nick

### Areas of Expertise

Web Programming, Programming, User Experience (web)

### An idea worth spreading

Saving the world doesn't have to be difficult. We can each do our part every day.

### I'm passionate about

The web, technology, interconnectedness, sharing, helping, creating, imagining, dreaming

## Comments & conversations

@Krisztián Pintér Yeah, my math skills aren't what they used to be. Thinking about what you said, I think you're right. As the range you pick from approaches infinity, the distance between two randomly chosen points would also approach infinity, no matter what the density of the points is. I wish I had time to write out the proof (or even look it up) but I don't. If that is the case, though, it would certainly be further evidence that the different infinities are still equally infinite even though one is more densely populated than the other.
This is making me wish I was back in college as a Math major. I forgot how much fun it can be.

Our brains are based in a reality of logic as well. Neurons fire because of electrical and chemical reactions. All of reality obeys the laws of physics. At our purest source, we humans are as much 1s and 0s as machine code is. We're just a collection of electrons and protons whirling around and bouncing between neutrons, right? So why should religion exist within us? 1s and 0s are the smallest pieces, and our world, our existence, our beings can be reduced to 1s and 0s, because we're all atom-based and atoms follow rules just like 1s and 0s do. Simulating our reality on computers will be more than possible in the future.
The only things holding it back are cost and time. Because of the parallel nature of the processing, we already have the power; we'd just need to keep throwing more processors at the problem, like modern-day supercomputers do. IBM threw 147,000 processors at the cat brain work above. How many processors will it take to make a human brain? Is it already possible? Our supercomputers' floating-point operations per second have been climbing hugely, and simply throwing more processors at the problem can push them up even more.
http://en.wikipedia.org/wiki/Super_computers#Timeline_of_supercomputers
So while the silicon size limit will put a stop to single processor speed growth, we'll instead be throwing more processors at the problem. And should some new technology (like graphene based processors) come along and replace or supplement our silicon based technology, who knows how quickly our processing speeds will increase?
But at their cores computer code and our reality aren't that different at all.

The absolute distance would be whatever straight-line distance exists between the two points (after looking it up, the term I was looking for was Euclidean distance). But when choosing random points in a more densely populated space, you're more likely to choose points closer together than two randomly chosen points in a less densely populated space. It's just a probability demonstration of how two infinite sets can behave differently even though both have the same unending quantity of points.
My math knowledge is somewhat limited to what I've specialized in. As a programmer, I work daily with growth rates as quantities approach infinity. I don't remember how to state my example as a proper mathematical representation, and obviously there's no single number that represents the distance between two random points. But there would be equations for the expected (average) distance that demonstrate the probable difference when selecting from the two different pools of points.
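One quick way to check that probability claim is a Monte Carlo sketch. The region sizes and sample count below are arbitrary choices of mine, not numbers from the discussion: both regions hold the same number of equally dense locations, yet random pairs land far closer together in 3d.

```python
# Monte Carlo sketch: average Euclidean distance between two random
# points in a 1d interval vs a 3d cube of equal point density.
# N = 1,000,000 locations -> interval length 1,000,000 vs cube edge 100.
import math
import random

def avg_pair_distance(n_locations: int, dims: int, samples: int = 20_000) -> float:
    side = n_locations ** (1 / dims)  # edge length giving equal density
    total = 0.0
    for _ in range(samples):
        p = [random.uniform(0, side) for _ in range(dims)]
        q = [random.uniform(0, side) for _ in range(dims)]
        total += math.dist(p, q)
    return total / samples

random.seed(0)
d1 = avg_pair_distance(1_000_000, 1)  # interval: average is ~L/3 = ~333,000
d3 = avg_pair_distance(1_000_000, 3)  # 100-unit cube: average is ~66
print(d1, d3)
```

The 1d average comes out thousands of times larger than the 3d one, which is the "more locations in near proximity" effect in numbers.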

Infinite is infinite, but the absolute distance between two random points in an infinite 1d, 2d, and 3d space (which have equal point densities on their axes) would tend to be smaller as you increase dimensions. So while an infinite 1d space provides an equally unending number of locations as an infinite 3d space, the 3d space has more locations in near proximity than the 1d space. As you described them, these would each represent different densities in a given infinite space. But simply putting the points closer together would have the same effect: an infinite 1d line where the points were 1cm apart would be less dense than an infinite line where the points were 1mm apart, but both would have an equally infinite number of points.
Infinite quantities can differ in rates of growth as well as densities. The line y = x grows at a rate of O(x) as it approaches infinity, but the curve y = x^2 grows at a rate of O(x^2), a quadratic rate that pulls away from the linear one without bound (strictly speaking that's polynomial growth, not exponential). Rates of growth as you approach infinity (represented here by Big O notation*) are important, and they demonstrate how two different infinite quantities can behave differently. But, in the end, they both contain the same unending quantity of whatever.
Big O notation: http://en.wikipedia.org/wiki/Big_O_notation
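A tiny numeric sketch of that growth-rate gap: both functions head to infinity, but their ratio x^2 / x = x itself grows without bound, which is exactly what the O(x) vs O(x^2) distinction captures.

```python
# Compare y = x and y = x^2 at a few sizes; the ratio between them
# keeps growing, so the quadratic eventually dwarfs the linear function.
ratios = []
for x in (10, 1_000, 1_000_000):
    linear, quadratic = x, x ** 2
    ratios.append(quadratic / linear)
print(ratios)  # [10.0, 1000.0, 1000000.0] -- the gap keeps widening
```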

Beliefs are beliefs because one is believing something that isn't proven true. If they were proven true beyond doubt, they would be facts, or knowledge. The fact is that, regarding religion, we don't have universal proof that isn't doubted.
The way I see it, there are so many religions in our world, and so many of them make conflicting claims, that they obviously can't all be right. But most religions, in my experience, tend to do more good than evil. It's only when authority figures in powerful positions abuse that power that religion turns sour. The good is undeniable: religions build communities and help connect people.
It's just a shame that more people can't accept that what they believe might be as right or wrong as what the next guy believes. If we could all accept that none of us know, we could get past all this petty arguing and start working toward a better tomorrow, together. We're all humans, we're all Earthlings... It saddens me that it'll probably take a worldwide catastrophe to bring us together so we can see that.

Why? I would assume that a digital duplication of a human brain would wonder about the same things a normal one does. A computer pondering existence, spirituality and religion seems possible if this were to happen. A perfect digital copy would, in theory, work the same way. So I don't really see religion going away because of such technology. In fact, some science fiction suggests that such thinking machines would even develop new religions or take our religions steps further.
The technology is not as far off as it seems either.
"Scientists perform cat-scale cortical simulations and map the human brain in effort to build advanced chip technology"
http://www-03.ibm.com/press/us/en/pressrelease/28842.wss
Take the computing power it took to do that, apply Moore's law, and in a couple of decades our PCs will have the computing power to emulate a cat's brain, while a supercomputer will be many times more powerful and likely able to emulate more complex brains.
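The Moore's-law arithmetic behind that "couple of decades" can be sketched like this. The FLOPS figures below are purely illustrative assumptions on my part, not numbers from the IBM press release, and the two-year doubling period is the usual rule-of-thumb:

```python
# Back-of-the-envelope Moore's-law estimate. Assumed numbers: the
# cat-scale simulation needs ~1 PFLOPS (1e15) and a desktop PC manages
# ~100 GFLOPS (1e11) today, with capability doubling every 2 years.
import math

def years_until(target_flops: float, current_flops: float,
                doubling_years: float = 2.0) -> float:
    """Years until current capability reaches the target, assuming
    exponential (Moore's-law style) doubling."""
    doublings = math.log2(target_flops / current_flops)
    return doublings * doubling_years

print(years_until(1e15, 1e11))  # ~26.6 years under these assumptions
```

Swap in different starting figures or a slower doubling period and the answer shifts, but the shape of the argument (a fixed gap closed by repeated doubling) stays the same.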

Eh, I completely believe if we survive long enough we'll spread out throughout the universe, but that doesn't make your false proof a true proof. ((Macro level trend != big picture trend) == proof) != true

Well, I don't know about that. Have you seen this talk yet? http://www.ted.com/talks/paul_root_wolpe_it_s_time_to_question_bio_engineering.html
It's all about bioengineering, and he talks about how we can already tie a brain to a machine and make it do things. It seems reasonable to say that our future "computers" will be a hybrid of machine and organic matter (a brain) that we grow in a beaker.

If we map the human brain's neural network on a computer, completely and fully, then we "turn it on" and let it "think" will it think like a human? If it's mapped identically, it seems logical that it would. As our computing power and understanding of the brain increases it seems reasonable to believe that we will someday (if not relatively soon) have the ability to do just that. I'm very tempted to believe that if that should come to pass, machines will be able to think and behave exactly as we do.