The Random class is used to create random numbers. (Pseudo-random, that is, of course.)
Example:
Random rnd = new Random();
int month = rnd.Next(1, 13); // creates a number between 1 and 12
int dice = rnd.Next(1, 7); // creates a number between 1 and 6
int card = rnd.Next(52); // creates a number between 0 and 51
If you are going to create more than one random number, you should keep the Random instance and reuse it. If you create new instances too close in time, they will produce the same series of random numbers, because the random generator is seeded from the system clock.
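As a sketch of that advice (the class and method names here are just illustrative), keep a single Random instance alive and call Next on it for every value you need:

```csharp
using System;

class DiceRoller
{
    // One instance, created once and reused for every roll, so all
    // rolls come from the same pseudo-random sequence.
    private static readonly Random rnd = new Random();

    public static int Roll()
    {
        return rnd.Next(1, 7); // 1..6 inclusive
    }

    static void Main()
    {
        for (int i = 0; i < 5; i++)
            Console.WriteLine(Roll());
    }
}
```

Creating a new Random inside Roll instead would risk identical values when Roll is called in a tight loop, for the seeding reason described above.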
There is actually a (subtle) difference between the two placements of a using directive: outside the namespace versus inside it. Imagine you have the following code in File1.cs:
// File1.cs
using System;
namespace Outer.Inner
{
class Foo
{
static void Bar()
{
double d = Math.PI;
}
}
}
Now imagine that someone adds another file (File2.cs) to the project that looks like this:
// File2.cs
namespace Outer
{
class Math
{
}
}
The compiler searches Outer before looking at those using directives outside the namespace, so it finds Outer.Math instead of System.Math. Unfortunately (or perhaps fortunately?), Outer.Math has no PI member, so File1 is now broken.
This changes if you put the using inside your namespace declaration, as follows:
// File1b.cs
namespace Outer.Inner
{
using System;
class Foo
{
static void Bar()
{
double d = Math.PI;
}
}
}
Now the compiler searches System before searching Outer, finds System.Math, and all is well.
Some would argue that Math might be a bad name for a user-defined class, since there's already one in System; the point here is just that there is a difference, and it affects the maintainability of your code.
It's also interesting to note what happens if Foo is in namespace Outer, rather than Outer.Inner. In that case, adding Outer.Math in File2 breaks File1 regardless of where the using goes. This implies that the compiler searches the innermost enclosing namespace before it looks at any using directive.
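To make that lookup order visible, here is a compilable sketch of the variant, with everything collapsed into one file. A PI constant is added to the user-defined Math class purely so the resolution result is observable at runtime (the original Outer.Math has no members):

```csharp
using System;

namespace Outer
{
    // Stand-in for File2.cs's Math class; the PI constant here is an
    // illustration-only addition so we can see which Math wins.
    class Math
    {
        public const double PI = 3.0;
    }

    class Foo
    {
        public static double Bar()
        {
            // Foo lives directly in Outer, so the compiler finds
            // Outer.Math in the enclosing namespace before it ever
            // considers the file-level using directive above.
            return Math.PI; // Outer.Math.PI, i.e. 3.0, not System.Math.PI
        }
    }
}

class Program
{
    static void Main()
    {
        Console.WriteLine(Outer.Foo.Bar()); // prints 3, not 3.14159...
    }
}
```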
Best Answer
I think you can be assured that it will not be deterministic across all platforms if there is any platform where the program or the floating-point hardware can have its rounding mode altered from the usual settings.
There is also a more troubling issue when the input value exceeds 2^52, so that the mantissa of a double floating point cannot accurately represent the input value y. For some unlucky values of y, rounding to nearest will result in a spurious answer for x that doesn't satisfy x*x <= y. They are quite rare: I sampled them and get 1:10^8 at the onset of trouble and 3:10^7 for numbers > 2^63. I didn't detect any failures for y < 10^11, but that is a drop in the ocean compared to the total range of an int64. It looks to me like you can safely use sqrt(y), provided that you sanitise the result to guard against the rare exceptions where rounding error causes trouble (and rounding rules or guard digits may vary slightly with some go-faster CPUs).
The refinement I suggest in integer arithmetic should be fast, since it is just one multiply, a sanity test, and a conditional decrement. This is the test code I put together to take a quick look.
I left it running overnight and it found about 1400 bad cases in the y > 2^63 block. FWIW I found no errors at all in the first 10^11 integer values in just a few minutes (so 10^12 or 10^13 would be easily brute-force testable). That is still a drop in the ocean compared to the full dynamic range of 10^19. You could also defend against fail low by checking that x*x + 2*x + 1 > y, but I think that, given the way rounding to nearest and hardware sqrt work, followed by truncation to integer, the chances of that ever triggering are vanishingly small.
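The actual test code isn't shown above, but here is a minimal sketch of the sanitised-sqrt idea under the assumptions described (the method name ISqrt is my own; the correction steps are written as loops for defensiveness, though in practice each should run at most once):

```csharp
using System;

class IntegerSqrt
{
    // Floor square root of a 64-bit unsigned integer. Math.Sqrt gives a
    // fast first guess; because a double's mantissa holds only 52 bits,
    // the guess can be off by one for very large y, so it is corrected
    // in exact integer arithmetic afterwards.
    public static ulong ISqrt(ulong y)
    {
        ulong x = (ulong)Math.Sqrt(y);

        // Clamp so that x * x below cannot overflow a ulong
        // (the true root of any ulong is at most 2^32 - 1).
        if (x > 0xFFFFFFFFUL) x = 0xFFFFFFFFUL;

        // Fail high: one multiply, a sanity test, a conditional decrement.
        while (x * x > y)
            x--;

        // Fail low: (x + 1)^2 = x*x + 2*x + 1 <= y would mean the guess
        // was too small (expected to be vanishingly rare in practice).
        while (x < 0xFFFFFFFFUL && (x + 1) * (x + 1) <= y)
            x++;

        return x;
    }
}
```

Note that the multiply-based guards rely on the clamp: without it, squaring the initial guess for y near 2^64 could wrap around.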