johnramey

Don't think. Compute.

Archive for the ‘r’ Category

Pseudo-Random vs. Random Numbers in R


Happy Thanksgiving, everyone. Earlier today, I found an interesting post from Bo Allen on pseudo-random vs. random numbers, where the author uses a simple bitmap (heat map) to show that the rand function in PHP has a systematic pattern and compares it to truly random numbers obtained from random.org. The post’s results suggest that PHP’s pseudo-randomness is faulty and, more generally, that the subtleties of pseudo-random number generation should not be underestimated in practice. Of course, the findings should not be too surprising, as there is a large body of literature on the subtleties, philosophies, and implications of the pseudo aspect of the most common approaches to random number generation. However, it is silly that PHP’s random number generator (RNG) displays such an obvious pattern nowadays, because there are several decent, well-studied pseudo-RNG algorithms available, as well as numerous tests for randomness. For a good introduction to RNGs, I recommend John D. Cook’s discussion on testing a random number generator.

Now, I would never use PHP for any (serious) statistical analysis, partly due to my fondness for R, nor do I doubt the practicality of R’s RNG. But I was curious to see what would happen, so I created equivalent plots in R to see whether a rand equivalent would exhibit a systematic pattern like PHP’s, even if less severe. For comparison, I chose to use the random package from Dirk Eddelbuettel to draw truly random numbers from random.org. Until today, I had only heard of the random package but had never used it.

I have provided the function rand_bit_matrix, which requires the number of rows and columns to display in the plotted bitmap. To create the bitmaps, I used the pixmap package rather than the much-loved ggplot2 package, simply because of how easy it was for me to create the plots. (If you are concerned that I have lost the faith, please note that I am aware of the awesomeness of ggplot2 and its ability to create heat maps.)

It is important to note that I encountered two challenges when drawing truly random numbers:

  1. Only 10,000 numbers can be drawn at once from random.org. (This is denoted as max_n_random.org in the function below.)
  2. There is a daily limit to the number of times the random.org service will provide numbers.

To overcome the first challenge, I split the total number of bits into separate calls, if necessary. This approach, however, increases the number of requests, and after too many requests you will see an error indicating that random.org suggests waiting until tomorrow. Currently, I do not know the exact number of allowed requests, or whether the amount of random numbers requested is a factor, but looking back, I would guess that 20ish large requests is too many.

Below, I have plotted 500 x 500 bitmaps based on the random bits from both R and random.org. As far as I can tell, no apparent patterns are visible in either plot, but from the graphics alone, our conclusions are limited to ruling out obvious systematic patterns like the one the PHP code exhibited. I am unsure whether the PHP folks formally tested their RNG algorithms for randomness, but even if they did, the code in both R and PHP is straightforward and provides a quick eyeball test. Armed with similar plots alone, the PHP devs could have sought better RNG algorithms, or perhaps borrowed those from R.

[500 x 500 bitmap plots of the random bits from R and from random.org appeared here. Click the images for a larger view.]

Finally, I have provided my R code, which you may tinker with, copy, manipulate, utilize, etc. without legal action and without fear of death by snoo snoo. Mmmm. Snoo snoo.
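Here is a minimal sketch of rand_bit_matrix along the lines described above (the exact listing was embedded separately; the truly_random flag and the request-splitting details are illustrative assumptions on my part). It draws the bits either from R’s built-in pseudo-RNG or from random.org via the random package’s randomNumbers(), and plots them with pixmap:

library('random')
library('pixmap')

rand_bit_matrix <- function(num_rows = 500, num_cols = 500,
                            truly_random = FALSE, max_n_random.org = 10000) {
  n_bits <- num_rows * num_cols
  if (truly_random) {
    # random.org returns at most 10,000 numbers per request, so split the
    # total number of bits into several smaller requests if necessary.
    n_per_request <- rep(max_n_random.org, n_bits %/% max_n_random.org)
    if (n_bits %% max_n_random.org > 0) {
      n_per_request <- c(n_per_request, n_bits %% max_n_random.org)
    }
    bits <- unlist(lapply(n_per_request, function(n) {
      as.vector(randomNumbers(n = n, min = 0, max = 1, col = 1))
    }))
  } else {
    # R's built-in pseudo-RNG (the Mersenne Twister by default).
    bits <- sample(c(0, 1), n_bits, replace = TRUE)
  }
  matrix(bits, nrow = num_rows, ncol = num_cols)
}

# Plot the bits as a black-and-white bitmap.
bits <- rand_bit_matrix(num_rows = 500, num_cols = 500, truly_random = FALSE)
plot(pixmapGrey(bits))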

 

Written by ramhiser

November 25th, 2011 at 3:08 am

Posted in r


Conway’s Game of Life in R with ggplot2 and animation


In undergrad I had a computer science professor who piqued my interest in applied mathematics, beginning with Conway’s Game of Life. At first, the Game of Life (not the board game) appears to be quite simple, perhaps too simple, but it has been widely explored and is useful for modeling systems over time. It has been forever since I wrote my first version of this in C++, and I happily report that there will be no nonsense here.

The basic idea is to start with a grid of cells, where each cell is either a zero (dead) or a one (alive). We are interested in watching the population behavior over time to see if the population dies off, has some sort of equilibrium, etc. John Conway studied many possible ways to examine population behaviors and ultimately decided on the following rules, which we apply to each cell for the current tick (or generation).

  1. Any live cell with fewer than two live neighbours dies, as if caused by under-population.
  2. Any live cell with two or three live neighbours lives on to the next generation.
  3. Any live cell with more than three live neighbours dies, as if by overcrowding.
  4. Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.

Although there are other versions of this in R, I decided to give it a shot myself. I am not going to provide a walkthrough of the code as I normally might, but the code should be simple enough to understand for anyone proficient in R. It may have been unnecessary to implement this with the foreach package, but I wanted to get more familiarity with foreach, so I did.

The set of grids is stored as a list, where each element is a matrix of zeros and ones. Each matrix is then converted to an image with ggplot2, and the sequence of images is exported as a GIF with the animation package.

Let me know if you improve on my code any. I’m always interested in learning how to do things better.

library('foreach')
library('ggplot2')
library('animation')
library('reshape')  # provides melt(), used below to reshape each grid for plotting
 
# Determines how many neighboring cells around the (j,k)th cell have living organisms.
# The conditionals are used to check if we are at a boundary of the grid.
how_many_neighbors <- function(grid, j, k) {
  size <- nrow(grid)
  count <- 0
  if(j > 1) {
    count <- count + grid[j-1, k]
    if (k > 1) count <- count + grid[j-1, k-1]
    if (k < size) count <- count + grid[j-1, k+1]
  }
  if(j < size) {
    count <- count + grid[j+1,k]
    if (k > 1) count <- count + grid[j+1, k-1]
    if (k < size) count <- count + grid[j+1, k+1]
  }
  if(k > 1) count <- count + grid[j, k-1]
  if(k < size) count <- count + grid[j, k+1]
  count
}
 
# Creates a list of matrices, each of which is an iteration of the Game of Life.
# Arguments
# size: the edge length of the square
# prob: a vector (of length 2) that gives probability of death and life respectively for initial config
# returns a list of grids (matrices)
game_of_life <- function(size = 10, num_reps = 50, prob = c(0.5, 0.5)) {
  grid <- list()
  grid[[1]] <- replicate(size, sample(c(0,1), size, replace = TRUE, prob = prob))
  dev_null <- foreach(i = seq_len(num_reps) + 1) %do% {
    grid[[i]] <- grid[[i-1]]
    foreach(j = seq_len(size)) %:%
      foreach(k = seq_len(size)) %do% {
 
        # Apply the game rules. Neighbor counts come from the previous
        # generation so that all cells update simultaneously.
        num_neighbors <- how_many_neighbors(grid[[i-1]], j, k)
        alive <- grid[[i-1]][j,k] == 1
        if(alive && num_neighbors <= 1) grid[[i]][j,k] <- 0
        if(alive && num_neighbors >= 4) grid[[i]][j,k] <- 0
        if(!alive && num_neighbors == 3) grid[[i]][j,k] <- 1
      }
  }
  grid
}
 
# Converts the current grid (matrix) to a ggplot2 image
grid_to_ggplot <- function(grid) {
  # Reverse the row order so that the grid is not plotted upside down.
  grid <- grid[seq.int(nrow(grid), 1), ]
  grid <- melt(grid)
  grid$value <- factor(ifelse(grid$value, "Alive", "Dead"))
  p <- ggplot(grid, aes(x=X1, y=X2, z = value, color = value))
  p <- p + geom_tile(aes(fill = value))
  p  + scale_fill_manual(values = c("Dead" = "white", "Alive" = "black"))
}

As an example, I have created a 20-by-20 grid where each cell initially has a 10% chance of being alive. The simulation has 250 iterations. You may add more, but this takes long enough already.

# Recall that prob gives the probabilities of death and life, respectively.
game_grids <- game_of_life(size = 20, num_reps = 250, prob = c(0.9, 0.1))
grid_ggplot <- lapply(game_grids, grid_to_ggplot)
saveGIF(lapply(grid_ggplot, print), clean = TRUE)

Written by ramhiser

June 5th, 2011 at 6:04 pm

Posted in r


Converting a String to a Variable Name On-The-Fly and Vice-versa in R


Recently, I had a professor ask me how to take a string and convert it to an R variable name on-the-fly. One possible way is:

x <- 42
eval(parse(text = "x"))
[1] 42
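As a side note, if the goal is simply to look up (or create) a variable by name, the built-in get and assign functions avoid parsing text entirely:

assign("y", 42)  # creates a variable named y
get("y")
[1] 42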

Now, suppose we want to go the other way. The trick is just as simple:

x <- 42
deparse(substitute(x))
[1] "x"

Written by ramhiser

December 28th, 2010 at 8:32 pm

Posted in r


Automatic Simulation Queueing in R


I spend much of my time writing R code for simulations to compare the supervised classification methods that I have developed with similar classifiers from the literature. A large challenge is to determine which datasets (whether artificial/simulated or real) make for interesting comparisons. Even if we restricted ourselves to multivariate Gaussian data, there is a large number of covariance matrix configurations that we could use to simulate the data. In other words, there are too many possibilities to consider all of them. However, it is often desirable to consider as many as possible.

Parallel processing has certainly reduced the runtime of simulations. In fact, most of my simulations are ridiculously parallelizable, so I can run multiple simulations side by side.

I have been searching for ways to automate much of what I do, so I can spend less time on the mundane portions of simulation and focus on classification improvement. As a first attempt, I have written some R code that generates a Bash script that can be queued on my university’s high-performance computer. The function that creates the Bash script is create.shell.file(), which is given here:

# Arguments:
#	shell.file: The name of the shell file (usually ends in '.sh').
#	r.file: The name of the R file that contains the actual R simulation.
#	output.file: The name of the file where all output will be echoed.
#	r.options: The options used when R is called.
#	sim.args: The simulation arguments that will be passed to the R file.
#	chmod: If TRUE, set the permissions of the created shell file.
#	chmod.permissions: The permissions passed to chmod (e.g. '750').
#
create.shell.file <- function(shell.file, r.file, output.file, r.options = "--no-save --slave", sim.args = NULL, chmod = TRUE, chmod.permissions = "750") {
	args.string <- ''
	if(!is.null(sim.args)) args.string <- paste('--args', sim.args)
	r.command <- paste('R', r.options, args.string, '<', r.file, '>', output.file)
	sink(shell.file)
		cat('#!/bin/bash\n')
		cat('#PBS -S /bin/bash\n')
		cat('echo "Starting R at `date`"\n')
		cat(r.command, '\n')
		cat('echo "R run completed at `date`"\n')
	sink()
 
	# If the chmod flag is TRUE, then we will chmod the created file to have the appropriate chmod.permissions.
	if(chmod) {
		chmod.command <- paste("chmod", chmod.permissions, shell.file)
		system(chmod.command)
	}
}
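For instance (with hypothetical file names), the following call writes my-sim.sh, which runs my-sim.r with two command-line arguments and echoes its output to my-sim.out:

create.shell.file("my-sim.sh", "my-sim.r", "my-sim.out", sim.args = "10 1000")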

To actually queue the simulation, we make a call to queue.sim():

# Arguments:
#	sim.config.df: A data frame that contains the current simulation configuration.
#	sim.type: The type of simulation; it is used as a prefix for the generated file and queue names.
#	np: The number of processors to use for this simulation.
#	npn: The number of processors to use per node for this simulation.
#	email: The email address that will be notified upon completion or an error.
#	cleanup: Delete all of the shell files after the simulations are queued?
#	verbose: Echo the status of the current task?
#
queue.sim <- function(sim.config.df, sim.type = "rlda-duin", np = 1, npn = 1, email = "johnramey@gmail.com", cleanup = FALSE, verbose = TRUE) {
	sim.config <- paste(names(sim.config.df), sim.config.df, collapse = "-", sep = "")
	sim.name <- paste(sim.type, "-", sim.config, sep = "")
	shell.file <- paste(sim.name, ".sh", sep = "")
	r.file <- paste(sim.type, '.r', sep = '')
	out.file <- paste(sim.name, '.out', sep = '')
	sim.args <- paste(sim.config.df, collapse = " ")
 
	if(verbose) {
		cat("sim.config:", sim.config, "\n")
		cat("sim.name:", sim.name, "\n")
		cat("shell.file:", shell.file, "\n")
		cat("r.file:", r.file, "\n")
		cat("out.file:", out.file, "\n")
		cat("sim.args:", sim.args, "\n")
	}
 
	if(verbose) cat("Creating shell file\n")
	create.shell.file(shell.file, r.file, out.file, sim.args = sim.args)
	if(verbose) cat("Creating shell file...done!\n")
 
	# Example
	# scasub -np 8  -npn 8 -N "rlda-prostate" -m "johnramey@gmail.com" ./rlda-prostate.sh
	if(verbose) cat("Queueing simulation\n")
	queue.command <- paste("scasub -np ", np, " -npn ", npn, " -N '", sim.name, "' -m '", email, "' ./", shell.file, sep = "")
	if(verbose) cat("Queue command:\t", queue.command, "\n")
	system(queue.command)
	if(verbose) cat("Queueing simulation...done!\n")
 
	if(cleanup) {
		if(verbose) cat("Cleaning up shell files\n")
		file.remove(shell.file)
		if(verbose) cat("Cleaning up shell files...done\n")
	}
}

Let’s look at an example to see what is actually happening. Suppose that we have a simulation file called “gaussian-sim.r” that generates N observations from each of two p-dimensional Gaussian distributions, each having the identity covariance matrix. Of course, this is a boring example, but it’s a start. One interesting question that always arises is: “Does classification performance degrade for small values of N and (extremely) large values of p?” We may wish to answer this question with a simulation study by looking at many values of N and many values of p and seeing if we can find a cutoff where classification performance declines. Let’s further suppose that for each configuration we will repeat the experiment B times. (As a note, I’m not going to actually examine the gaussian-sim.r file or its contents here. I may return to this example later and extend it, but for now I’m going to focus on the automated queueing.) We can queue a simulation for each of several configurations with the following code:

library('plyr')
sim.type <- "gaussian-sim"
np <- 8
npn <- 8
verbose <- TRUE
cleanup <- TRUE
 
N <- seq.int(10, 50, by = 10)
p <- seq.int(250, 1000, by = 250)
B <- 1000
 
sim.configurations <- expand.grid(N = N, p = p, B = B)
 
# Queue a simulation for each simulation configuration
d_ply(sim.configurations, .(N, p, B), queue.sim, sim.type = sim.type, np = np, npn = npn, cleanup = cleanup, verbose = verbose)

This will create a Bash script with a descriptive name for each configuration. For example, with the above code, a file called “gaussian-sim-N10-p1000-B1000.sh” is created. Here are its contents:

#!/bin/bash
#PBS -S /bin/bash
echo "Starting R at `date`"
R --no-save --slave --args 10 1000 1000 < gaussian-sim.r > gaussian-sim-N10-p1000-B1000.out
echo "R run completed at `date`"

A note about the shell file created: the actual call to R can be customized, but this call has worked well for me. I certainly could call R in batch mode instead, but I have no specific reason to do so. Perhaps one is more efficient than the other? I’m not sure about this.
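For reference, the batch-mode equivalent would look something like this (an assumption; I have not benchmarked either):

R CMD BATCH --no-save "--args 10 1000 1000" gaussian-sim.r gaussian-sim-N10-p1000-B1000.out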

Next, for each *.sh file created, the following command is executed to queue the R script using the above configuration.

scasub -np 8 -npn 8 -N 'gaussian-sim-N10-p1000-B1000' -m 'johnramey@gmail.com' ./gaussian-sim-N10-p1000-B1000.sh

The scasub command is used for my university’s HPC. I know that there are other systems out there, but you can always alter my code to suit your needs. Of course, your R script needs to take advantage of the commandArgs() function in R to use the above code.
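For example, a minimal sketch of how gaussian-sim.r might read those arguments (the variable order follows the configuration above; the actual file is not examined here):

args <- as.numeric(commandArgs(trailingOnly = TRUE))
N <- args[1]  # number of observations per class
p <- args[2]  # dimension of each observation
B <- args[3]  # number of repetitions of the experiment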

Written by ramhiser

December 28th, 2010 at 11:15 am

Posted in r


Autocorrelation Matrix in R


I have been simulating a lot of data lately with various covariance (correlation) structures, and one that I have been using is the autocorrelation (or autoregressive) structure, where the correlation between two variables decays with the “lag” (distance) between them. The matrix is a v-dimensional matrix of the form

$$\begin{bmatrix} 1 & \rho & \rho^2 & \dots & \rho^{v-1}\\ \rho & 1& \ddots & \dots & \rho^{v-2}\\ \vdots & \ddots & \ddots & \ddots & \vdots\\ \rho^{v-2} & \dots & \ddots & \ddots & \rho\\ \rho^{v-1} & \rho^{v-2} & \dots & \rho & 1 \end{bmatrix}$$,

where \(\rho \in [-1, 1]\) determines the correlation. Notice that the correlation \(\rho^{|i - j|}\) between variables i and j decays toward 0 as the lag \(|i - j|\) increases, provided \(|\rho| < 1\).

My goal was to make the construction of such a matrix simple and easy in R. The method that I used explores a function I had not used before in R called “lower.tri”, which indexes the lower triangular part of a matrix. The upper triangular part is referenced with “upper.tri”.

My code is as follows:

# The original listing was lost here; this is a reconstruction based on the
# description above: fill the lower triangle with powers of rho, then mirror
# it into the upper triangle.
autocorr.mat <- function(p = 100, rho = 0.9) {
  mat <- diag(p)
  # Column j of the lower triangle holds rho^1, ..., rho^(p - j).
  mat[lower.tri(mat)] <- unlist(lapply(seq_len(p - 1), function(j) rho^seq_len(p - j)))
  mat[upper.tri(mat)] <- t(mat)[upper.tri(mat)]
  mat
}

I really liked it because I feel that it is simple, but then I found Professor Peter Dalgaard’s method, which I have slightly modified. It is far better than mine, easy to understand, and slick. Oh so slick. Here it is:

# Professor Dalgaard's approach (as I recall it, slightly modified): the
# (i, j) entry is simply rho^|i - j|.
autocorr.mat <- function(p = 100, rho = 0.9) {
  mat <- diag(p)
  rho^abs(row(mat) - col(mat))
}

Professor Dalgaard’s method puts mine to shame. It is quite obvious how to do it once it is seen, but I certainly wasn’t thinking along those lines.
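As a quick sanity check, either version gives a symmetric matrix whose first row contains the decaying powers of rho:

A <- autocorr.mat(4, 0.5)
isSymmetric(A)
[1] TRUE
A[1, ]
[1] 1.000 0.500 0.250 0.125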

Written by ramhiser

December 26th, 2010 at 4:55 am

Posted in r, statistics
