MetaOptimize - A Machine Learning Q&A Community (Similar to StackOverflow)

Using bootstrap in cluster analysis

Git Magic – A Git Tutorial as a Video Game Analogy

Bayesian Reasoning and Machine Learning – Free online textbook (in beta) by David Barber.

Bioconductor Experiment Data Packages – A list of packages with experiment data (a lot of microarray)

Bioconductor One-day Overview Course – From Harvard Biostatistics Department (PDF)

Clustering and Visualization of Microarray Data – This is the best presentation I have seen of the topics, including clustering evaluation (PDF)

Statistical Microarray Data Analysis – This excellent presentation from the same guy includes the one above and discusses a much broader scope. (PDF)

Han-Ming Wu’s Site – This is the professor who released the above two presentations. He has more information on his site. (Only some English)

Very slick poster with ggplot2 graphics – Note the github project at the bottom.

Concentrations of Measure – This is Prof. Tyrone Vincent’s great presentation on probability inequalities from PASI

Machine Learning Video Lectures and Notes – Professor Tom Mitchell at Carnegie Mellon

Bayesian Statistics – Scholarpedia Entry (Recommended by Prof. Andrew Gelman)

Don’t settle.

Stay hungry. Stay foolish.

You’ve got to find what you love, and that is as true for your work as it is for your lovers. Your work is gonna fill a large part of your life, and the only way to truly be satisfied is to do what you believe is great work, and the only way to do great work is to love what you do.

It’s good to be reminded of things like this once in a while.

Now, I would never use PHP for any (serious) statistical analysis, partly due to my fondness for R, nor do I doubt the practicality of the RNG in R. But I was curious to see what would happen, so I created equivalent plots in R to see if a **rand** equivalent would exhibit a systematic pattern like the one in PHP, even if less severe. For comparison, I also chose to use the **random** package, from Dirk Eddelbuettel, to draw **truly random** numbers from random.org. Until today, I had only heard of the **random** package but had never used it.

I have provided the function **rand_bit_matrix**, which requires the number of rows and columns to display in the plotted bitmap. To create the bitmaps, I used the **pixmap** package rather than the much-loved **ggplot2** package, simply because of how easy it was for me to create the plots. (If you are concerned that I have lost the faith, please note that I am aware of the awesomeness of **ggplot2** and its ability to create heat maps.)
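The original **rand_bit_matrix** function accompanied the post itself; as a minimal sketch of the same idea (the function body and names here are my own reconstruction, not the author's code), a matrix of random bits can be built and rendered with **pixmap** like this:

```r
# Sketch of generating and plotting a random bit matrix with pixmap.
# This mirrors the rand_bit_matrix idea described above; the body is a
# reconstruction, not the original code.
library(pixmap)

rand_bit_matrix_sketch <- function(num_rows = 500, num_cols = 500) {
  # Draw num_rows * num_cols bits uniformly and arrange them as a matrix.
  bits <- sample(c(0, 1), num_rows * num_cols, replace = TRUE)
  matrix(bits, nrow = num_rows, ncol = num_cols)
}

bits <- rand_bit_matrix_sketch(500, 500)
# pixmapGrey accepts a matrix of values in [0, 1]: 0 plots black, 1 white.
plot(pixmapGrey(bits))
```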

It is important to note that I encountered two challenges when drawing **truly random** numbers.

- Only 10,000 numbers can be drawn at once from random.org. (This is denoted as **max_n_random.org** in the function below.)
- There is a daily limit to the number of times the random.org service will provide numbers.

To overcome the first challenge, I split the total number of bits into separate calls when necessary. This approach, however, increases the number of requests, and after too many requests you will see the error: **random.org suggests to wait until tomorrow**. I do not know the exact number of allowed requests, or whether the amount of random numbers requested is a factor, but in hindsight I would guess that about 20 large requests is too many.
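The batching step can be sketched as follows. The 10,000-value cap per request is real; the helper name and its structure are my own, not the function from the original post. `randomNumbers()` comes from the **random** package and returns a matrix of draws from random.org.

```r
# Sketch of batching draws from random.org with the 'random' package.
# randomNumbers() is capped at 10,000 values per request, so larger draws
# must be split into multiple calls. (Helper name is mine.)
library(random)

draw_random_org_bits <- function(n, max_per_request = 10000) {
  num_full <- n %/% max_per_request
  remainder <- n %% max_per_request
  # e.g. n = 25000 -> batch sizes 10000, 10000, 5000
  batch_sizes <- c(rep(max_per_request, num_full),
                   if (remainder > 0) remainder)
  bits <- lapply(batch_sizes, function(size) {
    as.vector(randomNumbers(n = size, min = 0, max = 1, col = 1))
  })
  unlist(bits)
}
```

Keep in mind that each batch is a separate request against the daily quota, so fewer, larger batches are preferable.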

Below, I have plotted 500 x 500 bitmaps based on the *random* bits from both R and random.org. As far as I can tell, no apparent patterns are visible in either plot, but from the graphics alone our conclusions are limited to ruling out obvious systematic patterns like the one exhibited by the PHP code. I am unsure whether the PHP folks formally tested their RNG algorithms for **randomness**, but even if they did, the code in both R and PHP is straightforward and provides a quick eyeball test. Armed with similar plots alone, the PHP devs could have sought better RNG algorithms, perhaps borrowing those from R.
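Beyond the eyeball test, a quick formal check is easy to sketch in R. This is my addition, not part of the original analysis, and it only tests the marginal frequency of zeros and ones, not serial dependence between bits:

```r
# A quick sketch of a formal frequency check (my addition, not in the
# original post): under a fair bit generator, zeros and ones should
# appear with equal probability, which a chi-squared test can assess.
set.seed(42)
bits <- sample(c(0, 1), 250000, replace = TRUE)  # stand-in for the plotted bits
test_result <- chisq.test(table(bits), p = c(0.5, 0.5))
print(test_result$p.value)  # a very small p-value would suggest a biased generator
```

A runs test (e.g., from the tseries package) would additionally probe for serial patterns like the striping seen in the PHP plots.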

Click the images for a larger view.

Finally, I have provided my R code, which you may tinker with, copy, manipulate, utilize, etc. without legal action and without fear of death by snoo snoo. Mmmm. Snoo snoo.


One-liners which make me love R: Make your data dance (Hans Rosling style) with googleVis

Response Surface Plot Example in R with **rgl**

Excellent Set of ‘Data Mining’ Notes from Professor Shalizi at Carnegie Mellon

Annotated Computer Vision Bibliography – A HUGE list of links from various disciplines related to pattern recognition, machine learning, facial recognition, etc. Highly recommended for exploration.

Fast SVD for Large-Scale Matrices

Spectral Variation, Normal Matrices, and Finsler Geometry - Provides a great discussion on the development of the Hoffman-Wielandt theorem and describes several inequalities related to the Frobenius norm of the difference of two matrices

A Note on the Hoffman-Wielandt Theorem for Generalized Eigenvalue Problem - An interesting development of diagonalizable pairs of Hermitian matrices.

Seminar Materials for Bayesian Reinforcement Learning

The Shame of College Sports – An article that has been highly recommended to me about corruption in college sports

UCSD's Computational Mass Spec Blog – I like how they compile papers and comment on them in blog form with various details about each. I am tempted to adopt their method.

Mommy, I found it! — 15 Practical Linux Find Command Examples

Introduction to Machine Learning – book by Alex Smola (Yahoo)

Applied Multivariate Statistical Analysis – Excellent book by Härdle and Simar (PDF!)

Primer on Matrix Analysis and Linear Models – Excellent resource for more rigorous approach to matrices!!!

Applied Statistics for Bioinformatics using R – Book (PDF)

Distributed Computing with R Using Snowfall – Presentation (PDF)

How can you learn mathematics for machine learning? - Quora

What are some good resources for machine learning? – Quora

Introduction to Neural Networks – comp.ai.neural-nets newsgroup

Statistical Data Mining Tutorials – Slides by Prof. Andrew Moore at CMU

It seems limited for statistics though, as JSM is not even listed.

The basic idea is to start with a grid of cells, where each cell is either a zero (dead) or a one (alive). We are interested in watching the population behavior over time to see if the population dies off, has some sort of equilibrium, etc. John Conway studied many possible ways to examine population behaviors and ultimately decided on the following rules, which we apply to each cell for the current tick (or generation).

- Any live cell with fewer than two live neighbours dies, as if caused by under-population.
- Any live cell with two or three live neighbours lives on to the next generation.
- Any live cell with more than three live neighbours dies, as if by overcrowding.
- Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.

Although there are other versions of this in R, I decided to give it a shot myself. I am not going to provide a walkthrough of the code as I may normally do, but the code should be simple enough to understand for one proficient in R. It may have been unnecessary to implement this with the foreach package, but I wanted to get some more familiarity with foreach, so I did.

The set of grids is stored as a list, where each element is a matrix of zeros and ones. Each matrix is then converted to an image with ggplot2, and the sequence of images is exported as a GIF with the animation package.

Let me know if you improve on my code any. I’m always interested in learning how to do things better.

library('foreach')
library('ggplot2')
library('reshape')    # provides melt() for matrices
library('animation')

# Determines how many neighboring cells around the (j,k)th cell have living organisms.
# The conditionals check whether we are at a boundary of the grid.
how_many_neighbors <- function(grid, j, k) {
  size <- nrow(grid)
  count <- 0
  if (j > 1) {
    count <- count + grid[j-1, k]
    if (k > 1) count <- count + grid[j-1, k-1]
    if (k < size) count <- count + grid[j-1, k+1]
  }
  if (j < size) {
    count <- count + grid[j+1, k]
    if (k > 1) count <- count + grid[j+1, k-1]
    if (k < size) count <- count + grid[j+1, k+1]
  }
  if (k > 1) count <- count + grid[j, k-1]
  if (k < size) count <- count + grid[j, k+1]
  count
}

# Creates a list of matrices, each of which is an iteration of the Game of Life.
# Arguments:
#   size:     the edge length of the square grid
#   num_reps: the number of generations to simulate
#   prob:     a vector of length 2 giving the probability of death and life,
#             respectively, for the initial configuration
# Returns a list of grids (matrices).
game_of_life <- function(size = 10, num_reps = 50, prob = c(0.5, 0.5)) {
  grid <- list()
  grid[[1]] <- replicate(size, sample(c(0, 1), size, replace = TRUE, prob = prob))
  dev_null <- foreach(i = seq_len(num_reps) + 1) %do% {
    grid[[i]] <- grid[[i-1]]
    foreach(j = seq_len(size)) %:%
      foreach(k = seq_len(size)) %do% {
        # Apply the game rules, reading neighbors from the previous generation
        # so that updates within the current tick do not interfere.
        num_neighbors <- how_many_neighbors(grid[[i-1]], j, k)
        alive <- grid[[i-1]][j, k] == 1
        if (alive && num_neighbors <= 1) grid[[i]][j, k] <- 0
        if (alive && num_neighbors >= 4) grid[[i]][j, k] <- 0
        if (!alive && num_neighbors == 3) grid[[i]][j, k] <- 1
      }
  }
  grid
}

# Converts the current grid (matrix) to a ggplot2 image.
grid_to_ggplot <- function(grid) {
  # Permutes the matrix so that melt() labels it correctly.
  grid <- grid[seq.int(nrow(grid), 1), ]
  grid <- melt(grid)
  grid$value <- factor(ifelse(grid$value, "Alive", "Dead"))
  p <- ggplot(grid, aes(x = X1, y = X2, z = value, color = value))
  p <- p + geom_tile(aes(fill = value))
  p + scale_fill_manual(values = c("Dead" = "white", "Alive" = "black"))
}

As an example, I have created a 20-by-20 grid with a 10% chance that its initial values will be alive. The simulation has 250 iterations. You may add more, but this takes long enough already.

game_grids <- game_of_life(size = 20, num_reps = 250, prob = c(0.1, 0.9))
grid_ggplot <- lapply(game_grids, grid_to_ggplot)
saveGIF(lapply(grid_ggplot, print), clean = TRUE)

Machine Learning Data Set Repository

Material for Jieping Ye’s Machine Learning Course – Lots of papers, links, data sets, and tutorials.

Data sets from “Elements of Statistical Learning”

Benchmark Data Sets for Supervised Classification

Rosetta Code (Translation of Various Coding Tasks into Many Programming Languages)

A Random Matrix-Theoretic Approach to Handling Singular Covariance Estimates

Shrinkage Discriminant Analysis and Feature Selection (along with sda package on CRAN)

Bayesian Model Averaging: A Tutorial (PDF)

Statistical Learning Based on High Dimensional Data (PDF: Master’s Thesis focused on Regularized Discriminant Analysis)

Objective Bayesian Analysis of Kullback-Leibler Divergence of Two Multivariate Normal Distributions with Common Covariance Matrix and Star-shape Gaussian Graphical Model (PDF: Dissertation)


To get started, I purchased a copy of Baseball Hacks. The author suggests the usage of MySQL, so I will oblige. First, I downloaded some baseball data in MySQL format to my web server (Ubuntu 10.04) and decompressed it; when I downloaded the data, it was timestamped as 28 March 2011, so double-check whether there is an updated version.

mkdir baseball
cd baseball
wget http://www.baseball-databank.org/files/BDB-sql-2011-03-28.sql.zip
unzip BDB-sql-2011-03-28.sql.zip

Next, in MySQL I created a user named “baseball” and a database entitled “bbdatabank”, and granted all privileges on that database to the user “baseball”. To do this, first open MySQL as root (mysql -u root -p):

CREATE USER 'baseball'@'localhost' IDENTIFIED BY 'YourPassword';
CREATE databas bbdatabank;
GRANT ALL PRIVILEGES ON `bbdatabank`.* TO 'baseball'@'localhost';
FLUSH PRIVILEGES;
quit

Note the backticks (`) around bbdatabank when the privileges are granted. Also, notice the deliberate misspelling when I construct the database: WordPress freaks out on me because mod_security steps in and says, “Umm, no.” For more info about this, go here and here (see the comments as well).

Finally, we read the data into the database we just created by:

mysql -u baseball -p -s bbdatabank < BDB-sql-2011-03-28.sql

That’s it! Most of this code has been adapted from the Baseball Hacks book, although I’ve tweaked a couple of things. As I progress through the book, I will continue to add interesting finds and code as posts. Eventually, I will move away from the book’s code as it focuses too much on the “Intro to Data Exploration” reader with constant mentions of MS Access/Excel. The author means well though as he urges the reader to use *nix/Mac OS X.
