Saturday, December 22, 2012


Dear friends,

I am thrilled to inform you that our own Jim Gates has been chosen by President Obama to be the recipient of the National Medal of Science, the highest honor bestowed by the United States government.

http://www.whitehouse.gov/the-press-office/2012/12/21/president-obama-honors-nation-s-top-scientists-and-innovators

Please join me in congratulating Jim on this richly deserved honor. He does so much for us!

Wish you all happy holidays and a healthy and joyous 2013.

Cheers,
Jayanth

Friday, August 17, 2012

The S-Matrix, Time Reversal, And All That

The S-matrix is the description of physical scattering.  The way it works is this:  there is an "incoming" Hilbert space of states, and an "outgoing" Hilbert space (of the same size) of possibly different states.  These states are asymptotic, in the sense that they are thought to be infinitely far away from the region where any interaction occurs.

Suppose a generic incoming state is described by

$$|\psi_{\rm in}\rangle = \sum_i \alpha_i\,|i;{\rm in}\rangle,$$

so that the vector $\vec{\alpha}$ contains complete information about an incoming state, and that a generic outgoing state is described by

$$|\psi_{\rm out}\rangle = \sum_i a_i\,|i;{\rm out}\rangle,$$

so that the vector $\vec{a}$ contains complete information about an outgoing state.

The S-matrix is the matrix which takes incoming states and maps them to outgoing states:

$$\vec{a} = S\,\vec{\alpha}.$$

Conservation of probability implies that $S$ is unitary, i.e. $S S^\dagger = 1$.

If we express the incoming and outgoing states in a momentum eigenbasis, it is easy to work out the action of time reversal.  First, the two Hilbert spaces change roles, so that the outgoing states become incoming states, and vice versa.  The coefficients $\vec{a}$ and $\vec{\alpha}$ get complex conjugated (and change roles), and the momentum of each state gets reversed.  So there is some other unitary matrix $\tilde{S}$ satisfying

$$\vec{\alpha}^{\,*} = \tilde{S}\,\vec{a}^{\,*}.$$

Time-reversal invariance is the statement that processes which happen can also legitimately happen in reverse, meaning $\tilde{S} = S$.  If we multiply both sides of the original relation $\vec{a} = S\vec{\alpha}$ by $S^\dagger$, we get $\vec{\alpha} = S^\dagger\vec{a}$; complex conjugating gives $\vec{\alpha}^{\,*} = S^T\vec{a}^{\,*}$, and comparing with the relation above we conclude that $S = S^T$.

That is, time reversal invariance implies that S is symmetric under transposition.
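
As a quick numerical sanity check, here is a minimal sketch (assuming NumPy is available; the particular way of building a symmetric unitary $S$ is just one convenient choice, not anything from the argument above) showing that when $S = S^T$, the conjugated, role-swapped coefficients really are related by the same matrix:

    import numpy as np

    rng = np.random.default_rng(0)

    # Build a symmetric unitary S = exp(iH) from a real symmetric H.
    H = rng.standard_normal((4, 4))
    H = H + H.T
    w, V = np.linalg.eigh(H)                 # V is real orthogonal
    S = V @ np.diag(np.exp(1j * w)) @ V.T    # unitary and equal to its transpose

    alpha = rng.standard_normal(4) + 1j * rng.standard_normal(4)   # incoming coefficients
    a = S @ alpha                                                   # outgoing coefficients

    print(np.allclose(S @ S.conj().T, np.eye(4)))    # unitarity: S S† = 1
    print(np.allclose(S, S.T))                       # S = S^T
    print(np.allclose(alpha.conj(), S @ a.conj()))   # reversed process: α* = S a*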

Monday, November 29, 2010

The 15-Puzzle

Is it possible to take a standard 15 puzzle (the numbers 1-15 on those sliding tiles, with one blank space) in the starting position:

1234
5678
9ABC
DEF_

and (without taking the pieces out of the puzzle) transpose the pieces so that the puzzle reads

159D
26AE
37BF
48C_

? Is it possible to transpose the 3x3 8-piece puzzle in this way?

I know the answer... leave your thoughts in the comments.

Wednesday, November 3, 2010

O(1) Space Algorithm For Transposition

The cheapest way to transpose a matrix in a computer is to simply change the labelling of the locations. To understand this, we should first understand the generic way a computer stores a vector and a matrix.

Suppose we had a column vector with n entries, and the computer stores the entries in memory slots 67 to 67+n-1. Then, when you ask the computer for the ith entry, it does not go to memory slot 67 and count i slots; it simply goes directly to slot 67+i (assuming 0-indexing). This is different from how you or I do it when looking at a vector written on a page---we have to count some number of slots from the beginning (or end!) to get to the entry we want.

Suppose we had a matrix with n rows and m columns, and the computer stores the entries in memory slots 85 to 85+mn-1. Then the canonical thing to do is to store the matrix in row-major order, which means storing the first row, then the second row, then the third, etc. Then, when you ask the computer for the (row,column)=(i,j)th entry, it does not count in two dimensions in a grid like you or I would (the memory doesn't even know the data is laid out in a grid). Instead, it just goes to slot 85+im+j.

Suppose you want to transpose the above matrix. The fastest way is not to move any of the entries at all, but to change the algorithm for accessing them. It is easy to see that if you want the (i,j)th entry of the transposed matrix, you should simply look at slot 85+jm+i---we have simply switched the roles of the row and column indices in the addressing system.
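
A minimal Python sketch of this relabelling trick (the function names, the base offset of 85, and the 2x3 example are purely illustrative, not from the original post):

    def entry(memory, base, m, i, j):
        """(i, j) entry of an n x m row-major matrix whose first entry is memory[base]:
        skip i rows of m slots each, then j more slots."""
        return memory[base + i * m + j]

    def entry_of_transpose(memory, base, m, i, j):
        """(i, j) of the transpose is the (j, i) entry of the original matrix,
        so only the roles of i and j in the address are swapped; no data moves."""
        return memory[base + j * m + i]

    # A 2 x 3 matrix [[1, 2, 3], [4, 5, 6]] stored starting at slot 85.
    memory = [None] * 85 + [1, 2, 3, 4, 5, 6]
    print(entry(memory, 85, 3, 1, 2))                # 6, the (1, 2) entry
    print(entry_of_transpose(memory, 85, 3, 2, 1))   # 6 again: (2, 1) of the transpose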

This is quite handy, but it's not always the best thing to do. There are a number of tasks where you need a whole row. Suppose you need the ith row. Then just go to memory slot 85+im, the first entry in the row, and read until you get to 85+(i+1)m-1, the last entry in the same row. Reading these entries is fast because they are all in order. But if you need a whole column, you have to jump around the memory quite a bit, because the entries of a single column are sprinkled evenly throughout the memory, and that is quite slow. So, if you're going to need whole columns frequently and whole rows infrequently, you're better off moving the entries around so that the columns are stored contiguously. Thus, we need to come up with an algorithm for transposing a matrix in memory.
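
A small sketch (with made-up dimensions and the same illustrative base offset of 85) of which slots a whole-row read and a whole-column read actually touch:

    n, m, base = 4, 5, 85     # a 4 x 5 matrix stored row-major at slot 85 (illustrative numbers)
    i, j = 2, 3

    row_slots    = [base + i * m + col for col in range(m)]   # consecutive slots
    column_slots = [base + row * m + j for row in range(n)]   # slots m apart

    print(row_slots)      # [95, 96, 97, 98, 99] -- contiguous, fast to stream
    print(column_slots)   # [88, 93, 98, 103]    -- strided, jumps around memory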

Suppose you had a matrix that took up more than half your memory (for example, you're Google and you're trying to diagonalize the PageRank matrix, which is so big you have to store it on multiple computers). Then you can't just write down a copy in a different order, because you wouldn't have enough room. Suppose you've used up 99.999% of your memory and yet you want to transpose a matrix. Is it possible to do so without needing more space than the matrix actually occupies (plus, say, two entries' worth of scratch space)? This question is usually phrased as: "is it possible to transpose a matrix in place?" or "is it possible to transpose a matrix in O(1) space?" The problem is this: every time you want to write a value into a memory slot, if you don't first rescue the entry currently stored there into your scratch space, you fail: you will have overwritten a piece of the matrix that you cannot get back. So before writing anywhere, you'd better rescue the piece that used to occupy that memory location. But that dictates the next entry you need to move, because you don't have enough space to keep more than a couple of rescued pieces in memory. If you were moving something from memory slot 331 to 743, your next immediate step is to move the entry that was in 743 someplace else.

For square matrices, this is really easy, because the size of a row in memory at the beginning will be the same as the size of the row after transposition (because it is square, the columns have the same number of slots as the rows). So, if you were moving the entry at memory slot 331 to slot 743, then just move the entry from 743 to slot 331. Do this for a well-chosen half of the matrix (the upper triangle is a fine choice) and you will have accomplished the task.
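
A minimal sketch of the square case in Python (flat row-major storage; the names and the 3x3 example are illustrative):

    def transpose_square_inplace(a, n):
        """Transpose an n x n matrix stored row-major in the flat list a,
        swapping each upper-triangle entry with its mirror below the diagonal.
        Only one pair of values is in flight at any moment."""
        for i in range(n):
            for j in range(i + 1, n):             # upper triangle only
                a[i * n + j], a[j * n + i] = a[j * n + i], a[i * n + j]

    a = [1, 2, 3,
         4, 5, 6,
         7, 8, 9]
    transpose_square_inplace(a, 3)
    print(a)   # [1, 4, 7, 2, 5, 8, 3, 6, 9]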

However, it becomes significantly more complicated when the matrix is not square, because you can't just swap pairs of entries. It's conceivable that a single unforced move could force you to move a long chain of further entries. Interestingly, Wikipedia has an article dedicated to the solution of this problem. The REALLY interesting part is that it is solved, and the solution is nice.
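
Here is a sketch of the cycle-following idea for the non-square case (a bare-bones illustration, not a tuned implementation): in row-major storage the entry at flat index k must move to flat index (k*rows) mod (rows*cols - 1), and each cycle of that permutation is rotated once, carrying a single value at a time.

    def transpose_inplace(a, rows, cols):
        """In-place transpose of a rows x cols matrix stored row-major in the
        flat list a. The first and last entries stay put; every other entry at
        flat index k must end up at (k * rows) % (rows*cols - 1). Each cycle of
        this permutation is rotated exactly once, using O(1) scratch space."""
        size = rows * cols
        if size <= 2:
            return

        def dest(k):
            return (k * rows) % (size - 1)

        for start in range(1, size - 1):
            # Only the smallest index in each cycle triggers the rotation,
            # so every cycle is handled exactly once.
            pos = dest(start)
            while pos > start:
                pos = dest(pos)
            if pos != start:
                continue
            # Rotate the cycle, carrying one rescued value at a time.
            carried = a[start]
            pos = dest(start)
            while pos != start:
                a[pos], carried = carried, a[pos]
                pos = dest(pos)
            a[start] = carried

    a = [1, 2, 3,
         4, 5, 6]            # 2 x 3
    transpose_inplace(a, 2, 3)
    print(a)                 # [1, 4, 2, 5, 3, 6]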

Tuesday, August 3, 2010

The Pfaffian

The Pfaffian has come up recently in a colleague's research. It happens to equal the square root of the determinant (it is not defined that way, but in another way), but it is only defined for a 2n x 2n antisymmetric matrix. I began investigating it and I noticed this interesting property:

$$\mathrm{Pf}(A^T) = (-1)^n\,\mathrm{Pf}(A).$$

Of course, the determinant Det(A), which is the square of Pf(A), is invariant under transposition, so that power of (-1)^n is quite fortunate! It will square away!

Also notice this: taking the transpose of an antisymmetric matrix is equivalent to multiplying it by -1. That means that for some n, Pf and -1 commute, and for other n, Pf and -1 anticommute! This may, therefore, be intimately tied to bosons and fermions!
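
A minimal numerical sketch of these two facts (assuming NumPy; the recursive pf helper below is purely illustrative and only sensible for small matrices), checking Pf(A)^2 = Det(A) and the sign flip under A → A^T = -A:

    import numpy as np

    def pf(A):
        """Pfaffian of an even-dimensional antisymmetric matrix,
        by expansion along the first row (fine for small matrices)."""
        n = A.shape[0]
        if n == 0:
            return 1.0
        total = 0.0
        for j in range(1, n):
            # Remove row/column 0 and j; the term carries sign (-1)**(j-1).
            keep = [k for k in range(n) if k not in (0, j)]
            total += (-1) ** (j - 1) * A[0, j] * pf(A[np.ix_(keep, keep)])
        return total

    rng = np.random.default_rng(1)
    n = 3                                  # A is 2n x 2n
    M = rng.standard_normal((2 * n, 2 * n))
    A = M - M.T                            # antisymmetric

    print(np.isclose(pf(A) ** 2, np.linalg.det(A)))   # Pf(A)^2 = Det(A)
    print(np.isclose(pf(-A), (-1) ** n * pf(A)))      # Pf(A^T) = Pf(-A) = (-1)^n Pf(A)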




Monday, September 14, 2009

IMPORTANT NEWS!

"According to one of my professors, the transpose is equal to the conjugate transpose. This is a fundamental breakthrough in the science of transposes."

- Kyle Wardlow

Sunday, May 24, 2009

Poetry