Hi Mr. Powell,

I hope you had a great Christmas and New Year. Anyway, given that the number of rows is considerably larger than the number of columns, I would recommend the "lexical ordering" method. If n is the number of rows and m is the number of columns, the time complexity of this method is O(n m log n). If m were large enough, you might want to be more clever; I can explain that, too. I will write the code in Pascal.

Sorting the Rows of a Matrix Lexically:

You have a global array priorCol giving the priority of the columns:

    priorCol: array[1..numCol] of integer;

where priorCol[1] is the index of the most important column, priorCol[2] is the index of the next most important column, and so on.

You first define a Boolean function "rowLess", which takes two parameters of type rowType:

    function rowLess(var row1, row2: rowType): boolean;
    var
      i: integer;
    begin
      { scan the columns in priority order until the rows differ,
        or until the last column is reached }
      i := 1;
      while (i < numCol) and (row1[priorCol[i]] = row2[priorCol[i]]) do
        i := i + 1;
      rowLess := row1[priorCol[i]] < row2[priorCol[i]];
    end;

The function "rowLess" returns true if row1 must go above row2 in the sorted matrix, and false otherwise. It runs through the columns in priority order, stopping when it finds one where row1 and row2 have different entries, or when it reaches the last column. It then compares the entries of row1 and row2 in that column to produce the Boolean result.

You then apply any comparison-based sorting algorithm to the list of rows, using rowLess as the comparison. The worst-case time of "rowLess" is O(m), so if your sorting algorithm makes O(n log n) comparisons, you get O(n m log n) time overall.

In practice, however, it can be much better. Suppose the entries of the matrix are chosen at random from a set of N values, where N > 1. Then the expected running time of "rowLess" on two randomly chosen rows is O(1), since two random entries agree with probability only 1/N. However, your sorting algorithm is usually not comparing two randomly chosen rows, but rather rows that already agree in their first k columns, where k increases as the algorithm progresses.

If N >= n, and if the entries are truly chosen at random (unlikely in real life), the expected running time of "rowLess" is still O(1), since two rows being compared will very likely differ already in the first column. If 1 < N < n, and the entries are truly random, the expected running time of "rowLess" is O(log n / log N): there are only N^k possible prefixes of length k, so rows that end up near each other in sorted order typically agree on about log n / log N leading columns. That makes the overall time of the algorithm O(n log^2 n / log N), which is perfectly fine in practice.

In real life, though, the entries are not random, and if the number of columns is large, assuming cheap comparisons would be a dangerous bet. In that case, you would want a more sophisticated version of "rowLess" that makes use of how many leading entries of the two rows are already known to be identical. Let me send this message off to you now, and get back to you on that.
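
P.S. In case a concrete picture helps, here is a small self-contained sketch of the whole scheme, with rowLess driving a plain selection sort (any comparison-based sort would do; for large n you would of course use an O(n log n) one). The dimensions, the rowType definition, and the sample data below are made-up assumptions for illustration only, not part of your actual problem.

    program SortRowsLexically;

    const
      numCol = 3;   { made-up dimensions for illustration }
      numRow = 4;

    type
      rowType = array[1..numCol] of integer;

    var
      priorCol: array[1..numCol] of integer;
      mat: array[1..numRow] of rowType;
      temp: rowType;
      i, j, k, best: integer;

    function rowLess(var row1, row2: rowType): boolean;
    var
      i: integer;
    begin
      i := 1;
      while (i < numCol) and (row1[priorCol[i]] = row2[priorCol[i]]) do
        i := i + 1;
      rowLess := row1[priorCol[i]] < row2[priorCol[i]];
    end;

    begin
      { made-up priorities: column 2 is most important, then column 1, then 3 }
      priorCol[1] := 2;  priorCol[2] := 1;  priorCol[3] := 3;

      { made-up sample matrix }
      mat[1][1] := 5;  mat[1][2] := 2;  mat[1][3] := 9;
      mat[2][1] := 1;  mat[2][2] := 2;  mat[2][3] := 4;
      mat[3][1] := 7;  mat[3][2] := 1;  mat[3][3] := 3;
      mat[4][1] := 1;  mat[4][2] := 2;  mat[4][3] := 0;

      { selection sort on the rows, using rowLess as the comparison;
        this makes O(n^2) comparisons and is only meant to show the plumbing }
      for i := 1 to numRow - 1 do
      begin
        best := i;
        for j := i + 1 to numRow do
          if rowLess(mat[j], mat[best]) then
            best := j;
        if best <> i then
        begin
          temp := mat[i];
          mat[i] := mat[best];
          mat[best] := temp;
        end;
      end;

      { print the sorted matrix, one row per line }
      for i := 1 to numRow do
      begin
        for k := 1 to numCol do
          write(mat[i][k]:4);
        writeln;
      end;
    end.

With those made-up priorities the sample rows come out ordered by column 2 first, ties broken by column 1 and then column 3, which is exactly the tie-breaking that rowLess encodes.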