Programming Interview Questions

I sometimes study programming questions that may show up during an interview. Some of them have funny or surprising solutions, so I thought I’d list them here. Some of the solutions may be my own (in the sense that I didn’t use a reference) and in either case they may be wrong, so be aware. Comments are appreciated as always.

Let me start by giving you a list of the questions – in case someone wants to try them without accidentally peeking at my solutions:

1. Propose a data structure that supports the stack push and pop operations and a third operation find_min, which returns the smallest element in the data structure, all in O(1) worst case time. Next see if you can modify the structure to also let you identify the second smallest element in O(1) time.


2. Write an algorithm to check whether a given unsigned number is a multiple of 3, without using division and modulo operators.


3. Find the second smallest element in an array using only “n + log_2 n” comparisons (note: no big-oh here, we may for example not use 2n comparisons).


4. In linear time for an array of size n, find the majority element if it exists; i.e. an element occurring more than n/2 times.


5. Implement division without using division.


6. Find or determine non existence of a number in a sorted list of N numbers where the numbers range over M, M >> N and N large enough to span multiple disks. Algorithm to beat O(log n) bonus points for constant time algorithm.


7. Give a fast way to multiply a number by 7.


8. You are given a small sorted list of numbers, and a very very long sorted list of numbers – so long that it had to be put on a disk in different blocks. How would you find those short list numbers in the bigger one?


9. Find the longest increasing sub-sequence in an array of N elements. Or put another way: Remove as few elements as possible from the array so that the remaining items are sorted from small to large.


10. Implement Quick-Sort in-place.


11. Test if integer is a power of 2.


12. Print a spiral of numbers, with 1 in the middle and spiralling outwards with increasing numbers, using constant memory.


13. Find the k:th smallest element in an array using linear time and constant extra memory (may reorder elements of the array).


14. Write a small lexical analyzer.


15. Given an array of length N containing integers between 1 and N, determine if it contains any duplicates.


16. Somewhat tricky questions about linked lists.


17. Given an array of integers whereof all but one occurs an even number of times, find the one that occurs an odd number of times.


18. Suppose I give you the sequence of elements obtained from an in-order traversal of a tree; reconstruct the tree. How about pre-order and post-order?






1. Propose a data structure that supports the stack push and pop operations and a third operation find_min, which returns the smallest element in the data structure, all in O(1) worst case time.

Next see if you can modify the structure to also let you identify the second smallest element in O(1) time.

Solution: Maintain two normal stacks S and SM. Pushes and pops are always performed on S as normal, but in addition: when pushing an item that is less than or equal to the top of SM, push it onto SM as well, and when popping an item that equals the top of SM, pop SM too. The top of SM is then always the minimum, so find_min runs in constant time. I was initially tricked as I thought of “delete_min” (which would be impossible in O(1)) rather than “find_min”.

The follow-up problem can be solved similarly: let the secondary stack contain pairs of elements (s1, s2) such that whenever (s1, s2) is on top, s1 is the minimum element and s2 the second smallest. Alternatively, maintain three stacks in total, where the top of the second is always the minimum and the top of the third is always the second smallest.

[stacks.h, stacks.cpp]
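For concreteness, here is a minimal sketch of the two-stack idea in C (fixed-size arrays standing in for real stacks, no overflow checks):

#define CAP 1024

typedef struct {
  int s[CAP], sm[CAP]; //main stack and min-stack
  int n, m;            //number of elements on each
} minstack;

void ms_push(minstack *st, int x) {
  st->s[st->n++] = x;
  if (st->m == 0 || x <= st->sm[st->m - 1])
    st->sm[st->m++] = x;    //x is a new (possibly tied) minimum
}

int ms_pop(minstack *st) {
  int x = st->s[--st->n];
  if (st->m > 0 && x == st->sm[st->m - 1])
    st->m--;                //the current minimum leaves with x
  return x;
}

int ms_find_min(const minstack *st) {
  return st->sm[st->m - 1]; //assumes the stack is non-empty
}

Note the “<=” in ms_push: duplicate minima must all go onto SM, or popping one copy would lose the minimum too early.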



2. Write an algorithm to check whether a given unsigned number is a multiple of 3, without using division and modulo operators.

Solution: There are tons of inefficient ways (e.g. multiply by three until you get a bigger number) and probably quite a few ways to get % by means of bit-wise operations. The niftiest solution, however, seems to be based on intentionally overflowing a uint. I’m not a pro on this kind of arithmetic, but my understanding is as follows. Suppose we wish to check for divisibility by d=3. There is a uint M such that “uint(M*d)=1” – an inverse of d modulo 2^32. If d=3 and uint is 32 bits, then M=0xaaaaaaab; I’m not going to motivate this further, just check it yourself. Now write our input number as “I = k*d+s” with s<d, and note that k<=(0xFFFFFFFF/d), otherwise I would be larger than any uint. Then “M*I = k*(M*d) + M*s = k + M*s”, and this is at most 0xFFFFFFFF/3 = 0x55555555 exactly when s=0, or equivalently, exactly when I is divisible by d. In C:

/* Assumes unsigned int is 32-bit */
int divs3(unsigned int n) {
  const unsigned int M = 0xAAAAAAABu; //inverse of 3 mod 2^32
  return (n*M <= 0x55555555u);
}

So we have a solution that only requires two relatively simple operations (although the multiplication could be expensive, I doubt there is an equally nifty way to do it without multiplication). I would like to generalize this to various d>3 but there are some problems. It would be interesting to know if there is a class of numbers for which a similar trick can work – ideas?
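For what it’s worth, the trick should generalize to any odd d, since every odd number has a multiplicative inverse modulo 2^32 (this kind of divisibility test appears in Hacker’s Delight). A sketch – the inverse is found here with a few Newton iterations, and in real code both constants would of course be precomputed:

/* Sketch: divisibility test for any odd d; assumes 32-bit unsigned int. */
unsigned int inverse_mod32(unsigned int d) {
  unsigned int m = d;  //correct to 3 bits for odd d (d*d == 1 mod 8)
  int i;
  for (i = 0; i < 4; i++)
    m *= 2 - d*m;      //Newton step: doubles the number of correct bits
  return m;            //now d*m == 1 (mod 2^32)
}

int divisible(unsigned int n, unsigned int d) {
  /* the threshold 0xFFFFFFFF/d would be a precomputed constant,
     so no division at run time */
  return n * inverse_mod32(d) <= 0xFFFFFFFFu / d;
}

For d=3 this reproduces M=0xaaaaaaab and the threshold 0x55555555. For even d one could first shift out the trailing zero bits of d and check them against n separately.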



3. Find the second smallest element in an array using only “n + log_2 n” comparisons (note: no big-oh here, we may for example not use 2n comparisons).

Solution: I find this solution elegant, which is why I include it even though it doesn’t seem very useful in practice. Compare all the elements in pairs and retain the smaller from each pair: n/2 comparisons. The winner (smaller) of each pair must be linked to the loser in some way, e.g. by a linked list. Recurse: “n/2 + n/4 + … <= n” comparisons in total. Now you have the smallest element – but the second smallest must at some point have been “eliminated” in a comparison with the smallest! That is a collection of at most “log_2 n” numbers, and trivially we can find its minimum with “log_2 n” comparisons.

This solution seems useful only if comparisons are unusually expensive. I say it is not very useful in practice because I have never encountered a situation in which I would want the second smallest element, other than in the case of numbers; and numbers are, as far as I can tell, always cheap to compare relative to the cost of keeping track of all the losers of the comparisons. Strings we usually want to sort, don’t we?

This question relates to question 13: to find the k:th smallest element.
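A sketch in C of the tournament with loser lists (fixed-size buffers, so it assumes 2 <= n <= 64 – just a toy illustration):

#define MAXN 64
#define MAXLOG 7 //an element can win at most ceil(log2(MAXN)) rounds

int second_smallest(const int *a, int n) {
  int idx[MAXN];            //indices still in the tournament
  int losers[MAXN][MAXLOG]; //who lost directly against each element
  int nl[MAXN];
  int i, m, k, win, second;

  for (i = 0; i < n; i++) { idx[i] = i; nl[i] = 0; }

  for (m = n; m > 1; m = k) {
    k = 0;
    for (i = 0; i + 1 < m; i += 2) {
      int w = idx[i], l = idx[i+1];
      if (a[l] < a[w]) { int t = w; w = l; l = t; }
      losers[w][nl[w]++] = l;       //remember who the winner beat
      idx[k++] = w;
    }
    if (m & 1) idx[k++] = idx[m-1]; //odd one out advances for free
  }

  win = idx[0];                     //the smallest element
  second = losers[win][0];          //the runner-up lost directly to it
  for (i = 1; i < nl[win]; i++)
    if (a[losers[win][i]] < a[second]) second = losers[win][i];
  return a[second];
}

The tournament uses n-1 comparisons in total, and the final scan at most “log_2 n” - 1 more.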



4. In linear time for an array of size n, find the majority element if it exists; i.e. an element occurring more than n/2 times.

Solution: Another fairly elegant problem not likely to appear in real life very often. Like in the previous case, divide the elements into pairs; discard any pairs that are not equal and then arbitrarily discard half of each remaining pair. Repeat until one number remains, check if this number is a majority element. The time this takes is O(n+n/2+n/4+…)=O(n) because with each repetition at least half of the list is discarded. Further, suppose X is a majority element, then X continues to be a majority element through each reduction: Discarding unequal pairs discards at most one X for each other element and the remaining list must obviously have more pairs of the type (X, X) than any other kind of pair (so (X,X) is a majority element in the list of pairs).
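A sketch of this pairing reduction in C. One subtlety the description above glosses over: when the list has odd length, the leftover element may only be kept if it is a majority of the current list, or the invariant can break; and the final candidate must be verified against the original array. (It uses a scratch copy rather than being strictly in-place – a sketch for clarity, and not the classic Boyer–Moore one-pass vote. Needs stdlib.h and string.h.)

int count_of(const int *a, int n, int v) {
  int i, c = 0;
  for (i = 0; i < n; i++) c += (a[i] == v);
  return c;
}

/* Returns 1 and stores the majority element in *out, or returns 0. */
int find_majority(const int *a, int n, int *out) {
  int *w = malloc(n * sizeof *w);
  int m = n, ok;
  memcpy(w, a, n * sizeof *w);

  while (m > 1) {
    int i, k = 0;
    for (i = 0; i + 1 < m; i += 2)
      if (w[i] == w[i+1]) w[k++] = w[i]; //keep one of each equal pair
    if ((m & 1) && count_of(w, m, w[m-1])*2 > m)
      w[k++] = w[m-1];                   //keep odd leftover only if safe
    m = k;
  }

  ok = (m == 1) && (count_of(a, n, w[0])*2 > n); //verify the candidate
  if (ok) *out = w[0];
  free(w);
  return ok;
}

The verification passes cost O(m) per round and the rounds shrink geometrically, so the total stays O(n).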



5. Implement division without using division.

Solution: This shows up time and again, I will give my solution in C. There are probably much niftier ways to do it, but I think this should be a good tradeoff between comprehensible and efficient. If T is to be divided by B:

/* Quotient of T divided by B, by repeated doubling; assumes T >= 0, B > 0. */
int mydiv(int T, int B) {
  if (B>T) return 0;

  int d = B;
  int count = 1;

  while (d*2 <= T) {
    d *= 2;
    count *= 2;
  }

  return (count + mydiv(T-d, B));
}

Note that I write “*2” for readability, the compiler should turn it into “<<1” so actually this method doesn’t use multiplication either (only subtraction, shift and comparisons). The recursion is also easy to get rid of, if undesirable. The time it takes is obviously O((log(T/B))^2): The while loop stops at the i:th step when “B*2^i > T”, i.e. as soon as “i > log(T/B)” and this number must decrease for each level of the recursion (exercise: prove the last statement).

Note: For large numbers this can be done much faster by changing the present “while” loop to a binary search. I.e., shift left/right by 16, 8, 4… to find the correct d in each recursion step.
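One way to realize that idea – a sketch, not necessarily exactly what was meant above: scale B up once with shifts, then subtract the scaled copies on the way back down. This makes the whole division O(log(T/B)) instead of restarting the scaling in every recursion step:

/* Iterative shift-subtract division; assumes B > 0. */
unsigned int mydiv2(unsigned int T, unsigned int B) {
  unsigned int q = 0, d = B;
  int s = 0;

  while (d <= (T >> 1)) { d <<= 1; s++; } //largest B*2^s <= T

  for (; s >= 0; s--, d >>= 1)
    if (d <= T) { T -= d; q |= 1u << s; } //take B*2^s if it fits

  return q;
}

(The “T >> 1” in the scaling loop guards against overflow when doubling d.)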



6. Find or determine non existence of a number in a sorted list of N numbers where the numbers range over M, M >> N and N large enough to span multiple disks. Algorithm to beat O(log n) bonus points for constant time algorithm.

Solution: This has confused many people, and in order not to get it wrong I decided to copy it with the exact wording as I found it. At first glance it seems impossible – a binary search takes time O(log n), but we are asked to beat that and even encouraged to try for constant time(!) One trick could be some argument about hardware that I don’t fully grasp, as the problem is presented in an external-memory type of model (maybe computation is essentially free and we only need to worry about block reads from the disks, but I don’t think this solves it). A more likely approach is to give a randomized algorithm with good expected behaviour under reasonable assumptions about the data.

Assume therefore that the N numbers were chosen uniformly at random from [0,M-1]. Suppose we are looking for a number D=a*M for some non-negative a<1. Then surely we should start looking at position a*N in the list (if you want to look up “zebra” in a dictionary, do you start in the middle of the book?). Now comes the difficult part: how close are we after having picked L[a*N]? Treating this properly requires tools from a university course on statistics or randomized algorithms, but I’ll try to keep it simple.

Since M is very large, the number of elements smaller than a*M is essentially a binomially distributed random variable X_a ~ Bin(N, a). This variable is “sharply concentrated” about its expectation a*N: its standard deviation is at most sqrt(N)/2, and applying e.g. Chernoff bounds we can show that the probability that X_a deviates from a*N by more than roughly sqrt(c*N*log(N)) is very small, something like N^(-c) for some positive constant c. So a single probe at position a*N has (in constant time, with high probability) reduced the problem to an interval of length about sqrt(N*log(N)).

Now repeat the same interpolation probe inside the narrowed interval, treating it as a fresh, smaller instance. Going from N to roughly sqrt(N) in every step means the interval shrinks to a constant after about log(log(N)) steps – this is exactly the classical interpolation search, whose expected running time on uniformly distributed data is O(log(log(N))). (A careful analysis takes some work, because after conditioning on earlier probes the remaining elements are no longer exactly uniform; look up Perl, Itai and Avni’s analysis of interpolation search for the details.)

In short we end up with an algorithm whose expected running time on uniform data is O(log(log(N))). Furthermore, since M>>N, note that log(log(N)) <= log(log(M)), and even if M is the range of a whopping 256-bit word, this is no more than 8. Yes, that’s right: we should need no more than an order of 8 probes to find our result. Make M the range of a (ridiculous) 65536-bit word and it becomes 16. Although a little imprecise, it is not unreasonable to say that our algorithm is constant time for all practical purposes.

I think this is a nice example to demonstrate the power and significance of randomized algorithms and average-case analysis.

[my sample code here…]
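Here is a minimal sketch of the interpolation search itself (in-memory version; values assumed unsigned and roughly uniform):

/* Returns the index of D in the sorted array a[0..N-1], or -1. */
int interp_search(const unsigned int *a, int N, unsigned int D) {
  int lo = 0, hi = N - 1;

  while (lo <= hi && a[lo] <= D && D <= a[hi]) {
    int pos;
    if (a[lo] == a[hi]) break; //interval is constant
    /* probe where D "should" be if values are uniform on [a[lo], a[hi]] */
    pos = lo + (int)((double)(D - a[lo]) / (a[hi] - a[lo]) * (hi - lo));
    if (a[pos] == D) return pos;
    if (a[pos] < D) lo = pos + 1; else hi = pos - 1;
  }
  return (lo <= hi && a[lo] == D) ? lo : -1;
}

In the external-memory setting of the question, each probe would be one block read, which is the whole point: an expected O(log(log(N))) block reads instead of O(log(N)).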



7. Give a fast way to multiply a number by 7.

Solution: A couple of solutions:

uint mult7(uint n) { return (n*7); }
uint mult7(uint n) { return (n+n+n+n+n+n+n); }
uint mult7(uint n) { return ((n<<3) - n); }  //probably the one they want
uint mult7(uint n) { return ((n<<2) + (n<<1) + n); }



8. You are given a small sorted list of numbers, and a very very long sorted list of numbers – so long that it had to be put on a disk in different blocks. How would you find those short list numbers in the bigger one?

Solution: This smells akin to number 6, doesn’t it? Let N be the size of the smaller list and M the size of the larger. If N is really very small, it is hard to see how we could do much better than binary searches. But if it is not so small, then it constitutes a sample of the domain of the elements, and if we make assumptions about the distribution (uniform) then we can apply the method of #6 to succeed in time O(N*log(log(M))) with a probability that goes to 1 as N increases. Again we expect the first element to be at position M/N in the big list – it’s silly to start looking in the middle, isn’t it?

But I guess the expected answer to this question is different, because there is a way to do it that you might miss at first glance. Obviously we can do N independent binary searches in time O(N*log(M)), but this discards much information after each step. Say instead we do it like this: pick the midpoint of the small list and binary search for it in the large list: time O(log(M)). Suppose this splits the large list into two pieces of sizes M1 and M2, M1+M2=M. We now have two subproblems with parameters (M1, N/2) and (M2, N/2) respectively (well, except we found the first number already), and we can find two more items of the small list in time O( log(M1) + log(M2) ) <= O( 2*log(M/2) ), in the process splitting the long list into 4 pieces that, similarly, in the worst case have sizes M/4. Recursing in this way we find all “1+2+4+…+(N/2) = N” elements in roughly log(N) rounds, the i:th of which takes time O( 2^i * log(M/2^i) ). Let T be the total time.

T <= log(M) + 2*log(M/2) + 4*log(M/4) + ... + (N/2)*log(M/(N/2))
  <= log(M^N) - log(2^2 * 4^4 * 8^8 * ... * (N/2)^(N/2))
  =  O( N*log(M/N) )    (the dropped terms amount to O(N), and M >> N)

Not terribly impressive if N is “small”, it may seem, but clearly better than doing the N searches independently. Again note that this is a worst case that we get only if an adversary gave us a tricky list – if we assume some distribution of the elements we can do much better. Oh, and if “small” means e.g. N=M/1000… well, then we have time O(N), pretty neat, eh?
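A sketch of the recursion (in-memory arrays for simplicity):

/* Binary search over big[lo..hi): returns the insertion point of key
   and sets *found if the key is present. */
int bsearch_pos(const int *big, int lo, int hi, int key, int *found) {
  int end = hi;
  while (lo < hi) {
    int mid = lo + (hi - lo)/2;
    if (big[mid] < key) lo = mid + 1; else hi = mid;
  }
  *found = (lo < end && big[lo] == key);
  return lo;
}

/* Report, for every element of small[s_lo..s_hi), whether it occurs in
   big[b_lo..b_hi), splitting at the midpoint of the small list. */
void search_all(const int *small, int s_lo, int s_hi,
                const int *big, int b_lo, int b_hi) {
  int mid, pos, found;
  if (s_lo >= s_hi) return;
  mid = s_lo + (s_hi - s_lo)/2;
  pos = bsearch_pos(big, b_lo, b_hi, small[mid], &found);
  printf("%d %s\n", small[mid], found ? "found" : "missing");
  search_all(small, s_lo, mid, big, b_lo, pos);
  search_all(small, mid + 1, s_hi, big, found ? pos + 1 : pos, b_hi);
}

Each level of the recursion narrows the pieces of the big list that later searches may touch, which is where the saved log-factors come from.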

Come to think of it, a cheeky way to answer #6 would be “sure, I can give you amortized constant time if you give me enough input numbers in sorted order!”.



9. Find the longest increasing sub-sequence in an array of N elements. Or put another way: Remove as few elements as possible from the array so that the remaining items are sorted from small to large.

Solution: Once upon a time I believe I found this one out on my own in an algorithms course, but this time around I had to look it up. A sign that I’m getting old: depressing. Dynamic programming is the solution, of course:

Start from the left; for each potential sequence length (say L in [0,N]), keep track of a sequence with that length and with a smallest possible last element. That is, there may be many sequences of a specific length L, but we only need to remember one that has the smallest possible last element – the others are redundant. Intuitively, a sequence with small last element is best because any element that can “extend” one of the other sequences can extend that one.

So when we move right through the list of numbers and examine a new element X, then for every “best” sequence (say of length L) that X can extend, we check if the new sequence (of length L+1) is “better” than the recorded best sequence of length L+1; if so, we replace it. In the end, the answer is the sequence recorded for the largest length that has one. Since we scan the list of lengths for every new element, the algorithm is O(n^2) in the worst case.
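Since the recorded last elements increase with the length L, the scan over lengths can even be replaced by a binary search, bringing this down to O(n log n). A sketch that computes just the length (the fixed buffer assumes n <= 1024; keeping predecessor links per element would recover the sequence itself):

int lis_length(const int *a, int n) {
  int tail[1024]; //tail[L] = smallest possible last element of an
                  //increasing subsequence of length L+1
  int i, len = 0;

  for (i = 0; i < n; i++) {
    int lo = 0, hi = len;
    while (lo < hi) {             //first L with tail[L] >= a[i]
      int mid = (lo + hi)/2;
      if (tail[mid] < a[i]) lo = mid + 1; else hi = mid;
    }
    tail[lo] = a[i];              //extend or improve
    if (lo == len) len++;
  }
  return len;
}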

[my sample code here…]



10. Implement Quick-Sort in-place.

Solution: I got it into my mind to try and make the most elegant Quick-Sort C-implementation I could come up with. Here it is:

void array_qsort(int *a, int n) {
  if (n<=1) return;
  int i = 0, j = n-1;
  int pivot = a[j];

  while (i<j) {
    while (i<j && a[i]<=pivot) i++;
    a[j] = a[i]; //now i points to empty cell
    while (i<j && a[j]>=pivot) j--;
    a[i] = a[j]; //now j points to empty cell again
  }

  a[j] = pivot;
  array_qsort(a, j);
  array_qsort(&a[j+1], n-j-1);
}

(Actually it is prudent to use a random pivot, but I excluded that here – just begin by swapping a random element with the last.) Though simple enough, it took me a while to come up with this code because at first I didn’t want the “i<j” checks in the inner while-loops. Unfortunately I see no pretty way around them, as i or j could in the odd case run outside the array otherwise. Btw, by “elegant” I mean a solution that is a good trade-off between (locally) optimal and short, understandable code. Several solutions I’ve seen waste memory operations by moving elements unnecessarily (e.g. moving a[i] to a[j] even if it is smaller than the pivot); though it would save some code, I wanted to avoid that.
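The random-pivot variant mentioned above is a tiny preamble – something like this, called at the top of array_qsort after the base case (needs stdlib.h for rand):

/* Swap a random element into the pivot slot before partitioning. */
void randomize_pivot(int *a, int n) {
  int r = rand() % n;
  int t = a[r]; a[r] = a[n-1]; a[n-1] = t;
}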



11. Test if integer is a power of 2.

Solution:

x>0 && ((x-1)&x) == 0, or
x>0 && (x&(-x)) == x

both five operations (note the parentheses around the &-terms: in C, == binds tighter than &). Slightly niftier is

x && !(x & (x - 1))

with only four operations.



12. Print a spiral of numbers, with 1 in the middle and spiraling outwards with increasing numbers, using constant memory.

Solution: This is just engineering of course, no particular cleverness involved, though it took some concentration to get the cases right. The output space is a grid; let’s introduce (x,y) coordinates in the natural directions and say that the spiral starts upwards, that is, 1 is at (0,0), 2 at (0,1), 3 at (-1,1) and so on. Note that the coordinates (1,1), (2,2), (3,3)… correspond to completed squares of radii 1,2,3… respectively. But then the numbers at these positions must be 9,25,49… and so on; that is, the number at (x,x) for non-negative x is “(2(x+1)-1)^2”. Other numbers can be calculated easily from this by sorting out a few cases. Check the following function for yourself!

#include <stdio.h>
#include <math.h> //log10

int spiral_seq(int x, int y) {
  if (x==y && x>=0) return ( (2*(x+1)-1) * (2*(x+1)-1) );
  if (y>0 && -y<=x && x<=y) return ( spiral_seq(y-1, y-1) +y-x );
  if (x<0 && x<=y && y<=-x) return ( spiral_seq(x, -x) -x-y );
  if (y<0 && y<=x && x<=-y) return ( spiral_seq(y, y) +x-y );
  if (x>0 && -x<=y && y<=x) return ( spiral_seq(x, -x) +y+x );
  fprintf(stderr, "Logic error in spiral_seq(%d, %d)\n", x, y);
  return 0; //unreachable for valid input
}

void print_spiral_n(int rad) {
  int max = (2*rad+1)*(2*rad+1);
  int dig = (int)log10((double)max) + 1;
  char format[10];
  int x,y,p;

  sprintf(format, "%%.%du ", dig);

  for (y=rad; y>=-rad; y--) {
    for (x=-rad; x<=rad; x++) {
      p = spiral_seq(x,y);
      printf(format, p);
    }
    for (x=0; x<dig/4+1; x++) printf("\n"); //row break, plus blank lines when numbers are wide
  }
}

And here is some sample output:

57 56 55 54 53 52 51 50 81
58 31 30 29 28 27 26 49 80
59 32 13 12 11 10 25 48 79
60 33 14 03 02 09 24 47 78
61 34 15 04 01 08 23 46 77
62 35 16 05 06 07 22 45 76
63 36 17 18 19 20 21 44 75
64 37 38 39 40 41 42 43 74
65 66 67 68 69 70 71 72 73



13. Find the k:th smallest element in an array using linear time (worst case) and constant extra memory (may reorder elements of the array).

Solution: This one I had to look up – they go under the name selection algorithms. The idea builds on the quick sort implementation I showed earlier (question 10) but of course avoids sorting elements that are on the “wrong” side of the pivot. Suppose we are looking for the 10:th smallest element and find a pivot p that splits the array into two parts, (S, {p}, L), where |S|>=10. Then obviously our target is in S and we can discard L. Even if we discard only 10% of the elements in each step, the time will be linear: O( N + 0.9N + (0.9^2)N + …) = O(N) (I have used this time and again; look up “geometric series” if you’re unsure).
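For concreteness, a sketch of the randomized version – expected linear time, constant extra memory, reorders the array (the partition is the same as in question 10; k is 0-based; needs stdlib.h):

int quickselect(int *a, int n, int k) {
  while (n > 1) {
    int r = rand() % n;  //random pivot, swapped to the end as in question 10
    int pivot, i = 0, j = n - 1, t;
    t = a[r]; a[r] = a[n-1]; a[n-1] = t;
    pivot = a[n-1];

    while (i < j) {
      while (i < j && a[i] <= pivot) i++;
      a[j] = a[i];
      while (i < j && a[j] >= pivot) j--;
      a[i] = a[j];
    }
    a[j] = pivot;

    if (k == j) return a[j];
    if (k < j) n = j;                            //target left of the pivot
    else { a += j + 1; n -= j + 1; k -= j + 1; } //right of the pivot
  }
  return a[0];
}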

Using a random pivot it is not hard to get linear expected time, so the difficulty here is to “improve” that “expected” to a “worst case”. I write “improve” because I understand the worst-case optimal algorithms have somewhat worse expected-case behaviour in general; hybrids like introspective sort and introspective select try to get the best of both worlds. I will now try to explain a straightforward way of obtaining a worst-case linear time algorithm. The idea is to find a guaranteed good pivot. The best pivot would be exactly the k:th smallest element that we are looking for, but to keep things a little simpler, I will settle for approximating the median. We need to show that this approximation discards at least a constant fraction of the elements in each step.

We need a median finding algorithm but that’s also what we are designing so the answer is recursion! The median is the (N/2):th smallest element and as a base case, we can find this easily by any reasonable sorting algorithm as soon as the array has less than, say, 10 elements. If the array is larger than that we can do as follows:
1. Make groups of five elements and calculate the median for each group, this is N/5 numbers.
2. Find the true median of all the (N/5) medians by using this entire algorithm that we are designing recursively (even though it is not finished yet).

Once we have the true median, M, of the N/5 group medians, we use it as a pivot. Why is this a good pivot? N/10 group medians are larger than M, and for each such group median there are two more elements in its group that are larger still. Therefore at least 3N/10=0.3N elements are larger than M, which is not too shabby. Vice versa, of course, there are also at least 0.3N elements smaller than M. Note that I have ignored some rounding issues.
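Step 1 is straightforward – for instance an insertion sort within each little group (a sketch):

/* Median of a small group (len <= 5) by insertion sort. */
int median5(int *g, int len) {
  int i;
  for (i = 1; i < len; i++) {
    int v = g[i], j = i - 1;
    while (j >= 0 && g[j] > v) { g[j+1] = g[j]; j--; }
    g[j+1] = v;
  }
  return g[len/2];
}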

But with a good pivot, we are free to recurse as mentioned above, and we are done! The two different recursions performed by this algorithm are confusing and we need to be a bit careful with step 2. Say that the algorithm takes time bounded by T(N) when applied to N numbers. Then

T(N) ≤ O(N) + T(0.2N) + T(0.7N) ≤ O(N)(1 + 0.9 + 0.9^2 + 0.9^3 + ...) ≤ O(N)

Sorting out the induction may not be entirely obvious, but I don’t want to go into details; just assume “T(N) ≤ cN+d” for suitable constants c and d, and the proof should go through. Why did we use groups of 5, by the way? Let’s try 3: then the pivot is only guaranteed to cut off (1/3)N elements and the expression becomes “T(N) ≤ O(N) + T(N/3) + T(2N/3)”, which is no good because the recursive calls take roughly as long at each level and the algorithm fails to be linear! If we pick 7 for the groups, we get “T(N) ≤ O(N) + T(N/7) + T(5N/7) ≤ O(N)(1 + 6/7 + (6/7)^2 + …)”, which also works, so it is not obvious why we should prefer a small group size. On the other hand, finding the median within each group costs something like c*log(c) comparisons for group size c, and the improvement in the recursion seems rather marginal in comparison. Nonetheless, I won’t pursue this further. The strongest result I’m aware of says we can find the k:th smallest element with less than 2.95N comparisons (look up Dor and Zwick, “Selecting the Median”), but at this level of optimization I’d think other issues become more important in practice.



14. Write a small lexical analyzer.

Solution: That’s a bit vague and I’m not sure what it’s supposed to do. Less boring, I decided, would be something that parses simple math expressions.

#include <stdio.h>
#include <string.h> //strlen
#include <math.h>   //pow

double _str_evaluate(char *s, int n) {
  int i=0, j=n-1,k,d,c;
  double ans,op1,op2,dec;
  int open, priority, op;
  int test;

  //some base cases and spaces
  if (j<i) return 0;
  while (s[i]==' ') i++;
  while (s[j]==' ') j--;
  if (j<i) return 0;

  //parantheses
  if (s[i]=='(' && s[j]==')') {
    test=-1; open=1;
    for (k=i+1; k<j && open>0; k++) {
      if (s[k]=='(') open++;
      else if (s[k]==')') open--;
    }
    if (open==1)
      return _str_evaluate(&s[i+1], j-i-1);
  }

  //is number?
  for (k=i,ans=0,c=-1,test=0; k<=j; k++) {
    if (s[k]=='.') {
      if (c==-1) { c=10; continue; }
      else { test=-1; break; }
    }
    d = s[k]-'0';
    if (d>9 || d<0) { test=-1; break; }
    if (c<0) ans = ans*10 + d;
    else { ans += d*(1.0/c); c*=10; }
  }
  if (test>=0) return ans;

  //find lowest precedence open operator
  open = 0; priority = 100; op = -1;
  while (j>i) {
    if (s[j]==')') open++;
    if (s[j]=='(') open--;
    if (open==0) {
      if (s[j]=='^' && priority>50) {
	priority = 50;
	op = j;
      } else if ((s[j]=='*' || s[j]=='/') && priority>40) {
	priority = 40;
	op = j;
      } else if ((s[j]=='+' || s[j]=='-') && priority>30) {
	priority = 30;
	op = j;
      }
    }
    j--;
    //assert: open>=0
  }

  //evaluate
  if (op < 0) return 0; //FAIL: no operator found, malformed expression
  op1 = _str_evaluate(s, op);
  op2 = _str_evaluate(&s[op+1], n-op-1);

  if (s[op]=='^') return pow(op1, op2);
  if (s[op]=='*') return op1*op2;
  if (s[op]=='/') return op1/op2;
  if (s[op]=='+') return op1+op2;
  if (s[op]=='-') return op1-op2;

  return 0; //FAIL
}

double str_evaluate(char *sz) {
  int n = strlen(sz);
  return _str_evaluate(sz, n);
}

I’ve ignored any error handling aspects so there are evil expressions that make it crash and burn. It does well with normal things like “(3+4.2)-1.2*(4.01-3.2/2)” or “1 + 1/2 + 1/4 + 1/8 + 1/16 + 1/32” etc.



15. Given an array of length N containing integers between 1 and N, determine if it contains any duplicates.

Solution: This is trivial in O(N^2) time and O(1) memory. We can also do it in O(N*log N) time in-place if we may reorder the array, and with a pigeonhole technique in O(N) time using an O(N) look-up table. These are all fairly trivial observations, but several sources seem to suggest that we may be asked to do it in “linear time without using any extra memory”. I will give a solution to this that requires some “light” assumptions. Let’s call the array “a=(a(1), a(2), …, a(N))”:

If we may destroy the data we only need one reserved value, that is, a single value outside [1,N]. Since the question is posed with the range starting from 1 rather than 0, as would be natural for most programmers, maybe this is what they had in mind. It’s still rather silly if you ask me: go through the list and follow the chains “i, a(i), a(a(i)), …”, writing the reserved value 0 into each visited position; if we ever step into a position that is already 0 without having come back around to the start of the current chain, then two positions pointed at it and we have found a duplicate. This solution doesn’t appeal to me though.
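A sketch of that marking trick; the chain-following is what keeps us from reading a slot after it has been zeroed (0-indexed array, values in [1,N]):

/* Destroys a[]. Returns 1 if a duplicate exists. */
int has_duplicate(int *a, int n) {
  int i;
  for (i = 0; i < n; i++) {
    int j = i;
    if (a[i] == 0) continue;             //already visited
    do {
      int next = a[j] - 1;               //value 1..N -> index 0..N-1
      a[j] = 0;                          //mark as visited
      j = next;
      if (a[j] == 0 && j != i) return 1; //someone was here before us
    } while (j != i);                    //without duplicates, every chain
                                         //is a cycle back to its start
  }
  return 0;
}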

There could be a really legitimate solution possible, suggestions are welcome. Someone pointed out to me that the array defines a linked list that must be circular if there are no duplicates… The problem is that more precisely, it may define several linked lists that are indeed all circular when the array is without duplicates. Great, so what’s the problem? Let’s consider the suggested algorithm. Start with a(i), go to a(a(i)) and so on. Use Floyd’s cycle finder (next question) to detect when the linked list intersects itself. With some more cleverness we can find the shape of the linked list.
(i) If it is a P (lollipop shape), one position has multiple incoming links which would mean that there is a duplicate and we are done.
(ii) If it is an O (circular), then we stop and proceed to try some other element.
But here’s the problem, how do we find “another” element? If we try to start from every element the algorithm may become O(N^2) again. We can’t mark any positions, if we could there are simpler solutions. That said, there are certainly many things that could be tried. I’d be happy to learn about a solution!



16. Possibly tricky questions about linked lists:

(i) Given the head of a singly linked list, find out in linear time and using only constant memory, the length of the list – which might contain a cycle. Identify the length and location of the cycle if it exists.

(ii) Given a pointer to some element in a singly linked list, but neither the head nor anything else of it, describe how to delete the element. Do it in constant time, and describe any necessary contingency.

Solution: (i) This one became really popular in my circles a few years ago – presumably because Google used it as an interview question. It is conveniently solved with Floyd’s cycle finding algorithm [Wikipedia].
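A sketch of (i) for reference – tortoise and hare, then one extra lap for the cycle length and one synchronized walk for the cycle start:

struct node { int val; struct node *next; };

/* Stores the total node count in *len; returns 1 if there is a cycle. */
int list_info(struct node *head, int *len) {
  struct node *slow = head, *fast = head, *p;
  int cyc = 0, n = 0;

  while (fast && fast->next) {
    slow = slow->next;
    fast = fast->next->next;
    if (slow == fast) { cyc = 1; break; } //they met inside the cycle
  }

  if (!cyc) {         //no cycle: just count the nodes
    for (p = head; p; p = p->next) n++;
    *len = n;
    return 0;
  }

  n = 1;              //cycle length: walk one full lap
  for (p = slow->next; p != slow; p = p->next) n++;

  slow = head;        //cycle start: step both pointers one at a time
  while (slow != fast) { slow = slow->next; fast = fast->next; n++; }

  *len = n;           //tail length + cycle length
  return 1;
}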

(ii) This one I got recently at a phone interview, and it took a kick in the butt before I saw that it was possible. The idea is not to get hooked on the containing nodes of the list but to realize that the important thing is the order of the elements. Overwrite the element in the node we have a pointer to with the element of the next node, then delete the entire next node by redirecting the “next” pointer of the current node. Be careful if asked to delete the last node; this method requires that a node can be flagged as an end node somehow (I often implement linked lists with the last node pointing to NULL, but that wouldn’t work here).
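And (ii) in code, using the node type from above – a sketch that assumes the given node is not the last one (needs stdlib.h for free):

void delete_node(struct node *p) {
  struct node *q = p->next; //must exist: p is not the last node
  p->val = q->val;          //steal the next element...
  p->next = q->next;        //...and unlink its old home
  free(q);
}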



17. Given an array of integers whereof all but one occurs an even number of times, find the one that occurs an odd number of times.

Solution: It’s so simple that it’s silly, except it’s also easy to miss. Just XOR all the numbers together and you will be left with exactly the number that occurs an odd number of times. This takes a single pass, linear time and constant extra memory. With sufficiently many processors working in parallel you can get it down to O(log N) time (XOR N/2 pairs together in parallel, then repeat with the N/4 pairwise results, and so on).
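In code:

/* XOR everything together: values occurring an even number of
   times cancel, leaving the odd one out. */
int odd_one_out(const int *a, int n) {
  int i, x = 0;
  for (i = 0; i < n; i++) x ^= a[i];
  return x;
}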



18. Suppose I give you the sequence of elements obtained from an in-order traversal of a tree; reconstruct the tree. How about pre-order and post-order?

Solution: This is a trick question; the answer is that it is impossible in general. Probably the simplest proof is to consider a sequence of two elements (a, b). This sequence can represent two different trees, and that remains true regardless of whether the sequence is in-, pre- or post-order. (For in-order (a, b), say: either a is the root with b as its right child, or b is the root with a as its left child.)

Curiously, if two sequences are given, whereof one is in-order and the other either post- or pre-order, there is a solution that can be explained with relative ease: we know the root element, since it is the tail of the post-order or the head of the pre-order sequence. Knowing the root, we can use the in-order sequence to partition the remaining elements into those of the left and right sub-trees respectively. From here the remainder follows by recursion (with a little hand-waving).
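A sketch of that reconstruction for pre-order + in-order (distinct elements assumed; needs stdlib.h):

struct tnode { int val; struct tnode *left, *right; };

struct tnode *rebuild(const int *pre, const int *in, int n) {
  struct tnode *root;
  int k = 0;
  if (n == 0) return NULL;

  root = malloc(sizeof *root);
  root->val = pre[0];          //head of pre-order is the root
  while (in[k] != pre[0]) k++; //split in-order at the root

  root->left  = rebuild(pre + 1,     in,         k);
  root->right = rebuild(pre + 1 + k, in + k + 1, n - k - 1);
  return root;
}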

Perhaps even more surprisingly, pre-order + post-order together are not sufficient either. Consider (a, b) as pre-order together with (b, a) as post-order: there are again two possible trees that yield these sequences (b can be either the left or the right child of a).
