VERY VERY ROUGH DRAFT
INTRACTABILITY MADE EASY

BACKGROUND          

In the solar system there is the planet Flat Earth. Here a global village has arisen and firmly taken root. In the global village there are the wise men, who are elected to form an exclusive club, called the T club. We will refer to the pronouncements of members of this club as the sayings of the Wise Man. The Wise Men lived in the last fifty years of the last century. Before them there were the prehistoric Wise Men, who lived in the first half of the last century.

We start with the concept of a computation. We model whatever is going on in the Universe as a computation carried out by a computational device. Whatever nature does is a computation which can be modeled and simulated by a computational device. Whatever an entity of nature like an insect, reptile or animal does is a computation which can be modeled and simulated by a computational device. We have conceptual entities like spirits, fairies, demons, monsters, giants, Jinn, Rakshasas, Asuras and so on, drawn from the folklore, myths and legends of cultures over the ages. We model what they can do by computations carried out by some suitable computational device. We also model the lesser supernatural beings who have oracles and can foretell the future to some extent by computational devices, to the extent we can imagine them.

In the global village outsourcing of work  is common. A job is split up into parts and carried out in different parts of the village. The geographical locations may be far away in space. The parts of the job may be done at different times. There is thus a spatial and temporal separation of the parts into which the job is split up.

About 40 years ago a Wise Man, visiting the Third World, identified the requirements for the outsourcing of software, and in due course of all IT based activities, over the global village. Let the brainy Small be an entity who wants some software or some IT related activity to be developed or maintained. Then Small should spend a substantial amount of time in planning and deciding what is to be done, down to the last detail. Then he should outsource the job in parts. For this he selects an entity Big in the global village. It initially appeared that, as Big has to do routine work, it was sufficient for Big to be hare-brained. Not so, as per the Wise Man. Big has to be young, motivated, competently educated, communicative, extroverted, skilled to some reasonable degree, with a nominal two years' experience, willing to work for low wages and of the highest IQ possible. Countries that failed to see that Big should have a high IQ were forced to outsource their lower level software activities and, in due course, their IT related activities.

We have studied the outsourcing industry for about three decades. We see that a high IQ for Big, who does the outsourcing work for Small, is desirable. From a theoretical point of view we would like to know the actual IQ requirements of Big and of Small to carry out the outsourcing work.

We consider old cartoon movies. Take for example Snow White and the Seven Dwarfs. Technology was limited in those days. The movie consists of thousands of slides painstakingly painted by hand. From one slide to the next there is a small variation. These slides, when run at a suitable speed, give the illusion of motion to the human eye.

Let us say the painting of slides has to be outsourced. Each slide consists of a table of squares. Each square is painted some color. There are two paint boxes. The first one has a palette of 7 or more colors, the second one has a palette of 5 or more colors. There is a white color for blank squares. An entire slide will have only one square with a color from the first palette. The remaining squares are colored using the second palette. Small will think up everything in detail. Then he will outsource a slide to Big with instructions. This is called a Service Request (SR).

Big now has an SR. He is given a time of perhaps a week to service the request. He will isolate the square colored from the first palette, using a software tool. Then he will consider the squares to the east, west, north and south of that square, using another software tool. If required he will add a row or column of blank squares. For this he uses software akin to the MS-Word facility for inserting rows and columns in a table. He will now look up the book of instructions, using another software tool. Depending on the central square, one adjacent square is to be chosen: east, west, north or south. The software tool will tell him which one. Then the entire slide is copied ditto, but for the two squares. These two squares are modified and reproduced in the new slide as per the rule book. It is verified, both by a software tool and manually, that the final slide has only one square with a color from the first palette. Whatever Big has done is inspected, verified and reviewed by a chain of persons, depending on the importance Small attaches to the job and the size of his pocket. The two slides are now sent back to Small. He goes through the process of inspection, verification and review. The SR has been serviced.

The next slide will involve the next SR. The process is repeated for all the slides. It is Small's responsibility to assemble the slides into the movie. Suppose he forgets to number the slides at times, or suppose the slides get mixed up: then what happens? Horror stories, as we shall see in our study of intractable problems!

It turns out that if the slides are given to Big in proper sequence then Big is effectively capable of universal computation. So the basic IQ expected of him by the Theory of Computation people is that he should be able to identify a particular colored square. Then he should look up a rule book. Select the appropriate adjacent square. Then he has to alter the pair in the new slide. A software tool will copy the entire contents of the old slide to the new one, but for the pair selected and in question. This is all that is required for the outsourcing of any IT related activity.
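
To make the single SR concrete, here is a minimal Python sketch of servicing one slide, written under assumptions of our own: the names service_request, direction_of and rewrite, and the format of the rule book, are hypothetical and only illustrate the kind of bookkeeping Big is asked to do.

    # A slide is a grid (list of rows) of color names. Exactly one square
    # carries a color from the first palette; the rest use the second palette.
    FIRST_PALETTE = {"red", "blue", "green", "orange", "purple", "brown", "black"}
    MOVES = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}

    def service_request(slide, direction_of, rewrite):
        """Produce the next slide from the current one, for a single SR.

        direction_of[head_color]          -> 'N', 'S', 'E' or 'W'
        rewrite[(head_color, neighbour)]  -> (new_head_square, new_neighbour_square)
        Both tables stand in for the rule book and are assumptions of this sketch.
        """
        new_slide = [row[:] for row in slide]                  # copy the slide ditto
        # 1. Isolate the single square colored from the first palette.
        r, c = next((i, j) for i, row in enumerate(slide)
                    for j, color in enumerate(row) if color in FIRST_PALETTE)
        # 2. The rule book names which adjacent square to consider.
        dr, dc = MOVES[direction_of[slide[r][c]]]
        nr, nc = r + dr, c + dc      # (a row or column of blanks would be added
                                     #  here if this falls off the edge of the slide)
        # 3. Modify only the chosen pair of squares, as per the rule book.
        new_slide[r][c], new_slide[nr][nc] = rewrite[(slide[r][c], slide[nr][nc])]
        # 4. Verify that exactly one first-palette square remains.
        assert sum(color in FIRST_PALETTE for row in new_slide for color in row) == 1
        return new_slide

Servicing the whole movie is just this step repeated, one SR per slide, which is exactly why Big working on the slides in proper sequence amounts to universal computation.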

From a practical point of view it does not work. A very high IQ and other constraints as mentioned earlier are actually required from the Software Industry point of view.

The choice of Small and Big parallels a Tom and Jerry show. The brainy Small, Jerry, puts the hare-brained Big, Tom, to work. Alternatively, in Jurassic Park, the brainy Small mammal puts the hare-brained Big dinosaur to work.

COMPUTATIONAL MODELS

Let us now try to develop models for Small and Big that are computational devices of different types.

Let us take Big and pack him off to college. After some time we want him to appear for an examination. For the purposes of classifying models we have four varieties of Big: Type 0 Big, Type 1 Big, Type 2 Big and Type 3 Big. This classification is based on a study by a prehistoric Wise Man named Chomsky. It is adapted from what is called, after him, the Chomsky hierarchy.

We will start with Type 0 Big. Type 0 Big is given a question paper to answer. The number of questions is the size of the input given to him. He is to answer each question with an essay. There is no time limit and no limit on the paper he can consume. We will create a paper factory, invest sufficient funds, guard against inflation and so on, and whenever Type 0 Big wants extra paper we give him a ream. What Type 0 Big does is read a word from a question at the beginning of the answer script, cross it out, and then write an essay on it on the blank paper at the end of the answer script. This he repeats without getting tired. Type 0 Big is modeled by our Pentium computer, with sufficient disk storage. This is no problem: disk capacity is increasing and costs are falling. Whenever the computer wants more disk space we will give it a terabyte disk. A terabyte can store a million books. In due course we can give it a petabyte disk, which can store a billion books, or an exabyte disk, which can store a trillion books. Future generations can easily give it a yottabyte disk whenever it wants more space. This can store all the associations the human race has had over the last ten thousand years. We say Type 0 Big computes the PROCEDURES. Type 0 Big models what is called the TURING MACHINE (TM).

When we see the massive reams of paper that Type 0 Big turns out as answers to the question paper in the examination, we feel some pity for the teacher. She has to evaluate his entire answer script. Let us restrict Type 0 Big to a finite amount of paper and put a time limit on the examination. After all, the teacher has to go home some time and cannot spend her entire life in the examination hall. Then we have modeled a time bounded computation. Type 0 Big has to stop churning out his essays some time. This model is what we call an ALGORITHM. We say this restricted Type 0 Big computes the ALGORITHMS. This variant of Type 0 Big is modeled by a Pentium computer which neatly switches itself off when the job is over, without hanging. It is also modeled by a HALTING TURING MACHINE.

Let us consider another model for Big. This is the restricted Type 1 Big model. Here Type 1 Big is given an objective question paper like the GRE, TOEFL, GATE, JEE or CAT. He is given the question paper, the answer sheet and perhaps one blank piece of paper for rough work. The size of the input is the number of questions plus 2. He cannot ask for more paper. He can scribble anywhere on whatever paper is given to him. This restricted Type 1 Big model is not as powerful as the Type 0 Big model. This is primarily because of the restriction on the space in which the candidate can answer the question paper. The restricted Type 1 Big is called a restricted Linear Bounded Automaton (LBA).

The next model we consider is the restricted Type 2 Big model. Here the restricted Type 2 Big is given a question paper which is read only. He is not to scribble on it. He can read it only once, as it scrolls from left to right.

Here the restricted Type 2 Big is like a nursery school child. He is given the list of questions and their answers before the examination. He prepares for the examination by mastering the list. This is similar to the use of question-and-answer based examination guides that students all over India follow.

The question paper in the examination consists of a special kind of list of questions and their answers. The restricted Type 2 Big examines this list from the beginning to the end. Whenever a question comes up in this list, the restricted Type 2 Big puts it on top of a pile. If an answer comes, he sees if it matches the last question he has seen, the one on top of the pile. If it does not, he fails the examination. If it matches, he removes that question from the pile and proceeds with his scan of the question paper.

This kind of operation is how a grocery storekeeper works in small towns and villages. The storekeeper rests, and customers who come in are ignored. They are just added to the pile of persons to be serviced. Then the storekeeper wakes up and services a customer, or some of them. He selects the persons who have waited for the least amount of time, the ones most recently added to the pile. Then the storekeeper rests for some time and repeats the operation.

Let us describe the computation in a different way now. Restricted Type 2 Big is given a tray which can hold a stack of paper. Initially only one marked sheet of paper is at the bottom of the tray. The mark indicates it is the last sheet of paper. In response to a question in the input, or without caring for the input question, restricted Type 2 Big either (1) removes a paper from the top of the tray, a POP operation, or (2) puts a marked paper on top of the tray, a PUSH operation. If he finds the tray empty he has finished the examination successfully.
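
As a small illustration of this PUSH and POP discipline, here is a minimal Python sketch of the question-and-answer matching described above, using a list as the tray; the encoding of questions and answers as ("Q", x) and ("A", x) pairs is an assumption made only for the example.

    def passes_examination(paper):
        """Scan the question paper once, left to right.

        An answer must match the most recently seen unanswered question;
        the examination is cleared when the tray ends up empty.
        """
        tray = []                                  # the stack of sheets
        for kind, item in paper:
            if kind == "Q":
                tray.append(item)                  # PUSH the question
            else:                                  # an answer arrives
                if not tray or tray[-1] != item:
                    return False                   # mismatch: fail the examination
                tray.pop()                         # POP the matched question
        return not tray                            # success iff the tray is empty

    # Questions nested inside one another must be answered in reverse order.
    print(passes_examination([("Q", 1), ("Q", 2), ("A", 2), ("A", 1)]))   # True
    print(passes_examination([("Q", 1), ("Q", 2), ("A", 1), ("A", 2)]))   # False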

Restricted Type 2 Big, modeled by this computational device, is very adept at taking care of precedence and priorities. Let us say you are attending some formal occasion. The hostess is very attentive to you and is talking to you. A more important guest comes in. You are ignored and the attention is on that guest. Then, before that guest is taken care of, another, even more important guest comes in. Attention is shifted to this new person. Once this guest is taken care of, the next in line is attended to, and then finally you. Taking care of the order of precedence is very important on a formal occasion. The military, bureaucracy, diplomats and politicians go haywire over it. The restricted Type 2 Big is good at handling the order of precedence. It is not as powerful as the Type 1 Big or the Type 0 Big. It is called a restricted Push Down Automaton (PDA).

When too many VIPs turn up in any order, the problem of managing the protocol of precedence becomes difficult. The wrong garland may be given to a VIP, the wrong associations may be recalled and uttered, the names may get mixed up, and various such bloomers take place.

To handle the situation we will have a Master of Protocol Management. He will allow only those arrival patterns of VIPs that he approves of. This job is given to a Type 3 Big, which we describe below.

A Type 2 Big is nothing but a restricted Type 2 Big with the aid of a Master of Protocol Management.

The next variant is the Type 3 Big. This is nothing but the Type 0 Big without any ink. It was pointed out by a Wise Man that if we take the Type 0 Big model and take away the ink, we have the Type 3 Big model. Here Type 3 Big has only an oral examination, where he essentially recalls the answers from memory, perhaps with some modifications. Type 3 Big can store only a finite amount of information in its memory. This model of computing device for Type 3 Big is called the FINITE AUTOMATON (FA).
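
For concreteness, here is a minimal Python sketch of a finite automaton: a fixed transition table and a single current state are all the memory it has. The particular automaton, one that accepts strings of a's and b's containing an even number of a's, is an illustrative assumption and not anything from the text.

    def make_even_a_checker():
        """A finite automaton with two states that accepts strings over
        {'a', 'b'} containing an even number of a's."""
        transitions = {("even", "a"): "odd", ("even", "b"): "even",
                       ("odd", "a"): "even", ("odd", "b"): "odd"}
        def accepts(word):
            state = "even"                              # the start state
            for symbol in word:
                state = transitions[(state, symbol)]    # finite memory: one state
            return state == "even"                      # the accepting state
        return accepts

    accepts = make_even_a_checker()
    print(accepts("abba"))   # True: two a's
    print(accepts("ab"))     # False: one 'a'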


NONDETERMINISM

We will now extend the family of models to the Chomsky hierarchy. First we will study the concept of a nondeterministic computing device versus a deterministic computing device.

Consider the classic problem involved in a Tom and Jerry show. Tom just does not seem to get the better of Jerry. We can identify the problem as possibly being that, in any situation, Tom has only one option he can exercise as to what to do next. We say Tom is deterministic.

Let us try to give some magical powers to Tom. Let us make him some sort of a wizard. This new Tom we can call WizTom. He can make multiple copies of himself when there is a choice in some situation, one copy for each choice. All the copies act in parallel. If any copy has a choice it makes multiple copies of itself, one copy for each choice in the situation as to what to do next. All the copies attack Jerry in parallel in different ways. If any copy succeeds we say WizTom has succeeded. We only count the copy that succeeds in the shortest span of time. Copies that fail we do not count. If all the copies fail then we say WizTom has failed; in such a case WizTom does not make any copies of himself. The question that arises is whether WizTom is more powerful than our old Tom. WizTom is called the nondeterministic variety and the old Tom the deterministic variety.

The same situation arises in Bluto versus Popeye. Bluto just cannot get the better of Popeye. Suppose we create a new Bluto by giving him some magical powers. This new Bluto is called WizBluto. He can make multiple copies of himself when there is a choice. All the copies can simultaneously tackle Popeye in different ways. If a copy has a choice of what to do next it replicates itself into copies: one for each choice. If any copy succeeds then WizBluto has succeeded. If a copy succeeds by a long path and another copy by a short path, then the shortest path is automatically taken. If all the copies fail then WizBluto has failed. In such a case WizBluto will not make any copies of himself. We say Bluto is deterministic and WizBluto is nondeterministic. The question is as to whether WizBluto is cleverer than Bluto.

Opening the newspaper every day shows that Mr Wilson is in perpetual terror of what trick Dennis will play next. He just cannot keep track of all the places where Dennis may be planning or confiding with his friends. Suppose we create a new Mr Wilson, WizWilson, who is nondeterministic and has some magical powers. He can make multiple copies of himself, one copy for each possible choice of where Dennis might be. A copy can make multiple copies of itself whenever there is a choice. All the copies act in parallel. If one copy succeeds then the other copies cease to operate. The failures do not count. Only if all the copies have failed has WizWilson failed. In such a case WizWilson does not make any copies of himself.

A student has the problem of having to attend class to mark attendance. The student also has many appointments: one in the park, one in the mall, one in the canteen and one in the supermax theatre, all at the same time. The student, being deterministic, can only do one thing at a time. Suppose we create a hypothetical student by adding magical powers. The new student is called WizStudent. This WizStudent, by replication, has multiple copies: one for each choice of what to do next in any situation. All the copies act in parallel. If any copy succeeds in marking the attendance then it is enough. If there is success by a long path and by a short path, then the shortest path is automatically selected. Copies that fail do not count. If all the copies fail WizStudent has failed, and in this case no copies are made. WizStudent is said to be nondeterministic. The question is whether WizStudent is really cleverer than the old deterministic student.

In a typical school or college the world over, lower down in the hierarchy, the teacher has a problem controlling the students. The teacher has to mark the attendance. Some students are talking, some sleeping, some throwing chalk pieces at each other, and so on. The teacher has to do multiple things at the same time. This is difficult for the deterministic teacher. If we give the teacher some magical powers and create a new WizTeacher who is nondeterministic, it will possibly help. WizTeacher can make multiple copies of herself, one copy for each choice of what to do next in any situation. All the copies act in parallel. If a copy has a choice as to what to do next in any situation, it will make copies of itself. If any copy is able to mark the attendance then the job is done. If multiple copies manage to succeed, then the one that succeeds by the shortest path counts. Failures do not count. If all copies fail then WizTeacher has failed. In such a case no copies are made. The question is whether the nondeterministic WizTeacher is cleverer than the deterministic teacher.

In the 26/11 terrorist incident at the crowded Bombay railway station, the gun-toting terrorists had a field day. A policeman trying to control them was basically deterministic. Such a policeman can do only one thing in any situation. Suppose we add magical powers and create a new hypothetical WizPoliceman. He can make multiple copies of himself to surround the terrorist from all sides in different ways. If one copy succeeds in controlling the terrorist it is enough. If a copy has a choice as to what to do next then it makes copies of itself. All the copies act in parallel. Copies that fail do not count. If one copy succeeds by a long path and one by a short path then only the shortest path is considered. If all the copies fail then the WizPoliceman has failed, and in this case no copies are made. The question is whether the nondeterministic WizPoliceman is more effective and cleverer than the deterministic policeman.

There was the long classic hunt for the bandit queen in the ravines of the Chambal Valley, Madhya Pradesh, in central India. This went on for decades by the deterministic policeman. In South India there was the decades-long hunt in the forests for the elusive elephant and sandalwood poacher Veerappan. The deterministic policeman tried for decades in vain. The question is whether a nondeterministic WizPoliceman would have done better in these classic hunts.

The standard Pentium PC as we know it is deterministic. We do not find it making copies of itself. In any particular situation the same single machine carries on. Let us consider a new PC called the WizPentium PC. This will have the capability of making multiple copies of itself if there is a choice of what action should be taken next in some situation. A copy can make copies of itself when it has a choice of action. The copies act in parallel. If a copy succeeds then the computation is a success and the other copies stop. If many copies succeed, the one that computed by the shortest path is taken. Failures do not count. If all copies fail then failure is declared without making any copies at all. Such a PC is hypothetical. It is a theoretical tool and may not be actually realizable by human beings. However, it is useful in studying a set of problems that plague industry. These problems have over the centuries made mathematicians lead a lifetime of tears, toil, sweat and despair.

Nondeterminism is common in our thinking. We do not consider the failures at the end, only the success. Let us say  we have a notorious urchin next door. He has done only one good thing in his life. We ask his parents about him.  They only talk about the good thing and not the bad things. The bad things are forgotten. If we see a desert and a blade of grass, we will talk only about the blade of grass and not the plethora of bare zones in the desert. We consider King Bruce a success and we do not count his failed attempts.

In any household the Mother is a very busy person in the morning. She has to get the children ready for school, the elder ones ready for college, Father ready for office, and attend to various household chores in the kitchen and various gadgets. When you make a stream of demands she regularly warns you that she does not have a dozen hands and cannot be in a dozen places at the same time. She can do only one thing at a time. In any situation, even if there is a choice of what to do next, she can only do one thing as the next action. Mother is deterministic. Suppose we equip Mother with some magical powers. We now have a new WizMother. She can make multiple copies of herself when required, each carrying out a different line of activities. A copy, when required, can make multiple copies of itself. All the copies act in parallel. If one copy succeeds in satisfying the demand for which she replicated herself, it is enough. If multiple copies succeed, then only the one that succeeded by the shortest path counts. Copies that fail do not count. If all copies fail then WizMother has failed, and in this case no copies are made. The question is whether this nondeterministic WizMother is more powerful than the deterministic Mother.

The world over, the responsibility of the Father in a family is to bring in the money to run the house. In ancient days this was called the necessity 'to keep the pot boiling'. With rising inflation, competition, economic instability, social unrest, political upheavals and jobs migrating all over the global village, Father needs part time jobs. He has, however, only one copy of himself and can be in one place only. Suppose we equip Father with magical powers. We now have a nondeterministic WizFather. He can make multiple copies of himself whenever there is a choice of what to do next. All the copies act in parallel. If a copy has a choice of what to do next then it makes copies of itself. If one copy succeeds by a long path and one by a short path, only the copy that succeeds by the shortest path counts. Copies that fail do not count. If all the copies fail then WizFather has failed, and no copies are made in this case. The question is whether the nondeterministic WizFather is cleverer than the deterministic Father.

Nondeterministic computers may not exist in reality. They exist in the folklore, legends and myths of all cultures and regions of the world. There are in legend and folklore monsters, giants, fairies and the like who will die in a finite amount of time but can make multiple copies of themselves. In the folklore and legends of India we have festivals celebrating the destruction and death of Rakshasas and Asuras who have nondeterministic capabilities. In some parts of India some of these Asuras and Rakshasas are worshipped. The Jinn of the desert, who live for hundreds of years, are reputed to make copies of themselves, and beings with similar powers appear in Greek mythology. They have nondeterministic capabilities.

 

It is summer and Rahim enters a hot classroom. He would like to switch on the fan. There are twenty switches and Rahim does  not know which one to press, so he presses the switches one after another. One of the switches he presses turns on the fan. We model Rahim by a deterministic computer. In any situation he has no choice of what is to be done.

Consider the same classroom and now Ashok, who enters the room. He is nondeterministic. He sees twenty switches, so he makes 20 copies of himself; that is, there are 20 Ashoks now and all of them operate in parallel. Each one of them presses a switch and one of the copies chooses the right switch. Of the 20 copies of Ashok, 19 fail to find the switch and only one succeeds. We do not count the failures and look only at the success. Thus Ashok in a single operation has been able to switch on the fan, whereas Rahim in the worst case took 20 attempts.

We say Ashok is very clever. He is able to guess the correct switch and press it, unlike Rahim who has to try each switch. So we say a nondeterministic machine can guess the correct answer and then verify it.
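
A minimal Python sketch of the two behaviours, under the obvious assumption that the fan switch is simply a marked position: the deterministic Rahim tries switches one by one, while the nondeterministic Ashok is simulated by spawning one copy per switch and keeping only the copy that succeeds.

    def rahim(switches):
        """Deterministic: try the switches one after another."""
        attempts = 0
        for i, turns_on_fan in enumerate(switches):
            attempts += 1
            if turns_on_fan:
                return i, attempts              # found the fan switch
        return None, attempts

    def ashok(switches):
        """Nondeterministic (simulated): one copy per switch, all in parallel.
        Each copy makes a single attempt; only the successful copy counts."""
        successes = [i for i, turns_on_fan in enumerate(switches) if turns_on_fan]
        return (successes[0], 1) if successes else (None, len(switches))

    switches = [False] * 19 + [True]            # the twentieth switch runs the fan
    print(rahim(switches))                      # (19, 20): twenty attempts at worst
    print(ashok(switches))                      # (19, 1): a single parallel step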

Nondeterministic thinking is very much part of all of us. Consider the report in the Times of India of the peon in Bombay who passed the SSLC in December 2010, on his 44th attempt. We do not count the 43 earlier failed attempts but only the successful attempt. A nondeterministic computer makes multiple copies of itself, all operating in parallel, and only if all of them fail do we say the machine fails. If one or more copies succeed then we say the machine succeeds. Also, if there is a shortest path to success, the machine will find it. Once a copy succeeds the other copies stop working.

Consider an undergraduate BE student. He has flunked exam after exam. A lot of subjects are in backlog. We are not interested in how many backlogs he has. Ultimately, if he gets his degree, the backlogs don't matter.

 

Let us say we have our cousin Ashok. He has landed himself a big job in an MNC in some country with advanced infrastructure. We are all very proud of him and wish him well.

Now the recession has started and Ashok has lost his job. The native has landed up at Secunderabad railway station. Now that he is jobless we will have nothing to do with him. We will not give him our house address or directions to our house.

He is outside Secunderabad station and there are three roads: one to Monda market, one to Elphinstone bridge and one to the Clock Tower. If Ashok is a deterministic computer he may just not know which way to go and may quietly go back to his village, and we have got rid of him. Suppose however that Ashok is made of tougher material. He is a nondeterministic computing device. Then he will make three copies of himself, one going to Monda market, one to Elphinstone bridge and one to the Clock Tower. All three copies act in parallel. Note that we are talking of three Ashoks now.

When a copy reaches the Clock Tower it has three further choices of roads, so this copy makes three copies of itself. The copy at Elphinstone bridge has 6 choices of roads and makes 6 copies of itself. At the end of Monda market the copy has three choices of roads at RP Road and makes three copies of itself. Now we have 12 copies of Ashok. The process carries on.

Some copy goes to Delhi, then to Calcutta, then falls into the sea and dies. Some copies go to the dead end of a road and die as they cannot proceed further.

In nondeterminism we do not bound the number of copies, all copies work in parallel. Some copies find our house. Even if one copy of Ashok lands up in our house we say Ashok has succeeded and all the other copies stop working. Some copy may find our house by a long route, some by short routes and in particular one copy finds our house by the shortest route.

We say a nondeterministic computing device is clever enough to find our house by the shortest route. The only way a nondeterministic computing device can fail is if all the copies fail. If we run away from Secunderabad then all the copies of Ashok will fail to find our house and Ashok fails.
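
One way to picture this on an ordinary deterministic machine is to simulate all the copies level by level, which is just breadth-first search: because every copy advances one road per step, the first copy to reach the house has automatically taken the shortest route. The toy road map below is an assumption invented for the sketch.

    from collections import deque

    # A hypothetical road map: each place lists the places one road leads to.
    roads = {
        "Secunderabad station": ["Monda market", "Elphinstone bridge", "Clock Tower"],
        "Monda market": ["RP Road"],
        "Elphinstone bridge": ["dead end", "RP Road"],
        "Clock Tower": ["RP Road", "dead end"],
        "RP Road": ["our house"],
        "dead end": [],
        "our house": [],
    }

    def ashok_finds_house(start, goal):
        """Simulate the copies of Ashok: each queue entry is one live copy,
        carrying the route it has walked so far."""
        copies = deque([[start]])
        seen = {start}
        while copies:
            route = copies.popleft()
            if route[-1] == goal:
                return route              # first copy to arrive: the shortest route
            for nxt in roads[route[-1]]:
                if nxt not in seen:       # a fresh copy walks down this road
                    seen.add(nxt)
                    copies.append(route + [nxt])
        return None                       # every copy failed, so Ashok fails

    print(ashok_finds_house("Secunderabad station", "our house"))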

Ashok is responsible for most of the mischief in the neighbourhood and has done only one good thing in his life. If we go and ask his mother what sort of chap Ashok is she will only talk about the single good thing he has done and not about his faults.    Of course if we ask the neighbour he will tell us all the faults of Ashok. Ashok has been given an assignment by his professor. He struggles with it, makes a lot of mistaken attempts and ultimately gets the answer.

 

To summarise:

1) A nondeterministic computing device has a choice of what to do next in some situations.

2) For each choice it makes a copy of itself, and we do not bound the number of copies.

3) All the copies work in parallel.

4) Even if one copy succeeds we say the device has succeeded.

5) If there is a shortest path and a long path to success, the nondeterministic device will automatically guess the shortest path to success, and the other copies immediately terminate.

6) The only way a nondeterministic device can fail is if all the copies fail.

7) A deterministic device is trivially a nondeterministic device with only one choice in each situation.

8) Nondeterministic devices do not exist in nature, but nondeterminism is very much there in our thinking.

Nondeterminism can be modeled by saying that we guess the answer and then verify it to be correct. Such an imaginary strategy is sometimes claimed by all of us, even when we have actually used brute force methods for a solution.

Whenever we see someone sitting down struggling over a crossword or word oriented puzzle, we notice the strategy followed. Part of it is guessing the answer. Part of it is the brute force method of enumerating all the permutations and combinations possible. A model we can use is that only the exhaustive enumeration is followed. Then the person who solved the problem quietly claims to have cleverly guessed the answer and verified it.

When Ashok submits the assignment the professor asks him how it was. Ashok says it was very easy, he was easily able to guess the solution to the assignment. This Ashok does to impress his professor. A nondeterministic Ashok will guess all possible solutions and try them in parallel. One copy of Ashok finds the correct solution by the shortest route. Some copies may find the correct solution by longer routes. In nondeterminism we only consider the shortest route.

Rahim has been invited to the marriage feast of the rich, powerful, popular and famous chairman of our locality. The directions to the location of the feast are vague in the invitation card, and Rahim spends hours going in all possible wrong directions, cursing his luck, and then ultimately ends up at the right location. The chairman thanks Rahim for coming and asks him if he had any difficulty in finding the location. Rahim has to be formal and put on his best behaviour. So he says that he had no difficulty at all in finding the location. It was so easy; he easily guessed the path and came directly. A nondeterministic Rahim will create copies of Rahim to try all possible paths to the marriage feast location. The copies act in parallel. One of the copies succeeds in finding the location by the shortest path, and this is the only one that counts. We say nondeterministic Rahim guesses the correct path and then verifies it.

The British have a phrase for this. It is called the typical understatement. A lifetime of toil, sweat, tears and effort ends up with some result. This can be dismissed as if the effort involved was 'nothing much'. It was just necessary to guess the result and verify its correctness.

Whenever we are to be on our best behavior and be formal we only look at the positive and good things and ignore the  negative and bad things. If there is only one good thing like talking about the weather we stick to it as ‘success’ and ignore all else as ‘failure’.

Careers and the way of life one can lead depend on competitive examinations. This is especially true in India, where opportunities are limited for the middle and lower classes. Consider the GATE examination with 150 multiple choice questions to be answered, each question having 4 choices. Let us say we have the key, which we obtain from the examiner. If Abdul is to answer the GATE paper and he is nondeterministic, he will make copies of himself. He is allowed to make as many copies of himself as necessary, and all the copies act in parallel. The first question results in 4 copies of Abdul, each choosing one of the possible answers. The second question results in 4^2 copies of Abdul: each of the 4 copies from the previous question makes 4 copies of itself, choosing an answer to the second question, and so on. The 150th question results in 4^150 copies of Abdul, all acting in parallel. Of these copies, 4^150 - 1 will not choose the correct answers to all the questions. Only one copy chooses all the correct answers. Since one copy succeeds, we say Abdul is very clever and scores 100% in the exam. We say Abdul, being a nondeterministic computer, can guess the correct key to the question paper and then verify it.
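
What makes this more than a joke is the asymmetry between guessing and checking: there are 4^150 possible answer sheets, but checking any one guessed sheet against the key takes a single pass over the 150 questions. A minimal Python sketch of the verify half of guess-and-verify, with a made-up key:

    import random

    NUM_QUESTIONS, CHOICES = 150, "ABCD"
    key = [random.choice(CHOICES) for _ in range(NUM_QUESTIONS)]   # made-up examiner's key

    def verify(answer_sheet, key):
        """Check one guessed answer sheet: one pass, 150 comparisons."""
        return all(a == k for a, k in zip(answer_sheet, key))

    # There are 4**150 possible answer sheets; a deterministic Abdul would have to
    # plod through them, but any single guess is verified in linear time.
    print(4 ** 150)                     # the number of copies of Abdul
    print(verify(list(key), key))       # the one copy that guessed the key: True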

 

Robert Bruce and the spider

“If at first you don't succeed, try, try again.”

If the spider had been nondeterministic it would have made seven copies of itself, all acting in parallel. Six of them would have failed but the seventh one succeeds, and this is the only one that counts. Once the seventh copy succeeds, the first six stop working.

 

If Robert Bruce was nondeterministic, he would have made seven copies of himself and seven parallel wars against England would have been fought. Only the copy that succeeds counts.

 

In folklore and mythology the world over it has been normal to assume that supernatural beings can duplicate themselves and manifest themselves in multiple copies.

In Hindu mythology Krishna makes 15000 copies of himself to be with 15000 gopis. He is called the ‘sidhvilasudu’ who manifests himself by duplication in multiple locations wherever good is to be done.

Tolstoy in one of his parables talks about a good Samaritan who makes two copies of himself, one for the good work and the other for the pilgrimage.

In Arabic mythology we have the Jinn, and in Greek mythology (see Ovid, Metamorphoses) various beings who can make multiple copies of themselves, each copy behaving differently.

 

“Big brother is watching you”—George Orwell

In his famous book 1984, George Orwell talks about the fictional Big Brother, who in effect makes multiple copies of himself, one copy to watch each person, by means of a telescreen (possibly a webcam). Big Brother is an example of unbounded nondeterminism: he can make as many copies of himself as required.

The language of palindromes.

Palindromes are strings which read the same left to right or right to left.

Don't nod
Dogma: I am God
Never odd or even
Too bad – I hid a boot
Rats live on no evil star
No trace; not one carton
Was it Eliot's toilet I saw?
Murder for a jar of red rum
May a moody baby doom a yam?
Go hang a salami; I'm a lasagna hog!
Satan, oscillate my metallic sonatas!
A Toyota! Race fast... safe car: a Toyota
Straw? No, too stupid a fad; I put soot on warts
Are we not drawn onward, we few, drawn onward to new era?
Doc Note: I dissent. A fast never prevents a fatness. I diet on cod
No, it never propagates if I set a gap or prevention
Anne, I vote more cars race Rome to Vienna
Sums are not set as a test on Erasmus
Kay, a red nude, peeped under a yak
Some men interpret nine memos
Campus Motto: Bottoms up, Mac
Go deliver a dare, vile dog!
Madam, in Eden I'm Adam
Oozy rat in a sanitary zoo
Ah, Satan sees Natasha
Lisa Bonet ate no basil
Do geese see God?
God saw I was dog
Dennis sinned

Palindromes can be recognised by reading the string left to right and using a stack to store the first half of the string. When the second half of the input comes, we unstack, matching the stacked symbols against the input.

The difficulty comes with even length palindromes. We do not know where the center is.

This is where a nondeterministic computer fits in. It is able to guess the center and then verify it.

Consider the palindrome MalayyalaM. A nondeterministic computer guesses that the center is possibly between (M,a), or (a,l), or (l,a), or (a,y), or (y,y), or (y,a), or (a,l), or (l,a), or (a,M). At each of these nine positions the copy that is still undecided makes two copies of itself, one guessing that the center has come and the other that it has not. A copy that has declared its center stops branching and simply matches the rest of the input against its stack, so we end up with roughly one copy per possible center position; the number of copies grows only with the length of the string. All of them compute in parallel. One of them guesses correctly, and that is sufficient.
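
A deterministic simulation of this guessing in Python: spawn one copy per possible center and accept if any copy verifies its guess. The stack-based verification mirrors the description above; the function names are of course our own.

    def copies_for(word):
        """One copy of the nondeterministic machine per guessed center
        of an even-length palindrome: positions 1 .. len(word)-1."""
        return range(1, len(word))

    def copy_accepts(word, center):
        """A single copy: push the letters before its guessed center,
        then pop and match them against the letters after it."""
        stack = list(word[:center])
        for ch in word[center:]:
            if not stack or stack.pop() != ch:
                return False
        return not stack

    def wiz_accepts(word):
        """The nondeterministic machine succeeds if any copy succeeds."""
        return any(copy_accepts(word, c) for c in copies_for(word))

    print(wiz_accepts("MalayyalaM"))   # True: the copy that guessed (y,y) succeeds
    print(wiz_accepts("Malayyalam"))   # False: no copy can make the halves match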

THE CHOMSKY HIERARCHY

We start with Big, an animal, human, fairy or some other entity. To fix our ideas, let us say Big has problems to solve or is given problems to solve. Big can be modeled by a deterministic computing device. There are problems Big can solve and those that are beyond Big's capability. We give Big the powers of a wizard and create WizBig. WizBig is modeled by a new computing device. The question is whether this device can solve a larger class of problems.

Now let us consider the models of computational devices which we adapted from the Chomsky hierarchy earlier. We had essentially four varieties of computational devices: Type 0 Big, Type 1 Big, Type 2 Big and Type 3 Big. Now with WizBig we have an equivalent four models: Type 0 WizBig, Type 1 WizBig, Type 2 WizBig and Type 3 WizBig.

Type 0 WizBig is the case where WizBig is given a question paper to answer. The answers can be essay type. There is no limit on the time or the paper consumed. WizBig can make multiple copies of himself based on the input and on what he has already computed. Some copies may compute forever; some may turn out plain gibberish. However, it is sufficient if one copy gives the correct answers to the questions. The copies that do not halt, or that turn out junk, do not count. If multiple copies give correct answers, we count the one which gave the answer in the least amount of time. Type 0 WizBig is thus nondeterministic. However, it turns out that nothing much has really been gained. Computing capability wise, Type 0 WizBig and Type 0 Big are the same in power. We can say Type 0 WizBig = Type 0 Big. Both compute only the PROCEDURES. They both model TURING MACHINES or Pentium PCs. So it turns out that nondeterminism does not add any extra power either to the Turing machines or to the Pentium PC, insofar as what can be computed.

This is some consolation. We will not have to save up all over again to buy a nondeterministic Pentium computer. It is a theoretical model, imaginary, though one never knows with technology. In any case the deterministic one is as powerful as the nondeterministic one.

The next model is the one related to halting Type 0 WizBig computations. Here also it turns out that nondeterminism does not add any power. The Type 0 WizBig is given a time limit for the examination or a bound on the paper that can be consumed. It is no more powerful than the Type 0 Big computations with restricted time or answer sheets. Both compute the ALGORITHMS. Thus halting Type 0 WizBig = halting Type 0 Big. Alternatively, Halting Turing Machines = Halting Pentium computers in computing power. Another way of stating it is Halting deterministic Pentium PCs = Halting nondeterministic Pentium PCs.

Next we want to model Type 1 WizBig computations. Here it turns out that even the Wise Man has run into a problem. He has been unable to determine whether Type 1 WizBig is more powerful than Type 1 Big. Type 1 WizBig is modeled by an objective type question paper with nondeterminism. Here the student can make multiple copies of himself for each question. If there are four choices for a question, he can make four copies of himself, each copy choosing an option. The copies act in parallel. For the next question, each copy makes four copies of itself, one for each choice. So if we have 200 questions, overall we will have 4^200 copies. Of these, one copy gets everything correct; 4^200 - 1 fail to answer all the questions correctly. In nondeterminism the failures do not count, only success counts. So we say Type 1 WizBig easily gets 100% marks in any objective examination. If we remove the nondeterminism from Type 1 WizBig then Type 1 Big is obtained. We do not know at present whether the latter somehow has the same power. According to the intuitive guess of the Wise Man, most probably they do not have the same power.

The next model we want to find is for Type 2 WizBig. This is nondeterministic. Here it has been determined that nondeterminism does indeed make Type 2 WizBig more powerful than its cousin Type 2 Big, which is deterministic. For this, the question paper given to them is a palindrome. This is a string which reads the same left to right and right to left. The left half is mirrored by the right half. They have to check whether the left half is properly mirrored by the right half. These are sometimes called the mirror languages. Type 2 WizBig and Type 2 Big are to determine if the input string is a palindrome. If we do not know beforehand where the center of the string is, i.e. where to place the mirror, then Type 2 Big is confused and cannot proceed. It does not know where the left half should stop and where the mirrored image should start. Type 2 WizBig has no problem, as it makes multiple copies of itself, placing the mirror in all possible places. One copy guesses correctly and Type 2 WizBig clears the examination. Type 2 WizBig is modeled by the nondeterministic Push Down Automaton (PDA) and Type 2 Big by the deterministic Push Down Automaton (DPDA). The latter is less powerful.

Restating the above, we can say the question paper given to Type 2 WizBig and Type 2 Big is a list of questions followed by their answers in reverse order. There are no question numbers and no answer numbers. Type 2 WizBig can guess where the list of questions ends and where the list of answers starts. Type 2 Big cannot handle this problem at all. This example shows that Type 2 WizBig is indeed cleverer than Type 2 Big.

The next models we want to try relate to Type 3 WizBig. This is modeled by the oral examination, where the student is to recall from memory with some little modifications. Type 3 WizBig is nondeterministic. The student can make multiple copies of himself. Some copies may answer gibberish, some almost correctly. However, if at least one copy answers correctly then the student has cleared the examination. It turns out that Type 3 Big has the same power as Type 3 WizBig. So nondeterminism does not give any extra power here. We say Type 3 WizBig is modeled by the nondeterministic FINITE AUTOMATON (NFA) and Type 3 Big by the deterministic FINITE AUTOMATON (DFA). They are equivalent in power.
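
The standard reason the two have the same power is the subset construction: a deterministic automaton whose states are sets of NFA states can track every copy of Type 3 WizBig at once, and there are only finitely many such sets. A minimal Python sketch of running an NFA by tracking the set of live copies; the little transition table is a made-up example, not anything from the text.

    # A made-up NFA over {'a', 'b'} that accepts strings ending in "ab".
    # nfa[(state, symbol)] is the set of states a copy may move to.
    nfa = {
        (0, "a"): {0, 1}, (0, "b"): {0},
        (1, "b"): {2},
    }
    start, accepting = {0}, {2}

    def nfa_accepts(word):
        """Track all copies at once: the current 'state' is the set of NFA
        states some copy could be in. This is the subset construction done
        on the fly, which is why a DFA can do the same job."""
        live = set(start)
        for symbol in word:
            live = {t for s in live for t in nfa.get((s, symbol), set())}
        return bool(live & accepting)     # some copy ended in an accepting state

    print(nfa_accepts("abab"))   # True: the word ends in "ab"
    print(nfa_accepts("aba"))    # False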


MODELS FOR ALGORITHMS

Now we will concentrate only on ALGORITHMS. These are the problems Type 0 Big can solve when there is a time or space limitation. This can be modeled by a deterministic Turing Machine that will always halt on all inputs.

The concept of the Turing Machine has been hailed as a great intellectual achievement of the human race. Along with democracy and the rule of law, it has been hailed as the third major contribution of Western civilization. The concept is that of universal computation. It is noised about that it is the greatest intellectual achievement of the 20th century. Detractors will be excommunicated and exiled from the society of human beings.

There seem to be some social limitations to the Turing Machine model. The difficulty with the Turing Machine is that most practical computer scientists, software and IT professionals have found it to be a model that is a drudge: slow, unfriendly, dimwitted and perhaps extraterrestrial. An imbecile that makes simple computations laborious and extraordinarily complex. This has led to the unfortunate situation that it has become an unmentionable in industry. A person in industry trying to study it has to do so clandestinely and surreptitiously, burning the midnight oil, like the witches and wizards of old. The alternative is to lose all credibility as an IT professional if one openly associates with the Turing machine.

Let us consider an example of a job the Turing Machine will undertake. As an extreme example, consider the problem of emptying the Pacific Ocean into the Atlantic Ocean. The Turing machine is ready to undertake this job, or any job that can be effectively described in steps. It picks up a thimble. It marches left to right and back tirelessly, spanning the American continent west to east and back, for eons, transferring the waters from one ocean to another a thimbleful at a time. It has a very high opinion of itself, for it comes from a family of machines that can compute all the partial recursive functions. It just cannot understand why human beings get tired watching the family of drudges it belongs to at work.

Alternative models were popularized by the Wise Man half a century ago. These are the RAM and RASP models. These models are stripped down versions of the von Neumann random access stored program computer, like the Pentium PC of today. These models are more acceptable in industry. They are more alert and intelligent than the Turing machine. If the RAM or RASP takes time t for some computation, then the Turing machine takes time t^4. If we take one second as the unit for an action, then if the RAM or RASP has to compute for 1 hour to get the job done, the Turing machine takes about five million years.
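
To see where the five million years come from, under the t^4 overhead quoted above: one hour is 3600 steps of one second each, and 3600^4 seconds converted to years lands in the millions. A quick check in Python:

    SECONDS_PER_YEAR = 3600 * 24 * 365          # about 31.5 million seconds

    ram_steps = 3600                            # one hour at one action per second
    tm_steps = ram_steps ** 4                   # the quoted t^4 simulation overhead
    print(tm_steps)                             # 167961600000000 seconds
    print(tm_steps / SECONDS_PER_YEAR)          # roughly 5.3 million years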

Other models were used in India in an indigenous industrial setting. In the seventies of the last century, in the indigenous computer industry in India, they were used for a decade to teach and learn programming in machine and assembly language. This was when no reliable computer was available on which to build the computers that were being designed and manufactured.

The RAM and RASP were also experimented upon in a global industrial setting. They were used in the last decade of the last century for outsourcing purposes. The rise of the outsourcing industry in India was very rapid. The requirements of the Y2K problem were sudden and explosive. This required thousands of programmers to be trained for adaptive maintenance and preventive maintenance of legacy assembly language code. This is about 10% of all the mainframe legacy software. All MNCs have software which has mainframe assembly language components. This software had evolved over three decades of effort in the West and Japan.

Rapid development of programmers in their thousands was required on a crash basis. There was no access to a mainframe for exposition or training. The mainframes accessible were in production mode; one could not risk trainee programs on them. The RASP was used to introduce mainframe assembly language programming by using universal instruction sets and stepwise refinement. A crude simulator was used for testing the programs. The programmers were then released for outsourcing and the outcome was very successful. This is especially so as the software professionals trained had to work immediately. They had to undertake adaptive and preventive maintenance of production software. This colossal software was 10-100 million lines of code across the entire spectrum of the MNCs' legacy code.

OTHER MODELS OF UNIVERSAL COMPUTATION--

**************
*****************************

COMPLEXITY OF COMPUTATION

THE DEFINITION OF TRACTABILITY

A person may be confronted with a problem which is to be solved within a fixed time limit. The capabilities of the person are limited. Not all problems can be solved.

We classify the problems that a person may face into three types. One collection consists of problems that are truly incorrigible and will just take too much time whichever way we attack them. These problems we say are truly intractable.

Another set of problems are those which are seemingly incorrigible or difficult. They can be called seemingly intractable. It is not clear, but these problems can perhaps be attacked by methods still to be discovered.

The last class of problems are those which can be attacked with the proper skills and tools. These problems are said to be tractable.

The time taken to solve a problem naturally depends on the input size. Let us start with a book which has 1000 pages. The input size 'n' is now 1000.

We will start with a definition of tractability. That is to say the problem can be tackled or solved with reasonable demands on the time resource.

To describe tractability in a user friendly way we will describe it using a nursery rhyme. This rhyme with its variants occurs in many diverse cultures and languages all over the world.

The Farmer's In His Den

The farmer's in his den,
The farmer's in his den,
E I E I O - The farmer's in his den.
(with one child in the centre of a circle)

The farmer wants a wife,
The farmer wants a wife,
E I E I O - The farmer wants a wife.
(the farmer picks a wife)

The wife wants a child,
The wife wants a child,
E I E I O - The wife wants a child.
(the wife picks a child)

The child wants a nurse,
The child wants a nurse,
E I E I O - The child wants a nurse.
(the child picks a nurse)

The nurse wants a dog,
The nurse wants a dog,
E I E I O - The nurse wants a dog.
(the nurse picks a dog)

The dog wants a bone,
The dog wants a bone,
E I E I O - The dog wants a bone.
(the dog picks a bone)

We all pat the bone,
We all pat the bone,
E I E I O - We all pat the bone.

The farmer is given the 1000 page book to read. He just looks at it and ignores it. He just claims to have read it. This we say takes 1 unit of time. The upper bound for the time is represented by the big-Oh notation and is therefore O(1).

Now the teacher comes along and forces the farmer to read it. He goes through it page after page, as he would a novel. We say the farmer makes one pass over the input, i.e. the book. The time taken now is proportional to the number of pages. This can be represented by O(n). This takes care of the different times taken by the farmer for different classes of books: in some he may take a few minutes per page, in some a quarter of an hour, and perhaps in some an hour per page. At the time unit of one hour per page this works out to 1000 hours.

The farmer now takes a wife. The wife is the new boss. The same book is presented to her. She turns the pages of the book one after another, sequentially. Whenever she selects a page, the farmer is made to read the book once more, all over again. The farmer reads the book as many times as there are pages in the book. The time taken now, called the time complexity, is O(n^2). At the time unit of one hour per page, this works out to 1,000,000 hours. This can be called one mega hour of time.

The wife picks a child now. The child is presented with a book. The child starts the process of turning the pages of the book, one after another. For each page the child turns, the mother is made to start with the closed book and turn its pages, one after another. For each page the mother selects, the Farmer is made to read the book once. The Farmer thus ends up reading the book O(n^2) times over the course of the child's pass. The time complexity is now O(n^3). For a time of one hour per page this works out to 1,000,000,000 or a billion hours. This can be called one giga hour of time.

The child picks a nurse now. The nurse is given the gift of a copy of the book. She turns over the pages, page after page. For each page the nurse comes across she informs the child. The child will start with the closed book. The child will then turn the pages of the book, one by one. For each page that it comes across the child informs the mother. The mother will start with a closed book. She will turn the pages of the book. For each page in the book the Farmer is made to read the book once. The Farmer now reads the book with a time complexity of O(n^4). Assuming it takes the Farmer one hour to read a page of the book this works out to 1,000,000,000,000 or a trillion hours. This can be called one tera hour of time. 

The nurse picks a dog now. The dog gets the present of a book. The dog turns over the pages of the book. For each page in the book the dog growls at the nurse. The nurse starts the process with a closed book. Then she turns over the pages of the book, page after page. For each page she informs the child. The child starts with a closed book. The child thumbs through the pages of the book, page after page. The mother is informed of each new page that the child encounters. For each of them the mother starts with a closed book, thumbs through the pages, and for each page makes the Farmer read the book once. The Farmer now has done the computation of reading the book with a complexity of O(n^5). At one hour per page the Farmer requires 1,000,000,000,000,000 hours or one peta hour of time.

The dog picks a bone now. Dennis, the neighboring kid, wanders in. He is given a copy of the book. He uses the bone to turn the pages of the book, page after page. For each page of his book he arouses the dog. The dog starts with the closed book given to it by Dennis and chews through the book page after page. For each page in the book, the dog wakes up the nurse and informs her by barking at her. The nurse will start with a closed book, thumb through the book and, for each page, inform the child that a new page has come. For each new page, the child starts with a closed book, thumbs through it and informs the mother that a new page has come. For each page the child has seen, the mother starts with a closed book and thumbs through the book page after page. She informs the Farmer that he has to read his copy of the book once for each page she encounters in her book. The Farmer's computation of reading the book now has a complexity of O(n^6). At the time allotment of one hour to read a page of the book, this works out to a time requirement of 1,000,000,000,000,000,000 hours, or an exa hour of time. (It may be noted that all the information that the human race generates in a year can be stored in an exabyte of disk storage.)
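
The whole rhyme is just nested loops: each new character in the den wraps one more loop around the Farmer's reading. A minimal Python sketch of the pattern up to the wife and the child, giving O(n), O(n^2) and O(n^3); counting page-reads stands in for counting hours.

    def read_book(n):
        """The Farmer reads the n-page book once: O(n) page-reads."""
        return n

    def wife_makes_farmer_read(n):
        """For every page the wife turns, the Farmer reads the whole book:
        O(n^2) page-reads."""
        total = 0
        for _wife_page in range(n):
            total += read_book(n)
        return total

    def child_makes_everyone_read(n):
        """For every page the child turns, the wife runs her whole routine:
        O(n^3) page-reads. Each extra character in the den adds one more loop."""
        total = 0
        for _child_page in range(n):
            total += wife_makes_farmer_read(n)
        return total

    n = 10                                   # a small book, to keep the run fast
    print(read_book(n), wife_makes_farmer_read(n), child_makes_everyone_read(n))
    # prints: 10 100 1000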

The nursery rhyme has stopped with the bone, but it can carry on indefinitely. The complexities O(1), O(n), O(n^2), O(n^3), O(n^4), O(n^5), O(n^6) and so on are called polynomial complexities. Problems that have polynomial time complexity are said to be tractable. Problems that a deterministic computer like the Pentium PC can solve in polynomial time are said to be in the class P. Now allow a hypothetical nondeterministic computer, the WizPentium PC, which can replicate itself when there is a choice, one copy per choice. The copies compute in parallel. If one copy succeeds we are done. If multiple copies succeed we take the one which is fastest. Copies that fail do not count. If all copies fail then we declare failure. The problems which this computer can solve in polynomial time are said to be in the class NP. NP stands for nondeterministic polynomial time.

In the case of the Farmer having to read the book under different requirements we find the concept of intractability. So far we have talked about the book being read as a novel, once from beginning to end. A text book is one which requires multiple readings: the need to read a collection of pages, the need to relate some pages widely separated in the book, and so on. If the wife demands that the Farmer read the book once for every different sample of pages of the book she puts together, then we have a ticklish situation.

For every one page sampled by the wife the Farmer has to read the book once. This is O(n).

For every two pages sampled by the wife the Farmer has to read the book once. This is O(n^2).

For every three pages sampled by the wife the Farmer has to read the book once. This is O(n^3).

This goes on and on.

The total number of different samples of pages is C(n,1) + C(n,2) + C(n,3) + ... + C(n,n) = 2^n - 1, so the Farmer has to read the book O(2^n) times. This makes life very difficult for the Farmer even with small books.

For a 10 page book he has to read the book about 1,000 times.

For a 20 page book he has to read the book about 1,000,000 times (i.e. a million times).

For a 30 page book he has to read the book about 1,000,000,000 times (i.e. a billion times).

For a 40 page book he has to read the book about 1,000,000,000,000 times (i.e. a trillion times).

For a 50 page book he has to read the book about 1,000,000,000,000,000 times (i.e. a thousand trillion times).

For a 100 page book he has to read the book about a million trillion trillion times.

For a 1000 page book the number of readings is a number with over 300 digits.

Beyond 40 pages we find it very difficult even to write down the number of times the Farmer should read the book. We say a problem is intractable when its time requirement is exponential, or otherwise grows faster than any polynomial. Problems in the actual computing world in government, military, industry etc. turn out to require time equivalent to the Farmer having to read a 100 or 1000 page book once for every sample of pages of the book being considered. Such problems are described as we go along.
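
As a small illustration, here is a minimal Python sketch (the one-hour-per-page figure is the one assumed in the story above) of how the 2^n readings explode compared with even a steep polynomial like n^5:

    # Sketch: number of readings demanded of the Farmer, exponential vs polynomial.
    for pages in (10, 20, 30, 40, 50, 100):
        exponential = 2 ** pages       # one reading per sample of pages
        polynomial = pages ** 5        # the nursery-rhyme chain of readings
        print(f"{pages:>4} pages: {exponential:.2e} readings (2^n) "
              f"vs {polynomial:.2e} readings (n^5)")

Already at 100 pages the exponential column is about a million trillion trillion, while the polynomial column is a mere ten billion.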

 The basic difficulty is in combinatorial problems. The typical situation in combinatorial computation is that we start with a gadget consisting of a number of parts. This is dismantled into its individual parts. The parts are then to be assembled back into the gadget with no record of how it was dismantled. The only way out, it appears, is to try all combinations, with some constraints to aid us. For a deterministic computer this takes an exponential amount of time, as all combinations have to be attempted sequentially in the worst case. A nondeterministic computer will make multiple copies of itself, one copy per possible combination, and try the combinations in parallel; it can thus finish in time polynomial in the number of parts of the gadget. We summarize this behavior by saying the nondeterministic machine guesses the solution. Then we verify the solution.

Optimisation versus decision problems.

We classify problems in two categories, decision problems and optimization problems. Decision problems have a binary answer, yes or no. A solution exists or does not exist.

Optimization problems are slightly different. Let us consider a user friendly example. We would like to know the maximum weight a weightlifter can lift. This is an optimization problem. We convert it to decision problems by giving the weightlifter two weights, one small and one large. He easily lifts the small weight. If he cannot lift the heavy one we try a weight midway between the two. This partitions the range between the light and the heavy weight into a lower and an upper half. We move to the appropriate half and repeat the process. At the end we arrive at the maximum weight he can lift. Thus we answer an optimization problem by a sequence of decision problems.
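
The conversion can be sketched in a few lines of Python. Here can_lift is a hypothetical decision oracle (a stand-in function, not anything defined in the text) that answers yes or no for a given weight, and binary search over the range of weights answers the optimization question:

    def heaviest_liftable(can_lift, low, high):
        """Find the largest whole weight in [low, high] that the decision
        oracle can_lift() answers yes for.  Assumes can_lift(low) is True,
        i.e. the small weight is liftable."""
        while low < high:
            mid = (low + high + 1) // 2   # try something midway, rounding up
            if can_lift(mid):
                low = mid                 # he lifted it: keep the upper half
            else:
                high = mid - 1            # he failed: keep the lower half
        return low

    # A hypothetical weightlifter whose true limit is 117 kg.
    print(heaviest_liftable(lambda w: w <= 117, 1, 200))   # prints 117

Each probe is one decision problem, and only about log2(high - low) probes are needed.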

This technique is used in Quality Assurance departments in factories the world over. We would like to know how much water, vibration and heat our pen drive can stand. Will it tolerate being put into the washing machine? Will our cell phone work after being soaked in oil in some student’s lunch box in some corner of the earth? To what extent of stress and strain can we put some gadget or its parts through? These optimization problems are converted to decision problems and thus solved.

The importance of the classification of decision problems versus optimization problems is that the NP-complete problems are all decision problems. These require a yes or no answer to the existence of a solution. In the weightlifting example the question of what maximum weight the sportsman can lift is an optimization problem. If we give him a specific weight and want to decide whether he can lift it or not, we have a decision problem.

The same technique can be used to determine the maximum number of doughnuts Tom can polish off at one sitting. It can be used to see how far away the teacher can be so that the yelling cannot be heard. What is the minimum bribe to be given to the government employee for a routine job to be done? (It may not work in this case.)

Let us now consider the classic struggle between Brer Fox and Brer Rabbit. Brer Fox is modeled by a deterministic RASP, RAM or Pentium PC. He is just unable to handle Brer Rabbit. If we define a hypothetical WizBrer Fox who can operate nondeterministically, then it may be possible that Brer Rabbit can be tackled.

Brer Fox can handle all the problems in P, and only the problems in P. These are problems which take a polynomial amount of time on the Pentium PC. WizBrer Fox can handle problems which are in NP, and only the problems in NP. These are problems which take a polynomial amount of time on the WizPentium PC.

The classic problem of computer science today is whether WizBrer Fox=Brer Fox. If this equality is satisfied then Brer Fox does not gain anything with nondeterministic capabilities. They are just unnecessary frills. If the equality is not satisfied then Brer Fox gains a lot by having nondeterministic capabilities.

The problems we want to describe are called the NP-complete problems. These are decision problems. They seem to be problems which WizBrer Fox can handle easily but Brer Fox cannot. An optimization problem can be converted to a decision problem by the strategy above.

The evolution of the concept of NP-completeness.

In all areas of knowledge in the Arts, Sciences, Engineering and Technology a common modeling device is combinatorial optimization. In the area of combinatorial optimization some 3000 hard problems have been isolated by computer scientists and mathematicians which, though deceptively easy looking, are extremely hard to solve by computer or by hand. Among these problems are the NP-complete problems. Thousands of approximations to the solutions of NP-complete problems have been attempted, but it was shown by Prof. Sahni that for many of these problems the approximation versions are also NP-complete.

The evolution of the P=NP problem.

The NP-complete problems have the property that if one of them can be solved in polynomial time on a deterministic computer then all of them can be solved in polynomial time on a deterministic computer.

It has been opined by the Wise Man that the mathematical tools to resolve the problem may not have been invented so far. Overwhelming evidence seems to suggest that the problems may not have a polynomial time algorithm on a deterministic computer. If we postulate the existence of a nondeterministic computer then the NP-complete problems are in NP, i.e. tractably solvable on a nondeterministic computer. The deepest problem of theoretical computer science is a resolution of the question whether P=NP.

The NP-complete problems arise practically in computation in industry, research and all activities of human endeavor. They have the interesting property that all attempts to solve them over the centuries have yielded only algorithms of exponential time complexity. This restricts the input problem size to about 40 elements. Practical models of problems have 100 to 1000 elements.

To understand the concept of NP-complete problems let us consider the same from  a user friendly point of view. Let us consider the actions of the early Christians and Jesuits in their pursuit of new converts to their religion.

When they went round the world to a new kingdom or tribe of a million persons, a long, laborious and time consuming method would be to convert each one of the million persons. Alternatively a much simpler method may exist, and this is what they followed. What they did was to isolate a kingpin person in the new kingdom or tribe. This could be the king or chief himself or a very influential noble. If that person could be converted then all the million people fall in line and are converted. The king or noble involved is called the “-complete”. As per Sociology the same technique is followed when one religion wants to swallow up another, or one culture would like to swallow up another: isolate a kingpin and convert him or her, and all fall in line. This is what happened in Russia as per Russian history. The kingpin converted himself to European culture and everybody fell in line and switched.

The same method is followed by a teacher in class. If the class is unruly, the long way is to convince all the students in the class to behave. For undergraduates this could be 200-300 students to be convinced. This is long and laborious. Alternatively, get hold of one kingpin student, the leader, and discipline him. Then all the students fall in line. This is what we call “making an example of” someone.

The method is followed by the police to control the goons in town, or break up an agitation. Isolate the kingpin and discipline him. Everyone falls in line. This is the concept behind the NP-complete problems.

We have one class of problems in one culture, called the seemingly intractable problems. Let us call them the goons. These demand an exponential amount of time for a solution on a Pentium computer. The exponential is like money taken at compound interest. It grows very rapidly. For an input size beyond 40, the time taken or the space required is too large to be considered. The problems can however be solved tractably on a hypothetical nondeterministic computer, which represents a new culture. The Pentium computer on the other hand demands that a problem be of polynomial complexity to be tractable. This is akin to taking money at simple interest. That is, the Pentium computer dealing with tractable problems deals with ordinary citizens who are not goons. To swallow up the new culture into the tractable Pentium world we isolate a kingpin problem, the NP-complete problem, in the new culture. That is, in the set of goons we isolate a kingpin: the NP-complete problem. This NP-complete problem being a kingpin, if it can be efficiently solved on the Pentium computer, then all the problems the hypothetical nondeterministic computer can solve can also be efficiently solved on the Pentium computer. That is, if the kingpin goon can be converted to a well behaved citizen then all the goons will fall in line. All the problems can then be solved tractably and reasonably on our Pentium computer. If however the kingpin problem can be shown not to be tractably solvable on the Pentium computer, then all the problems that were intractable earlier remain intractable on the Pentium computer. If the kingpin goon can never be disciplined then we can never hope to discipline the goons, and the goons will remain goons.

The existence of these kingpin problems was originally shown by Cook, who showed the existence of the first NP-complete problem, called satisfiability. Karp isolated 21 classic problems which were also shown to be kingpins or NP-complete problems. These were problems studied for centuries and found to be extremely hard to solve.

In simple terms, we have two classes, the goons and the ordinary citizens. We want to reform the goons and make them ordinary citizens. Cook isolated the kingpin goon, the satisfiability problem. If this goon can be reformed into an ordinary citizen then all goons can be reformed. If satisfiability has a deterministic polynomial time algorithm on the Pentium computer then all the seemingly intractable problems will become tractable. If the kingpin goon is incorrigible and cannot be converted by any means into an ordinary citizen, then the goons form a distinct class and will contain many goons who cannot be reformed.

7. The classic 21 NP-complete problems.

Karp went further and isolated 21 goons who had resisted reformation for centuries. Mathematicians were in despair over these 21 problems. Karp showed that these 21 problems were also kingpin goons. If any one of them can be reformed then all of them can. If any one of them is beyond reform then all are and the class of goons contains many goons who are beyond reformation.

8. The collection of NP-complete problems.

Today over 3000 kingpin goons are known in the class of goons. Another attempt was to try approximation algorithms, to see if a goon can be treated approximately as an ordinary citizen most of the time, with some liberty left to him as a goon. Prof Sahni showed that for many of these problems this is not possible: once a kingpin goon always a kingpin goon, and either we reform him fully or he remains a kingpin goon. Thus many of the approximation problems are also kingpin goons.

Another interesting property of the NP-complete problems is that no efficient polynomial complexity algorithm has been found so far, though the problems deceptively seem to have one. Over the centuries a variety of persons, from hoi polloi to glitterati to geniuses, have spent substantial time trying to solve these problems efficiently. Some have studied them for days, some months, some years, some lifetimes, to no avail. The problems are akin to an extended Tom and Jerry show, or an extended Bluto and Popeye cartoon. A study seems to lead to the existence of a polynomial algorithm, even if one starts firmly with the negation of the same as an assumption. Tom or Bluto seem to be winning easily. The polynomial algorithm seems to have been found, only on closer study it proves to be wrong. Tom and Bluto are flattened. This is repeated every few days. This is what led the great Irish mathematician Hamilton to despair about a century and a half ago. It also appears from theoretical studies that the question of whether an efficient algorithm exists may not be provable either way. In such a case the question would be independent of the usual axioms of mathematics (loosely, undecidable), and no proof written down or run on the Pentium computer could settle it.

13. The exact cover problem.

In this study we give special importance to one NP-complete problem, viz. the exact cover problem. The problem requires the determination of the existence of a solution, so it is a decision problem. Given a set and a collection of subsets, can we select some pairwise disjoint subsets from the collection that cover all the elements? A user friendly rendering of the problem is Sudoku, where blocks of squares are to be arranged along the diagonal of a matrix in proper order.

 The genome map assembly problem in a nutshell.

Another user friendly view of the exact  cover problem occurs in the Genome Map Assembly. Two average DNA sequences are taken. They can be modeled  as two strings of 300 million symbols. The symbols of the two strings are paired together giving 300 million pairs. We have to combine the two strings but can only handle a couple of thousand pairs at one time. So we cut the strings into small parts and combine the small parts. This can be represented as  squares on the diagonal of a matrix. We are to combine the squares and we do not know how to do it except by trial and error. A solution to the exact cover problem gives us the proper combination. The problem is like a layman, or a group of them, more or less blindly dismantling a Boeing 747 and not being able to assemble it again into one proper working piece.

 A dozen NP-complete problems.

From the set of 3000 NP-complete problems augmented by a couple of thousand approximations to the same, we select a basic set of six problems and an augmented set of another six problems, making a dozen overall. We list the problems below with both a user friendly rendering of the problem and a formal statement of the problem.

15.1 The satisfiability problem.

The First problem is the Boolean satisfiability problem. There is a collection of ‘k’ political parties and ‘2n’ politicians. The politicians occur in pairs: each politician has his complement. If a politician supports a bill his complement will oppose it. The politicians are distributed across the ‘k’ political parties. Some politicians may be absconding and we do not consider them. As expected, and commonly seen in India, a politician may be in multiple parties simultaneously. A bill is to be passed and requires simple support from ‘all’ the ‘k’ political parties. Simple support means that each political party must have at least one politician supporting the bill. The Boolean satisfiability problem is whether we can assign to the politicians the values ‘yes, support’ or ‘no, oppose’ so that the bill is passed.

This can be related to circuit design, where we have a combination of conditions: all signals present, at least one signal present, the complement of a signal taken. These are realized by AND, OR and NOT gates. Will we get an output signal for some combination of input signals? This is the satisfiability problem.
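
A minimal Python sketch of the decision version follows. Each party is encoded as a clause (a list of politicians), a politician is a positive integer and his complement the corresponding negative integer; this encoding is an assumption made here for illustration. A deterministic machine is reduced to trying all 2^n assignments:

    from itertools import product

    def satisfiable(clauses, n):
        """clauses: list of parties, each a list of literals; +i means
        politician i supports the bill, -i means his complement does.
        Tries all 2^n yes/no assignments."""
        for bits in product([False, True], repeat=n):
            def value(lit):
                return bits[abs(lit) - 1] if lit > 0 else not bits[abs(lit) - 1]
            if all(any(value(lit) for lit in party) for party in clauses):
                return True    # every party has at least one supporter
        return False

    # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
    print(satisfiable([[1, 2], [-1, 3], [-2, -3]], 3))   # True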

15.2 The graph clique problem.

The Second problem is the clique problem. In any social network there will be some special individuals who form a clique. For example, in the area of Information Science we have the concept of the Invisible College, or High Priests of some area of knowledge, who form a clique. In the area of Information Science we learn that all organizations have their Invisible Colleges, which keep track of all information in the organization. This can be modeled by a graph where the nodes are the people, one node per person. Two nodes have an edge, or are connected, if the two persons have a connection in the social network. We are to determine if a clique of some given size k exists in the graph and hence in the social network it models. This requires the determination of a subset of k nodes in the graph where each node is connected to every other node in the subset. This is a decision problem as we want to know if a clique of size k exists: yes or no.
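
Checking a proposed clique is cheap, which is the other half of the guess-and-verify picture. A sketch, with the graph given as a set of two-element frozensets (an encoding assumed here for illustration):

    from itertools import combinations

    def is_clique(edges, group):
        """edges: set of frozensets {u, v}; group: candidate clique.
        Every pair of persons inside the group must be connected."""
        return all(frozenset((u, v)) in edges for u, v in combinations(group, 2))

    edges = {frozenset(e) for e in [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]}
    print(is_clique(edges, {"a", "b", "c"}))   # True: a triangle
    print(is_clique(edges, {"a", "b", "d"}))   # False: a-d and b-d are missing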

15.3 The vertex cover problem.

The Third problem is the vertex cover problem. In a social network we isolate a number n of persons. These persons are key persons with a wide network in the social circle. The problem is to determine if a subset of k of these n persons covers all the connections in the social circle. In formal terms we take a graph and have to determine if a subset of k of the n nodes of the graph covers all the edges of the graph. This is a decision problem as we want to know if a vertex cover of size k exists: yes or no.
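
The corresponding check for a candidate vertex cover is a single pass over the edges. A sketch (edge-list encoding assumed for illustration):

    def is_vertex_cover(edges, chosen):
        """edges: iterable of (u, v) pairs; chosen: candidate set of k persons.
        Every connection must touch at least one chosen person."""
        return all(u in chosen or v in chosen for u, v in edges)

    edges = [("a", "b"), ("b", "c"), ("c", "d")]
    print(is_vertex_cover(edges, {"b", "c"}))   # True
    print(is_vertex_cover(edges, {"a", "d"}))   # False: edge b-c is uncovered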

 The Hamiltonian cycle problem.

The Fourth problem is the Hamiltonian cycle problem. We are to start from home, visit the houses of n relatives one after another and reach home. No relative should be visited twice. This problem is modeled as a graph problem where we are to visit the n nodes of a graph, entering and leaving each node exactly once. This naturally occurring problem is extremely difficult to solve. Consider the genome map assembly problem. We have millions of small pair lengths. Each pair length can be modeled as a node of a graph. If one pair length can come after another then put an edge between the former and the latter. Finally, have an edge between the last pair and the first. If we can find a Hamiltonian cycle in the graph we have found the correct assembly. For another example consider the old Walt Disney cartoon movie, Snow White and the Seven Dwarfs. This movie is made up of thousands of laboriously hand painted slides, which vary slightly from each other. They are to be shown in the proper sequence. Suppose they got mixed up and we are to rearrange these thousands of slides in the proper order. Each slide can be modeled as a node of a graph. If a slide can follow another then put an edge between the former and the latter. Finally, we have an edge between the final slide and the first one. If a Hamiltonian cycle exists in the graph then we have found the proper order of assembling the mixed up slides. This is a decision problem as we want to know if a Hamiltonian cycle exists: yes or no.
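
Verifying a proposed visiting order is again easy; the hard part is finding one among the (n-1)! possibilities. A sketch of the verification, with the graph given as an adjacency dictionary (an encoding assumed for illustration):

    def is_hamiltonian_cycle(adjacent, order):
        """adjacent: dict node -> set of neighbours; order: proposed visiting
        order of all nodes.  Each node must appear once and each consecutive
        pair, including the last node back to the first, must be an edge."""
        if len(order) != len(adjacent) or set(order) != set(adjacent):
            return False
        return all(order[(i + 1) % len(order)] in adjacent[order[i]]
                   for i in range(len(order)))

    adjacent = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
    print(is_hamiltonian_cycle(adjacent, [1, 2, 3, 4]))   # True
    print(is_hamiltonian_cycle(adjacent, [1, 3, 2, 4]))   # False: 1-3 is not an edge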

 The Travelling Salesman Problem.

The Fifth problem is the Travelling Salesman Problem (TSP). This is a generalization of the Hamiltonian cycle problem. Let each bus stop in the city be a node of a graph; if a bus runs from one bus stop to another, introduce an edge between the nodes. The cost of travel between the stops is the weight of the edge. We are to make a circuit or ‘chakkar’ in the graph of least cost, starting from an initial bus stop, visiting all the bus stops once and coming back to the first stop, with the total cost of travel a minimum over all the possible cycles. This problem has been used in the USA to minimize the cost of travel of a person who had to collect all the money from the pay phones in a city or town every day. A milkman in the Third World will find it useful if he has to deliver milk to a hundred houses every day, or a newspaper boy who has to deliver the newspaper to a couple of hundred houses every day with the minimum recurring cost. Globalization and liberalization have burdened the milkmen and newspaper boys in the Third World with gas guzzling vehicles for transport, which in due course will require them to use the Travelling Salesman Problem. This is an optimization problem as we want the circuit of least cost. To convert it to a decision problem we ask if a Hamiltonian cycle of cost less than k exists: yes or no. We then proceed as in the weightlifting problem, and the optimization problem becomes a sequence of decision problems.
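
The decision version can be sketched by brute force over all tours, which is hopeless beyond a dozen or so stops, and that is exactly the point. The cost matrix below is a made-up example:

    from itertools import permutations

    def tour_within_budget(cost, bound):
        """cost: square matrix of travel costs between stops; bound: budget k.
        Decision problem: does some round tour cost at most the bound?"""
        stops = range(1, len(cost))             # fix stop 0 as the starting point
        for middle in permutations(stops):      # (n-1)! candidate tours
            tour = (0,) + middle + (0,)
            total = sum(cost[a][b] for a, b in zip(tour, tour[1:]))
            if total <= bound:
                return True
        return False

    cost = [[0, 2, 9, 10],
            [2, 0, 6, 4],
            [9, 6, 0, 8],
            [10, 4, 8, 0]]
    print(tour_within_budget(cost, 25))   # True: 0-1-3-2-0 costs 2+4+8+9 = 23
    print(tour_within_budget(cost, 20))   # False

Binary searching on the bound, as in the weightlifting example, then recovers the cheapest tour cost.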

 The graph coloring problem.

The Sixth problem is the Graph Coloring problem. If the map of the world/universe is to be colored and two adjacent countries/worlds are to be colored differently, we are to determine if for N countries/worlds a palette of k colors is sufficient. Another rendering of the problem is to determine if k colors are sufficient for the traffic lights at a junction of N roads. The problem is formalized by a graph with one node for each country. Two adjacent countries are modeled by an edge between the nodes representing the countries. We are to color the nodes of the graph with the palette of k colors under the constraint that two adjacent nodes do not have the same color. This is a decision problem. We want to know if k colors will do: yes or no.
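
A deterministic brute force tries all k^n assignments of colors and checks the adjacency constraint for each. A minimal sketch, with a small invented map:

    from itertools import product

    def colorable(nodes, edges, k):
        """Can the nodes be colored with k colors so that no edge joins two
        nodes of the same color?  Tries all k**len(nodes) assignments."""
        for colors in product(range(k), repeat=len(nodes)):
            coloring = dict(zip(nodes, colors))
            if all(coloring[u] != coloring[v] for u, v in edges):
                return True
        return False

    nodes = ["A", "B", "C", "D"]
    edges = [("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")]
    print(colorable(nodes, edges, 2))   # False: A, B, C form a triangle
    print(colorable(nodes, edges, 3))   # True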

 The Feedback Node Set problem.

The Seventh problem is the Feedback Node Set (FNS) problem. The Indian Railways have one of the largest railway networks in the world. Let us count the railway stations and assign to them the numbers 1 to N. We are to determine, for a given number k, whether we can select k railway stations such that every round tour on the Indian Railways, made by taking connecting trains and ending up finally at the starting station, passes through at least one of the k stations. The problem has applications in code generation in compilers. A compiler translates a high level language program to the machine language of a computer. The computer stores data either in locations in RAM or in registers. If data is stored in registers the computation is about ten times faster than if the data is stored in RAM. Registers require extra circuitry and are expensive. The problem is to minimize the number of registers required in the generated code. This problem can be reduced to the feedback node set problem. The problem is modeled by a graph of N nodes. We are to determine, for a given k, whether we can select k nodes such that every cycle in the graph contains at least one node from the k selected. This is a decision problem: we want to know if an FNS of size k exists, yes or no.
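
Checking a proposed set of k stations is easy: delete them and see whether any round tour survives, i.e. whether the remaining directed graph still has a cycle. A sketch using depth first search (the adjacency-list encoding and the tiny railway below are assumptions for illustration):

    def is_feedback_node_set(successors, removed):
        """successors: dict station -> list of stations reachable by a direct
        train (a directed graph).  removed: candidate feedback node set.
        The candidate is valid iff deleting those stations leaves no cycle."""
        remaining = {u: [v for v in vs if v not in removed]
                     for u, vs in successors.items() if u not in removed}
        WHITE, GREY, BLACK = 0, 1, 2
        state = {u: WHITE for u in remaining}

        def has_cycle(u):                       # GREY means "on the current path"
            state[u] = GREY
            for v in remaining.get(u, []):
                if v not in state:
                    continue
                if state[v] == GREY or (state[v] == WHITE and has_cycle(v)):
                    return True
            state[u] = BLACK
            return False

        return not any(state[u] == WHITE and has_cycle(u) for u in remaining)

    rail = {1: [2], 2: [3], 3: [1, 4], 4: [2]}    # two overlapping round tours
    print(is_feedback_node_set(rail, {2}))        # True: every tour passes station 2
    print(is_feedback_node_set(rail, {4}))        # False: the tour 1-2-3-1 remains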

 The Feedback Edge Set problem.

The Eighth problem is the Feedback Edge Set (FES) problem. Consider again the Indian railway network. Let there be |E| trains and N stations. We are to determine, for a given k, whether we can select k trains such that every tour on the Indian railways uses at least one of them. This is modeled by a graph of |E| edges and N nodes, and we are to determine, for a given k, if we can select k edges such that every cycle includes at least one of the selected edges. This is posed as a decision problem. We want to know if an FES of size k exists: yes or no.

 The Set  Cover Problem.

The Ninth problem is the Set Cover problem. We have N goons, who form themselves into gangs G1, G2, ..., Gn, with the possibility of one goon being in more than one gang. We are to determine a subset of the gangs that together contain all the goons. The problem is modeled by a set S and a family of subsets of S. We are to determine if some subsets from the family cover the set S. This is a decision problem.

The Exact Cover problem.

The Tenth problem is the Exact Cover problem. We have N goons, who form themselves into gangs G1, G2, ..., Gn, with the possibility of one goon being in more than one gang. We are to determine a subset of gangs, pairwise disjoint, that together contain all the goons. A collection of gangs is said to be pairwise disjoint if no two gangs in the collection have a goon in common. The problem is modeled by a set S and a family of subsets of S. We are to determine disjoint subsets from the family that cover the set S. The problem is a decision problem. Does an exact cover exist? The answer is yes or no. The next step is to compute all the solutions.
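
The verification half is simple: a chosen collection of gangs is an exact cover iff the gangs are pairwise disjoint and their union is the whole set of goons. A minimal sketch:

    def is_exact_cover(universe, chosen):
        """chosen: the selected gangs (each a set of goons).  They must be
        pairwise disjoint and together contain every goon in the universe."""
        covered = set()
        for gang in chosen:
            if covered & gang:          # two chosen gangs share a goon
                return False
            covered |= gang
        return covered == set(universe)

    goons = {1, 2, 3, 4, 5, 6}
    gangs = [{1, 2}, {3, 4}, {5, 6}, {2, 3, 4}]
    print(is_exact_cover(goons, [gangs[0], gangs[1], gangs[2]]))   # True
    print(is_exact_cover(goons, [gangs[0], gangs[3], gangs[2]]))   # False: goon 2 is in two gangs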

 The Sum of Subsets problem.

The Eleventh problem is the Sum of Subsets problem. We are to determine, from a given set of integers, a subset that sums up to a sum M that is specified at the outset. This problem is akin to finding, in hard times, some money that adds up to M from the bits and pieces of money that have been tucked away over the years in various places. This problem has been called the easiest hard problem and has proved difficult to solve. Alternatively, we want to pool an amount M for a splurge from various sources: which sources will add up to the amount? It can be solved efficiently by dynamic programming methods if the numbers are small, but if the numbers have as many digits as the cardinality of the given set then the problem becomes difficult. Normal requirements are for 100 or 1000 digits, whereas we can handle about 40 digits easily. This is a decision problem. Does a subset exist? The answer should be yes or no.
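
The dynamic programming method mentioned above can be sketched in a few lines. It is pseudo-polynomial: the table has M+1 entries, so it works for small sums but is useless when M runs to hundreds of digits:

    def subset_sums_to(numbers, M):
        """Decision version of sum of subsets: does some subset of the given
        positive integers add up exactly to M?  reachable[s] is True iff some
        subset sums to s; the table is filled item by item."""
        reachable = [True] + [False] * M
        for x in numbers:
            for s in range(M, x - 1, -1):    # downwards, so each number is used once
                if reachable[s - x]:
                    reachable[s] = True
        return reachable[M]

    savings = [34, 4, 12, 5, 2]
    print(subset_sums_to(savings, 21))   # True: 4 + 12 + 5
    print(subset_sums_to(savings, 30))   # False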

 The 0/1 Knapsack problem.

The Twelfth problem is the 0/1 Knapsack problem. We are to pack our suitcase for a long international air journey, choosing from n allowed items. Each item yields a profit if it is packed and transported, but has an associated weight. We have to stay within the weight restriction of the airline and maximize our profit. Let the n items have profits p1, p2, ..., pn and weights w1, w2, ..., wn, and let the weight restriction be M. We are to determine a subset of the items 1..n whose cumulative weight is less than or equal to M and whose profit is maximal. This is an optimization problem. We convert it to a decision problem by asking whether a packing within the weight limit exists whose profit lies above a lower bound Pl, or below an upper bound Pu; the answer is yes or no in each case. Just as in the weightlifting example we then proceed to obtain the solution by considering a bound in between. Thus we convert the optimization problem to a sequence of decision problems.
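
The same pseudo-polynomial idea gives the textbook dynamic programming solution for small weight limits. The sketch below computes the best profit directly; the decision version simply compares this value against the profit bound being binary searched:

    def best_profit(profits, weights, M):
        """0/1 knapsack: best[w] is the largest profit achievable with total
        weight at most w.  Runs in O(n * M) time, each item used at most once."""
        best = [0] * (M + 1)
        for p, wt in zip(profits, weights):
            for w in range(M, wt - 1, -1):   # downwards: item taken at most once
                best[w] = max(best[w], best[w - wt] + p)
        return best[M]

    profits = [60, 100, 120]
    weights = [10, 20, 30]
    print(best_profit(profits, weights, 50))   # 220: take the 20 kg and 30 kg items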

Other NP-complete problems.

The above twelve sample problems are from the list of 3000+ problems, compiled to date by various experts, that are either NP-complete or are approximation versions of NP-complete problems. They have the drawback that all known attempts to solve them exactly have exponential complexity. This restricts the input size of the problem to about 40 for exact solutions over all inputs. The practically occurring requirement is an input size of 100 to 1000.

 The dozen problems are in NP.

We will demonstrate that the above twelve problems can be solved in polynomial time by our hypothetical nondeterministic computer.

 The satisfiability problem is in NP.

The First problem is the satisfiability problem. We have n Boolean variables. Our machine will make two copies of itself for the first variable. For the second variable each copy makes two copies of itself, and so on. For n variables 2^n copies exist, one for each combination. These correspond to the 2^n rows of a truth table. The copies operate in parallel. A single row can be evaluated using syntax directed parsing techniques with semantic routines in time O(n). As all copies act in parallel the machine takes O(n) time. Thus we say the problem is in NP. This problem has many more properties. Every problem in NP can be reduced to it. So if this problem has an efficient solution on a Pentium computer then every hard problem, i.e. every problem in NP, will have an efficient solution. It is thus some sort of a kingpin problem. If we solve it we have solved an infinite number of problems in NP efficiently. Thus it is called an NP-complete problem.

 The graph clique problem is in NP.

The Second problem is the clique problem. We are to determine if a graph G=<V,E> has a clique of size k. We select k nodes of the graph in all possible ways. This we do by having the nondeterministic computer make a copy of itself for each choice of k nodes. The copies act in parallel. Each copy checks if each selected node is connected to every other selected node, in time O(k^2). As k <= n, the complexity on the nondeterministic computer is O(n^2), so the problem is in NP. This problem has other properties. It can be shown to be a kingpin problem. If it has a polynomial complexity algorithm on a deterministic computer then all the problems in NP have a deterministic polynomial complexity algorithm. So it is called an NP-complete problem.

 The vertex cover problem is in NP.

The Third problem is the vertex cover problem. Given a graph G=<V,E>, does there exist a subset of k vertices with the property that every edge in E is incident on one of these k vertices? For a nondeterministic computer to solve it, we create a copy of the computer for each choice of k nodes from V. The copies act in parallel. Each copy checks if all the edges are incident on the subset of k nodes selected. This can be done in time O(|E|), which is O(n^2). Thus the problem is in NP. An additional property of the problem is that it is some sort of a kingpin problem. If it has an efficient polynomial time algorithm on a deterministic computer then every problem in NP is in P, and we say P=NP.

 The Hamiltonian cycle problem is in NP.

The Fourth problem is the Hamiltonian cycle problem. We will demonstrate that a nondeterministic computer can determine if a graph G=<V,E> has a Hamiltonian cycle in polynomial time. The machine starts with an initial node, say the node numbered 1. The second choice is to pick the next node from the remaining n-1 nodes. For this the machine replicates itself into n-1 copies. Each copy has n-2 choices for the next node. For each choice a copy is created, so we have (n-1)*(n-2) copies, and so on. We end up with (n-1)! copies operating in parallel. Each copy checks if its choice of nodes forms a Hamiltonian cycle. Failures are of no consequence in nondeterministic computation; only the copy that succeeds counts. If all copies fail then no cycle exists. If a cycle exists then it is determined in O(n) time. Thus the Hamiltonian cycle problem is in NP. The problem is more than this. It can be shown to be a kingpin problem. If it is in P then every problem in NP is in P and P=NP. It is thus called an NP-complete problem.

 The Traveling Salesman Problem is in NP.

The Fifth problem is the Travelling Salesman Problem. We are to determine the Hamiltonian cycle of least cost. We will use a nondeterministic computer. We convert the problem to a decision problem by determining if a Hamiltonian cycle exists whose cost is less than an upper bound Cu. This is a decision problem, and the answer is yes or no. Then we try a lower bound Cl, which is also a decision problem. Then we try something midway and proceed as in the weightlifting problem, converting the optimization problem into a sequence of decision problems. The repeated trials are just binary search over the cost range and take O(log(Cu - Cl)) iterations. Thus a nondeterministic computer can solve it in polynomial time and the problem is in NP. As the TSP is NP-complete, if it is in P then P=NP.

 The Feedback Node Set problem is in NP.

The Sixth problem is the Feedback Node Set (FNS) problem. In a graph G=<V,E> we are to determine if a set of k nodes covers all the cycles in G. We construct a matrix with the rows and columns indexed by edges. Two edges are related if there is a path from one to the other through a node in between. The transitive closure of the matrix is computed. The rows and columns of the matrix are partitioned into blocks, a block containing all the edges emanating from a node. For a cycle to exist we must proceed from an edge, travel through some blocks and return to the edge. This can be determined from the blocks in the matrix. A nondeterministic computer will select k nodes of the graph nondeterministically, i.e. it will make a copy of itself for each choice. For each cycle we check if a node from the set of k nodes lies on it. Thus we can determine if an FNS of size k exists. It takes the nondeterministic computer a polynomial amount of time and the problem is thus in NP. The problem can be shown to be NP-complete and thus if it is in P then P=NP.

 The Feedback Edge Set problem is in NP.

The Seventh problem is the Feedback Edge Set (FES) problem. In a graph G=<V,E> we are to determine if a set of k edges covers all the cycles in G. Proceed as in the FNS case discussed earlier, the only change being that the nondeterministic machine chooses k edges nondeterministically. The problem is thus in NP and is known to be NP-complete. If somehow it can be shown to be in P then P=NP.

 The Graph coloring problem is in NP.

The Eighth problem is the Graph Coloring problem. Here, given a graph G=<V,E>, we are to determine if the nodes of the graph can be colored with a palette of k colors with no two adjacent nodes having the same color. We solve the problem in polynomial time using a nondeterministic computer. The nondeterministic computer makes k copies of itself for the first node, one copy for each color assigned to the node. Each copy makes k copies of itself for coloring the second node. For n nodes, k^n copies are made, all operating in parallel. For each assignment of colors we check in a polynomial amount of time whether two adjacent nodes have the same color. Failures do not count. If one copy succeeds then a coloring is possible. If all copies fail no coloring is possible. Thus the problem is in NP. It can be shown that the problem is some sort of a kingpin problem. If it has a deterministic polynomial time algorithm then every problem in NP will have an efficient algorithm on a deterministic computer and P=NP.

 The Set Cover problem is in NP.

The Ninth problem is the Set Cover problem. Given a set S and a collection of subsets s1, s2, ..., sj, does there exist a subcollection of the subsets that covers the set S? This is a decision problem. The answer is yes or no. We consider a nondeterministic computer that makes 2^j copies of itself, one for each subcollection of the subsets. The copies act in parallel and each copy checks if the union of its subcollection is the set S; this takes a polynomial amount of time. Thus the problem is in NP. The problem can be shown to be NP-complete and thus if it is in P then P=NP.

 The Exact Cover problem is in NP.

The Tenth problem is the Exact Cover problem. Given a family of subsets s1, s2, ..., sn and a set S, does there exist a subcollection of pairwise disjoint sets that covers the set S? We take a nondeterministic computer that makes 2^n copies of itself, one for each member of the power set of the family, i.e. for each possible subcollection. The copies act in parallel and determine if the chosen subcollection covers S and is pairwise disjoint. The time is polynomial for the nondeterministic computer, so the problem is in NP. The problem can be shown to be NP-complete, and hence if a polynomial time algorithm can somehow be found for a deterministic computer, i.e. if the problem is in P, then P=NP. The problem is a decision problem.

 The sum of subsets problem is in NP.

The Eleventh problem is the Sum of Subsets problem. Given a set of n numbers and a sum M, does there exist a subset that sums up to M? It is a decision problem where we want the answer yes or no. We consider a nondeterministic computer which will make 2^n copies of itself, one for each subset of the numbers. The copies act in parallel. Each copy checks in polynomial time if its subset sums to M. Failures do not count. If any copy succeeds then we have a yes for an answer. Thus the problem is in NP. It can be shown that the problem is NP-complete. Thus if it is in P then P=NP.

 The 0/1 Knapsack problem is in NP.

The Twelfth problem is the 0/1 Knapsack problem. This is an optimization problem. To convert it to a decision problem we proceed as in the weightlifting competition. We first determine if we can pack our knapsack within an upper weight limit of M and with a profit of at least Pu. Then we try a lower profit bound Pl. Then we try a profit halfway in between, and so on. In O(log(Pu - Pl)) attempts we have determined the maximum profit. Thus we reduce the problem to a sequence of decision problems. The determination at each stage is by a nondeterministic computer. It makes 2^n copies of itself, corresponding to the 2^n possible ways of choosing or not choosing each element from a list of n elements. Each copy spends a polynomial amount of time to check if its choice satisfies the weight limit and the profit bound. Thus the problem is in NP. If we can somehow show it is in P, then, as this is known to be an NP-complete problem, we will have P=NP.

 Some difficulties with NP-complete problems.

 The size of NP-complete problem instances that can be handled can be increased by the use of massive parallelism and of distributed computing in a network or even the cloud, to which the problem solutions should easily adapt. In the formulation of our solution using the exact cover problem we see that massive parallelism can easily be incorporated.

 Dynamic variation of input data.

Another drawback of all studies so far has been that the input size and the problem specification are statically fixed. In reality the graphs and sets modeling the problem can change dynamically. This requires us to maintain a history of the computation. The data structures and the record of the history of the computation should be so chosen that changes in the input problem specification can easily be incorporated. The situation is akin to what happens in the software outsourcing industry. Large legacy software incorporating the business rules of a large organization can build up to a massive 10 to 100 million lines of code. Continuous changes in the business rules require adaptive maintenance of the software: small changes to be incorporated in a large piece of software. In a similar way we require small changes in the input formulation of the above twelve problems to be reflected in small changes in the history of the computation, without having to redo the computation from the start. These small changes could be addition or deletion of nodes and edges in the ADT of a graph. Alternatively they could be addition or deletion of elements of sets or subsets in the ADT of a set. In a table ADT they could be addition or deletion of rows and columns, as in MS-Office tables.

 Algorithms and dynamically varying input.

It was shown by Prof Hopcroft that all algorithms are basically based on the ADTs of integers, sets, graphs, matrices and polynomials. These have the interesting property that they can be recursively defined as a combination of partitions which are themselves integers, sets, graphs, matrices and polynomials respectively. This allows subdivision of the problem into subproblems of smaller size but with the same problem characteristics if we use the representation of the problem in the exact cover formulation. If the results of the subproblems can be easily combined, this divide and conquer approach allows for easy implementation of massively parallel or distributed computing, like in the outsourcing field. We envisage that a problem like Genome Map Assembly may be a mega project spanning centuries. In such a case the software development and maintenance processes developed by leading software companies can be adapted. Such problems may be a civilizational effort involving massive outsourcing.

 An example of dynamically varying input.

A practical application of changing input data occurred, for example, in the triggering applications of a nuclear reactor. The connectivity of a dynamically varying graph had to be determined within milliseconds. The solution used then was to recompute the connectivity in the changed graph and to squeeze the data structure of the graph into the control structure of the algorithm to meet the real time speed requirements.

 Adapting computations for dynamically varying input.

 Many NP-complete problems, and especially the above dozen, lend themselves easily to formulation as the exact cover problem, and adaptive maintenance of the history of the computation can be incorporated. Modification of the table is the addition or deletion of rows and columns. We partition the table with row and column partitions for each change in the input. A new partition leads to a new exact cover problem. A stream of changes means a stream of exact cover problems, which must be combined by using an exact cover problem.

To accommodate adaptive maintenance of the history of the computation, we use a dynamic programming algorithm to combine the rows of the table for the exact cover computation. This leads to a binary tree structure with the leaves as the rows and the interior nodes as combinations of rows. By maintaining the computational history as a tree, both parallelism and adaptive maintenance can be incorporated.

To accommodate adaptive maintenance of the history of the computation, the columns of the table encoding the problem are partitioned. We can start with partitions of size n/ceil(log(n)). Most encodings of the problems yield a natural Sudoku game type of partitioning, with blocks of data along the diagonal of the table. The column partitions can be combined by using the exact cover problem solution recursively. A stream of changes yields new tables, one table per change, with data along the diagonal, to which the exact cover problem solution should be applied.

 An application to code generation in compilers.

An alternative view of the requirement is to consider the register optimization problem in the code generation phase of a compiler. The feedback node set problem is useful in register optimization. The solution given here, with a history of the computation maintained, allows experimentation with varying the number of registers.

Some problems are based on the graph coloring problem. We start with an initial palette. The next step is to  map the problem to the exact cover problem. Then solve the same, maintaining a history of the computation. Then we can vary the palette easily in our formulation and study the effects.

 The first step of our work.

We start our study by showing how the above twelve problems can be converted to the exact cover problem in polynomial time. In the process we find that we have to extend our concept of the exact cover problem. The exact cover problem is used as a scaffolding on which to hang semantics, akin to syntax directed compiling. For this we add semantics to each cell in the table, and to each column and row. We combine the semantics by semantic routines. The semantic routine computation is restricted to polynomial time complexity.

In the area of databases we have the relational database technique. This also uses a table. In the study of constraint databases the semantic attributes and routines associated with each cell, row and column are called constraints. Thus our extended exact cover problem is a special case of constraint database techniques.

Another modification we make in the extended exact cover formulation is that for some columns, or blocks of columns, we drop the restriction of pairwise disjointness. We allow multisets in these columns or blocks of columns. This is required in mapping the node cover problem to the exact cover problem.

We develop a dynamic programming solution to the exact cover problem. In this solution we maintain the history of the computation in a tree data structure that effectively models a hierarchical database. We add attributes and constraints to the data in the nodes. So we end up with a record of the computation as a constrained hierarchical database.

Then we examine modifications to the technique by the use of divide and conquer strategies. We consider partitioning the columns to obtain smaller exact cover problems. The combination of the partitions is handled by using exact cover problem solutions recursively.

We consider the effect of the solution to the exact cover problem on the 12 NP-complete problems that are polynomially reduced to the exact cover problem.

 The dancing links program application.

As an alternative we consider the DLX or dancing links programs of Knuth. Knuth has developed an elegant recursive backtracking algorithm to determine all the combinations of rows in the table of the exact cover problem that yield a solution. At times, when combining rows, we require that we compute a constraint involving the attributes of two cells, one in each row but in different columns. This occurs in the mapping of the Hamiltonian cycle problem and the feedback edge set and feedback node set problems, where we have to determine if a pair of edges is in some cycle by using transitive closure. This requirement demands that we actually combine the rows. So we do not find Knuth’s recursive algorithm convenient, as it does not allow easy combination of arbitrary cells. Another disadvantage of Knuth’s recursive rendering of the algorithm is that the history of the computation is not maintained, so adaptive maintenance cannot be incorporated. The recursive formulation, though elegant, has the disadvantage of the overhead associated with recursive calls and stack manipulation. In very large computations using massive arrays, as in real time applications, this overhead can become substantial; an iterative version will be superior. We use an iterative version for another purpose as well: to incorporate massive distributed parallelism and massive distributed adaptive maintenance, as in the software outsourcing area. Knuth’s iterative implementation of the dancing links program with doubly linked lists will be of elegant use here. Since most of the exact cover tables are like Sudoku blocks along the diagonal, we can partition the table into subtables or subproblems. The subproblems are exact cover problems which can be recursively partitioned into smaller problems. Combining partitions is itself an exact cover problem, hence a recursive use of the exact cover solution will be useful for a divide and conquer software solution. The software solution will span all phases of the software life cycle and is eminently suitable for use with the software processes of the IT industry, which are now proven to be of use for the development and maintenance of large software efforts.
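
For concreteness, here is a stripped-down sketch of the recursive exact cover search that the dancing links technique implements, with ordinary Python sets and dictionaries standing in for Knuth’s doubly linked lists; it illustrates only the search order, not the data structure, and the gang names are invented:

    def exact_covers(universe, subsets, chosen=()):
        """Yield every selection of pairwise disjoint subsets covering universe.
        subsets: dict name -> set of elements.  At each step pick the element
        contained in the fewest subsets and branch on the subsets containing it."""
        if not universe:
            yield chosen
            return
        element = min(universe, key=lambda e: sum(e in s for s in subsets.values()))
        for name, s in subsets.items():
            if element in s and s <= universe:       # s must avoid covered elements
                yield from exact_covers(universe - s, subsets, chosen + (name,))

    gangs = {"A": {1, 2}, "B": {3, 4}, "C": {2, 3}, "D": {5, 6}, "E": {1, 4}}
    for solution in exact_covers({1, 2, 3, 4, 5, 6}, gangs):
        print(sorted(solution))    # the two exact covers: ['A', 'B', 'D'] and ['C', 'D', 'E']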

 Architectural unity in the computational model.

Repeated use of divide and conquer in the exact cover problem has another advantage. It was pointed out by Prof Brooks that the main difficulty in the OS/360 software engineering effort was the absence of an architectural unity across the software life cycle. In our formulation the exact cover problem itself is the unit. Repeated partitioning leads to units, the combination of partitions is a unit, and the combination of a set of partitions is a unit. All this adapts easily to massive distributed computing, massive parallelism, and implementation via Cloud computing.

 Extending the work for dynamically varying input to the entire spectrum of algorithms.

The exact cover formulation of the problem, and its solution with dynamically varying data, has a general application. It was pointed out by Prof Hopcroft in his classic work on Algorithms that all computer algorithms could be reduced to about 40 fundamental algorithms. However, in his exposition only static data was considered. The Update function in a constraint database arises from the input data changing. Thus we must consider dynamically varying input data for any computer algorithm. It appears that all the fundamental problems discussed by Prof Hopcroft can be formulated as instances of the exact cover problem with dynamically varying data.

 Using software industry processes.

The formulation as an exact cover problem allows scaling to any level, divide and conquer application to any level of granularity in the number of partitions chosen, uniformity in combining the results of partitions, and the use of large teams of IT professionals at any level of distributed computing, both spatially and temporally, and at any degree of massive parallelism.

 Using modularization and information hiding.

Another requirement of large software is modularization and information hiding. A unit of the exact cover problem, if given to a node in the Cloud, can be computed as a module. This allows different platforms to be used, if required, for the computation.

 Using massive outsourcing.

In his visit to India in the seventies Prof Hoare pointed out the requirements on IT professionals for the successful development of large software. We augment his conditions and give the requirements as Hoare’s Conditions (HC). The horde of IT professionals chosen should be of the highest IQ possible, young, motivated, competently educated, possessed of good communication skills, with basic skills in programming, good team spirit and a basic two years’ experience, and prepared to work long hours for low wages. Today in the software outsourcing industry we see that these requirements are easily satisfied by the hordes of IT professionals from the BRIC countries. In India the local SWITCH group of companies specializes in this horde, minus the two years’ experience, which is made up for by rigorous training and development programs augmented by long working hours.

 Simulations of Turing machines with our device.

The next phase of our study is a recasting of Cook’s theorem. Given an arbitrary nondeterministic Turing machine (NDTM) of polynomial time complexity, we show how to construct an exact cover problem that simulates the Turing machine. The construction takes only a polynomial amount of time. From this we conclude that the exact cover problem is NP-complete. From the construction, by a suitable modification, we show that the Hamiltonian cycle problem is NP-complete by directly reducing the NDTM computation to it.

The exact cover simulation of the NDTM simulates the IDs (instantaneous descriptions) of the machine. The IDs form Sudoku type blocks along the diagonal of the table. This allows extensions in three directions.

 Adapting for dynamically varying Turing machines.

In the first direction we can use a record of the history of the computation of the Turing machine for adaptive maintenance of the Turing machine. Changes in the input Turing machine can be incorporated in the history of the computation. For an arbitrary Turing machine we require a growing exact cover problem, as the Turing machine can use an arbitrary amount of tape. This is simulated by adding rows and columns to the exact cover problem table.

 Adapting for modifications in Turing machine models.

In the second direction, modifications of the Turing machine, such as multiple tapes, multiple heads, two dimensional tapes etc., can also be reduced to the exact cover problem with the same visual understanding.

 Extending to automata of the Chomsky hierarchy.

In the third direction we have the families of automata of the Chomsky hierarchy. These are the finite automata with their variants, the pushdown automata with their variants, the linear bounded automata with their variants and the Turing machines with their variants. A computation in any of these automata can be described by a sequence of IDs. These IDs occur along the diagonal of the exact cover problem table like Sudoku blocks. The exact cover algorithm we propose maintains a history of the computation, and changes to the input automaton result in adaptive maintenance which can be incorporated in the data structures and the history of the computation. The difficulty is that in the case of automata with low time complexity the method may increase the complexity, unless P=NP.

 Extending to all automata.

In their classic text, the Cinderella book, Hopcroft and Ullman have pointed out that a bewildering variety of automata have been studied. Since an automaton in its dynamic behavior basically produces a sequence of IDs, we can easily map the computation in polynomial time to the exact cover problem. The IDs occur like Sudoku type blocks along the diagonal of the table of the exact cover representation. By extension we can incorporate adaptive maintenance techniques for streams of changes in the automaton being simulated.

The sorting problem solution.

In general, consider the sorting problem. A set S of N distinct numbers is to be sorted in increasing order. Consider an N node graph with one node, carrying a value and a label, for each distinct element of S. An edge from one node to another is introduced iff the value at the latter node follows, i.e. is greater than, the value at the former node. The maximum node is connected to the minimum node by an edge. If we can determine a Hamiltonian cycle in the graph then the sorted order has been found.
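
A minimal sketch of this construction, using the numbers themselves as node labels and brute force over permutations to find the cycle, purely to keep the illustration short:

    from itertools import permutations

    def sort_by_hamiltonian_cycle(numbers):
        """Build the graph described above: an edge u -> v iff v > u, plus an
        edge from the maximum back to the minimum.  The only Hamiltonian cycle
        then visits the numbers in increasing order."""
        edges = {(u, v) for u in numbers for v in numbers if v > u}
        edges.add((max(numbers), min(numbers)))
        first = min(numbers)
        for rest in permutations([x for x in numbers if x != first]):
            cycle = (first,) + rest + (first,)
            if all(edge in edges for edge in zip(cycle, cycle[1:])):
                return list(cycle[:-1])          # drop the repeated start node

    print(sort_by_hamiltonian_cycle([31, 4, 15, 9, 26]))   # [4, 9, 15, 26, 31]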

Relating any computational device to the sorting problem.

Any computation on any computational device will have a sequence of IDs as a trace of the computational history. These IDs can be represented by the nodes of a graph. An ID can be connected by an edge in the graph to an ID that follows it. The last ID is connected to the first ID. If we can find a Hamiltonian cycle in the graph then we have found the history of the computation, and have located the relevant trace of the computation.

In the exact cover formulation we can, for a time instant I, consider all possible conditions in the snapshot of the computational device and its value at instant I+1. Each condition is a node of a graph. Two nodes are connected by an edge if the ID is the same at that particular instant of time. These form Sudoku type blocks on the exact cover table. The existence of a Hamiltonian cycle in the graph gives the trace of the computational history. Thus any computation on any computational device can be naturally simulated by the exact cover device by reducing it to the Hamiltonian cycle problem.

The computing potential of the exact cover device.

Many models for universal computation were studied about a century ago. Among these are the symbol manipulation systems of Post, the Lambda Calculus of Church, the Markov algorithms, the general recursive functions of Kleene, etc. All of them compute exactly the class of partial recursive functions. This led to Church’s Thesis, that the Turing Machine be considered the model for a procedure.

The HSM machine.

An alternative model of computation for Cook’s theorem was suggested by Hopcroft and incorporated by Horowitz and Sahni in their text on Algorithms. This is a modification of the RAM model. We will call it the Horowitz Sahni Machine (HSM).

In our next step we reduce, in polynomial time, a computation of a problem demanding nondeterministic polynomial time on the HSM to the exact cover problem. This gives us an alternative formulation of Cook’s theorem. The formulation results in the IDs of the HSM appearing as Sudoku type blocks along the diagonal of the exact cover matrix. By maintaining a history of the computation, we can incorporate adaptive maintenance for changes in the input problem.

Generalizing the HSM machine.

The HSM type of machine can, by generalization, be used for both study of and training in assignment oriented programming languages by a bootstrapping method. We start with small universal subsets of the high level language (HLL) and bootstrap all the way up, layer by layer, to all the features of the HLL.

NP-complete universal machines with dynamically varying input.

One can consider augmenting the collection of universal computing devices by the 3000+ NP-complete problems and their approximation versions. Any NP-complete problem can be modeled as a computing device capable of universal computation. The problem specification can use the ADTs of integers, graphs, matrices, sets and polynomials. We can consider additional ADTs based on collections of the computable real numbers. If the input problem size is statically fixed then we can compute the recursive functions, or algorithms. If the input problem size can vary dynamically then we can compute the partial recursive functions, or procedures. This is evident as a fixed input size formulation of the problem can store the tape of the Turing machine being simulated for the recursive function being computed. The dynamically varying input size allows the input to grow in lockstep with the requirement for additional blank tape of the Turing machine simulating the procedure being computed.

NP-complete problem based universal machines.

Thus we can have a Boolean satisfiability computer, a graph clique computer, a node cover computer, a Hamiltonian cycle computer, a traveling salesman computer, a graph coloring computer, an exact cover computer, a sum of subsets computer, a set cover computer, a 0/1 knapsack computer, and 3000+ more. All of these are alternatives to the von Neumann general-purpose digital computer such as the Pentium PC. If the input is statically fixed then they compute the recursive functions, or algorithms; if the input can vary dynamically then they compute the partial recursive functions, or procedures. So if the input is allowed to vary they can compute whatever a C, C++ or Java program can compute.

Seeming advantages of NP-complete problem based machines.

These new computers can readily make use of nondeterminism, distributed computing in the cloud, massive parallelism, and the proven software engineering processes of the IT industry. A divide and conquer approach for many of these computers allows easy modularization. There is an architectural unity: the subproblems are the same problem on a smaller scale, and combining the partial solutions is again a problem of the same type.

The use of  Optical computers.

A practical candidate for such computers lies in the area of optics. Optical computers have been studied for the last 50 years and are seemingly well understood. They have inbuilt massive parallelism, which is useful for NP-complete problem based universal computing devices. The massive investment of industry in traditional von Neumann digital computers may, however, postpone the advent of optical computers by a few decades.

Extending the work to all hard problems.

Future studies should determine whether all the 3000+ NP-complete problems can be mapped naturally to the exact cover problem. Such a mapping allows easy visual understanding of a problem for educational purposes and for possible future computation. The exact cover problem also lends itself to adaptive maintenance, which means the effect of a dynamically varying input can be determined without recomputing from scratch for every small change.

On the practicality of the exact cover computer.

The rapid growth in computer storage capacity and the effect of Moore's law on the cost of computer hardware are already being felt. The Google computer demonstrates and exploits the parallelism of half a million garden-variety PCs. The fall in the price of PCs and Moore's law may make such computers commonplace. Disk capacities of a yottabyte are being talked about. The massive parallelism and massive storage capacity may allow the input-size barrier of 40 to be crossed. The input size of 40 as a limit between what is practically computable and what is not is mentioned by Knuth in his classic work, circa 1970, when computer speeds were low and storage capacities small. The rapid changes in hardware may make input sizes of 100 or 1000 commonplace. In such cases the P=NP question may remain of only academic interest and exact solutions to NP-complete problems may be used. The massive computational history need not be repeated every time a change occurs in the input; techniques akin to adaptive maintenance in the software industry may become commonplace.

The future may see massive constraint databases which result, for example, from the exact cover formulation and solution of NP-complete problems, together with a record of the computational and maintenance history.

Megaprojects and the exact cover device.

Megaprojects spanning centuries are common in civilization. The clash of religions, cultures, races, countries and economies can be considered to be computations spanning centuries or thousands of years. Prolonged litigation can take centuries, like a temple land case in North India that was settled recently after 700 years of deliberation by the courts of law; litigation too can be considered a computation. Architectural wonders have taken centuries to build: the Rheims cathedral, the south Indian temples, the pyramids and the ancient wonders of the world are efforts perhaps of centuries. In nature a cactus of the Middle East is in no hurry to breed; it is said to bloom once in 700 years. The redwood trees of California consider a thousand years a moment. So in nature also centuries and thousands of years are only a moment.

Problems like genome mapping may be considered worthy of computational effort over centuries, especially when the resulting solution must be exact. In this case the software engineering process that supports all phases of the software life cycle becomes important for the development and maintenance of geographically and temporally distributed, massively parallel software. This software is to be developed by hordes of programmers and IT professionals in the 120-200 avocations of the IT industry, working for centuries.

The natural evolution of software to megaprojects.

From the software industry point of view this growth is natural. Early software, or programming in the small, ran only up to 50,000 lines of code. The difficulties with the mainframe OS/360 led to the study of software engineering, which culminated in a situation where a million lines of code could be developed reliably by many international organizations. At the same time legacy software, now being outsourced to India for maintenance, grew to 10 to 100 million lines of code. Capers Jones points out that 10 million lines of code is a challenge to develop and that only a handful of organizations in the world have mastered it. Pressman points out that billion-line software is reasonably around the corner. With a programmer scope, given by Capers Jones, of 30,000 lines of code, we will need on the order of 30,000-50,000 software professionals for the development of a billion lines of code, not counting the testers, maintainers, quality assurance personnel and others across the entire software development life cycle. Thus it appears that megaprojects spanning centuries will be a logical next step. In this class of megaprojects the need for exact computational results of NP-complete problems for large input sizes is bound to find a place.

********BACKGROUND*********

1. The concept of constrained databases.

In the ubiquitous computing environment of today, databases abound. A person holding a credit card has personal details stored in a database. The balance on the card could be a cell in a table recording the data of the card. A transaction on the card changes the balance. In a static situation the database just contains data and nothing else.

We can improve on the database by associating semantics, or meaning, with the data. This could be an attribute that takes values. Some computation could also be triggered by a change in a value: a change in the account balance could trigger an SMS alert.
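
A minimal sketch of such a trigger, with purely hypothetical names, is shown below: updating the balance cell fires a callback that stands in for the SMS alert.

    # Minimal sketch of a cell with an attached trigger (all names hypothetical).

    class TriggeredCell:
        def __init__(self, value, on_change):
            self.value = value
            self.on_change = on_change        # callback fired on every update

        def update(self, new_value):
            old = self.value
            self.value = new_value
            self.on_change(old, new_value)    # the trigger

    def sms_alert(old, new):
        print(f"SMS: balance changed from {old} to {new}")

    balance = TriggeredCell(1000, sms_alert)
    balance.update(750)     # prints: SMS: balance changed from 1000 to 750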

In the business rules process of the credit card company, a trigger like a change in a credit card balance can result in a phone call informing the customer that a fabulous prize of a $1 subsidy toward a luxury trip to the moon has been won. Alternatively the change can result in a goon of a bill collector knocking at the customer's door in the wee hours of the morning, desperately wanting to test the density of the client's bones.

The meaning and computation associated with a cell are called constraints. We can have constraints for every cell, every row, every column, or, in our extension, for any combination of these. The result is what is called a constrained database.
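
The sketch below, again with hypothetical example predicates, shows constraints attached at the cell, row and column level; the table is accepted only if every attached constraint holds.

    # Sketch of a constrained table: constraints may be attached per cell,
    # per row or per column (the predicates here are made-up examples).

    def check(table, cell_constraints, row_constraints, col_constraints):
        """Return True if the table satisfies all attached constraints."""
        for (r, c), pred in cell_constraints.items():
            if not pred(table[r][c]):
                return False
        for r, pred in row_constraints.items():
            if not pred(table[r]):
                return False
        for c, pred in col_constraints.items():
            if not pred([row[c] for row in table]):
                return False
        return True

    table = [[100, 200], [300, 400]]
    ok = check(
        table,
        cell_constraints={(0, 0): lambda v: v >= 0},                 # no negative balance
        row_constraints={1: lambda row: sum(row) <= 1000},           # per-customer limit
        col_constraints={0: lambda col: len(set(col)) == len(col)},  # unique values
    )
    print(ok)   # True for this table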

In today's world the bulk of the MNCs store their legacy data on mainframes which use hierarchical database techniques. Here the data is stored in a tree data structure, much like the organization chart in MS Office. The other database organization, now widely adopted, is the relational database model, where the data is stored in tables.

2. The problem of our study.

In our study we concentrate on one type of problem, the exact cover problem. We extend it with constraints to obtain a specific instance of a constraint database.

To illustrate the exact cover problem, consider a problem that arises in chess, the N queens problem. Here we have a generalized chessboard of N x N squares, and we are to place N queens on the board so that no queen attacks another. The solution uses the constraints that there can be only one queen per row and only one queen per column. A further constraint is that no two queens attack each other along a diagonal. We call this an instance of the extended exact cover problem.
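
A minimal backtracking sketch of these constraints, one queen per row and per column with no two queens sharing a diagonal, is given below; it is an ordinary N queens solver, not the exact cover machinery developed later.

    # Minimal N queens sketch: place one queen per row while maintaining the
    # column constraint and both diagonal constraints; the diagonal check is
    # the "extension" beyond the plain exact cover of rows and columns.

    def n_queens(n):
        """Return one solution: queens[r] is the column of the queen in row r."""
        cols, diag1, diag2 = set(), set(), set()
        queens = []

        def place(row):
            if row == n:
                return True
            for col in range(n):
                if col in cols or (row - col) in diag1 or (row + col) in diag2:
                    continue
                cols.add(col); diag1.add(row - col); diag2.add(row + col)
                queens.append(col)
                if place(row + 1):
                    return True
                cols.discard(col); diag1.discard(row - col); diag2.discard(row + col)
                queens.pop()
            return False

        return queens if place(0) else None

    print(n_queens(8))   # e.g. [0, 4, 7, 5, 2, 6, 1, 3]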

A database, whether relational or hierarchical, is subject to updates, and these may change the data. Take a relational database. At times the update function may involve the effective addition and deletion of rows and columns, like the insert-table command in MS Office. This can arise from transactions changing values of data or from the addition and deletion of customers. Alternatively, an existing customer may choose new services and options, and this results in the addition or deletion of columns or rows.

In formulations of the exact cover problem the update function is not so common. We see its use, however, for large computations where the input data is not static but receives a stream of changes. The change in the input data arising from a database transaction is normally localized and small. In exact cover applications a change can nevertheless trigger a very large amount of computation or recomputation. When recomputing we would like to reuse the earlier computation to the extent feasible, so we record the history of the earlier computation in a hierarchical database for reuse.

3. Relationship to syntax directed translation.

The concept of constraint databases is a natural evolution from the early days of computing. In compiler writing the structure of the source program is called its syntax, and it is modeled by a formal grammar called a context-free grammar. A parser driven by the grammar analyses the input program to build a parse tree. The tree has data at its nodes; the nodes are given meaning, or attributes, and computation using these semantics is done by semantic routines. Proper design of the semantic routines then yields a translation of the input program. Thus we have a model of a constrained hierarchical database.
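
A tiny example of syntax-directed translation, using a toy grammar of sums of single digits chosen purely for illustration, is sketched below: each parsing routine carries a semantic routine that computes the value of its phrase as a synthesized attribute.

    # Toy syntax-directed translation for the grammar  expr -> digit ('+' digit)*
    # (our own toy grammar).  The attribute synthesized at each node is simply
    # the numeric value of the phrase it derives.

    def translate(source):
        pos = 0

        def digit():
            nonlocal pos
            ch = source[pos]
            if not ch.isdigit():
                raise SyntaxError(f"digit expected at position {pos}")
            pos += 1
            return int(ch)          # semantic routine: attribute of a digit node

        def expr():
            nonlocal pos
            value = digit()
            while pos < len(source) and source[pos] == "+":
                pos += 1
                value += digit()    # semantic routine: combine child attributes
            return value

        return expr()

    print(translate("1+2+3"))   # prints 6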

Before the widespread use of context-free grammars, Prof. Bauer in Germany, and later Prof. Gries in the USA, used parse tables. These are like the tables in a relational database. The data in the table is augmented with attributes and semantic routines, and execution of properly designed semantic routines results in the translation of the source program to the target program. Thus we have an example of a constrained relational database.

The incorporation of error recovery and error correction requires changes to the context-free grammar and to the tree or table aiding the parse. Thus there is a requirement for changes in the input data, like an update in a database. Changes to the parse table or the context-free grammar should result only in local changes, so that we can reuse the computational history and keep the complexity of incorporating the error correction and recovery procedures within bounds.

Thus syntax-directed compiling, with error recovery and error correction, can be considered an application of constrained databases.

 

4. Semantics and constraints in Web programming.

In the history of web programming, early HTML dealt only with plain data. XML arrived to add semantics to the data.

5. The exact cover computational device.

We now proceed to develop in detail the concept, structure and use of what we call the extended exact cover computer, or computational device, which we will show is also an example of a constrained database. The proposed static model, when operating in PSPACE, computes a large class of known hard problems in a very natural way. Given enough space and time, it will be shown to compute all the partial recursive functions.

References:

1. A. V. Aho, J. E. Hopcroft and J. D. Ullman, The Design and Analysis of Computer Algorithms, Addison-Wesley, 1974.

2. E. Horowitz and S. Sahni, Computer Algorithms, Galgotia Publications, 1993.

3. D. E. Knuth, Dancing Links, P159, 2000. http://www-cs-faculty.stanford.edu/~uno/papers/dancing-color.ps.gz

4. R. S. Pressman, Software Engineering: A Practitioner's Approach, Sixth Edition, McGraw-Hill, 2005.

5. Capers Jones, Estimating Software Costs, Tata McGraw-Hill, 2007.

6. J. E. Hopcroft and J. D. Ullman, Introduction to Automata Theory, Languages and Computation, Addison-Wesley, 1979.

7. D. Gries, Compiler Construction for Digital Computers, John Wiley and Sons, 1971.