"We need a name for it, let's call it memex--- it will store and process a lifetime of associations"---1971, Prof. Gerard Salton, explaining to me "As we may think", by Vannevar Bush.

"The brontosaurus moved deeper into the swamps when the mamals took over the forest, but one day it ran out of swamps."

A wee bit about myself and my environment

CONTENTS: A partial finite enumeration of associations with concepts/ideas/environments/organisations/people over a period of 25 years (1972-1997), predating the internet, the PC and the communications revolutions in India, created to study internet search engines.


During their visits to India, in the seventies, the eminent Prof.C.A.R.Hoare and Prof.P.B.Hansen informally made suggestions to the budding indigenous computer industry in India, on methodologies of programming, software development and software project management. Attempts at comprehension, realisation of the validity, and application of the advice have yielded fruitful results in the development of moderately complex professional software, in the areas of System and Application Software, critical real-time and Communication Software, elementary Information Systems in all spheres of the industry, and in education/training programs. An attempted formalisation of the advice leads to a hypothesis, whose validity and viability is the thesis; the proof of the thesis consists of a number of professional/experimental software development projects and empirical studies in Software Management Practices, in the areas of Information Systems, real-time and Communication Software development.

The basic Hansen's Approach (HA) was to 'think out' the systems program in a High Level Language (HLL), preferably PASCAL or Concurrent PASCAL; verify, develop and check out the same; and as a subsequent step hand-code and optimise the handcoded version. Though this methodology was strongly advocated for the COBOL-74 implementation on the indigenously developed TDC-316 (PDP-11-like) system, it was actually applied, with various generalisations (Hansen's Generalised Approach (HGA)), to various software development projects. A first line of generalisation is to allow any initial HLL, any target HLL, or even mapping to hardware as the target implementation.

In his comments and direction in the area of Software Verification, the eminent Prof.C.A.R.Hoare suggested that academic institutions should develop correct algorithms and make them available to industry. The algorithms could be of the complexity of the Whetstone ALGOL-60 Translator, which was historically used in the TDC-316 ALGOL-60 implementation and is elaborated upon as part of the proof of the thesis. This comment of Prof.C.A.R.Hoare is named Hoare's Clause (HC). (The comment was made during his visit to India). In his suggestions and direction for Software Project Management in critical software development areas, Prof.C.A.R.Hoare informally suggested that a viable method was for the project manager to think up the software down to the last detail; employ highly qualified, intelligent, motivated, young programmers with proven aptitude; and in the implementation of the software not to allow the slightest deviation (Hoare's Generalised Approach (HGA)). In his learned assessment of the state-of-the-art of Program Verification (circa 1977), Prof.C.A.R.Hoare informally opined that the area had reached its zenith and solution in the monumental exposition: A Discipline of Programming by Prof.E.W.Dijkstra. Hence, in HGA, by generalisation is integrated, as an ideal requirement, the Correctness Clause (CC): that the Software Project Manager should be able to verify the software before using the same. (It is assumed that verification techniques exist and have merely to be discovered, for all the software that will be required in practice). Integrating the two definitions of HGA with HC and CC, we have the final generalised HGA (Hoare/Hansen Generalised Approach (HGA)) as a Programming Methodology, for both individual and team programming efforts.

Experience shows that HGA is an ideal (a state), and hence a model or formal standard. To practise the same, a generalisation of the advice of the eminent Prof.D.Gries (in Compiler Construction for Digital Computers and The Science of Programming) is taken, to yield the Reduction to Practicability, Practicality and Practice Clause (RPC): formal techniques and models need not be used directly, but only serve to guide thinking, in the areas of Software Engineering and Technology. To apply HGA to the industrial environment, the requirement of ideal management (Management Clause (MC)) and the requirement of the Aptitude and Attitude Clause (AAC) are considered to be part of HGA.

The process 'to think out' is modelled as zero or more applications of HGA, thus making HGA a recursive definition and process.

The thesis is that HGA is a viable model technique and the dissertation is towards the realisation of the same. In practice HGA is used with RPC. However, the slightest application of RPC, requires and expects high integrity and character in the programmer (Personnel Clause (PC)), and immediately appears the requirement for 'polymethodologies' in software development, by individual or team programmers.

As a proof of the thesis ten applications are considered and the conditions, constraints, programmer environment and basic environment of the software life cycles considered are stated. A survey of the known, available and accessible literature on related topics is included. In his valuable initiation into the fundamentals of writing a dissertation on a hypothesis in programming methodology and his constructive critique, the eminent Prof.W.A.Wulf has suggested three questions that should be answered from the many that should arise from introspection.

These relate to the practical generality of using HGA as a technique, the documentation considerations when using HGA and thirdly the challenge to HGA from optimising compilers. In this context HGA primarily refers to Hansen's Approach (HA). Three appendices are included to address these three issues in detail. Apart from the above, he has pointed out the necessity of making explicit the constraints and conditions under which the programming methodology is valid and the necessity of making
the proof of the thesis unassailable. It is conjectured that the inability to discover a universally applicable practical methodology of programming stems from the intrinsic nature of the problem. Just as polyalgorithms naturally and intrinsically arise in Numerical Methods, the slightest application of the RPC clause, the power of the conditional and the requirement of applicability to the collection of all practical computations require polymethodologies for programming. Even HGA requires total correctness to be proved in principle, despite the reality of the Halting Problem, and thus the concept of 'practical software' is only intuitive and based on experience. However, an unassailable aspect of the proof of the hypothesis, of the viability of HGA with RPC, is its historical use in the indigenous computer industry. Reliable systems with reliable software (in a commercial sense) were built successfully, with HGA used with varying degrees of the RPC clause: in the environment of having to build a reliable computer without a reliable computer to aid the same. The ten applications are sketched below. The First application elaborated upon is a problem with crucial uses in the Nuclear and Defence areas. It involves Programmable Logic Controllers (PLCs) with highly demanding response times: ranging from less than a micro-second to a hundred micro-seconds. This true-life, real-time application arose in connection with the triggering of a nuclear reactor. The core algorithmic requirements specification can be modelled as requiring determination of whether a signal
has travelled from a left-hand rail to a right-hand rail, through a network of AND, OR and NOT gates. The complexity is that the combinational circuit, so modelled, is subject to dynamic variation. An alternative equivalent formulation is the determination of the satisfiability of a formula in the Propositional Calculus, where the variables of the formula are subject to dynamic variation. One could obtain a still further generalisation by allowing dynamic variation of the formula itself, subject to an upper bound on the number of variables. An alternative equivalent formulation is the determination of connectivity in a graph, with the constraint that half the edges of the graph are subject to dynamic variation. The complexity of the problem does not materially change whether a directed or an undirected graph is considered. Still another alternative formulation of the requirements specification is the determination of the emptiness problem of a context free grammar (cfg) with just unit productions, with the complication of some of the unit productions being subject to being present or absent dynamically. The true-life problem demanded determination of the connectivity of a 33-edge graph, with upto 50% edge variation, starting from an initial configuration of the graph. The effect of edge variations and the connectivity of the graph had to be determined, with the constraints of the available technology of a 0.5 MIPS, reliable M680x0 processor, equipped with a hardware stack and upto 1GB of memory, in less than 100 microseconds. The solution uses
HGA to develop a systems program, the core being a depth first search (dfs) of a graph and the bulk of the program consisting of various graph optimisation techniques. The final handcoded optimised solution uses the curious property of the recursive descent parsing technique (equivalent to the LL(1) parsing technique) of being essentially nothing more than dfs of a graph, when each recursive routine modifies itself on first execution to prevent further executions, by creating a 'bouncer' at the head of itself on first entry. A crucial alternative generalisation of HGA arises from using single-chip processors to modify the subroutines, to correspond to dynamic variations in the graph; and furthermore, as only a few of the instructions in the repertoire are used, ASICs and FPGAs can be used to map the solution directly to application specific hardware using HGA. With currently available reliable 50 MIPS (or more) processors and current FPGA and ASIC technology, a less than one micro-second response time for the PLCs considered is very much possible with HGA.
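The dfs core just described can be sketched as follows, as a minimal illustration only (the historical implementations were in PASCAL and hand-coded assembly; the graph representation, node names and the toggle interface here are invented). The visited set plays the role of the 'bouncer' that the self-modifying handcoded routines created at their own heads.

```python
# Minimal sketch of the First application's core question: can a signal
# travel from the left-hand rail 'L' to the right-hand rail 'R' through
# a gate network modelled as a graph whose edges vary dynamically?
# (Representation and names are invented for illustration.)

def reachable(adj, src, dst):
    """Iterative depth-first search: is dst reachable from src?"""
    visited = set()      # the 'bouncer': a visited node is never re-entered
    stack = [src]
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node in visited:
            continue
        visited.add(node)
        stack.extend(adj.get(node, ()))
    return False

def toggle(adj, u, v):
    """Dynamic variation: insert the edge u->v if absent, else delete it."""
    nbrs = adj.setdefault(u, set())
    if v in nbrs:
        nbrs.remove(v)
    else:
        nbrs.add(v)
```

After each batch of edge variations the search is simply re-run; the handcoded solution described above goes further and patches the recursive routines themselves to reflect the variations.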

The Second application of HGA is oriented to Translator Writing Systems (TWS). As is well-known, the BNF formulations of the syntax of most major real-life programming languages do not easily fit into the standard Syntax Analysis techniques. Thus a need exists in practical compiler construction to determine which part of a BNF formulation fits a Syntax Analysis (SA) technique considered. This was historically pointed out by
Prof.D.Gries (1971), as being an area where a systems program is required to determine suitable matches
(automatically) between parts of BNF grammars and Syntax Analysis techniques. In practical compiler writing, if one chooses a particular technique, a need exists to modify the BNF grammar, by inserting, deleting, or modifying productions, and still see if the SA technique is applicable. HGA is used to develop a systems program to aid the study of the effect of such perturbations on a BNF grammar. The second application of HGA applies the dfs core algorithms of the first application to SA. dfs can be viewed as the scaffolding, representing a regular grammar (i.e. a context free grammar (cfg) with just unit productions). dfs can be used to determine the connectivity between two nodes in a graph and thus, by repeated application, it can be used to determine the transitive closure of a relation (that the graph represents), rather than employing the Boolean matrix oriented Warshall's or Warren's algorithms. Transitive closure determination using Boolean matrices involves a great number of useless '0' entries, as the matrices are highly sparse, thus using up substantial space and time. Perturbation of a BNF grammar of a given real-life programming language involves re-computation of precedence relations and repeated computations of transitive closure. To obtain the new precedence relations in hundreds of milliseconds, it is seen that direct use of dfs, with graphs as the abstract data type employed (with dfs) rather than very sparse Boolean matrices (with Warshall's algorithm), gives better payoff both in terms of space and time.

The applicability of HGA with the same dfs core implementation, with suitable modifications and extensions, to generalised precedence techniques, LL(1) and to the SLR(1) and LALR(1) strategies of parsing is discussed.
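The payoff claimed above can be illustrated by a small sketch, assuming adjacency-list graphs (Python is used here purely as the sketch HLL; names are invented): transitive closure by one dfs per node, with Warshall's Boolean-matrix algorithm alongside for comparison.

```python
def closure_dfs(adj, nodes):
    """Transitive closure by one dfs per node, on adjacency lists."""
    def reach(src):
        seen, stack = set(), list(adj.get(src, ()))
        while stack:
            n = stack.pop()
            if n not in seen:
                seen.add(n)
                stack.extend(adj.get(n, ()))
        return seen
    return {u: reach(u) for u in nodes}

def closure_warshall(adj, nodes):
    """Warshall's algorithm on a dense Boolean matrix, for comparison."""
    idx = {u: i for i, u in enumerate(nodes)}
    n = len(nodes)
    m = [[False] * n for _ in range(n)]
    for u in adj:
        for v in adj[u]:
            m[idx[u]][idx[v]] = True
    for k in range(n):
        for i in range(n):
            if m[i][k]:
                for j in range(n):
                    if m[k][j]:
                        m[i][j] = True
    return {u: {v for v in nodes if m[idx[u]][idx[v]]} for u in nodes}
```

Both compute the same relation; the dfs version touches only the edges that exist, while the matrix version pays for every '0' entry of a highly sparse matrix.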

The Third application to demonstrate the validity of HGA is in the area of Error-Correcting Compiler-Compilers. The eminent ALCOR group had pioneered portable ALGOL-60 implementations and had uniformly used a Syntax Analysis technique known as Transition Matrices (TM). A formalisation of the technique is due to Prof.D.Gries, who further suggested a strategy for incorporating error-correction into the technique, with the routines automatically generated. If one considers the TM technique as a logical generalisation of precedence techniques, with the table used to drive the parser containing algorithms rather than constants, one sees the suggested error-correction as further generalising by adding data structures (i.e. upto three stacks) to the algorithms in the table. Repeated attempts to implement the systems program have always yielded monstrous programs, highly demanding on space, time and code. However, if graphs rather than matrices are employed for transitive closure computations and HGA is employed, the monstrous program stands tamed. An additional advantage, as per the second application, is gained. Here the applicability of HGA is to consider the systems program at different levels of abstraction, with different abstractions of the problem simultaneously operating at the same level. The abstract data types employed are sets, graphs, matrices, sequences, strings and formal grammars.
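The idea of a parse table containing algorithms rather than constants can be given a toy illustration (the grammar, routine names and acceptance convention are invented here; this is not the ALCOR/TM technique itself): the table is indexed by (stack top, incoming symbol) and holds routines, and error-correcting routines could similarly be supplied for the table's empty entries.

```python
# Toy table-driven acceptor for the invented language  a (+ a)*  .
# Each table entry is a routine (an algorithm), not a constant.

def reduce_atom(stack):      # a  ->  E
    stack.append('E')

def shift_plus(stack):       # E  +      push the operator
    stack.append('+')

def reduce_sum(stack):       # E + a ->  E   (pop '+', leaving E on top)
    stack.pop()

TABLE = {
    ('$', 'a'): reduce_atom,
    ('E', '+'): shift_plus,
    ('+', 'a'): reduce_sum,
}

def parse(tokens):
    """Accept the toy language by messaging table routines."""
    stack = ['$']
    for sym in list(tokens) + ['$']:
        if sym == '$':
            return stack == ['$', 'E']
        routine = TABLE.get((stack[-1], sym))
        if routine is None:
            return False     # an error-correction routine would be invoked here
        routine(stack)
    return False
```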

The Fourth application of HGA is in the area of pattern recognition in a given real-life Information Systems Environment. A template of a full master-picture (master-pattern) in Euclidean Space is to be matched against a sub-picture (sub-pattern) in the same Euclidean Space. The master-pattern is modelled as a finite set of computable points in the Euclidean Space and the sub-pattern is similarly digitised. The master-pattern is super-imposed on the sub-pattern (or vice versa) and a match, if it exists, subject to a predetermined threshold, is determined. It is shown using HGA that in carrying out the match one can do very much better than try all possible rotations, reflections and translations. The solution proceeds from an initial sparse match to a final exhaustive fine-match, subject to the threshold requirement. The solution is considered for both the cases of scaling being present or absent, the former adding (but not degenerately) to the complexity of the algorithm. At present crucial applications of the algorithm exist in the Defence area in the indigenous environment. An extension to affine geometry is easily possible and applications exist in the indigenous environment. At the time of writing of this thesis, it was pointed out to the author, by the eminent Prof.C.A.R.Hoare, that the solution finds application in modern particle accelerators. Since a general solution is considered, with the only constraints being finiteness of the sets of computable points representing the master-picture and sub-picture and finiteness of the threshold requirements, the solution is of general applicability in the Image Processing area and perhaps forms a base for associated geometries.

The master-picture and the sub-picture are simultaneously modelled as finite sets of points, ordered finite sets of ordered pairs of polar/cartesian co-ordinates, complete polygons in the Euclidean Space (i.e. each pair of points connected by a 'line'), complete graphs with the digitised points as nodes (each pair of nodes connected by an 'arc') with the graphs varying to subgraphs as the computation proceeds, collections of vectors in the Euclidean Space emanating from the nodes, strings with the lengths of the sides of the polygon and the relevant included angles, and sorted and unsorted sequences of 'line' lengths. HGA is used to keep the above abstractions separate and to develop the resulting algorithm. The HLL employed is a mixture of high level language features (with pictorial comments). Using HGA, the above was first worked out mentally for 3-4 man-months, transferred using HGA to pencil and paper and then finally on to sequential PASCAL/C on the 286/386/486/Pentium based PC platforms, using HGA for each platform.
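One invariant behind the initial sparse match is that the multiset of pairwise 'line' lengths is unchanged by rotation, reflection and translation. A minimal sketch of a coarse, necessary-condition test on that abstraction follows (the rounding tolerance is an invented assumption, and the staged fine-match of the actual algorithm is not reproduced):

```python
import math
from collections import Counter

def line_lengths(points):
    """Multiset of pairwise 'line' lengths: invariant under rotation,
    reflection and translation of the point set."""
    n = len(points)
    return Counter(
        round(math.dist(points[i], points[j]), 6)   # tolerance: invented
        for i in range(n) for j in range(i + 1, n)
    )

def coarse_match(master, sub):
    """Sparse test: every 'line' of the sub-picture must occur among the
    master-picture's lengths (necessary, not sufficient, for a match)."""
    return not (line_lengths(sub) - line_lengths(master))
```

Only candidates passing this cheap test need be handed to the exhaustive fine-match, which is how one does better than trying all rotations, reflections and translations.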

The Fifth application of HGA generalises on the fourth, by allowing tolerances in the matching, i.e. two points match if they are in some neighbourhood, as defined by some 'length' in the Euclidean Space. The algorithm proceeds with two stages of coarse matching and then finally uses pre-computed modifications of 'rubber masking' data, i.e. possible deformed orientations of a matching pair of lines, to arrive at a fine, exhaustive match. Applicability of the solution is to real-life Information Systems and real-time applications. It has been interesting to note that not more
than eighth grade Euclidean Geometry is employed. Of particular interest and motivation for the solution has been the vexing chance finger-print matching problem. It is a folk axiom (empirical theorem(?)), in the forensic area, that no two finger-prints are alike. A finger-print can be mapped onto the Euclidean plane and then it effectively consists of two-dimensional curves. The learned forensic area has given importance to the singularities present in the 2-D picture of the fingerprint and has classified the same. It can be said that no two fingerprints can have more than 12-15 singularities matching; these singularities are called minutiae and upto 200 exist in a print. In a 'full' print, two distinguished minutiae (points in the digitised set of points on the plane) are present and are called the core and delta. Given two prints, with the cores and deltas present and known in the digitised picture, it is merely a matter of aligning the cores and deltas, rotating the other minutiae (an easy proposition when polar coordinates are used) and verifying the matching of Pythagorean lengths upto the threshold of matching. Given a print and a master-file of prints, the computation identifies matches upto a threshold of matching, to be further verified (finally) by the human forensic expert. Given cores and deltas, HGA is usefully employed to match at the rate of a couple of thousand prints a minute, on a simple PC platform (386/486/Pentium). The vexing problem arises if the core and delta are not known, as in a partial-print known as a chance-print. The traditional solution is to try all possible rotations, translations and reflections, before
rejecting a master-print as a possible match, and this involves a high computational overhead. The complexity of the computation is in practice much more, as tolerances in the matching have to be considered, owing to 'smudging' and the digitisation process. The algorithm obtained by the fourth and fifth applications of HGA is seen to be fast in rejection and allows the matching of a chance-print against a sequential file of 3*10**5 prints (representing the current fingerprint database size of an average State in India), in very much less than an hour, on simple computational platforms of simple PCs (286/386/486/Pentium). The applicability of HGA to alternative hardware platforms in the above is considered in the development of a systems program. As the solution, as in most image processing applications, lends itself easily to parallel processing speed-ups, a PC/LAN with Novell Netware (with MS-DOS) is considered to be a multi-processor configuration onto which the algorithm is mapped using HGA. The physical hardware employed is a PC/LAN with diskless nodes, the terminals connected to the nodes being removed after installation of the software. The PC/LAN with its diskless nodes, without terminals for the nodes, is then moved to a rack-mounted or desk-side sized cabinet; thus portability of the entire system, along with the database, is achieved, and it can be geographically mobile on small vans/trucks. Thus it is seen that HGA used with a PC/LAN is essential technology for developing economies.
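With the core and delta known, the alignment described above (translate the core to the origin, rotate the delta onto the positive x-axis, then verify Pythagorean lengths upto a threshold) might be sketched as follows; the greedy pairing and the threshold value are assumptions for illustration, not the production algorithm, and tolerances for smudging are not modelled.

```python
import math

def normalise(core, delta, minutiae):
    """Translate the core to the origin and rotate so the delta lies on
    the positive x-axis; return the minutiae in that canonical frame."""
    ang = math.atan2(delta[1] - core[1], delta[0] - core[0])
    c, s = math.cos(-ang), math.sin(-ang)
    out = []
    for (x, y) in minutiae:
        x, y = x - core[0], y - core[1]
        out.append((x * c - y * s, x * s + y * c))
    return out

def prints_match(a, b, threshold=0.05):
    """a, b are (core, delta, minutiae) triples.  Greedy Pythagorean-length
    check in the canonical frame, upto the matching threshold (invented)."""
    pa = normalise(*a)
    pb = list(normalise(*b))
    for p in pa:
        hit = next((q for q in pb if math.dist(p, q) <= threshold), None)
        if hit is None:
            return False
        pb.remove(hit)
    return not pb
```

A full-file search would apply `prints_match` print by print, rejecting fast; the hard chance-print case is precisely the one where `core` and `delta` are unavailable and this shortcut cannot be used.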

The Sixth application of HGA is a summary of applications of the method to pioneering compiler
developmental efforts of major real-life programming languages, in an industrial environment, in the seventies and eighties in India, using moderately reliable computer systems. The yield of HGA has been, first in India, experimental/commercial compilers/interpreters for PASCAL (sequential and concurrent), APL (LL(1) syntax directed compiler), CORAL-66 (employed in real-time applications), BLISS (experimental implementation), subsets of ADA, subsets of CHILL, FORTRAN-IV, -77 (scientific applications); LISP, SNOBOL, PROLOG (for Computer Science education); subsets of PL/1 (experimental); ALGOL-60 (commercial); C (experimental and limited commercial use); etc. The HGA developmental efforts used LL(1) uniformly, more in the form of top-down without backup, with one symbol look-ahead. Extended-BNF formulation of the syntax was employed and such features as did not fit the LL(1) technique were coerced to do so by (ugly?) semantic routines. The initial HLL version was worked out by the author in PASCAL with little or no debugging; and testing of the same and hand-coding to assembly language (at times FORTRAN) was done by M.Phil dissertation work (of the Hyderabad Central University) and trainees in Computer Science and Engineering and Information System Engineering. The hardware platforms were the slowly stabilising TDC-316 and System 332 (a 32 bit medium-large copy of IRIS-55 of CII-Honeywell Bull, from which (as Prof.J.Saltzer commented during his visit) very little software was inherited). Extensions of the studies of the applicability of HGA have been to various TWS, including the SLR(1), LR(1) and LALR(1) techniques.
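The uniform LL(1), top-down-without-backup style can be illustrated by a minimal recursive-descent routine for an invented Extended-BNF fragment, expr ::= NUM { '+' NUM }; one symbol of look-ahead decides every step (Python stands in for the PASCAL of the historical efforts).

```python
def parse_expr(tokens):
    """LL(1) recursive descent, one-symbol look-ahead, no backup.
    Returns the sum of the numbers, or raises SyntaxError."""
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def expect_num():
        nonlocal pos
        tok = peek()
        if tok is None or not tok.isdigit():
            raise SyntaxError(f'number expected at position {pos}')
        pos += 1
        return int(tok)

    total = expect_num()
    while peek() == '+':         # the single look-ahead symbol decides
        pos += 1
        total += expect_num()
    if peek() is not None:
        raise SyntaxError(f'unexpected token {peek()!r}')
    return total
```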

The Seventh application of HGA elaborates on the third, and shows a logical extension to bottom-up parsing techniques: where standard precedence techniques (and most generalisations of the same) consider constants in the parsing table, the ALCOR group considered algorithms in the parse table. Incorporation of error correction suggested augmentation of the algorithms with data structures, and hence encapsulation in an object is a logical extension of the generalisation. Thus an object in a table of objects is to be messaged based on the top of the parse stack and the incoming symbol. The necessity of having to deal with different abstractions of the problem at the same level, in the third application, generalises to the concepts of Data Abstraction and Polymorphism, in a very logical extension of ideas. The development of the TM parser, first without error-correction and then with incorporation of error-correction as a subsequent step (with syntactic and semantic backup), allows the development of the Inheritance concept. Thus we have the concepts of Data Abstraction, Polymorphism, Encapsulation and Inheritance, from the intrinsic ideas involved in the third application; and this generalises to the concept of OOP (Object Oriented Programming). The parse table of objects obtained contains, in effect, a data-base obtained based on the properties of the OG (Operator Grammar) modelling the syntax of the programming language considered, and this generalises to the concept of OODBMS (Object Oriented Data Base Management Systems). Thus a hypothetical point of view can be taken that OOP and OODBMS follow as logical generalisations of the
intrinsic ideas involved in the efforts of the ALCOR group. This generalises HGA to use Object PASCAL in HLL descriptions, especially in the Graphics Specialism. The Eighth application of HGA is in the non-traditional area of the applicability of real-life programming language (ALGOL-like, PASCAL-like or COBOL-like) descriptions, to aid popularisation of education and literacy in Automata Theory and the Theory of Computation in the Indian Environment. This applies HGA to (reliable(!)) abstract target machines. It was found that the use of the RAM and RASP turned out to be crucial in the training, instruction and mastery of the machine/assembly language instruction repertoire of a given processor. One can easily master the entire instruction repertoire, repeating elementary programming examples using HGA, by isolating universal subsets of instructions in the repertoire. In this, one takes abstract views of the instruction repertoire subsets: as dealing with a set manipulating machine or a Propositional Calculus machine at one end, and at the other extreme end, the full power of a Universal machine, a string manipulating machine, or a machine doing fixed and floating point arithmetic through algorithms. Such studies and training in a totally industrial environment were crucial in training assembly/machine language programmers, from the dilettante programmer to hard-core systems programmers, with proven success for a decade and a half. The TDC-12 and TDC-312 (Trombay Digital Computer, 3rd generation, 12 bit word length) employed octal machine language programming, over a range of applications in Controls, Communications and
Computers. An upgradation took place to assembly language only with the TDC-316, as the assembler (paper tape oriented on the TDC-12, -312) was too slow.

An extension of HGA, not too popular, was to use other abstract machines at the lower end of the hand-coding. While it was found that the RAM and RASP were directly useful, the equivalent abstract machines of the Chomsky hierarchy only served as cultural background, barring the finite automaton (which is everywhere). HGA, when used for elementary Computational Complexity theorems, is effective, but laborious. It was concluded that while useful for introductory and moderate theorems (results), there is no other way to study, train, think, teach or do the area except as propounded by the eminent Prof.J.E.Hopcroft (1969, 1979). Similar attempts are seen recently in a book by Prof.Wood and similar conclusions can be drawn. This arises primarily due to the power and effectiveness of the use of 'intuitive arguments' in the area. A practical application of Automata Theory and HGA is the use of the practical algorithm that arises, from an application of Cook's Theorem on two-way deterministic pushdown automata, to the determination of the largest common subsequences of two given sequences. Using HGA, the algorithm is used as a scaffolding for a practical algorithm for the fourth and fifth applications, by employing a string encoding of the problems considered.
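The common-subsequence computation mentioned above can be sketched in its standard dynamic-programming form (the 2dpda-derived algorithm itself is not reproduced here; Python is used purely as the sketch HLL):

```python
def lcs(a, b):
    """Length and one witness of a largest (longest) common subsequence,
    by the standard dynamic-programming recurrence."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    # walk back through the table to recover one witness
    out, i, j = [], m, n
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return dp[m][n], ''.join(reversed(out))
```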

The Ninth application of HGA is in the non-traditional use of COBOL (the only language available at times, to quite
a few programmers for quite some time (over a decade, in the environment)), as a vehicle for Computer Science and Information Systems Engineering Education, and thus attempts to show the applicability of HGA in the Business Software area. Though FORTRAN-IV, -V have been decried by the eminent Algolists, COBOL-74 has many features to aid structured programming, and step-wise refinement can be made visible when programming in COBOL-74. An attempted mapping, successful from the feedback obtained for over a decade and a half, has been to use the algorithms in Prof.D.E.Knuth's monumental works (The Art of Computer Programming, vols. I and III) with COBOL-74 and HGA. The environment, as elsewhere those days, allowed little memory, and the COBOL-74 translator gave poor optimisation and extensive overhead (both verbosity and space-time requirements); HGA was essential to run the final programs in assembly language. A practical application of COBOL with HGA is the use of the same in taking the solutions to the fourth and fifth applications and embedding the same in a larger IS program to set up and manipulate a data-base of fingerprints.

The advent of COBOL-74 made substantial applications in Information Systems possible. However, the high unreliability of the Hardware/Software combine required HGA, as extensive patching was required both in the source code and in the target code. (Reliability was achieved only by the mid-eighties). There have been HGA uses in the sense of conversion of COBOL packages to assembly language for efficiency. Today, however, the COBOL programmer views his
COBOL machine as being transparent (as elsewhere). The Indigenous Period of the Indian Computer Industry, built on the philosophy of self-reliance, on the TDC-platforms, came to a logical and physical end by 1990. In 1987 an Information Systems Life Cycles Audit was initiated and undertaken by the author, to study the Product, Project and Information Systems Life Cycles on the TDC machines (about 1000 installations). The said Audit is now summarised and extrapolated, using a reverse mapping of the Industry Structure Model (ISM) of the British Computer Society (BCS) (now accepted by the European Commission), to obtain a study of the Environment under which the above Software Life Cycles had to operate.

The Tenth application of HGA is a summary of the attempts made to apply the technique to the development and management of projects in Communication Software, real-time Software and Parallel Processing Applications. The Communication Software applications have been oriented to radar development for military and civilian use (in assembly language with HGA historically, and currently involving upto 50,000 to 1,00,000 lines of 'C' code); programming of SPC telephone exchanges (historically in assembly language with HGA, REMAL, and currently in C and C++); and applications in distributed architectures in SCADA (Supervisory Control And Data Acquisition) applications (historically in machine and assembly language with HGA, real-time FORTRAN and CORAL-66, and currently in 'C').

The Parallel Processing applications of HGA could have been used in the development of Operating Systems on mini-computers, though in actuality assembly language was directly employed. The O/S implementations were by hard-core, experienced and competent assembly language programmers, to whom HGA did not appeal. The consolation is that HGA, with RAMs and RASPs, was employed to make them learn assembly language programming in the first place. (This despite the fact that HGA is useful for the kernels of operating systems, as shown elsewhere).

The tenth application of HGA is (in part) in real-time application software and deals with three aspects of applications. One is the use of Simulator Software for Nuclear and Thermal Power Plants (running plants as opposed to designing them). The second is the development of Human Machine Interfaces for Nuclear Power Plants. The third area considered is SCADA (Supervisory Control and Data Acquisition) applications in:
a) Gas/Oil flow as distributed over hundreds of miles (distributed computer architecture) to and from refineries.
b) Distribution of energy in Railway Traction after generation.
c) Energy flow in steel plants (comparatively geographically localised).
d) Water distribution over hundreds of miles.

Traditional applications used HGA partly, but current distributed architectures tolerate the use of 'C' as HLL directly, bypassing the need for assembly language, and HGA is used only as a model of successful project management. An ongoing project of the application of HGA is the grid isolation problem in power distribution in India. The direct application of HGA in the tenth application has been in the development of compilers for ADA (real-time Software) and CHILL (Communication Software). These are experimental implementations. A special language, REMAL, using HGA, was designed and implemented under the direction of the author in the seventies for communication software. A natural application and crucial use of HGA, with parallel processing concepts, arises in the area of Scientific Software, in the indigenous environment with its curious economy and way of life. The human programming effort is cheap, the degree of materialism is low, a PC is not a personal computer but a shared facility, and a workstation is rare, owing to the relative levels of salaries/income. However, the LAN as a centralised facility is common, and it can be considered to be a multiprocessor configuration. This allows the polyalgorithms that naturally and intrinsically arise in the area of Numerical Methods to be mapped on to the nodes of the LAN. The use of HGA allows the complexity/sophistication of the nodes of the LAN to be lower than if an HLL is directly employed, and this has a multiplicative effect in the savings in the cost and sophistication of the LAN. The high cost of copyrighted imported software will, as the environment develops, lead to

indigenisation of the software and the use of HGA will lead to substantial savings. An elementary case in point has been use of HGA in the polyalgorithms associated with the determination of the roots of polynomials, using FORTRAN as HLL. Another line of application is the practical use in industry, of the algorithms resulting from the exploding studies in recent years of parallel programming speed-up of Numerical Methods algorithms. When the parallel algorithms are augmented by the use of HGA, the substantial savings in the cost of the LAN allows practicality of commercial indigenous scientific software. By generalisation it is seen that the applicability of HGA is to all developing economies.

The short-term future directions considered are the anticipated improvements in the capture of the Information Resource in the Indian Environment, which will enable extensions of HGA to more sophisticated Information Systems Life Cycles involving substantial use of the Database Specialism, extending HGA to 4GLs (a case in point being the ongoing use of ORACLE in front-office bank computerisation efforts). At present, ethical, cultural and traditional constraints make the capture of the Information Resource the main bottleneck in India, in the use rather than the applicability of computers. Increased industrialisation and competition will demand more cosmetic applications, extending the use of HGA to the Graphics Specialism (a case in point being the ongoing cosmetics in military radar application software), with associated extensions of HGA to OOP. The long-term future directions are in the applicability of HGA to the Knowledge Engineering Specialism (a case in point being ORACLE-based Knowledge bases in medical applications); at the lower end of the handcoding, in the non-traditional non-Von Neumann Architectures (a case in point being the use of systolic arrays for Image Processing) and in non-Electrical-Engineering-based technologies (a case in point being fingerprint processing with Optical Computers). The advent of Man Machine Interfaces (MMI) makes possible studies of HGA in non-traditional computation specifications at the higher end of the hand translation, i.e. here we move from the HLL specification/representation to the computation specification (cases in point are diagnostics, comments, computation specifications and results-of-computation representations using pictorial and audio MMI). As an indication, an overdone cosmetic application in elementary Civil Engineering is outlined: an Estimation and Costing Package oriented to middle-class Indian housing projects, employing pictorial, audio and text input/output in the 14 major languages of India (a multilingual interface with a single machine translation).

The role of HLL descriptions turns out to be crucial, and in principle the full (total) verification of the software is the key factor. In practice, however, it appears that the observation made for practical compiler writing, that formal techniques need not be used directly but only help to guide one's thinking, is also generally applicable to the total verification of HLL specifications and descriptions.

Thus, in principle, HLL description verifications in the non-traditional computation specifications that advances in Human Machine Interfaces make possible form a useful and crucial study. The paucity of Information Systems Audit and Quality requirements so far will, when the environment develops, require major applications of verification techniques. The paucity of traditional requirements in scientific applications (Numerical Methods Software), in the Electronic Design Automation (EDA) area and in the practical scientific, engineering and industrial environments has led only to cursory studies in the area, and these are pointed out as future crucial applications of HGA. One could take the point of view that programming in any computing environment, in non-trivial scientific and industrial application areas, proceeds in four distinct stages: first, machine language programming with HGA; then assembly (macro-assembly) languages with HGA; next, a logical development employing a well-understood and implementable mixture of HLL features and assembly language with HGA; and finally stabilising into the use of a totally relevant and accepted HLL (provided the overhead is socially, economically and technologically tolerable). Historically, the computer pioneers in the West did not have wide access to HGA, as no HLL existed (it was being invented); but in developing economies and upcoming technologies, the well-understood HLL area can be used, along with verification techniques, to develop the relevant software until the relevant and acceptable HLL evolves, relevant translator-writing techniques advance, and affordability and availability of technology allow a relevant HLL to be used directly (two cases: CHILL for Communication Software development and ADA for real-time applications). The conclusion drawn from the above is that HGA is a very valuable tool, not only in unreliable computing environments, but over a wide range of applications and platforms. Many times it is the only strategy one has access to: one thinks at the HLL level, but the requirements of the systems program may have timing/complexity constraints to satisfy, and this involves going down to the machine level, with the relevant variations of hardware platforms, depending on available and affordable technology. The view of the pre-eminent Prof. C.A.R. Hoare, FRS, that the topic and methodology are absolutely up to date and relevant for the future, lends credence to the above point of view. The opinion of the pre-eminent Prof. E.W. Dijkstra, conveyed to the author, that the above experience in HGA is sufficiently extensive and varied and should (if well written) prove instructive, establishes the validity of HGA as a crucial strategy and tool for quite some time to come.

The main point made is that HGA, in some primitive, pristine sense, allows a platform of views that dons a perhaps pretentious and illusory, but theoretically sound and practically viable, mantle: attempted gestalt views of computation specifications, representations and implementations, over a range of abstract and real computational technologies and platforms, as being merely transformations of representations of effective computations. Finally, the historical motivation to use Hoare's/Hansen's method, owing to environmental constraints, is given and, as an indication and an elaboration of the historical environment, the TDC-316 ALGOL-60 Translator development is outlined.


The primary acknowledgement is due to Prof. C.A.R. Hoare, FRS, Oxford University, for his encouragement based on a draft of the synopsis of this thesis. The strong favourable encouragement and inspiring positive comments on a preliminary outline of the work, from the point of view of exploring a general software engineering technique, from Prof. Dr. Edsger W. Dijkstra and Prof. P. B. Hansen are gratefully acknowledged. The guidance of Prof. W. A. Wulf on the basics of writing a thesis on programming methodology and his positive encouragement in the writing of this thesis have been invaluable. The encouragement received in a response from Prof. Juris Hartmanis, Cornell University, on the results of the Euclidean pattern matching problem, from the point of view of a general result of general applicability in Computational Geometry applications, has been a primary motivating force. The motivation provided by Prof. M. R. Williams, Professor of Computer Science and Assistant Editor-in-Chief of the IEEE Annals of the History of Computing, by inviting a report on the Indian ALGOL-60 implementation and the associated software engineering environment for publication in that reputed journal, is gratefully acknowledged. Also acknowledged are the actions of Dr. William Aspray, Director of the Center for the History of Electrical Engineering of IEEE, and of the custodians of the archives of the National Archives of Electrical Science and Technology (NAEST) of the IEE (UK) and of the Charles Babbage Institute for the History of Information Processing, in recording the said software engineering environment, which initiated the study and applications of the Hoare/Hansen Generalised Approach (HGA) in the indigenous environment. Grateful acknowledgement is made to Prof. David Gries for posing the implementation problem of incorporating automatic error correction into the Transition Matrix syntax analysis technique, a relatively unexplored technique of general practical applicability in compiler construction and compiler-compilers.

The fundamental guiding philosophy of Prof. J. Das, DSc, FNA, formerly of the Indian Institutes of Technology, Kharagpur and Delhi, that software should not be considered without its hardware, is acknowledged and has been one of the motivating forces for generalising the approach suggested by Prof. P. B. Hansen.

The guidance of Prof. John E. Hopcroft, Prof. R. W. Constable, Prof. Gerard Salton, Prof. Ellis Horowitz, Prof. Alan C. Shaw, Prof. David Gries, Prof. J. J. More, Prof. R. Williams and Prof. R. W. Conway for inculcating and instructing me in the principles of computer science is acknowledged.

The invaluable comments of Prof. J. Saltzer of MIT, Prof. John W. Carr III of the University of Pennsylvania, Prof. C. A. R. Hoare of Oxford, Prof. P. B. Hansen, then at Cal Tech, and Prof. R. W. Burstall, on computer science during their visits to India, which played a guiding role in the development of this thesis, are acknowledged.

The comments of Prof. S. K. Sahni, FIEEE, of the University of Minnesota, on the algorithm dealing with the connectivity of dynamically varying graphs, which arose in the context of triggering a real-life nuclear reactor, are gratefully acknowledged. The inculcation of good programming principles in the early seventies by Prof. Bernd Krieg of Germany, Prof. Donald Johnson, Prof. Steven Muchnick and Dr. N. Gehani of Bell Labs is gratefully recalled.

The constructive discussions in the early seventies on Computability with Prof. Harry Hunt III of Harvard, Prof. Kurt Mehlhorn of Germany, Prof. Janos Simon of the University of Brazil, and Prof. Thomas Szymanski of Princeton are gratefully recalled and acknowledged. The encouragement and opportunity to write this thesis, provided by Prof. Dr. I. Gopal Reddy, Vice-Chancellor of the Jawaharlal Nehru Technological University, is respectfully acknowledged.

The guidance and comments on this work by Dr. S. Ramani of NCST, India, Prof. P. V. S. Rao of the Tata Institute of Fundamental Research, Prof. D. V. R. Vittal of Osmania University and Prof. M. N. Seetaramanath of Andhra University have been basic guiding and motivating factors. The strong motivation and environment provided by Dr. S. Srikantan, PhD (Moore School), FCSI, have made the work reported in this thesis possible. The motivation provided by the International Federation of Training and Development Organisations (IFTDO), in inviting an article on the Information Systems Life Cycles Audit outlined here, along with Sri Reggie Thomas, BTech (IIT), FGTS (Chief Manager, State Bank of India) and Sri A. Vijaya Kumar, BTech (IIT), FGTS (Additional General Manager, State Bank of India), is acknowledged. In this connection the help of Sri P. V. L. N. Murthy (Chief Manager, New South Wales Bank of Australia), Sri Ch. Sreenivasa Rao (Chief Design Engineer, SGS Thomson, Singapore), Sri Subrata Sen Gupta (Chief General Manager, Philips India Ltd.), Sri T. Ramamurty (General Manager, Nizam Sugars) and Sri T. V. Subba Rao (Director, Finance, Singareni Collieries) is acknowledged.

The constructive comments of Sri. T. K. S. Murthy, formerly Group Director, Bhabha Atomic Research Center and Prof. P. Narayanan of Bombay University on the thesis have been invaluable.

The motivation, guidance and encouragement to complete the work due to Dr. C. Rao Kasarbada, Chairman and Managing Director, Electronics Corporation of India Limited (ECIL), and Dr. E. V. R. Rao, Director (Technical), ECIL, are acknowledged.

Sri V. H. Ron, General Manager, ECIL has provided the environment and motivation to complete the work and his encouragement has been invaluable.

The strong continuous encouragement of Sri Dilip Kumar, Additional General Manager, ECIL is acknowledged.

The formal critical guidance of Dr. G. Ramakrishna, PhD (Saha Institute of Nuclear Physics), is acknowledged.

The driving force of Prof. G. R. Babu, Director, JNTU has been one of the main forces behind undertaking the completion of this thesis.

The motivation, help and constant encouragement provided by Sri M. Ramakrishna Rao, Chief Legal Adviser, Central Bureau of Investigation, India, are acknowledged. The Euclidean pattern matching problem was first brought to my attention, from the forensic area, by Sri D. V. Rama Murthy, BTech, and Sri R. Raghuveera, BTech; thanks are due to them. The applicability of the problem in Geographic Information Systems was highlighted in valuable discussions with Sri P. Dharm Veer (Senior Scientist, Survey of India), and its applicability to missile guidance was understood through valuable discussions with Sri S. Sridharan (Chief Scientist, Defence Research Development Labs); due acknowledgements are made to them.

The guidance, motivation and help in the applications of the algorithms by Sri T. Sri Ram, General Manager, Indian Space Research Organisation (ISRO), India, are gratefully acknowledged. The help of the Library and Information Science Officers, in studying the control point matching problem in the areas of computational geometry, tomography, cartography, remote sensing, molecular chemistry, biology, robot vision and guidance, bio-metrics in the forensic area, signature verification etc., at the library and information centers of the Indian Institutes of Technology (Kharagpur, Delhi, Bombay and Madras), the Nizam Institute of Medical Sciences, the Survey of India, the National Remote Sensing Agency, the Indian Institute of Chemical Technology, the Center for Cellular and Molecular Biology, the Center for Artificial Intelligence (University of Hyderabad), the National Police Academy and the State Bank Computer and Communication Institute, is gratefully acknowledged. Special thanks are due to Ms. T. Swarna, Information Officer of the Bhabha Atomic Research Center Library, for providing a large number of articles.

Acknowledged are the strong encouragements of Sri S. R. Vijayakar (former Secretary, Department of Electronics, Government of India (GOI)), Sri S. V. Kasargod (former Director, Electronics Corporation of India (ECIL)), Dr. R. Narayanasamy (former Executive Director, ECIL), Dr. N. V. Koteswara Rao (Additional Director-General, National Informatics Center, GOI), Sri G. R. Pai (former General Manager, ECIL), Prof. R. V. S. S. Avadhanlu (Professor, Nizam Institute of Medical Sciences), Dr. B. R. Sastry (Professor and Director, Board of Studies, Siddhartha Engineering College), Dr. Sridhar Mitta (Vice-President, Wipro Information Systems), Sri V. K. Premchand (Chairman and Managing Director, Frontier Information Systems Ltd.), Dr. A. Lakshman Rao (Vice-President, Wipro Information Technology), Col. V. S. M. Sharma (former General Manager, ECIL), Col. B. S. Nagendra Rao (former Director, ECIL) and Dr. B. G. Siddharth (Director, B.M. Birla Science Center).

The guidance and motivation of Prof. L. C. Siva Reddy (Department of Computer Science, Osmania University), Prof. Dr. P. G. Reddy (former Dean, Hyderabad Central University), Prof. P. Subba Reddy (Director, Computer Center, Osmania University) and Prof. M. R. K. Reddy (Director, JNTU) are acknowledged.

I would be failing in my duty if I did not convey that the principal driving forces seeing me through my career have been my wish to live up to the expectations of my father, the late Sri Tenneti Krishna Rao, IOFS, and of my esteemed Professor, Dr. Juris Hartmanis of Cornell University, who continuously inspired me and goaded me towards achievement. The opportunity to experiment with HGA educationally, due to Sri V. Premsagar Rao, FIETE, and Sri M. Madan Mohan, Distg. FIETE, at the Institution of Electronics and Telecommunication Engineers (IETE), is acknowledged.

The strong motivation to proceed with the work given by the late Prof. S. V. C. Aiya, FNA, FNAE, CEng, FIEE and the late Prof. K. Sreenivasan, FNA, FNAE, CEng, FIEE (both formerly of the Indian Institute of Science, Bangalore, India) is gratefully acknowledged. Thanks are due to a host of colleagues, students and trainees for well-meaning advice and motivation in carrying out this work. Among these Sri P. Appaji Rao (Head, Telecommunications Division, ECIL), Sri B. Ram Prasad, Sri M. Hariprasad Rao, Sri N. R. Krishnan (Senior Manager of Engineers India Limited), Sri N. Renukachary and Dr. D. Nagaraja are specially mentioned.

Thanks are also due to my daughter Kum. Suchaita, aged seven, and my wife Ms. T. Jyoti, for putting up with the year of tantrums associated with the writing of this thesis.

Thanks are due for the advice and help of Sri V. Kim Reddy (Senior Documentation Officer, ECIL) and Mr. Samuel Raj (of ECIL) in the preparation of the thesis.