Thursday, 9 August 2012

Industrial Engineering


Managing the IE (Industrial Engineering) Mindset: An investigation of Toyota’s practical thinking shared among employees

Abstract:
Purpose: The goal of this work was to investigate modern managerial practices and to determine whether Toyota shelters itself from these newer practices or embraces them, as most believe.

Design/methodology/approach: This work utilizes a new form of data mining, Latent Semantic Analysis (LSA), to analyze an organization's ideal management practices.

Findings: This work shows quantitatively that TPS favors earlier versions of industrial engineering over the optimization techniques available today.

Originality/value: The use of data mining to analyze organizational management practices.


Introduction

     What is unique about Toyota's system is not any single piece of TPS, but how the pieces are combined to bring out something new, different and very difficult to imitate. This work argues that Toyota's management system is a richly interconnected set of parts whose relationships matter more than the nature of the parts themselves. Even if the parts themselves can be identified, their relations are often lost, and with them the meaning of the system. Research on TPS must therefore follow the same kind of systems thinking to discover how TPS emerges from the way the parts are organized in the system. Holism, rather than reductionism, can provide a complete solution rather than a partial one. Historically, practitioners have been concerned with what Toyota is doing now rather than what Toyota was doing before TPS existed. Pioneers like Taiichi Ohno, the father of TPS, and one of his close associates, Shigeo Shingo, an industrial engineering consultant to Toyota at the time, receive less recognition for developing TPS.
          In his book, the Toyota Production System (Ohno, 1988), Ohno firmly held that TPS is simply a form of industrial engineering (IE) aimed at reducing cost through systematic study. By treating everything as a process, Ohno and Shingo built the interconnections of TPS one by one and, more importantly, passed this industrial engineering way of thinking on to future generations. The purpose of this work is to evaluate and quantify some of Toyota's thinking styles as they relate to Ohno's traditional view of TPS. It is speculated that one of the ways Toyota has been able to develop such a holistic approach to TPS is by passing down, from generation to generation, a type of thinking similar to industrial engineering. The secondary goal of this work is to apply a new form of management science, dimensional reduction analysis, to highlight and quantify managerial preferences. In this work the link between TPS and IE will be established and analyzed to determine which trend of IE practices is utilized to maintain the TPS structure.


Literature review

   Ohno believed that the quickest way to catch up with America was to import American production management techniques and business management practices. Toyota studied industrial engineering (IE), which by Ohno's account can best be compared to the Toyota Production System: a company-wide system, tied directly to management, for systematically lowering cost and raising productivity. Shingo also viewed TPS as a way of thinking that addresses plant improvement. He believed that management should possess a set of fundamentals closely related to industrial engineering as a way to spread and teach the Toyota Production System.

1     Industrial engineering

An industrial engineer is one who is concerned with the design, installation and improvement of integrated systems of people, material, information and equipment, drawing on specialized knowledge and skills in the mathematical, physical and social sciences together with the principles and methods of engineering analysis (Salvendy, 2001). Over the years industrial engineering has drawn upon mechanical engineering, economics, labor psychology, philosophy, and accountancy in an effort to bring together people, machines, materials and information (Saunders, 1982). If industrial engineers had to focus on one aspect of their field, it would be productivity improvement: the total elimination of waste by increasing efficiency through cost reduction (Going, 1911).

Industrial engineering covers not only the technical aspects of systems but also systems relating to management. Anderson proposes that industrial engineering is one of the primary drivers for linking the needs of employers to the needs of employees. Employers want industrial peace, reduction of cost, higher efficiency and improvement in quality. Employees want steady work, higher wages, better personal relations with their supervisors and good working conditions. By utilizing industrial engineering techniques, management can develop, evaluate and improve the wants of both groups (Anderson, 1928).

2     Scientific management

Scientific management was never a matter of how much a person could do in a short burst of speed, but of a safe and comfortable working pace that could be sustained day after day (Mogensen, 1935). The long-term prosperity of the employer cannot exist unless it is accompanied by the prosperity of the employee (Gilbreth, 1973). Scientific management also included employees, to some degree, in decision making.

3     Skill sets of industrial engineers in the era of scientific management
IEs were mostly concerned with defining processes, identifying problems and establishing standard operations (Harrington, 1911). Some of the most practical techniques employed by IEs were direct observation and work sampling (Staley & Delloff, 1963). The industrial engineer was also proficient in charting: by breaking processes down into smaller units, the IE could analyze work flow by examining process steps visually. Lastly, the IE was concerned with running trials to test new productivity ideas. By testing factors one at a time and sequentially changing those parameters based on previous trials, the IE could speed up decision making while focusing on improvement.
4     Industrial engineering today
Today, industrial engineering has become a more integral part of the organization. With the advent of the high-speed computer in the 1960s, industrial engineering evolved into a hard discipline in which data can be recalled at any time and decision making can be improved through the use of models and simulations (Saunders, 1982). Computers have given industrial engineers the ability to analyze and optimize complex systems throughout the organization (Katzell et al., 1977). Industrial engineering offers several sub-specialties such as human factors, job design, labor psychology and systems engineering. It is now not uncommon for IEs to work on planning systems, supply chains, accounting systems and organizational policies.

5     Skill sets of industrial engineers in the 21st century
The skill sets of the modern industrial engineer are much different from those of the scientific management era. Most modern IE skill sets emphasize rapid organizational change instead of spending time stabilizing and documenting current operations. Techniques such as Process Design and Re-engineering (PDR) can produce radical change by focusing on end-to-end processes. PDR assumes a clean-slate change and suggests skipping documentation of existing processes, since it limits the vision of the design team with nothing to be gained (Taylor et al., 2001). Popular tools for studying complex organizational factors in PDR are Experimental Design (ED) and Design of Experiments (DOE) concepts. These techniques allow industrial engineers to understand the complexities of the business and the interacting factors acting on and within the organization before leaping toward a new state (Czitrom, 1999). Six Sigma is a systematic method for strategic process improvement that relies on statistical methods to make dramatic reductions in customer defect rates (Tanco et al., 2009). Initially established by Motorola in 1987, Six Sigma has been popularized as a new form of business management strategy (Jugulum & Samuel, 2008). Lean Sigma is another improvement methodology being employed by industrial engineers. Proponents suggest that by integrating statistical methods with the ideas of work simplification, a common language can be developed to help organizations respond to changing markets while eliminating defects (Wedgwood, 2007). Six Sigma provides the detailed statistical study to optimize projects, while lean is usually implemented through a series of short, focused kaizen blitzes (Jugulum & Samuel, 2008).
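To make the DOE concept concrete, here is a minimal Python sketch of a two-level full factorial design of the kind these techniques build on; the factor names and levels are hypothetical, chosen only to show how every combination of settings is enumerated before committing to a new state.

from itertools import product

# Hypothetical two-level full factorial design (2^2 = 4 runs): every
# combination of factor levels is tested, exposing interaction effects
# that one-factor-at-a-time trials would miss.
factors = {
    "cycle_time_s": (30, 45),   # assumed low/high levels
    "operators": (1, 2),        # assumed low/high levels
}

names = list(factors)
for run, levels in enumerate(product(*factors.values()), start=1):
    print(f"Run {run}: {dict(zip(names, levels))}")

Each run's output measure (defect rate, throughput) would then be compared across the design to estimate main and interaction effects.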
6     Has lean followed the trends of industrial engineering?
The lean community has followed many of the same trends and skill sets as the industrial engineering profession. Bicheno suggests that a kaizen specialist should be capable of performing value engineering in product design and development (Bicheno, 2000). Other work suggests the kaizen specialist should be able to perform cellular manufacturing, production flow analysis and supply chain infrastructure design (Askin & Goldberg, 2002; Srinivasan, 2004). In these contexts, the kaizen specialist is portrayed as a person who exists within an organization to advance lean concepts in highly specialized areas single-handedly. In other settings the kaizen specialist is also expected to work with employees through team-based worker participation activities often referred to as kaizen events. Compared to extreme Taylorism, where the IE function is responsible for telling workers what to do, kaizen specialists appear much nicer and softer. For example, Martin and Osterling state that these specialists should be armed with PowerPoint kick-off material, masking tape, whiteboards, post-it notes and kaizen team t-shirts (Martin & Osterling, 2007).

7     Is Toyota sheltered from modern industrial engineering?
IE handbooks today emphasize system optimization, advanced computational mathematics and rapid overhaul within organizations. Interestingly, Toyota’s approach to TPS appears to be highly shielded from modern trends in the industrial engineering profession and mainstream business improvement methodologies. In 1935 Sakichi Toyoda, the founder of Toyota, developed five basic teachings based on the Toyoda family work ethic. His teachings emphasized practicality, good study habits and a healthy, homelike work environment. In the 1950s Taiichi Ohno initiated a new type of production system (i.e. TPS) with an emphasis on standardization, just-in-time, jidoka and kaizen (Ohno, 1988). Ohno’s shop-floor focus and his practice of testing practical ideas immediately encouraged learning by getting employees to confirm failure with their own eyes (Ohno, 1988b). Ohno held that management should join subordinates in experimentation and that every supervisor must have the ability to teach. In 2001 Toyota continued this practical view of TPS when Fujio Cho, then President of Toyota Motor Corporation, released the Toyota Way, a set of managerial values to strengthen organizational thinking as it relates to work (Cho, 2001). In 2005 Cho re-issued the company’s 8-step problem solving process, named Toyota Business Practice (TBP), in an effort to share a common way of thinking about problems in the workplace (Cho, 2005).

Research approach
The overall approach in this analysis is to analyze Toyota’s organizational documents by applying statistical data mining. This work will use Latent Semantic Analysis (LSA) to study Toyota’s industrial engineering techniques, systems and managerial practices. LSA is a theory and method for extracting and representing the contextual usage and meaning of words and phrases through statistical computation applied to text (Landauer, 2004). LSA is based on Singular Value Decomposition (SVD), a mathematical matrix decomposition technique related to factor analysis. In LSA, repetition does not imply relevance, because LSA considers both local and global weights and normalizes them. LSA is, however, not well suited to distinguishing similar terms that vary in context. The study of IE practices can be more quantitative and precise using LSA than with traditional techniques. Existing techniques are largely subjective and qualitative. Current methods rely on interviews that assume participants’ accounts are a fair reflection of what has actually occurred. Subjects also tend to inflate results due to attribution effects (Gerhart, et al., 1999). Attribution theory describes the tendency to make causal explanations about the world based on internal individual beliefs.
Questionnaires and surveys are also well-accepted research techniques. Unfortunately, research shows that subjects’ responses are influenced by how questionnaires are formatted. Another popular technique is the use of informants (Gardner & Wright, 2009). An informant is a knowledgeable subject or employee inside the company who is used to measure the content and quality of the company’s system. Informants pose many problems in a study and are often a source of unreliable data. For observation techniques to be accurate, analysts need to make observations over a wide variety of persons and range of contexts. There are also many limitations to participant observation. Human behaviors are often emitted at a rapid pace that exceeds our ability to record them by informal means. Research has shown that behaviors are bundles of complex information (symbols, logic, emphasis, warmth, aggressiveness, syntax, humor), and the personal process of filtration suffers from biased conclusions, making it difficult to reach objective evidence (Biddle, 1979). LSA may prove an acceptable substitute for traditional research techniques; however, there are noted concerns in this new area of study. First, it is assumed that employees within the organization are aware of IE expectations and can perform them if asked. Second, that these expectations produce conforming behavior. Third, it is assumed that these expectations have been communicated by management throughout the organization. It is also unknown how management sanctions these expectations, that is, the positive or negative reinforcement for engaging in the desired behavior. These unknown areas provide many interesting opportunities for future research. Despite these shortfalls of studying IE corporate documents, inscriptions do represent a continuing existence and a more permanent record of how IE is being used.
Research methodology
A document-term matrix was created from numerous Toyota documents, such as the Toyota Way, the Toyota Business Practices, the team member basic training manual, the team member handbook, the role of the supervisor, standardized work training manuals, the process and system kaizen manual, and problem solving for managers. Tables of text corpus properties for the documents and the themes selected in the study are also used in the matrix. Next, the Singular Value Decomposition (SVD) algorithm is used to reduce the document-term matrix.

Interpretation of results in latent semantic analysis
The overall goal in LSA is to map the dominant semantic themes in a reduced dimensional space. The reduced dimensional space represents all word and document vectors in the semantic space or text corpus. This work maps the strength (i.e. magnitude) of each word vector and its rank to illustrate its level of dominance throughout the document collection. Rank 1 (lower order) is the most dominant rank, followed by rank 2 and so on. The singular value matrix indicates, through a scree plot (not shown), the optimal rank; ranks beyond the chosen “k” value are less dominant.
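As a minimal sketch of this pipeline, assuming Python with scikit-learn: a toy corpus stands in for the Toyota documents, a weighted document-term matrix is built, and a truncated SVD exposes the dominant ranks whose word-vector magnitudes this work maps. The documents and the choice of k = 2 are illustrative assumptions, not the study's actual corpus or rank.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
import numpy as np

# Toy documents standing in for the Toyota corpus (assumed, for illustration).
docs = [
    "go and see the problem at the source",
    "standardize the work before kaizen",
    "gather reports and database records first",
]

# Weighted document-term matrix: tf-idf combines local and global weights,
# so repetition alone does not imply relevance.
vectorizer = TfidfVectorizer()
dtm = vectorizer.fit_transform(docs)

# Truncated SVD reduces the matrix to k dominant ranks (k = 2 assumed here;
# in practice a scree plot of the singular values guides the choice).
svd = TruncatedSVD(n_components=2)
doc_vectors = svd.fit_transform(dtm)

# The magnitude of each word vector across ranks indicates how dominant
# that term is in the reduced semantic space.
term_vectors = svd.components_.T
for term, vec in zip(vectorizer.get_feature_names_out(), term_vectors):
    print(term, np.round(vec, 3))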
Results
The figures describe how a manager at Toyota is expected to apply some of the same industrial engineering logic or thinking styles in the workplace. (For a detailed analysis of the figures, see http://www.jiem.org/index.php/jiem/article/view/293/252 )
Discussion
The figures evaluate how managers at Toyota are trained to assess the workplace. More specifically, when a manager begins to question or speculate about organizational activities or functions, there are two general approaches managers are trained to use to assess the current situation. A rank 1 data point in quadrant four would indicate that a manager's starting point in the investigation is to gather information by searching through a variety of sources such as reports, shift logs, e-mails and databases. Data points in quadrant two would mean that a manager's first instinct when inquiring into a workplace issue or problem is to see the problem first hand, meaning that it is more important to judge the situation personally without making assumptions or inferences about the actual situation.
Results suggest that Toyota strongly favors the scientific management approach to assessing workplace problems. Figure 4 substantiates quantitatively that Toyota trains its managers in genchi genbutsu, which in Japanese means "go and see". Managers are expected to go to the source to find facts and make correct decisions. The data also imply that management should not rely on other employees’ interpretations of the problem. It could be argued that Toyota's scientific management approach to information gathering is out of date, impractical and too slow for modern business. Not all andons lead to problem solving or genchi genbutsu visits by a manager; most andons are answered by a team leader to adjust or correct the process. Yet when problems do reach a point of escalation or severity, managers at Toyota are expected to see the problem first hand. If the problem solving team did not actually witness the problem first hand, the countermeasure trial would not be approved. This is important because data does not replace facts gathered through genchi genbutsu. Again, as the Gilbreths pointed out some 50 years ago, many charlatans got into time and motion study without understanding the purpose of scientific management. Likewise in lean, most practitioners place value on obtaining speedy results rather than achieving consistency, predictability and stability. Toyota views stability (i.e. standardization) as the prerequisite for kaizen. Toyota's preference for standardization over BPR demonstrates quantitatively that its managers follow an older view of the IE profession.
Conclusion
This work uses a new method for analyzing managerial practices. LSA was used to mathematically describe Toyota's approach to industrial engineering. The other goal of this work was to understand the similarities and differences between industrial engineering and Toyota’s management style. Early work indicates that Ohno and Shingo modeled much of their thinking on earlier versions of industrial engineering, namely scientific management. Up to this point, most work has described Toyota or lean methods as modern IE techniques. This research shows quantitatively that Toyota’s managerial practices today very much resemble the IE profession of the early 1900s. It is argued that Toyota has remained successful in applying TPS because it has sheltered itself from modern IE influences that seek to raise the competence of a single employee (i.e. the industrial engineer) rather than the basic thinking skills of all employees. Lastly, this work illustrates that Toyota has been successful in passing down scientific management practices from generation to generation. Interestingly, Toyota’s drive to “get back to basics” continues and remains a constant force among managers preparing future generations of Toyota employees.




Wednesday, 8 August 2012

MOBILE REMOTE MONITORING SYSTEM

Abstract - Low bandwidth has long made wireless internet unsuitable for telemedicine. However, with the advent of extended third-generation wireless as an economically accessible high-speed network, more opportunities are being created in telemedicine. This paper explores the opportunity created by the latest wireless broadband technology for remote monitoring of patients in the home.

I. INTRODUCTION
One of the first records of wireless communication in telemedicine was the use of a GSM (second generation) cellular network to transmit an electrocardiogram (ECG) from a patient in 2001. Since then there have been a number of developments aimed at improving the speed and capacity of networks. The main wireless technologies have been GPRS (2.5G), satellite and wireless LAN. The typical speed of a 2G system is 9.6 Kbps. The evolution of mobile telecommunication systems from second generation (2G) to 2.5G (iDEN 64 Kbps, GPRS 171 Kbps, EDGE 384 Kbps) and 3G (W-CDMA, CDMA2000, TD-CDMA) systems has provided faster data transfer rates, but these rates are still not fast enough for many telehealth applications. Satellite communication is another wireless medium which, despite high operating costs, has the advantage of worldwide coverage and a variety of data transfer speeds from 10 Kbps up to 100 Kbps. The latest technology in use in Australia is 3.5G, also called BeyondG and NextG. It is based on a High Speed Packet Access (HSPA) network, which consists of HSDPA (High Speed Downlink Packet Access) for the downlink and HSUPA (High Speed Uplink Packet Access) for the uplink, the uplink being slower than the downlink. 4G, e.g. Orthogonal Frequency and Code Division Multiplexing (OFCDM), is the future generation of wireless network, boasting potential speeds of up to 100 Mbps.
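To put these rates in perspective, the Python sketch below estimates how long one small video frame would take to send at each generation's nominal rate. The 6 KB frame size is assumed for illustration (the experiments reported later produce JPEG frames of roughly this size), and real throughput would be lower than these nominal figures.

# Time to send one ~6 KB video frame at nominal network rates.
rates_kbps = {"2G": 9.6, "GPRS": 171, "EDGE": 384, "3.5G HSUPA uplink": 1900}
frame_bits = 6 * 1024 * 8   # assumed frame size in bits

for name, kbps in rates_kbps.items():
    print(f"{name}: {frame_bits / (kbps * 1000):.2f} s per frame")
# 2G needs ~5 s per frame, ruling out live video; 3.5G needs ~0.03 s.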
Our research focuses on use of a 3.5G based system for home monitoring and control of drug consumption in patients such as those on narcotic replacement medication. The research work has three parts:
1. Define the requirements for monitoring and control and identify the limitations of the extended third-generation mobile network in this application.
2. Design and implement hardware and software for a Mobile Remote Monitoring System (MRMS) capable of monitoring patients through audio and video and controlling remote assessment and medication dispensing devices through command signals based on extended 3G.
3. Test the MRMS firstly in the laboratory and then in a controlled external environment.


II. MOBILE REMOTE MONITORING SYSTEM (MRMS)
The MRMS consists of a dedicated Server and at least one Client (Patient) side computer linked via cable or wireless broadband internet (extended 3G). A prominent feature of the system is its portability, so there is no location restriction. The system is designed to replicate the dosing conditions of a patient being dosed under the supervision of a clinician.

Fig: An overview of the remote monitoring system


The program accommodates both types of wireless modem, giving the user the flexibility to choose the appropriate modem based on network availability and cost.

1) The Client side comprises a laptop, a wireless modem, two webcams and a headset with a microphone. The Client has the following functions running on parallel threads:
a. Video communication: This function captures images from the webcams as well as receiving images from the Server and displaying them.
b. Audio communication: This function captures mono audio at 11025 Hz, 8 bits per sample, in 743 ms blocks. Both the video and audio data are sent to the Server by the User Datagram Protocol (UDP). (A quick estimate of the resulting buffer size follows the Server description below.)

2) The Server side comprises a computer, a USB webcam and a headset with a microphone. It is connected to the internet via broadband with a static IP. The Server has control of the Client (e.g. which webcam to use). The Server has the following functions:
a. Video communication: Unlike the Client, there is only one webcam at the Server.
b. Audio communication: Audio has the same specification as for the Client. Both video and audio data are sent to the Client by TCP or UDP.
c. Control Signal: This function allows the user to select between the alternative webcams at the remote end.
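As noted in the Client description above, the stated audio parameters pin down the capture buffer size directly; the short Python sketch below is a back-of-the-envelope check using only the figures quoted in the paper.

# Audio buffer size implied by the capture parameters:
# mono, 11025 Hz, 8 bits (1 byte) per sample, 743 ms per capture block.
sample_rate_hz = 11025
bytes_per_sample = 1
duration_s = 0.743

buffer_bytes = sample_rate_hz * bytes_per_sample * duration_s
print(f"~{buffer_bytes:.0f} bytes per capture")   # ~8192 bytes

The result, roughly 8 KB, suggests each audio block fits comfortably inside a single UDP datagram.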

B. System Methodology
The MRMS comprises two parties - the Staff (Server side) and the Patient (Client side). The Client sends images and audio to the Server, which has a fixed IP, using UDP. UDP, sometimes dubbed the Unreliable Data Protocol since it does not guarantee delivery of packets, is nevertheless suitable for sending video images, since the loss of a few packets matters less than timely delivery of the latest images. UDP is faster than TCP because it does not wait for a connection to be established and has less overhead. Congestion control is an important issue that has to be implemented at the application layer when sending data by UDP. The webcams capture images at five resolutions - 160x120, 176x144, 320x240, 352x288, and 640x480. Joint Photographic Experts Group (JPEG) compression was applied to the images.
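A minimal sketch of the Client's video path, assuming Python with OpenCV: capture a frame, JPEG-compress it, and send it as one UDP datagram to the Server's fixed IP. The address, port and quality value are placeholders, not values from the paper.

import socket
import cv2  # OpenCV, assumed available

SERVER_ADDR = ("203.0.113.10", 5000)   # placeholder fixed IP and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # UDP: no handshake, low overhead
cam = cv2.VideoCapture(0)

ret, frame = cam.read()
if ret:
    frame = cv2.resize(frame, (352, 288))  # one of the resolutions used in the study
    # JPEG-compress the frame; quality 65 stands in for the "65% compression" setting.
    ok, jpeg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 65])
    if ok:
        sock.sendto(jpeg.tobytes(), SERVER_ADDR)  # one frame per datagram; loss is tolerated

cam.release()

Losing an occasional datagram simply means the Server displays the next frame instead, which is the behavior UDP is chosen for here.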
III. EXPERIMENTAL
Experiments were conducted using this system in the lab with healthy subjects. The Server was based on a Pentium IV 2.8 GHz processor running Windows XP. The Client was a Dell Latitude laptop with a Pentium M 1.5 GHz processor running Windows XP. Our university’s broadband internet was used at the Server end and the two wireless modems at the Client end. There were two webcams for general viewing, and headphones with microphones at the Server and Client ends. The wireless internet had 7.2 Mbps downlink and 1.9 Mbps uplink speeds on a UMTS network incorporating HSPA running at 850 MHz. The questions that arise are: the image quality required, the frame rate, how UDP performs compared to TCP, the frame loss rate, the bandwidth of the network, and congestion.

IV. RESULTS AND DISCUSSION
The optimum range of frame rates for 640x480 was around 6 to 10 fps, whereas for 352x288 the optimum rate was 10 to 20 fps in our experiment; 10 fps gave a satisfactory video output.
Two image compression ratios, 35% and 65%, were used by the program. The average size of a 352x288 frame using JPEG compression was around 6 KB at 65% compression and 8 to 11 KB at 35% compression. UDP, as expected, could transmit frames faster than TCP, which waits for acknowledgement before sending the next frame. The upload frame rate was found to be around 9.00 fps with 65% compression by TCP. In HSPA, download is faster than upload; with UDP the download rate was around 9.90 fps at 65% compression. There was no frame loss when the image size was 6.26 KB. Congestion control is an important issue when using UDP, unlike TCP. However, with our image size and frequency and the upload speed of 1.9 Mbps supported by Telstra’s NextG, this was not a major problem.
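Plugging the reported frame size and rate into a bandwidth estimate shows why the 1.9 Mbps uplink was sufficient; this Python snippet is a sanity check, not a calculation from the paper.

# Bandwidth implied by the reported results: ~6 KB frames at ~10 fps.
frame_kb = 6
fps = 10
uplink_mbps = 1.9   # HSPA uplink speed reported in the experiment

required_mbps = frame_kb * 1024 * 8 * fps / 1e6
print(f"video needs ~{required_mbps:.2f} Mbps of {uplink_mbps} Mbps uplink")
# ~0.49 Mbps, about a quarter of the uplink, so congestion stayed minor.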
V. CONCLUSION

Our results show that a usable mobile remote monitoring system with two-way audio and video communication capability can be achieved using 3.5G wireless technology. It can also easily be used over wired broadband internet such as ADSL. The next stage of our research is implementing a congestion control mechanism. The system should also support remote assessment techniques. Further work includes compressing JPEG images into M-JPEG instead of sending one frame per packet.


Reference:


Title : 3.5G Based Mobile Remote Monitoring System
Authors : Aman Bajracharya, Timothy J. Gale, Clive R. Stack and Paul Turner
30th Annual International IEEE EMBS Conference
Vancouver, British Columbia, Canada, August 20-24, 2008

Tuesday, 7 August 2012

DESIGN PROCESS & MANUFACTURING PROCESS OF VOLTAGE REGULATOR IC – 7805



NITIE, Mumbai
 Team Member:
Harsh Budholiya, 34
Poonam Bhalla, 60
     (PGDIE-42 Batch)     


Voltage Regulator Integrated Circuit - 7805
The 7805 is a voltage regulator integrated circuit, a member of the 78xx series of fixed linear voltage regulator ICs. The voltage source in a circuit may fluctuate and fail to give a fixed voltage output; the voltage regulator IC maintains the output voltage at a constant value. The xx in 78xx indicates the fixed output voltage the IC is designed to provide; the 7805 provides a +5 V regulated power supply. Capacitors of suitable values can be connected at the input and output pins, depending on the respective voltage levels.
Today the IC-7805 is manufactured by various companies around the world, such as National Semiconductor in India.
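Because the 7805 is a linear regulator, it drops the entire difference between input and output voltage across itself, so its heat dissipation is easy to estimate. The input voltage and load current in this Python sketch are illustrative assumptions, not datasheet values.

# Power dissipated by a linear regulator such as the 7805:
# P = (Vin - Vout) * Iload   (illustrative values assumed)
v_in = 9.0      # volts, assumed unregulated input supply
v_out = 5.0     # volts, fixed 7805 output
i_load = 0.5    # amps, assumed load current

p_dissipated = (v_in - v_out) * i_load
print(f"{p_dissipated:.1f} W dissipated as heat")   # 2.0 W, so a heatsink is likely needed

Keeping Vin only a little above the regulator's minimum required input voltage reduces this wasted power.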
1.1 What is an integrated circuit?
An integrated circuit or monolithic integrated circuit (also referred to as IC, chip, or microchip) is an electronic circuit manufactured by lithography, or the patterned diffusion of trace elements into the surface of a thin substrate of semiconductor material. Additional materials are deposited and patterned to form interconnections between semiconductor devices.
Integrated circuits are used in virtually all electronic equipment today and have revolutionized the world of electronics. Computers, mobile phones, and other digital home appliances are now inextricable parts of the structure of modern societies, made possible by the low cost of producing integrated circuits.

1.2 IC Design Flow
The IC design process starts with a given set of requirements. After development, the initial design is tested against those requirements. When the requirements are not satisfied, the design must be improved. If such improvement is either not possible or too costly, the requirements must be revised and the IC design process restarted with the modified requirements. Failure to properly verify a design in its early phases typically causes significant and expensive re-design at a later stage, which ultimately increases the time-to-market. Thus, verification of the design plays a very important role in every step.
The figure provides a view of the Very Large Scale Integration (VLSI) design flow based on schematic capture systems. Although the design process has been described in a linear fashion for simplicity, in reality there are many iterations back and forth, especially between any two neighboring steps, and occasionally even between remotely separated pairs. Although a top-down design flow provides excellent design process control, in reality there is no truly unidirectional top-down design flow; both top-down and bottom-up approaches have to be combined. For instance, if a chip designer defines an architecture without a close estimate of the corresponding chip area, it is very likely that the resulting chip layout will exceed the area limit of the available technology. In such a case, in order to fit the architecture into the allowable chip area, some functions may have to be removed and the design process repeated. Such changes may require significant modification of the initial requirements. Thus, it is very important to feed low-level information forward to higher levels (bottom-up) as early as possible.

1.3 Steps for designing an IC

A. Schematic Capture
Equipment – Schematic Editor
Man Power - Required
The traditional method for capturing a transistor-level or gate-level design is via a schematic editor. Schematic editors provide simple, intuitive means to draw, place and connect individual components that make up a design. The resulting schematic drawing must accurately describe the main electrical properties of all components and  their interconnections. Also included in the schematic are the power supply and ground connections, as well as all pins for the input/output interface of the circuit. This information is crucial for generating the corresponding netlist, which is used in later stages of the design. The generation of a complete circuit schematic is therefore the first important step of the transistor-level design flow. Usually, some properties of the components and/or the interconnections between  the devices are subsequently modified as a result of iterative optimization steps. 

B. Layout
Equipment – Layout Editor
Man Power - Required
The creation of the mask layout is one of the most important steps in the full-custom (bottom-up) design flow. This is where the designer describes the detailed geometries and the relative positioning of each mask layer to be used in actual fabrication, using a Layout Editor. Physical layout design is very tightly linked to overall circuit performance (area, speed and power dissipation) since the physical structure determines the transconductances of the transistors, the parasitic capacitances and resistances, and obviously, the silicon area which is used to realize a certain function. On the other hand, the detailed mask layout of logic gates requires a very intensive and time-consuming design effort. The  physical design of CMOS logic gates is an iterative process which starts with the circuit topology and the initial sizing of the transistors. 
·        Floor Planning: A floorplan of an integrated circuit is a schematic representation of the tentative placement of its major functional blocks. Depending on the design methodology being followed, the actual definition of a floorplan may differ. It is a major function during layout.

Fig: Floor planning (slicing type)

C. Design Rule Check (DRC)
Tool – Design Rule Checker
Man Power - Required
The created mask layout must conform to a complex set of design rules, in order to ensure a lower probability of fabrication defects. A tool built into the Layout Editor, called Design Rule Checker, is used to detect any design rule violations during and after the mask layout design. The designer must perform DRC, and make sure that all errors are eventually removed from the mask layout, before the final design is saved. 

D. Layout versus Schematic (LVS) Check
Equipment – Not Known
Man Power – Required
After the mask layout design of the circuit is completed, the design should be checked against the schematic circuit description created earlier. By comparing the original network with the one extracted from the mask layout, the designer can check that the two networks are indeed equivalent. The LVS step provides an additional level of confidence in the integrity of the design, and ensures that the mask layout is a correct realization of the intended circuit topology. Note that the LVS check only guarantees a topological match; in other words, a successful LVS will not guarantee that the extracted circuit will actually satisfy the performance requirements. Any errors that show up during LVS, such as unintended connections between transistors or missing connections/devices, should be corrected in the mask layout before proceeding to post-layout simulation.

E. Circuit Extraction
Equipment – Extractor
Man Power - Required
Three extractions take place:
i) Parasitic Extraction
ii) Substrate Coupling Extraction
iii) Package Model Extraction

Circuit extraction is performed after the mask layout design is completed in order to create a detailed net-list for the simulation tool. The circuit extractor is capable of identifying the individual transistors and their interconnections as well as the parasitic resistances and capacitances that are inevitably present between these layers. The extracted net-list can provide a very accurate estimation of the actual device dimensions and device parasitics that ultimately determine circuit performance. The extracted net-list file and parameters are subsequently used in Layout-versus-Schematic comparison and in detailed transistor-level simulations (post-layout simulation).

F. Post-layout Simulation
Equipment – Circuit Simulator
Man Power - Required
The electrical performance of a full-custom design can best be analyzed by performing a post-layout simulation on the extracted circuit net-list. At this point, the designer should have a complete mask layout of the intended circuit/system and should have passed the DRC and LVS steps with no violations. The detailed (transistor-level) simulation performed using the extracted net-list will provide a clear assessment of the circuit speed, the influence of circuit parasitics such as parasitic capacitances and resistances, and any glitches that may occur due to signal delay mismatches. If the results of post-layout simulation are not satisfactory, the designer should modify some of the transistor dimensions and/or the circuit topology in order to achieve the desired circuit performance under realistic conditions. This may require multiple iterations on the design until the post-layout simulation results satisfy the original design requirements. However, a satisfactory result in post-layout simulation is still no guarantee of a completely successful product; the actual performance of the chip can only be verified by testing the fabricated prototype.

1.4 Basic Manufacturing Process:
A. STARTING MATERIAL
Silicon is one of nature's most useful elements and the material most commonly used for manufacturing semiconductors. Silicon, as a pure chemical element, is not found free in nature; it exists primarily in compound form with other chemical elements. In its various forms, silicon makes up 25.7% of the earth's crust and is the second most abundant element in the crust, exceeded only by oxygen. Silicon occurs chiefly as a compound of silicon and oxygen called an oxide, or as a compound of silicon and salts called a silicate. Silicon in the form of an oxide most commonly occurs as silicon dioxide, SiO2, generally called silica, or sand. Other common forms of silicon dioxide are quartzite, quartz, rock crystal, amethyst, agate, flint, jasper, and opal. Many of these forms contain minute quantities of other elements that give them color; some of these minerals are known as semi-precious gem stones. Granite, hornblende, asbestos, feldspar, clay and mica are but a few of the numerous silicate minerals. Silicon, as sand, is one of the main ingredients of glass. Silicon is an important component in steel, aluminum alloys, and other metallurgical products. Silicon carbide is one of the more useful abrasive materials.

i) Purification
Equipment – Fractional Distillator
Man Power - Automatic process
Silicon for semiconductor applications is taken from quartzite, the rock form of silicon dioxide. Quartzite has trace levels of other elements that must be removed in the purification process. Purifying quartzite consists of reacting it with some form of carbon material; the carbon may be coal, coke, or wood. The process is performed at a temperature of approximately 2000°C. This chemical reaction produces metallurgical grade silicon (MGS) that is 98% pure, which is not good enough for semiconductor use and must be further purified. The silicon is reacted with very strong hydrochloric acid (similar to strong swimming pool acid) to form a new liquid called trichlorosilane. The liquid is then purified by fractional distillation (similar to the distillation process used to make whiskey). The resulting, highly purified trichlorosilane liquid is converted to polycrystalline electronic grade silicon (EGS) by the Siemens process, which changes the liquid into a solid polycrystalline silicon (usually called polysilicon) rod.


ii)  Czochralski Crystal Growing
Equipment – Czochralski Crystal Grower
Man Power - Automatic process
The next process step converts the very pure silicon from a polysilicon crystal form into a single crystal or monocrystalline form (Figure 2-2). This process is known as Czochralski crystal growing, often called Cz, the abbreviation for Czochralski. The Cz process involves a special crystal growing furnace (Figure 2-3) wherein the pure polysilicon is placed into a very pure quartz crucible, along with the dopant material to make the wafer N-type or P-type. The process chamber is evacuated and purged with very pure argon. The crucible is heated to the melting point of silicon, 1420°C. A previously loaded monocrystalline silicon "seed" of the desired crystal orientation is lowered into the molten silicon and then slowly withdrawn as the "seed" and crucible rotate in opposite directions. This process causes the molten silicon to freeze out onto the "seed" crystal, forming a monocrystalline silicon ingot. The dopant species added to the polysilicon material determines the crystal's electrical characteristics.

Fig: Crystal Grower (Czochralski)
The growing process is highly automated and yields quality crystals. The growth process is very slow, typically 0.5 inch per hour for 150 mm diameter crystals. Because of the slow growth rates, the manufacturing process consumes large quantities of electricity. After the growing process is completed, the silicon ingot is evaluated for both electrical and mechanical parameters. Generally, the yield is high and the usable silicon efficiency is 80 to 90 percent of the original crucible charge, which helps keep costs down. The diameter of the ingot during crystal growing cannot be controlled to the tolerance required by wafer handling equipment, so to meet the diameter tolerance the ingot is centerless ground to the exact diameter. Another part of the grinding process is to grind a major flat and a minor flat or flats for crystal plane designation. The major flat is used for wafer orientation in wafer handling equipment and as a visual indicator to the manufacturing personnel.



iii) Sawing Crystal Into Wafers
Equipment – Grinder
Man Power - Automatic process
The next sequence of process steps involves sawing the ingot into the individual wafers and edge grinding the outer edge of the wafer circumference to a controlled shape. The sawing concept is illustrated in Figure. The edge grinding process removes the sharp edges of the wafer and dramatically reduces silicon particles when the wafer is handled.

Fig: Sawing process
iv) Wafer Preparation
Equipment – Surface smoother with Etcher & Polisher
Man Power - Automatic process
The wafer is double-side lapped (both sides simultaneously) to reduce surface roughness and then chemically etched to further smooth both sides of the wafer surface. A final surface polish is done on the designated side of the wafer, determined by the ground wafer flat(s). The final surface must be very smooth (like a mirror) and free of surface defects and imperfections. A final chemical cleaning process removes the polishing materials, particulates and any other potentially contaminating materials. Electrical and mechanical evaluation completes the processing.
The wafers are packaged in an ultra-clean environment and sealed in the storage-shipping containers. They are ready for use in the fabrication process.

B. WAFER CLEANING
Equipment – Cleaner with Catastrophic Device
Man Power - Automatic process
Contamination control during IC manufacturing is a major factor for Yield, Cost, Reliability, and Quality. Therefore, the total manufacturing cycle MUST BE controlled continuously to meet these needs. This means there is control over the environment, materials, chemicals, people, equipment and the process interactions with all of the above. The physical size of contamination can vary over a large range. Figure 2-6 illustrates some of the contaminating materials and sizes. Contamination in any form, films or particles, larger than 10 percent of the smallest line width is considered detrimental. In general, anything on the wafer surface that is not designed to be there is considered contamination.
An extremely critical part of the manufacturing sequence is the cleaning of the wafer surface after certain process steps and prior to other process steps. Cleaning processes are required before the wafers are introduced into any elevated temperature process. A cleaning process is necessary to remove any form of surface contamination that may create a defective transistor within an IC die or produce instability in the circuit during the lifetime of the IC. The purity of the wafer surface is essential. Contaminating materials on the wafer surface can lead to some of the following problems:
1. Prevent or mask effective cleaning or rinsing.
2. Prevent or mask effective concentration of dopants to be introduced into the silicon, whether by diffusion or ion implantation.
3. Cause poor or no adhesion of deposited layers.
4. Cause undesired chemical reactions and lead to decomposition of materials.
5. Alter the silicon crystal structure causing undesired electrical parameter changes.
6. Lead to long-term instability of electrical parameters.
7. Cause film degradation or catastrophic device failure.


Many chemical cleaning processes have been evaluated for cleaning silicon wafer surfaces. These processes must be divided into two categories: those used prior to the metallization process and those used after it, because of chemical attack on metal. The selected process must remove a variety of contaminants and leave nothing on the surface as a result of the chemical reaction. This requirement has led to the use of hydrogen peroxide (H2O2) as the preferred cleaning solution. The peroxide is a 30 percent concentration and chemically unstabilized; the unstabilized form is necessary because of purity. With peroxide as the starting solution, two different chemical cleaning processes have evolved. One is known as the RCA clean. The other is known by several different names: Piranha, Caro, and Sulfuric/Peroxide.






C. DIELECTRIC FORMATION
Equipment – Thermal Chemical Reactor
Man Power - Automatic process
A dielectric is a material that is a poor conductor of electricity or does not conduct at all. Dielectric layers are often called insulators. Dielectric layers play a critical role in the manufacturing and operation of semiconductors.

They are used:
• for insulation between conducting layers (e.g., for devices with more than one level of metal and to separate the gate from the silicon in an MOS transistor),
• to protect the surface of a completed die,
• to mask off portions of the surface of the wafer during some manufacturing operations, and
• between the plates of capacitors in ICs.
Various forms of dielectrics related to semiconductors include silicon dioxide, silicon nitride, and silicon oxynitride. The most common is silicon dioxide.

i) Thermally Grown Silicon Dioxide
Silicon has a very unique property when exposed to any source of oxygen: a chemical reaction occurs that forms silicon dioxide (SiO2), a very stable dielectric or insulating material. Silicon will react with oxygen in the air at room temperature to form a very thin layer of silicon dioxide, 20 to 30 Å thick, at which point the layer itself stops the chemical reaction. The process of reacting the silicon wafer with some source of oxygen at elevated temperature is referred to as thermal oxidation. Silicon at the interface between the oxide and the wafer is consumed by the chemical reaction of forming silicon dioxide; these silicon atoms are permanently changed into the compound silicon dioxide.
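Thermal oxide growth of this kind is classically described by the Deal-Grove relation x^2 + A*x = B*(t + tau). The Python sketch below solves it for thickness; the rate constants are order-of-magnitude values assumed for dry oxidation near 1100°C, not process-qualified numbers.

import math

# Deal-Grove model for thermal oxide growth: x^2 + A*x = B*(t + tau).
B = 0.027          # parabolic rate constant, um^2/hr (assumed, dry O2 near 1100 C)
B_over_A = 0.3     # linear rate constant, um/hr (assumed)
A = B / B_over_A
tau = 0.0          # time shift for any initial oxide, ignored here
t = 2.0            # oxidation time, hours

# Solve the quadratic for the oxide thickness x.
x = (-A + math.sqrt(A**2 + 4 * B * (t + tau))) / 2
print(f"oxide thickness ~{x * 1000:.0f} nm after {t} hr")   # ~190 nm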


D. PHOTOLITHOGRAPHY
Equipment – Photolithographer with Baking Process
Man Power - Automatic process
In the photolithography process sequence, the wafer is covered with a layer of light-sensitive material (photoresist), which is then selectively exposed to light. The selective exposure is accomplished by shining the light through a quartz plate (mask or reticle) patterned with an opaque material.
The exposed photoresist is washed away and the remaining, unexposed photoresist is hardened by baking. The portions of the underlying layer not covered by the hardened photoresist are removed, and then the photoresist itself is removed.
·        Masks
It is the most important element of photolithography. A photomask is a quartz plate carrying one layer of patterns for all of the ICs on a wafer, while a reticle has the patterns for only a few ICs. The patterns are formed with opaque substances such as emulsion or chrome.

E. JUNCTION FORMATION
Equipment – Ion Implantation Device
Man Power - Automatic process
Recalling the materials section, dopant atoms are added to the silicon lattice when the silicon ingot is grown to provide the electrical characteristics of the wafer (N-type or P-type). During the process of producing ICs on the wafer, atoms of the same polarity as the wafer, or of the opposite polarity, must be introduced into selected regions of the wafer. This alteration of the dopant levels is done by solid-state diffusion or ion implantation.

i) Diffusion
Diffusion is the term used to describe the movement of atoms, molecules, or particles from a location of high concentration to a location of lower concentration. The process can be visualized by watching a small drop of red coloring dropped into a glass of clear water.
Initially, as the drop strikes the clear water, the red color is highly concentrated. After a short period the deep red color begins to lessen and spread out over a larger volume. As more time elapses, the color continues to spread, fading to pale red and then pink; in time the pink almost disappears in the now nearly clear water. This represents diffusion as a function of time and temperature. The rate at which an atom, molecule or compound diffuses from a high concentration to a lower one is related to temperature and time; the parameter that ties the diffusion rate to temperature for a given time is known as the diffusion coefficient, or diffusivity (a worked example follows the two steps below). The dopant atom must displace a silicon atom in the crystal structure of the silicon to become electrically active. Diffusion is used in semiconductor processing to introduce a controlled amount of a chosen dopant into selected regions of a semiconductor crystal. The diffusion process used to accomplish this substitution is divided into two distinct steps.
1. Predeposition
2. Redistribution or drive-in
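As referenced above, a common rule of thumb relates the diffusivity D and drive-in time t to how far the dopant spreads, through the characteristic diffusion length L = 2*sqrt(D*t). The diffusivity in this Python sketch is an illustrative value for boron in silicon near 1100°C, assumed for the example rather than taken from the post.

import math

D = 1.5e-13          # cm^2/s, assumed boron diffusivity near 1100 C
t = 60 * 60          # one hour drive-in, in seconds

L = 2 * math.sqrt(D * t)
print(f"diffusion length ~{L * 1e4:.2f} um")   # ~0.46 um

Longer times or higher temperatures (larger D) push the dopant deeper, which is exactly the control knob the drive-in step provides.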

F. EPITAXIAL DEPOSITION
Equipment – CVD Device
Man Power - Automatic process
The term epitaxial is derived from Greek, meaning "to build upon". Epitaxial deposition, in general, is the deposition of a layer of single-crystal silicon on a single-crystal (monocrystalline) wafer. The deposited layer is a crystallographic extension of the substrate in terms of atomic order (i.e., it has the same crystal structure); thus, the substrate can be considered the "seed" necessary to promote single-crystal deposition. Epitaxial deposition is a chemical vapor deposition (CVD) process. The subject of chemical vapor deposition was covered in the dielectric section. The original use of CVD started with the deposition of single-crystal silicon in the late 1950s, and it has played a major role in the industry since.
Epitaxy (or epi) has played a major role in the evolution of bipolar transistors and bipolar integrated circuits. In recent years both MOS discrete transistors and ICs have started to use epi as a key part of their structures. Silicon CVD processes (epitaxial and polysilicon depositions) can be categorized by temperature range, pressure, and reactor wall temperature; the categories are summarized in Figure 2-61. In the case of polycrystalline silicon (poly, polysilicon) the crystallography of the substrate is not replicated during the deposition (e.g., poly on silicon dioxide). It is also possible that the deposition could yield an amorphous film (e.g., silicon dioxide) that has no crystal structure.


G. POLYSILICON DEPOSITION
Equipment – CVD Device
Man Power - Automatic process
Polysilicon deposition is another application of CVD technology for thin films. Unlike epi, polysilicon is not required to follow the crystal orientation of the substrate (e.g., poly on silicon dioxide, which is amorphous). Thin-film polysilicon has many important applications in the semiconductor industry; it was the key ingredient leading to "self-aligned" MOS technology.

Heavily-doped polysilicon has become the most widely used gate-electrode material for MOS products, both discretes and ICs. In addition, it serves as an interconnect, capacitor plate(s), doping source, and can be oxidized to form a stable layer of SiO2. Polysilicon is utilized in these roles because of its compatibility with subsequent high-temperature processing, its excellent interface with SiO2, its high reliability as a gate electrode material, and its ability to be deposited over steep topography with good conformal coverage.

Lightly-doped polysilicon films are used as resistors in static memory products and to fill trenches in DRAMs. Thin films of polysilicon are made up of small single-crystal grains of about 1000 Å separated by grain boundaries. The film that will become a polysilicon layer can be either amorphous or polycrystalline "as deposited"; subsequent heat cycles at elevated temperatures will cause an amorphous film to become polycrystalline. This "as-deposited" undoped film has an extremely high resistivity.

The resistivity of poly can be changed by doping the film with phosphorus, boron, or arsenic. Because of the variation in grain size and grain boundary effects, heavily doping the film only reduces the sheet resistance to 10-30 Ω/sq. In between these extremes, ion implantation can control the resistivity for various resistor values or doping applications. The deposition of polysilicon is generally done using LPCVD (Low-Pressure CVD) equipment operating in the 600-630°C range, which allows a reasonable load size per run and acceptable productivity. Silane (SiH4) is the silicon source material. PECVD (Plasma-Enhanced CVD) reactors emerged in the late '80s and early '90s; PECVD systems can be operated at lower temperatures, making them attractive for submicron technology.
As indicated earlier, polysilicon can be doped to control its resistivity. The doping process for polysilicon can be a separate process after the deposition, or it can be incorporated into the deposition; there are advantages and limitations to each technique.
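Given a sheet resistance, a polysilicon resistor's value follows from its geometry as R = Rs * (length / width). The Python sketch below uses a sheet resistance within the heavily doped range quoted above; the resistor dimensions are assumptions for illustration.

# Resistor value from sheet resistance: R = Rs * (L / W), Rs in ohms/square.
Rs = 20.0          # ohms/sq, within the 10-30 ohms/sq range for heavily doped poly
length_um = 50.0   # assumed resistor length
width_um = 2.0     # assumed resistor width

R = Rs * (length_um / width_um)
print(f"R ~ {R:.0f} ohms ({length_um / width_um:.0f} squares)")   # 500 ohms, 25 squares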


H. METAL DEPOSITION
Equipment – Using Wiring Circuits
Man Power - Automatic process
Metal (usually aluminum or aluminum with a small amount of other materials) is used to connect the individual components (diodes, transistors, resistors, and capacitors) in an integrated circuit. A layer of metal is deposited on the entire top surface of a wafer. Through photolithography and etch, selected portions of the metal are removed. The remaining aluminum serves as the conductors (wires) between the various components of each IC.

Noyce's revolutionary concept of wiring all the transistors at the top surface of the silicon rather than having individual discrete dice on a substrate altered the direction of the semiconductor industry. The prior art of the Kilby patent integrated components on a common substrate but still used loops of very fine wire to interconnect the components.

Noyce's new method of wiring circuits all within the die area of the circuit opened up endless new possibilities. This section will cover the metal technology used for interconnects.



I.  NMOS PROCESS:
Equipment – Not Known
Man Power – Automatic process



J. ASSEMBLY
Equipment –
Man Power - Required
After all the wafer fabrication steps have been completed, the good dice within the wafer are ready to be separated and assembled into final product form. The assembly or packaging process places the electrically good devices (chips, die, and dice are other terms for the individual circuits) in a package, interconnects the device to the package's leads and provides some form of final sealing. Since the early days of the semiconductor industry, the packaging process has often been considered a lower level of technology than the wafer fab process. Why this label emerged is purely speculation, but because of the high labor content per package, the volume assembly of semiconductors was relocated to the Asia-Pacific region, which continues to dominate volume assembly.
As the semiconductor industry moved from the SSI (small-scale integration) era through MSI and LSI (medium and large) into the VLSI (very large-scale integration) levels of integration, packaging technology has changed dramatically. The changes have included automation, advances in materials, and new package designs.

The wafer is sawn into individual dice (separated). Each good die is attached to a package substrate (as shown) or leadframe. The bonding pads around the perimeter of the die are connected (usually wire bonded) to the internal terminations of the external leads. Finally, the package is completed by sealing the pieces of the housing together or by encapsulation with molding compound. This sequence converts a tested wafer into a packaged product.

Packaging requirements can be translated into package functions and constraints. These functions and constraints, along with the package design factors, dictate the actual package design. The design factors consider the chip's performance requirements, the density of the chip, materials issues, assembly methods, and cost. The cost factor has had a major influence on packaging. As previously stated, labor cost was responsible for assembly moving to the Asia-Pacific region. Material costs for plastic packages are lower than for ceramic or metal packages, and plastic package assembly has been automated to a higher level than either. All of these cost factors favor the plastic package and have elevated it to the highest-volume type of IC package produced.
The major segments in the assembly process are outlined in the figure below. These generic outlines can be further divided into flows for DIP packaging, plastic packages and ceramic packages.

Figure: IC assembly Sequence

K. PACKAGING
Equipment – Not Known
Man Power - Required
Integrated circuit packaging is the final stage of semiconductor device fabrication, followed by IC testing. The die (integrated circuit), which represents the core of the device, is encased in a support that prevents physical damage and corrosion and supports the electrical contacts required to assemble the integrated circuit into a system.
In the integrated circuit industry it is called simply packaging, and sometimes semiconductor device assembly or simply assembly. It is also sometimes called encapsulation or seal, after the name of its last step. Confusion sometimes arises because the term electronic packaging generally includes the later stages of mounting and interconnecting the devices, for example on printed circuit boards.
DIPs are commonly used for integrated circuits (ICs). Other devices in DIP packages include resistor packs, DIP switches, LED segmented and bargraph displays, and electromechanical relays.


DIP (dual in-line package)
In microelectronics, a dual in-line package (DIP or DIL) is an electronic device package with a rectangular housing and two parallel rows of electrical connecting pins. The package may be through-hole mounted to a printed circuit board or inserted in a socket.
A DIP is usually referred to as a DIPn, where n is the total number of pins. For example, a microcircuit package with two rows of seven vertical leads would be a DIP14.
DIP connector plugs for ribbon cables are common in computers and other electronic equipment.
Dallas Semiconductor manufactured integrated DIP real-time clock (RTC) modules which contained an IC chip and a non-replaceable 10-year lithium battery.
DIP header blocks onto which discrete components could be soldered were used where groups of components needed to be easily removed, for configuration changes, optional features or calibration.

L. FINAL PRODUCT TESTING
Equipment – Wafer Prober
Man Power - Required
The final testing of the product after packaging tests the product to the data sheet specification or to a customer-specific test requirement. A variety of final tests are performed to check for specified electrical characteristics.
Automation is used in the majority of final test activities. Only when a package is new to the industry, or customized so that it does not fit existing hardware, is hand insertion into a socket used. Bent leads or a poor lead finish on a package can cause severe problems at final test; automation helps prevent these and other problems.







M. FINAL OUTPUT
IC Specifications:



Note:
1. Load and line regulation are specified at constant junction temperature. Changes in Vo due to heating effects must be taken into account separately. Pulse testing with a low duty cycle is used.





References:

·        http://www.youtube.com/watch?v=9rCyu8B0tYs&feature=player_detailpage (video of chip manufacturing)


·        http://smithsonianchips.si.edu/ice/cd/BT/SECTION2.PDF


·        http://www.engineersgarage.com/electronic-components/7805-voltage-regulator-ic


·        http://www.ifm.liu.se/courses/TFYA39/Lecture%2012.pdf



·        http://web.engr.oregonstate.edu/~moon/engr203/read/read3.pdf



·        http://www.ece.neu.edu/courses/eece4525/2011fa/Lab3/IC_Design_Flow.pdf


·        http://www.scribd.com/doc/87056156/IC-Design-Flow



·        http://www.rle.mit.edu/vlsi/publications/pub34.pdf