Friday, April 5, 2019
Taguchi's Definition of Quality
TAGUCHI'S DEFINITION OF QUALITY

The old traditional definition of quality states that quality is conformance to specifications. This definition was expanded by Joseph M. Juran (1904-2008) in 1974 and then by the American Society for Quality Control (ASQC) in 1983. Juran observed that "quality is fitness for use." The ASQC defined quality as "the totality of features and characteristics of a product or service that bear on its ability to satisfy given needs."

Taguchi presented another definition of quality. His definition stressed the losses associated with a product. It must be kept in mind here that society includes both the manufacturer and the customer. Loss associated with product variability includes, for example, energy and time (problem fixing) and money (replacement cost of parts). Losses associated with deleterious side effects could be lost market share for the manufacturer and/or physical effects, such as those of the drug thalidomide, for the consumer.

TAGUCHI'S LOSS FUNCTION

Taguchi's quality philosophy strongly emphasizes losses or costs. W. H. Moore asserted that this is an "enlightened approach" that embodies three important premises: for every product quality characteristic there is a target value which results in the smallest loss; deviations from the target value always result in increased loss to society; and loss should be measured in monetary units (dollars, pesos, francs, etc.).

The accompanying figure depicts Taguchi's typical loss function. The figure also contrasts Taguchi's function with the traditional view, which states that there are no losses if specifications are met. It can be seen that small deviations from the target value result in small losses.
These losses, however, increase in a nonlinear fashion as deviations from the target value increase. The loss function takes the quadratic form

L(Y) = k(Y - T)^2

where L(Y) is the expected loss associated with the specific value of Y. Essentially, this equation states that the loss is proportional to the square of the deviation of the measured value, Y, from the target value, T. This implies that any deviation from the target (based on customers' desires and needs) will diminish customer satisfaction. This is in contrast to the traditional definition of quality that states that quality is conformance to specifications. It should be recognized that the constant k can be determined if the value of L(Y) associated with some Y value is known. Of course, under many circumstances a quadratic function is only an approximation.

Since Taguchi's loss function is presented in monetary terms, it provides a common language for all the departments or components within a company. Finally, the loss function can be used to define performance measures of a quality characteristic of a product or service. This property of Taguchi's loss function will be taken up in the next section. But to anticipate the discussion of this property, Taguchi's quadratic function can be converted to

E[L(Y)] = k[σ^2 + (μ - T)^2]

This can be accomplished by assuming Y has some probability distribution with mean μ and variance σ^2. This second mathematical expression states that average or expected loss is due either to process variation or to being off target (called bias), or both.

TAGUCHI, ROBUST DESIGN, AND THE DESIGN OF EXPERIMENTS

Taguchi asserted that the development of his methods of experimental design started in Japan about 1948. These methods were then refined over the next several decades. They were introduced in the United States around 1980.
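The quadratic loss function above, and its decomposition of expected loss into variation plus bias, can be checked numerically. This is a minimal sketch; the diameters, target, and cost constant k below are made-up illustration values, not from the text:

```python
import statistics

def taguchi_loss(y, target, k):
    """Quadratic loss L(Y) = k * (Y - T)^2."""
    return k * (y - target) ** 2

def expected_loss(values, target, k):
    """Expected loss decomposed as k * (variance + bias^2)."""
    mu = statistics.fmean(values)
    var = statistics.pvariance(values)  # population variance
    bias = mu - target
    return k * (var + bias ** 2)

# Hypothetical shaft diameters (mm); target 10.0 mm, k = $50 per mm^2
diameters = [10.1, 9.9, 10.2, 10.0, 9.8]
avg_direct = statistics.fmean(taguchi_loss(y, 10.0, 50.0) for y in diameters)
avg_decomposed = expected_loss(diameters, 10.0, 50.0)
# Both routes give the same average loss: variation plus off-target bias.
```

Averaging the per-unit losses directly and applying the variance-plus-bias formula agree, which is the point of the second expression in the text.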
Although Taguchi's approach was built on traditional concepts of design of experiments (DOE), such as factorial and fractional factorial designs and orthogonal arrays, he created and promoted some new DOE techniques, such as signal-to-noise ratios, robust designs, and parameter and tolerance designs. Some experts in the field have shown that some of these techniques, especially signal-to-noise ratios, are not optimal under certain conditions. Nonetheless, Taguchi's ideas concerning robust design and the design of experiments will now be discussed.

DOE is a body of statistical techniques for the effective and efficient collection of data for a number of purposes. Two significant ones are the investigation of research hypotheses and the accurate determination of the relative effects of the many different factors that influence the quality of a product or process. DOE can be employed in both the product design phase and the production phase.

A crucial component of quality is a product's ability to perform its tasks under a variety of conditions. Furthermore, the operating environmental conditions are usually beyond the control of the product designers, and, therefore, robust designs are essential. Robust designs are based on the use of DOE techniques for finding product parameter settings (e.g., temperature settings or drill speeds), which enable products to be resilient to changes and variations in working environments.

To achieve economical product quality design, Taguchi proposed three phases: system design, parameter design, and tolerance design. In the first phase, system design, design engineers use their practical experience, along with scientific and engineering principles, to create a viably functional design. To elaborate, system design uses current technology, processes, materials, and engineering methods to define and construct a new system.
The system can be a new product or process, or an improved modification of an existing product or process.

EXAMPLES AND CONCLUSIONS

As Thomas P. Ryan has stated, Taguchi, at the very least, has focused our attention on new objectives in achieving quality improvement. The statistical tools for accomplishing these objectives will likely continue to be developed. Quality management gurus, such as W. Edwards Deming (1900-1993) and Kaoru Ishikawa (1915-1989), have stressed the importance of continuous quality improvement by concentrating on processes upstream. This is a fundamental break with the traditional practice of relying on inspection downstream. Taguchi emphasized the importance of DOE in improving the quality of the engineering design of products and processes. As previously mentioned, however, his methods are frequently statistically inefficient and cumbersome. Nonetheless, Taguchi's designs of experiments have been widely applied and theoretically refined and extended. Two application cases and one refinement example will now be discussed.

Taguchi methods

Taguchi methods are statistical methods developed by Genichi Taguchi to improve the quality of manufactured goods, and more recently also applied to engineering, biotechnology, marketing, and advertising. Professional statisticians have welcomed the goals and improvements brought about by Taguchi methods, particularly by Taguchi's development of designs for studying variation, but have criticized the inefficiency of some of Taguchi's proposals.

Off-line quality control: Taguchi's rule for manufacturing

Taguchi realized that the best opportunity to eliminate variation is during the design of a product and its manufacturing process. Consequently, he developed a strategy for quality engineering that can be used in both contexts.
The process has three stages: system design, parameter design, and tolerance design.

System design

This is design at the conceptual level, involving creativity and innovation.

Parameter design

Once the concept is established, the nominal values of the various dimensions and design parameters need to be set, the detail design phase of conventional engineering. Taguchi's radical insight was that the exact choice of values required is under-specified by the performance requirements of the system. In many circumstances, this allows the parameters to be chosen so as to minimize the effects on performance arising from variation in manufacture, environment, and cumulative damage. This is sometimes called robustification.

Tolerance design

With a successfully completed parameter design, and an understanding of the effect that the various parameters have on performance, resources can be focused on reducing and controlling variation in the critical few dimensions.

Taguchi Method Design of Experiments

The general steps involved in the Taguchi Method are as follows:

1. Define the process objective, or more specifically, a target value for a performance measure of the process. This may be a flow rate, temperature, etc. The target of a process may also be a minimum or maximum; for example, the goal may be to maximize the output flow rate. The deviation in the performance characteristic from the target value is used to define the loss function for the process.

2. Determine the design parameters affecting the process. Parameters are variables within the process that affect the performance measure, such as temperatures, pressures, etc., that can be easily controlled. The number of levels that the parameters should be varied at must be specified. For example, a temperature might be varied to a low and high value of 40 C and 80 C. Increasing the number of levels to vary a parameter at increases the number of experiments to be conducted.
3. Create orthogonal arrays for the parameter design indicating the number of and conditions for each experiment. The selection of orthogonal arrays is based on the number of parameters and the levels of variation for each parameter, and will be expounded below.

4. Conduct the experiments indicated in the completed array to collect data on the effect on the performance measure.

5. Complete data analysis to determine the effect of the different parameters on the performance measure.

A detailed description of the execution of these steps will be discussed next.

Determining Parameter Design Orthogonal Arrays

The effect of many different parameters on the performance characteristic in a condensed set of experiments can be examined by using the orthogonal array experimental design proposed by Taguchi. Once the parameters affecting a process that can be controlled have been determined, the levels at which these parameters should be varied must be determined. Determining what levels of a variable to test requires an in-depth understanding of the process, including the minimum, maximum, and current value of the parameter. If the difference between the minimum and maximum value of a parameter is large, the values being tested can be further apart or more values can be tested. If the range of a parameter is small, then fewer values can be tested or the values tested can be closer together. For example, if the temperature of a reactor jacket can be varied between 20 and 80 degrees C and it is known that the current operating jacket temperature is 50 degrees C, three levels might be chosen at 20, 50, and 80 degrees C. Also, the cost of conducting experiments must be considered when determining the number of levels of a parameter to include in the experimental design. In the previous example of jacket temperature, it would be cost-prohibitive to test 60 levels at 1-degree intervals.
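The level-selection heuristic just described (spread the levels out when the range is wide, use fewer or closer levels when it is narrow) can be sketched as a hypothetical helper. The 30-degree threshold is an arbitrary assumption for illustration, not something the text specifies:

```python
def choose_levels(minimum, maximum, current, wide_range_threshold=30.0):
    """Pick test levels for a parameter, per the heuristic above:
    a wide range gets three spread-out levels (min, current, max);
    a narrow range gets two levels, closer together."""
    if maximum - minimum >= wide_range_threshold:
        return [minimum, current, maximum]
    return [minimum, maximum]

# Reactor-jacket temperature from the example: 20-80 deg C, currently 50.
print(choose_levels(20, 80, 50))  # three levels: [20, 50, 80]
```

A narrower parameter, say 40-60 deg C, would yield only the two endpoints under the same rule.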
Typically, the number of levels for all parameters in the experimental design is chosen to be the same to aid in the selection of the proper orthogonal array.

Knowing the number of parameters and the number of levels, the proper orthogonal array can be selected. Using the array selector table shown below, the name of the appropriate array can be found by looking at the column and row corresponding to the number of parameters and number of levels. Once the name has been determined (the subscript represents the number of experiments that must be completed), the predefined array can be looked up. Links are provided to many of the predefined arrays given in the array selector table. These arrays were created using an algorithm Taguchi developed, which allows for each variable and setting to be tested equally. For example, if we have three parameters (voltage, temperature, pressure) and two levels (high, low), it can be seen that the proper array is L4. Clicking on the link L4 to view the L4 array, it can be seen that four different experiments are given in the array. The levels designated as 1, 2, 3, etc. should be replaced in the array with the actual level values to be varied, and P1, P2, P3 should be replaced with the actual parameters (i.e., voltage, temperature, etc.).

Array Selector

Important Notes Regarding Selection + Use of Orthogonal Arrays

Note 1

The array selector assumes that each parameter has the same number of levels. Sometimes this is not the case. Generally, the highest value will be taken or the difference will be split.

The following examples offer insight on choosing and properly using an orthogonal array.
Examples 1 and 2 focus on array choice, while Example 3 will demonstrate how to use an orthogonal array in one of these situations.

Example 1
# Parameters: A, B, C, D = 4
# Levels: 3, 3, 3, 2 = 3
Array: L9

Example 2
# Parameters: A, B, C, D, E, F = 6
# Levels: 4, 5, 3, 2, 2, 2 = 3
Array: modified L16

Example 3
A reactor's behavior is dependent upon impeller model, mixer speed, the control algorithm employed, and the cooling water valve type. The possible values for each are as follows:
Impeller model: A, B, or C
Mixer speed: 300, 350, or 400 RPM
Control algorithm: PID, PI, or P
Valve type: butterfly or globe
There are 4 parameters, and each one has 3 levels with the exception of valve type. The highest number of levels is 3, so we will use a value of 3 when choosing our orthogonal array. Using the array selector above, we find that the appropriate orthogonal array is L9. When we replace P1, P2, P3, and P4 with our parameters and begin filling in the parameter values, we find that the L9 array includes 3 levels for valve type, while our system only has 2. The appropriate strategy is to fill in the entries for P4 = 3 with 1 or 2 in a random, balanced way. For example, here the third value was chosen twice as butterfly and once as globe.

Note 2
If the array selected based on the number of parameters and levels includes more parameters than are used in the experimental design, ignore the additional parameter columns. For example, if a process has 8 parameters with 2 levels each, the L12 array should be selected according to the array selector. As can be seen below, the L12 array has columns for 11 parameters (P1-P11). The right 3 columns should be ignored.

Analyzing Experimental Data

Once the experimental design has been determined and the trials have been carried out, the measured performance characteristic from each trial can be used to analyze the relative effect of the different parameters.
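The balanced fill-in from Example 3 can be sketched in code. The L9 table below is the standard L9(3^4) orthogonal array; the particular remapping of the valve column's third level (twice butterfly, once globe) is one arbitrary balanced choice, as the example notes:

```python
# Standard L9(3^4) orthogonal array: 9 runs x 4 factors, levels coded 1-3.
L9 = [
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
]

impeller = {1: "A", 2: "B", 3: "C"}
speed = {1: 300, 2: 350, 3: 400}        # mixer speed, RPM
algorithm = {1: "PID", 2: "PI", 3: "P"}
valve = {1: "butterfly", 2: "globe"}

# Valve type has only two real levels, so the array's level 3 is remapped
# in a balanced way: twice to butterfly, once to globe (arbitrary choice).
remap_level3 = iter(["butterfly", "butterfly", "globe"])

runs = []
for p1, p2, p3, p4 in L9:
    v = next(remap_level3) if p4 == 3 else valve[p4]
    runs.append((impeller[p1], speed[p2], algorithm[p3], v))

for run in runs:
    print(run)
```

The nine resulting runs cover every factor level a balanced number of times despite the valve's missing third level.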
To demonstrate the data analysis procedure, the following L9 array will be used, but the principles can be transferred to any type of array.

In this array, it can be seen that any number of repeated observations (trials) may be used. T_{i,j} represents the different trials, with i = experiment number and j = trial number. It should be noted that the Taguchi method allows for the use of a noise matrix including external factors affecting the process outcome rather than repeated trials, but this is outside the scope of this article.

To determine the effect each variable has on the output, the signal-to-noise ratio, or the SN number, needs to be calculated for each experiment conducted. The calculation of the SN for the first experiment in the array above is shown below for the case of a specific target value of the performance characteristic. In the equations below, \bar{y}_i is the mean value, s_i^2 is the variance, and y_{i,u} is the value of the performance characteristic for a given trial:

SN_i = 10 \log \frac{\bar{y}_i^2}{s_i^2}

where

\bar{y}_i = \frac{1}{N_i} \sum_{u=1}^{N_i} y_{i,u}

s_i^2 = \frac{1}{N_i - 1} \sum_{u=1}^{N_i} \left( y_{i,u} - \bar{y}_i \right)^2

i = experiment number, u = trial number, N_i = number of trials for experiment i.

For the case of minimizing the performance characteristic, the following definition of the SN ratio should be calculated:

SN_i = -10 \log \left( \sum_{u=1}^{N_i} \frac{y_u^2}{N_i} \right)

For the case of maximizing the performance characteristic, the following definition of the SN ratio should be calculated:

SN_i = -10 \log \left( \frac{1}{N_i} \sum_{u=1}^{N_i} \frac{1}{y_u^2} \right)

After calculating the SN ratio for each experiment, the average SN value is calculated for each factor and level.
This is done as shown below for Parameter 3 (P3) in the array:

SN_{P3,1} = (SN_1 + SN_6 + SN_8) / 3
SN_{P3,2} = (SN_2 + SN_4 + SN_9) / 3
SN_{P3,3} = (SN_3 + SN_5 + SN_7) / 3

Once these SN ratio values are calculated for each factor and level, they are tabulated as shown below, and the range R (R = high SN - low SN) of the SN for each parameter is calculated and entered into the table. The larger the R value for a parameter, the larger the effect the variable has on the process. This is because the same change in signal causes a larger effect on the output variable being measured.

Problems

Problem: You have just produced one thousand 55-gallon drums of sesame oil for sale to your distributors. However, just before you are to ship the oil, one of your employees remembers that one of the oil barrels was temporarily used to store insecticide and is almost surely contaminated. Unfortunately, all of the barrels look the same. One barrel of sesame oil sells for $1000, while each assay for insecticide in food oil costs $1200 and takes 3 days. Tests for insecticide are extremely expensive. What do you do?

Solution: Extreme multiplexing. This is similar to using a Taguchi method but optimized for very sparse systems and specific cases. For example, instead of 1000 barrels, let us consider 8 barrels for now, one of which is contaminated. We could test each one, but that would be extremely expensive. Another solution is to mix samples from each barrel and test the mixtures.

Mix barrels 1,2,3,4 -> Sample A
Mix barrels 1,2,5,6 -> Sample B
Mix barrels 1,3,5,7 -> Sample C

We claim that from testing only these three mixtures, we can determine which of the 8 barrels was contaminated. Let us consider some possible results of these tests. We will use the following label scheme: +/-, +/-, +/- in order of A, B, C. Thus, +,-,+ indicates A and C showed contamination but not B.

Possible Result 1: -,-,- The only barrel not mixed in was 8, so it is contaminated.
Possible Result 2: +,-,- Barrel 4 appears in A, but not in B and C. Since only A returned positive, barrel 4 was contaminated.

Possible Result 3: -,+,- Barrel 6 appears in B, but not in A and C. Since only B returned positive, barrel 6 was contaminated.

We can see that we have 2^3 = 8 possible results, each of which corresponds to a particular barrel being contaminated. We leave the rest of the cases for the reader to figure out.

Solution with 1,000 barrels: Mix samples from each barrel and test mixtures. Each mixture will consist of samples from a unique combination of 500 barrels. Experiments required = log2(1000) ≈ 10.

Solution with 1,000,000 barrels: Experiments required = log2(1,000,000) ≈ 20.

Thus, by using extreme multiplexing, we can greatly reduce the number of experiments needed, since the number of experiments scales with log2(number of barrels) instead of the number of barrels.

Worked-Out Example

A microprocessor company is having difficulty with its current yields. Silicon processors are made on a large die, cut into pieces, and each one is tested to match specifications. The company has requested that you run experiments to increase processor yield. The factors that affect processor yields are temperature, pressure, doping amount, and deposition rate.

a) Question: Determine the Taguchi experimental design orthogonal array. The operating conditions for each parameter and level are listed below:

A: Temperature. A1 = 100 C, A2 = 150 C (current), A3 = 200 C
B: Pressure. B1 = 2 psi, B2 = 5 psi (current), B3 = 8 psi
C: Doping Amount. C1 = 4%, C2 = 6% (current), C3 = 8%
D: Deposition Rate. D1 = 0.1 mg/s, D2 = 0.2 mg/s (current), D3 = 0.3 mg/s

a) Solution: The L9 orthogonal array should be used. The filled-in orthogonal array should look like this:

This setup allows the testing of all four variables without having to run 81 = 3^4 = (3 temperatures)(3 pressures)(3 doping amounts)(3 deposition rates) separate trials.

b) Question: Conducting three trials for each experiment, the data below was collected.
Compute the SN ratio for each experiment for the target value case, create a response chart, and determine the parameters that have the highest and lowest effect on the processor yield.

b) Solution: Shown below is the calculation and tabulation of the SN ratio.

S_{m1} = (87.3 + 82.3 + 70.7)^2 / 3 = 19248.0
S_{T1} = 87.3^2 + 82.3^2 + 70.7^2 = 19393.1
S_{e1} = S_{T1} - S_{m1} = 19393.1 - 19248.0 = 145.1
V_{e1} = S_{e1} / (N - 1) = 145.1 / 2 = 72.5
SN_1 = 10 log [ (1/N)(S_{m1} - V_{e1}) / V_{e1} ] = 10 log [ (1/3)(19248.0 - 72.5) / 72.5 ] = 19.5

Shown below is the response table. This table was created by calculating an average SN value for each factor. A sample calculation is shown for Factor B (pressure):

SN_{B1} = (19.5 + 17.6 + 22.2) / 3 = 19.8
SN_{B2} = (21.4 + 14.3 + 24.0) / 3 = 19.9
SN_{B3} = (19.3 + 29.2 + 20.4) / 3 = 23.0

The effect of this factor is then calculated by determining the range:

Delta = Max - Min = 23.0 - 19.8 = 3.2

It can be seen that deposition rate has the largest effect on the processor yield and that temperature has the smallest effect on the processor yield.

Other Methods of Experimental Design

Two other methods for determining experimental design are factorial design and random design. For scenarios with a small number of parameters and levels (1-3) and where each variable contributes significantly, factorial design can work well to determine the specific interactions between variables. However, factorial design gets increasingly complex with an increase in the number of variables. For large systems with many variables (50+) where there are few interactions between variables, random design can be used.
Random design assigns each variable a state based on a uniform sample (e.g., 3 states = 0.33 probability each) for the selected number of experiments. When used properly (in a large system), random design usually produces an experimental design that is desired. However, random design works poorly for systems with a small number of variables.

To get an even better understanding of these three different methods, it helps to see a visual comparison of them, illustrating the degree of efficiency of each experimental design depending on the number of variables and the number of states for each variable. The following presents the three experimental designs for the same scenario.

Scenario: You have a CSTR that has four (4) variables, and each variable has three or two states. You are to design an experiment to systematically test the effect of each of the variables in the current CSTR.

Experimental Design 1: Factorial Design. By looking at the number of variables and states, there should be a total of 54 experiments because (3 impellers)(3 speeds)(3 controllers)(2 valves) = 54. Here's a list of these 54 experiments:

Experimental Design 2: Taguchi Method. Since you know the number of states and variables, you can refer to the table above in this wiki and obtain the correct Taguchi array. It turns out to be an L9 array. With the actual variables and states, the L9 array should look like the following:

Experimental Design 3: Random Design. Since we do not know the number of signal recoveries we want and we don't know the probabilities of each state occurring, it will be difficult to construct a random design table. It will mostly be used for extremely large experiments. Refer to the link below to help you obtain a better grasp of the random design concept.

Dr. Genichi Taguchi

Dr. Taguchi built on the work of Plackett and Burman by combining statistics and engineering to achieve rapid improvements in product designs and manufacturing processes.
His efforts led to a subset of screening experiments commonly referred to as the Taguchi Techniques or the Taguchi Method.

Major Premises of Taguchi Techniques

Focus on the robustness of the product: make the product perform correctly in spite of variation in materials and processes. Design the product to be insensitive to the common cause variation that exists in the process. Quantify the effects of deviation using the Quality Loss Function. The Quality Loss Function, L(y), provides both a conceptual and a quantifiable means to demonstrate the impact of deviation from target.

Noise Factors: Taguchi calls common cause variation the noise. Noise factors are classified into three categories: outer noise, inner noise, and between-product noise. Taguchi's approach is not to eliminate or ignore the noise factors; Taguchi techniques aim to reduce the effect or impact of the noise on the product quality.

Quality Loss Function: The Loss Function can help put the cost of deviation from target into perspective. The loss represents a summation of rework, repair, and warranty cost, plus customer dissatisfaction, bad reputation, and eventual loss of market share for the manufacturer.

Signal to Noise Ratio: Taguchi's emphasis on minimizing deviation from target led him to develop measures of the process output that incorporate both the location of the output as well as the variation. These measures are called signal to noise ratios. The signal to noise ratio provides a measure of the impact of noise factors on performance. The larger the S/N, the more robust the product is against noise. Calculation of the S/N ratio depends on the experimental objective.

Derivation of Taguchi Matrices: Taguchi matrices are derived from classical Full Factorial arrays. As with Plackett-Burman designs, Taguchi designs are based on the assumption that interactions are not likely to be significant. Taguchi designs have been developed to study factors at two levels, three levels, four levels, and even with mixed levels.
The levels in Taguchi matrices have historically been reported as Level 1 and Level 2 for two-level experiments. These levels are no different than the Low (-) Level and the High (+) Level used in Full Factorial designs and by Plackett and Burman. For more than two levels, experimenters typically use Level 1, Level 2, Level 3, etc. for Taguchi designs.

Types of Taguchi Designs: A series of Taguchi designs for studying factors at two levels are available. Two-level designs include the L4, L8, and L16 matrices. The L4 design studies up to 3 factors. The most popular Taguchi designs are the L8 and L16, which study up to 7 and 15 factors respectively. The L4, L8, and L16 designs are geometric designs based on the 2^2, 2^3, and 2^4 Full Factorial matrices respectively. They are based on the Full Factorials so that interactions can be studied if desired. Non-geometric Taguchi designs include the L12, L20, and L24 designs, which can study up to 11, 19, and 23 factors respectively. There are other two-level Taguchi matrices, both geometric and non-geometric, designed to study even more factors, but it is rare that larger numbers of factors can be studied in a practical, feasible, or cost-effective manner.

Analysis of Interactions: While Taguchi views interactions as noise factors and most likely not significant, he does offer techniques to evaluate the impact of two-way interactions on responses. Taguchi provides two techniques to explore interactions in a screening experiment. The linear graph is a graphical tool that facilitates the assignment of factors and their interactions to the experimental matrix.
Some experimenters find the interaction tables developed from the linear graphs to be easier to use.

Three-Level Matrices
* Taguchi screening designs for three levels exist.
  o The L9 looks at 4 factors at 3 levels.
  o An L27 can be used to study up to 13 factors at 3 levels, and an L81 can evaluate up to 40 factors at 3 levels.
* Taguchi designs for 4 levels and 5 levels are available.

Matrices with Outer Arrays
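To close, the target-value SN calculation and response-table procedure described in the data-analysis section can be sketched end to end as a short script. The L9 table is the standard orthogonal array; the trial data are invented for illustration and are not the article's table:

```python
import math
import statistics

def sn_target(trials):
    """Target-is-best SN ratio: SN = 10*log10(ybar^2 / s^2)."""
    ybar = statistics.fmean(trials)
    s2 = statistics.variance(trials)  # sample variance (n - 1 divisor)
    return 10 * math.log10(ybar ** 2 / s2)

# Standard L9(3^4) array; entries are the level (1-3) of each of 4 factors.
L9 = [
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
]

# Made-up yields: three repeated trials for each of the nine experiments.
data = [
    [87.3, 82.3, 70.7], [74.8, 70.7, 63.2], [56.5, 54.0, 45.7],
    [79.8, 78.2, 62.3], [77.3, 76.5, 54.0], [89.0, 87.3, 83.2],
    [64.8, 62.3, 55.7], [99.0, 93.2, 87.3], [75.7, 74.0, 63.2],
]

sn = [sn_target(trials) for trials in data]

# Response table: average SN per factor and level, then the range R.
ranges = {}
for factor in range(4):
    level_means = [
        statistics.fmean(sn[i] for i, run in enumerate(L9) if run[factor] == lvl)
        for lvl in (1, 2, 3)
    ]
    ranges[factor] = max(level_means) - min(level_means)

# The factor with the largest R has the strongest effect on the output.
strongest = max(ranges, key=ranges.get)
```

For the first row of trial data, the SN value agrees with the worked example's hand calculation (about 19.5).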