Saturday, August 31, 2019

A Short Story Of Denim Essay

Denim is more than just a cotton fabric; it inspires strong opinions within the hearts of historians, designers, teenagers, movie stars, reporters and writers. Interest bordering on passion can be found among textile and costume historians today, especially in the debate over the true origins of denim. These experts have put decades of work into their research; here are summarized the prevailing opinions about the birth of denim, followed by a discussion of the way Levi Strauss & Co. has helped to contribute to denim’s movement around the world. In 1969 a writer for American Fabrics magazine declared, â€Å"Denim is one of the world’s oldest fabrics, yet it remains eternally young.† If continuous use of and interest in an item makes it â€Å"eternally young† then denim certainly qualifies. From the 17th century to the present, denim has been woven, used and discarded; made into upholstery, pants and awnings; found in museums, attics, antique stores and arc haeological digs; worn as the fabric of hard honest work, and as the expression of angry rebellion; used for the sails of Columbus’ ships in legend; and worn by American cowboys in fact. Legend and fact are also interwoven when scholars discuss the origin of the name denim itself. Most reference books say that denim is an English corruption of the French â€Å"serge de Nimes;† a serge fabric from the town of Nimes in France. However, some scholars have begun to question this tradition. There are a few schools of thought with regard to the derivation of the word â€Å"denim.† Pascale Gorguet-Ballesteros, of the Musee de la Mode et du Costume in Paris, has done some interesting research on both of these issues. A fabric called â€Å"serge de Nimes,† was known in France prior to the 17th century. At the same time, there was also a fabric known in France as â€Å"nim.† Both fabrics were composed partly of wool. Serge de Nimes was also known in England before the end of the 17th century. The question then arises: is this fabric imported from France or is it an English fabric bearing the same name? According to Ms. Gorguet-Ballesteros, fabrics which were named for a certain geographic location were often also made elsewhere; the name was used to lend a certain cachet to the fabric when it was offered for sale. Therefore a â€Å"serge de Nimes† purchased in England was very likely also made in England, and not in Nimes, France. There still remains the question of how the word â€Å"denim† is popularly thought to be descended from the word â€Å"serge de Nimes.† Serge de Nimes was made of silk and wool, but denim has always been made of cotton. What we have here again, I think, is a relation between fabrics that is in name only, though both fabrics are a twill weave. Is the real origin of the word denim â€Å"serge de nim,† meaning a fabric that resembled the part-wool fabric called nim? Was serge de Nimes more well-known, and was this word mistranslated when it crossed the English Channel? Or, did British merchants decide to give a zippy French name to an English fabric to give it a bit more cachet? It’s likely we will never really know. Then, to confuse things even more, there also existed, at this same time, another fabric known as â€Å"jean.† Research on this textile indicates that it was a fustian – a cotton, linen and/or wool blend and that the fustian of Genoa, Italy was called jean; here we do see evidence of a fabric being named from a place of origin. It was apparently quite popular, and imported into England in large quantities during the 16th century. By the end of this period jean was being produced in Lancashire. By the 18th century jean cloth was made completely of cotton, and used to make men’s clothing, valued especially for its property of durability even after many washings. Denim’s popularity was also on the rise. It was stronger and more expensive than jean, and though the two fabrics were very similar in other ways, they did have one major difference: denim was made of one colored thread and one white thread; jean was woven of two threads of the same color. Moving across the Atlantic, we find American textile mills starting on a small scale at this same time, the late 18th century, mostly as a way to become independent from foreign producers (mainly the English). From the very beginning, cotton fabrics were an important component of their product line. A factory in the state of Massachusetts wove both denim and jean. President George Washington toured this mill in 1789 and was shown the machinery which wove denim, which had both warp and fill of cotton. One of the first printed references to the word â€Å"denim† in the United States was seen in this same year: a Rhode Island newspaper reported on the local production of denim (among other fabrics). The book The Weavers Draft Book and Clothiers Assistant, published in 1792, contains technical sketches of the weaving methods for a variety of denims. In 1864, an East Coast wholesale house advertised that it carried 10 different kinds of denim, including â€Å"New Creek Blues† and â€Å"Madison River Browns.† (They sound rather contemporary, don’t they? Another example of denim appearing â€Å"eternally young.†) Webster’s Dictionary of the same year contained the word â€Å"denim,† referring to it as â€Å"a coarse cotton drilling used for overalls, etc.† Research shows that jean and denim were two very different fabrics in 19th century America. They also differed in how they were used. In 1849 a New York clothing manufacturer advertised topcoats, vests or short jackets in chestnut, olive, black, white and blue jean. Fine trousers were offered in blue jean; overalls and trousers made for work were offered in blue and fancy denim. Other American advertisements show working men wearing clothing that illustrates this difference in usage between jean and denim. Mechanics and painters wore overalls made of blue denim; working men in general (including those not engaged in manual labor) wore more tailored trousers made of jean. Denim, then, seems to have been reserved for work clothes, when both durability and comfort were needed. Jean was a workwear fabric in general, without the added benefits of denim as I just mentioned. In Staple Cotton Fabrics by John Hoye, published in 1942, jean is listed as a cotton serge with warp and woof of the same color, used for overalls, work and sport shirts, doctors and nurses uniforms and as linings for boots and shoes. Of denim, Hoye says, â€Å"The most important fabric of the work-clothing group is denim. Denims are strong and serviceable; they are particularly strong in the warp direction, where the fabric is subjected to greater wear than the filling.† Twenty years after this was written, the magazine American Fabrics ran an article which stated, â€Å"If we were to use a human term to describe a textile we might say that denim is an honest fabric – substantial, forthright, and unpretentious.† So how did this utilitarian and unpretentious fabric become the stuff of legends that it is today? And how did pants made out of denim come to be called jeans, when they were not made out of the fabric called jean? One very important reason can be found in the life and work of a Bavarian-born businessman who made his way to Gold Rush San Francisco more than 150 years ago. Levi’s ® jeans, of course, are named for the founder of the company that makes them. A lot of people over the years have thought that Levi Strauss & Co. was started by a Mr. Levi and a Mr. Strauss; or even by the French philosopher/anthropologist Claude Levi-Strauss. The truth is, the company was founded by a man born as â€Å"Loeb† Strauss in Bavaria in 1829. He, his mother and two sisters left Germany in 1847 and sailed to New York, where Loeb’s half-brothers were in business selling wholesale dry goods (bolts of cloth, linens, clothing, etc.). For a few years, young Loeb Strauss worked for his brothers, and in 1853 obtained his American citizenship. In that same year, he decided to make a new start and undertake the hazardous journey to San Francisco, a city enjoying the benefits of the recent Gold Rush. At age 23, Loeb either decided to go into the dry goods business for himself (perhaps thinking that the easiest way to make money during a Gold Rush was to sell supplies to miners), or he was sent there by his brothers, in order to open the West Coast branch of the family business. No matter what the reason, San Francisco was the kind of city where people went to reinvent themselves and their lives, and this proved to be true for Loeb, who changed his name to â€Å"Levi† sometime around 1850, – for which we should be grateful, or else today we would all be wearing â€Å"Loeb’s Jeans.† We don’t know how young Levi Strauss got his business off the ground; what his thinking was; if he travelled into the gold country in search of customers, because LS&CO. lost virtually all of its records, inventory, and photographs in the great San Francisco earthquake and fire of 1906. This has led to many problems for company officers, researchers, and certainly thos e interested in LS&CO.’s history. Chief of these is digging up the true story of the invention of blue jeans, and separating popular myth from historical reality. For decades, the story ran like this: Levi Strauss arrived in San Francisco, and noticed that miners needed strong, sturdy pants. So he took some brown canvas from the stock of dry goods supplies he brought with him from New York, and had a tailor make a pair of pants. Later, he dyed the fabric blue, then switched to denim, which he imported from Nimes. He got the idea of adding metal rivets to the pants from a tailor in Reno, Nevada, and patented this process in 1873. Luckily, the company obtained copies of the patent papers for the riveting process a number of years ago so we know that Jacob Davis, the Nevada tailor, did come up with this idea and worked with Levi Strauss to manufacture riveted clothing. However, the brown canvas pants story is really just an attractive myth. This story likely arose because evidence had been found of some brown pants made of a heavy material which the company sold in the 19th century. However, historical research done at institutions in the San Francisco area provides us with the truth within the myth. Levi Strauss was a wholesale dry goods merchant beginning with his arrival in San Francisco in 1853. He sold the common dry goods products, including clothing whose manufacturers are unfortunately unknown to us. Levi worked hard, and acquired a reputation for quality products over the next two decades. In 1872 he got a letter from tailor Jacob Davis, who had been making riveted clothing for the miners in the Reno area and who purchased cloth from Levi Strauss & Co. He needed a business partner to help him get a patent and begin to manufacture this new type of work clothing. Well, Levi knew a good business opportunity when he saw one, and in 1873 LS&CO. and Davis received a patent for an â€Å"Improvement in Fastening Pocket-Openings.† As soon as the two men got their manufacturing facility under way, they began to make copper riveted â€Å"waist overalls† (which is the old name for jeans) out of a brown cotton duck, and a blue denim. It’s likely that a pair of these duck pants (which survived the 1906 fire) confused early historians of the company, as duck looks and feels like canvas. The denim, however, was true blue. Of course, Levi did not dye any brown fabric blue, as the myth has proclaimed, nor did he purchase it from Nimes. Knowing that the riveted pants were going to be perfect for workwear, it’s likely he decided to make them out of denim rather than jean for the reasons mentioned earlier: denim was what you used when you needed a very sturdy fabric for clothing to be worn by men doing manual labor. The denim for the first waist overalls came from the Amoskeag Manufacturing Company in Manchester, New Hampshire, on the East Coast of the United States. This area, known as New England, was the site of the first American textile mills, and by 1873 their fabrics were wellknown and well-made. Amoskeag was incorporated in 1831 and their denim production dated to the mid-1860s (this being the time of the American Civil War, the company also manufactured guns for a few years). In 1914 an article about the association between LS&CO. and Amoskeag appeared in the mill’s own newspaper. It read in part, â€Å"In spite of the many cheaper grades offered in competition, the sale of the Amoskeag denim garment has kept up due in part to the superior denim used in its construction and in part to superior workmanship such as sewing with linen thread, etc. Doubtless the Amoskeag denim has contributed in no small degree to the success of Levi Strauss & Co. and, in return, that concern has contributed in an equal degree to the success of Amoskeag denims, advertising as it does, their superiority over all other denims.† At Levi Strauss & Co., the duck and denim waist overalls were proving to be the success that Jacob Davis had predicted. Levi Strauss was now the head of both a dry goods wholesaling and garment manufacturing business. In addition to the waist overalls, the company made jackets and other outer wear out of denim and duck; they also branched out into shirts of plain or printed muslin. Levi Strauss died in 1902, at the age of 73. He left his thriving business to his four nephews Jacob, Louis, Abraham and Sigmund Stern – who helped rebuild the company after the disaster of 1906. The earliest surviving catalog in the Archives shows a wonderful variety of denim products for sale. Within a few years, it became obvious to the Stern brothers that they needed a new source of denim. Near the end of the 19th century Amoskeag and other New England mills had begun to experience a slow decline, due to competition from mills in the southern states, higher labor and transportation costs, outdated buildings and equipment and high taxes. The demand for waist overalls was so great that LS&CO. needed a more reliable method of obtaining the fabric they needed. Interestingly, by around 1911 the company had stopped making garments out of cotton duck. It’s possible that this was due to customer preference: once someone had worn a pair of denim pants, experiencing its strength and comfort – and how the denim became more comfortable with every washing – he never wanted to wear duck again; because with cotton duck, you always feel like you’re wearing a tent. By 1915 the company was buying the majority of its denim from Cone Mills, in North Carolina (by 1922 all the denim came from Cone). Founded in 1891, it was the center of denim production in America by the turn of the century. Cone developed the denim which brought Levi’s  ® jeans their greatest fame during the following decades. By the 1920s, Levi’s ® waist overalls were the leading product in men’s work pants in the Western states. Enter the 1930s – when Western movies and the West in general captured the American imagination. Authentic cowboys wearing Levi’s ® jeans were elevated to mythic status, and Western clothing became synonymous with a life of independence and rugged individualism. Denim was now associated less often with laborers in general, and more as the fabric of the authentic American as symbolized by John Wayne, Gary Cooper and others. LS&CO. advertising did its part to fuel this craze, using the West’s historic preference for denim clothing to advertise Levi’s ® waist overalls. Easterners who wanted an authentic cowboy experience headed to the dude ranches of California, Arizona, Nevada and other states, where they purchased their first pair of Levi’s (the products were still only sold West of the Mississippi). They took these garments home to wow their friends and help spread the Western influence to the rest of the country, and even overseas. The 1940s, wartime. American G.I.s took their favorite pairs of denim pants overseas; guarding them against the inevitable theft of valuable items. Back in the States, production of waist overalls went down as the raw materials were needed for the war effort. When the war was over, massive changes in society signalled the end of one era and the beginning of another. Denim pants became less associated with workwear and more associated with the leisure activities of prosperous post-war America. Levi Strauss & Co. began selling its products nationally for the first time in the 1950s. Easterners and Midwesterners finally got the chance to wear real Levi’s ® jeans, as opposed to the products made by other manufacturers over the years. This led to many changes, within the company and on the products. Zippers was used in the classic waist overalls for the first time in 1954. This was in response to complaints from non-Westerners who didn’t like the button fly (the jeans the y were used to wearing had zippers). We received similar comments from men who had grown up using a button fly, saying rather rude things about finding a zipper where buttons should be. We did offer both products all over the country, but making changes to people’s favorite pants is always a risk. Some things took longer to change. One of them was the attitude that denim clothing was appropriate only for hard, physical labor. This was dramatically demonstrated to LS&CO. in 1951. Singer Bing Crosby was very fond of Levi’s ® jeans and was wearing his favorite pair while on a hunting trip to Canada with a friend in that year. The men tried to check into a Vancouver hotel, but because they were wearing denim, the desk clerk would not give them a room; apparently denim-clad visitors were not considered high-class enough for this hotel. Because the men were wearing Levi’s ® jeans, the clerk did not even bother to look past their clothing to see that he was turning away America’s most beloved si nger (luckily for Bing, he was finally recognized by the bellhop). LS&CO. heard about this, and created a denim tuxedo jacket for Bing, which we presented to him at a celebration in Elko, Nevada, where Bing was honorary mayor. Interestingly, the day set aside for this special presentation was called â€Å"Blue Serge Day† not â€Å"Levi’s Day† or â€Å"Blue Denim Day.† Was the word â€Å"denim† not sophisticated enough for the organizers of the event (who were not from LS&CO.)? I don’t think we’ll ever know the answer to this. The 1950s brought great acclaim to Levi’s ® jeans and denim pants in general, though not in the way most company executives would like. The portrayal of denim-clad â€Å"juvenile delinquents† or, as one newspaper put it, â€Å"motorcycle boys† in films and on television during this decade led many school administrators to ban the wearing of denim in the classroom, fearing that the mere presence of denim on a teenager’s body would cause him to rebel a gainst authority in all of its forms. Nearly everyone in America had strong opinions about what wearing blue jeans did to young people. For example: in 1957 we ran an advertisement in a number of newspapers all over the U.S. which showed a clean-cut young boy wearing Levi’s ® jeans. The ad contained the slogan, â€Å"Right For School.† This ad outraged many parents and adults in general. One woman in New Jersey wrote, â€Å"While I have to admit that this may be ‘right for school’ in San Francisco, in the west, or in some rural areas I can assure you that it is in bad taste and not right for School in the East and particularly New York†¦Of course, you may have different standards and perhaps your employees are permitted to wear Bermuda shorts or golf togs in your office while transacting Levi’s business!† Interesting, isn’t it, how this woman predicted the future trend toward casual clothing in the workplace? But even as some Americans tried to get denim out of the s chools, there were just as many who believed that jeans deserved a better reputation, and pointed to the many wholesome young people who wore jeans and never got into trouble. But no matter what anyone thought or did, nothing could stop the ever-increasing demand for Levi’s ® jeans. As one 1958 newspaper article reported, â€Å"†¦about 90% of American youths wear jeans everywhere except ‘in bed and in church’ and that this is true in most sections of the country.† Events in this decade also led the company to change the name of its most popular product. Until the 1950s we referred to the famous copper riveted pants as â€Å"overalls;† when you went into a small clothing store and asked for a pair of overalls, you were given a pair of Levi’s ®. However, after World War II our customer base changed dramatically, as referred to earlier: from working adult men, to leisure-loving teenage boys and their older college-age brothers. These guys called the product â€Å"jeans† – and by 1960 LS&CO. decided that it was time to adopt the name, since these new, young consumers had adopted our products. Now how did the word â€Å"jeans† come to mean pants made out of denim? There are two schools of thought on this one. The word might be a derivation of â€Å"Genoese,† meaning the type of pants worn by sailors from Genoa, Italy. There is another explanation: jean and denim fabrics were both used for workwear for many decades, and â€Å"jeans pants† was a common term for an article of clothing made from jean fabric; Levi Strauss himself imported â€Å"jeans pants† from the Eastern part of the United States to sell in California. When the popularity of jean gave way to the even greater popularity of denim for workwear, the word â€Å"jeans† seemed to get stuck with the denim version of these pants. Certainly the word jeans has been used to describe any type of pant made out of denim, and not just the riveted, indestructible, working-man’s pants originated by Levi Strauss & Co. in 1873. We even called some lightweight denim Western Wear pants in the 1940s â€Å"jeans.† But until America’s youth decided what j eans meant to them, we stuck with the classic moniker â€Å"overalls.† From the 1950s to the present, denim and jeans have been associated with youth, with new ideas, with rebellion, with individuality. College-age men and women entered American colleges in the 1960s and, wearing their favorite pants (jeans, of course), they began to protest against the social ills plaguing the United States. Denim acquired a bad reputation yet again, and for the same reasons as it had a decade earlier: those who protest, those who rebel, those who question authority, traditional institutions and customs, wear denim. Beginning in the late 1950s, Levi Strauss & Co. began to look at opportunities for expansion outside of the United States. During and after World War II, people in Japan, England and Germany saw Levi’s ® jeans for the first time, as they were worn by U.S. soldiers during their off-duty hours. There are letters in the company Archives from people who traded leather jackets and other clothing items to American G.I.s for their Levi’s ® jeans, and wrote to the company asking how they could get another pair. Word began to spread via individual customers, and American magazi nes which made their way overseas. Letters came to us from places as diverse as Thailand, England and Pitcairn Island in the South Pacific, written by people begging us to send them a pair of the famous jeans. British teenagers would swarm the docks when American Merchant Marine ships came into port, and buy the Levi’s ® jeans off the men before they even had time to set foot on dry land. By the late 1960s, the trickle of jeans into Europe and Asia had become a flood. Denim was poised to re-enter the continent which had given it birth, and it would be adopted with an enthusiasm shown to few other American products. Indeed, despite its European origins, denim was considered the quintessential American fabric, beginning even in the mid-1960s, when jeans were still a new commodity in Europe. We entered the Japanese market a few years later. One writer wrote prophetically in 1964: â€Å"Throughout the industrialized world denim has become a symbol of the young, active, informal, American way of life. It is equal ly symbolic of America’s achievements in mass production, for denim of uniform quality and superior performance is turned out by the mile in some of America’s biggest and most modern mills. Moreover, what was once a fabric only for work clothes, has now also become an important fabric for play clothes, for sportswear of all types.† By the 1970s, these â€Å"play clothes† tended toward the flared and bell bottom silhouette. At the same time, new fabrics were used for products that had traditionally been made out of denim. The product line of Levi Strauss & Co. was no exception. â€Å"Blue Levi’s ®Ã¢â‚¬  were still a staple of the company’s collection, but a glimpse at sales catalogs will reveal that customers also wanted plaid, polyester, no-wrinkle flares with matching vests. What looked almost like the end of simple, cotton denim as the fabric of everyday wear, was merely a pause in denim’s continued ascension to global dominion. A closer look will show that denim never really disappeared. Even in the 1970s, when it seemed that denim was being pushed aside in favor of these other fabrics, writers, manufacturers, and marketing executives worked hard to keep denim in the public eye. A writer in the Fall 1970 issue of American Fabrics said, â€Å"Indigo Blue Denim†¦has become a phenomenon without parallel in our times. To the youth of this country, and many other countries in this shrinking world, Indigo Blue Denim does not stand for utility. It’s the world’s top fashion fabric for pants.† By the mid to late 1970s the craze for doubleknits and other like fabrics began to slow. At the same time, marketing reports in various trade magazines showed an upward surge in the popularity of denim, as seen in the number of denim-clad models in print and television advertising. Those who followed clothing trends into the late 1970s were quoted in the trade papers with comments such as, â€Å"Jeans are more than a make. They are an established attitude about clothes and lifestyle.† This attitude could be seen very clearly in the â€Å"decorated denim† craze which saw beaded, embroidered, painted and sequined jeans appearing on streets from California to New York and across the ocean. Personalizing one’s jeans was such a huge trend in the United States that Levi Strauss & Co. sponsored a â€Å"Denim Art Contest† in 1973, inviting customers to send us slides of their decorated denim. The company received 2,000 entries from 49 of the United States, as well as Canada and the Bahamas. Judges included photographer Imogen Cunningham, designer Rudi Gernreich, the art critic for the San Francisco Chronicle newspaper, and the Curator for San Francisco’s De Young Museum. The winning garments were sent on an 18month tour of American museums, and some of them were purchased by LS&CO. for the company Archives. In the Introduction to the catalog published to accompany the museum tour, contest coordinators wrote that Levi’s ® jeans had become â€Å"a canvas for personal expression.† Personal expression found another medium in the 1980s with the â€Å"designer jean† craze of that decade. It seems you can’t keep a good fabric down, no matter what form it takes. We all remember the ways in which denim was molded onto our bodies and the way that jeans were now worn almost anywhere, including places where they would have been completed banned in previous years (such as upscale restaurants). A writer for American Fabrics predicted this trend all the way back in 1969, when he wrote, â€Å"What has happened to denim in the last decade is really a capsule of what happened to America. It has climbed the ladder of taste.† Today, LS&CO. employees wear Levi’s ® jeans to work. Looking back, we see that the very first people to wear Levi’s ® jeans worked with pick and shovel, and though our tools are computer keyboard, PDA and cell phone, we have both been moved to wear the same thing each and every work day: denim jeans. Born in Europe, denim’s function and adaptable form found a perfect home in untamed America with the invention of jeans; then, as now, denim makes our lives easier by making us comfortable; and gives us a little bit of history every time we put it on.

Gendering World Politics Essay

Gender analysis of international relations can no longer be considered new. Both in history and political science, scholars of women and gender and foreign relations have carved out what is now robust subfields. In Gender in World Politics, Tickner’s first chapter explores the encounter between feminism and international relations sub-field of political science. She first establishes the debates within each. Feminism has been the subject of a debate between liberal feminism and its rivals, while IR has seen three: science realism versus idealism, realism vs. social. It is in the context of this policy, â€Å"third debate† means the meeting Tickner feminism and infrared. More specifically, feminism is expanding IR agenda on several fronts, including normative theory, historical sociology, critical theory and postmodernism. In this context, Tickner investigates â€Å"Gender Dimensions of War and Peace and Security† in Chapter Two. In the 1990s, feminists began to question â€Å"realistic† outlook on security, most of which have had a top-down, state-centered, the structural approach. Feminists , however, mostly come from the bottom up, starting at the micro level. For example, feminists attacked the premise that wars have been fought to protect women and children, in fact, in his opinion, to the extent that wars tend to generate massive refugee crisis, violations and rampant prostitution, are disproportionately women wild. In Chapter Three, Tickner moves on to the global economy. Here, feminists have joined the debate on globalization, especially questioned the boosterism often seen in the industrialized West. For example, they use gender analysis to reveal the unpleasant realities of home-based labor in the developing world. What multinational corporate managers would call â€Å"flexibility† and â€Å"cost containment,† the overwhelmingly-female workers would see as lower-paying, less-sta ble, and less-regulated labor. Gender perspectives on democratization, state and world order are the focus of chapter four. In contrast to conventional IR, ignorant of democratization, and more recently â€Å"democratic† peace theories, feminism IR-examining the micro level, where democratic transitions can exclude women or even leave them materially worse. Tickner then looks at women and international organizations (both the United Nations and non-governmental organizations) and norms (such as human rights). In the fifth and final chapter, Tickner suggests â€Å"Some Pathways for IR Feminist Futures.† Clearing these routes involves â€Å"knowledge traditions† that, for example, challenge prevailing gender laden dichotomies such as rational / emotional, public / private and global / local. It also includes new methodologies for IR, such as ethnography and discourse analysis. In the end, Tickner IR urges feminists to remain connected to the broader discipline even when they question their basic assumptions. Tickner synthesizes a wide range of recent literature and thus provides us with a solid understanding of the subject. His is not the only introduction to feminist IR but is a very good. Tickner is careful not to claim too much for feminist IR or fire other approaches. It also takes little for granted, holding such basic terms as â€Å"globalization† and even â€Å"gender† to scrutiny. And finally, this is a nuanced work. Tickner presents fairly represents and disagreements among feminists as well as the geographic and methodological. Similarly, captures the dilemmas facing IR feminists. For example, feminists must work within existing state structures or face them from the outside? If based on the state of progress or in the market. If the book has a weakness, it is one of style. . The writing, moreover, is better and more accessible than in many other political science texts. However, I often find difficult to tackle prose. In part, this is a matter of style, writing Tickner most lack color and verve, interesting anecdote or a vivid illustration. And partly it’s a matter of using the political scientist. â€Å"This language is understood by those inside†, as she says Tickner in another context, â€Å"but can seem quite bewildering, and sometimes even alienating to those outside, making communication very difficult transdisciplinary. Again, the language is typical of the field and could be much worse, but the repeated occurrence of terms such as â€Å"epistemological†, â€Å"postpositivist†, â€Å"problematize† and â€Å"privilege†, as verb , tends to swell the sentences and make the book seem longer than it is. In the end, however, a minor weakness, and definitely should not be allowed to deter non-specialists. In addition to the contribution of the book itself feminist IR, this is one of its great virtues brings relevant trends in political science historians who study women and gender and foreign relations. For many historians have discovered that, in the words of Cynthia Enloe fine, â€Å"the personal is international â€Å". This discovery is facilitated and enriched as Tickner helps us to cross the disciplinary divide. J. Ann Tickner, Gender in International Relations: Feminist Perspectives on Achieving Global Security (New York: Columbia University Press, 1992). Cynthia Enloe, Bananas Beaches and Bases: Making Feminist Sense of International Politics (Berkeley and London: University of California Press, 1990) Jan Jindy Pettman, Worlding Women: A Feminist International Politics (London and New York: Routledge, 1996)

Friday, August 30, 2019

Cold War: Cuba and Latin America Essay

Cold War: Cuba and Latin America There were several motivations for United States policy in Latin America during the 1950’s and the 1960’s. Some of these motivations included the applying of the policy of containment in Latin American to stop the spread of communism. Another motivation was to stop the growing alliance between Cuba and the Soviet Union. All of these motivations were set in place to avoid the development of a second Cuba in Latin America. It was urgent for the United States to act since now there was Soviet Union presence in Latin America offering to be an ally. The United States had numerous justifications for the polices that it followed during it’s presence in Latin America. One of them being President John F. Kennedy’s Alliance for Progress. The United States offered Latin America countries that were developing economically aid; this was a method of applying the policy of containment. The United States need to stop communism motivated them to pass the Alliance for Progr ess. The United States justified the policy by arguing that they needed to have a policy in Latin America that went beyond the Roosevelt Corollary. After 1959, the United State was still devoted to ridding Fidel Castro’s presence from Cuba. The United States policy makers saw the alliance between Cuba and Soviet Union as dangerous thing, particularly after the critical Cuban Missile Crisis. In the Dominican Republic, the Johnson Administration justified the assassination of Rafael Trujillo since his dictatorship had become a liability to the United States. Trujillo was at one point a United States ally because he was willing to protect its interests but he was cruel to his own people and the United States feared he would spark a revolution in the Dominican Republic, much like the one that had brought Fidel Castro to power. There were many things that the United States ignored as it followed the polices that they had enacted. One, being the lack of evidence that there was a relationship between Castro and the Soviet Union before 1959. Another being that the Alliance for Progress was modeled on the Marshall Plan for Western Europe but Latin America was not Western Europe (92). There was also the contradiction between the Alliance for Progress, that it was nice than the method that it followed in Latin America during the 1960’s.

Thursday, August 29, 2019

To what extent do you agree with fischers thesis about the origins of Essay

To what extent do you agree with fischers thesis about the origins of world war 1 - Essay Example While the arguments which connect the First World War to the second are quite plausible and accurate, it seems difficult to ignore other situations which were developing in Europe as a prelude to The Great War. As per the ideas given by Fischer (1967), he suggests that the German elite as well as the Kaiser of Germany had expansionist ideas which could only be satisfied with war. Ever since the social democrats had started showing their muscle in Germany, the elite of the country knew that war would be required to quell their domestic issues as well as further their agenda of expansion (Hart, 1972). Essentially, the thesis presented by Fischer (1967) places the blame for the war on the German rulers who used the assassination of the Archduke as a framing device and a catalyst for making the declaration of war jus ad bellum. There is certainly evidence to support this since documentary evidence which are presented by Fischer shows that some people in power were calling for an expansionist approach and were looking for German domination over its European neighbours. Therefore, instead of foreign influences and political movements of alliances across the continent resulting in the inevitable situation of war, the war was created by Germany and therefore the blame for the First World War much like the Second World War goes to the Germans. Fischer (1967) points out connections which link Germany under Kaiser Wilhelm in the First World War with Germany under the regime of Hitler. The primary connection being the business alliances which benefited from the war in many different ways including the industrial manufacture of weapons of war as well as the economic activity required for keeping up the war effort. The argument presented by Fischer (1967) is an extreme end of the spectrum since it suggests that Germany willed the war upon Europe while the rest of Europe was unwilling to go to war but was dragged into it due to the various treaties that

Wednesday, August 28, 2019

Evidence base practice Assignment Example | Topics and Well Written Essays - 250 words - 1

Evidence base practice - Assignment Example RAM was of was advanced by one Callista Roy, back in 1976 (Clarke, Barone, Hanna and Senesac 2012). Roy’s major aim while developing the model was to promote adaptation in the nursing practice. The model’s development was influenced by various factors like; education, clinical experience, family, religious background, and education (Weiss, Hastings, Holly and Craig 2012). It seeks to address the following issues; According to Roy, adaptation happens whenever individuals respond positively to environmental changes. The model comprises four major components of individual, Nursing, Health and Environment (Weiss et al. 2012). The model notes that an individual is a bio-psycho-social being that constantly interacts with an ever changing surrounding. It considers people as individuals or in groups like families, organizations and the society as a whole (Clarke et al. 2012). It suggests that health is both a status and a procedure of being complete. Health and sickness are considered unavoidable areas of an individual’s life. RAM remains the best fit for the nursing practice because it gives practical suggestions concerning the nursing practice and process. It supposes that for individuals to respond well to changes in the surrounding, they have to adapt. Such adaptation depends on the stimulus the person is exposed to and his/ her extent of adaptation (Smith 2013). The individual also has four adaptation means, namely; physiologic necessities, self-notion, role purpose as well as interdependence. In conclusion, RAM is still the best fit for the nursing practice because it gives practical suggestions concerning the nursing practice and process. It suggests that all through the nursing process, every nurse, and all healthcare professionals should make adaptations to the nursing care plan. All this is done on the basis of the patient’s health

Tuesday, August 27, 2019

Presenting song as poem Essay Example | Topics and Well Written Essays - 500 words

Presenting song as poem - Essay Example This kind of music usually uses a simple vocabulary, and known words ."Dear mama" opens with a statement that shows there will be not so many sophisticated words: "You are appreciated". So, the message and the theme of the poem are stated very clear and directly. In this case this sentence , according to the style of music is equivalent to all the literary devices used in other poems or songs. Even this thing happens it does not mean this song has less suggestive meaning. The word choice is much more related to the events that influenced the speaker's attitude or feelings about his mother: the problems from school, with the police, Thanksgiving Day. The speaker uses the blacks' dialect or the street language for showing an affiliation to a group. Here his mother is shown as an icon, "black queen", "sweet lady", but she also has a terrestrial side, understanding and helping him. Being a straight song the figurative language is not so much used , but it can still be found some literary devices. At the beginning of the text the speaker makes a simile between his family and other ones: "Over tha years we wuz poorer than tha other little kids", his condition being much poorer.

Monday, August 26, 2019

A Look at a Baters Food Group's Distribution Strategy in Meeting Essay

A Look at a Baters Food Group's Distribution Strategy in Meeting Delivery Performance - Essay Example Aside from discussing the significance of zero-inventory-ordering policies, staggering delivery, and just-in-time (JIT) concept in the establishment of lean production and distribution system, this report identified and discussed several factors that can trigger operational bottle-neck within a food manufacturing company. Furthermore, this report tackled the importance of using e-commerce in expanding the existing distribution system of Baxters. Table of Contents Abstract †¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦ 2 Table of Contents †¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦. 3 I. Introduction †¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚ ¬ ¦. 4 II. Common Factors that Directly and Indirectly Causes Distribution and Delivery Performance Problems on E-Commerce . 5 III. Importance of Establishing Lean Production and Distribution System on E-Commerce .............................................. 7 IV. Significance of Zero-Inventory-Ordering Policies, Staggering Delivery, and Just-In-Time (JIT) Concept in the Establishment of Lean Production and Distribution System ........... 10 V. Baxters Food Group’s Distribution Strategy in Meeting Delivery Performance †¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦.. 12 VI. ... 22 Appendix II – Significance of E-commerce on Baxters’ Distribution Strategy †¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦Ã¢â‚¬ ¦. 23 Introduction Formerly known as W.A. Baxters & Sons Limited, Baxters Food Group Limited was established back in 1868 as a local food manufacturing company in Scotland that specializes in the manufacturing of microwaveable gourmet soup bowls, chilli bowls, pickles and other food preservatives such as jams, marmalades, and jellies, chutneys and sauces (Baxters 2012a; Bloomberg Businessweek 2012). Today, the company manufactures its food products throughout Australia, Canada, and Poland (Baxters 2012b). Despite the global economic uncertainties during the past couple of years, the use of e-commerce enabled Baxters Food Group able to maintain the efficiency of its distribution and delivery performance. Even though the company’s monthly sales were down by 2% from ?125.8 million as com pared to ?129 million in 2010, Baxters Food Group reported approximately 6% increase in its pre-tax profits as of 2011 (McCulloch 2012). Delivery performance of Baxters is highly dependent on its ability to establish a lean production system. Since Baxters Food Group is a local food manufacturing company in Scotland, it is necessary to identify and discuss all factors that could create flaws on its production and distribution line. In relation to the distribution strategies used by Baxters Food Group, this report will focus on analyzing the factors that made the company able to maintain its efficiency despite the volatility in the demand for canned food products. Using the principles, theories, frameworks, and techniques of modern manufacturing strategies, this report will discuss how the Baxters was able to maintain its daily operational costs low

Sunday, August 25, 2019

Education in China and America Essay Example | Topics and Well Written Essays - 1250 words

Education in China and America - Essay Example Literacy can be achieved through many ways which some include cultural, visual, internet, and information. Although many people in the society lack literacy skills, literacy remains an elusive target to all people. This paper discusses in depth about the difference between China education and American education. Although China has a high population, it has managed to excel in its basic education. It is depicted that China attaches a great significance to education. Education remains the first priority in China. Chinese high schools differ immensely with American high schools primarily in structure. An American school day tends to be shorter compared to a Chinese school day. Scholarly research denotes that American students spend only seven hours in school during the weekdays whereas as a Chinese student, I used to spend stay in school spending most of my time studying at times even until 10.00 at night. The class structure too varies between the two countries. In contrast to the Chin ese system where students remain in the same room all day, the teachers rotating in and out, the American students have each class in a different classroom with different classmates unlike Chinese students who have a set of the same classmates all day. This becomes advantageous to the students since the students are able to form a strong social bond between each other. I could have the same classmates even for the whole grade division. This made us live as a family like brothers and sisters. The two countries portray a different system of grade division. In essence, America’s most common system starts with kindergarten then proceeds to fifth grade as elementary school, sixth through eighth grade as junior high school, and ultimately ninth through twelfth grade as high school. In contrast to the American system, Chinese system begins with kindergarten through the sixth grade called the elementary school and finally the seventh grade through the twelfth grade, which is called t he middle school. The two countries differ in the way information passes from the teachers to the students. America always prefers a high personal expression. Most of the classes entirely revolve around discussion materials where the teacher expects the students to engage in dialogues. American education encourages classroom participation that contributes greatly to the performance of the students. In China, teachers do not put much emphasis on class discussions. As a Chinese student, observed the quietness in the room as the teacher came and lectured as we all listened. This reduces the teacher-student relationship since it was very rare to find a student engaging in a talk with the teacher. Some students even went to an extent of fearing the teacher. The students maintain respect amongst themselves and their teachers. The classroom is quiet as compared to the American classroom that is noisy and boisterous. Chinese classrooms emphasize on a more formal atmosphere compared to the A merican classrooms. American education always lays its basis on technology. It is depicted that in America, the progress in the field of technology and knowledge occupy’s about three-quarter of their productivity output. Unlike American education

Saturday, August 24, 2019

Diversity, Equity, and Standards Assignment Example | Topics and Well Written Essays - 1000 words

Diversity, Equity, and Standards - Assignment Example Some of those black Americans imbued with leadership skills struggled hard to find their place under the American system (Gilbrich, 1999). Booker T. Washington became the first African-American to attend higher education in America but endured hardships to complete his education (Gilbrich, 1999). W. E. B. DuBois was the first to gain a doctorate degree and organized the National Association for the Advancement of Colored People. It was Mary McCleod Bethune who started advocated that native American women should avail education for employment and opportunities (Gilbrich, 1999). In 1904, she founded the first African-American school for women in Florida which was later named as the Bethune-Cookman College (Gilbrich, 1999). The founder also later became a presidential consultant on education and racial matter under Pres. Franklin Roosevelt (Gilbrich, 1999). These humble beginning encouraged the natives to get out from their reservations to learn albeit poor effort, at that time, of the government to integrate the traditional and cultural beliefs of the tribes (Gilbrich, 1999). ... The question of identity became a post-education concern too. As American education evolved, authorities have reconsidered the significance of providing education for native American in recognition of societal diversity and appreciation of multi-ethnic culture. The government take serious responsibility on early childhood learning program, kindergarten classes, elementary and secondary education. Scholarship is provided to American natives specially those who lacked the financial capacity to enrol for higher education. Scholarship, grants and federals student financial aid were offered for college education (Department of Education, 2012). Tribal scholarship otherwise known as local scholarship could also be availed. Many Native American nowadays are able compete with the rest of the white community in business management, in governance, in leadership, and in music or arts industry (Department of Education, 2012). State education reforms are also undertaken under the administration o f President Barack Obama, the re-elected executive who hailed from the black community of Africa. But more reforms are yet to be done. Its however appreciated that although there remains some disparity in the state comparative result of the NAEP using White-Black-based data segregation (e.g. as cited in the average mathematics scale score sorted by race/ethnicity to report trends in Grade 12 of public school as of 2009) of statistics but the average difference of scale score is not quite reasonable to conclude that there is indeed racial discrimination in the access and enjoyment of education (National Center for Education Statistics, 2012; NAEP, 2012). The curriculum in elementary and high schools is subject to the budget and strategic

Friday, August 23, 2019

Marketing Planning Essay Example | Topics and Well Written Essays - 5000 words

Marketing Planning - Essay Example HSBC has a network of 6,600 offices in all these regions. It is a public limited company headquartered in England (HSBC, 2013a). This bank was originally established with the aim of facilitating trade between two most important trade partners in the world during the mid 19th century; China and the countries in Europe. The Chinese economy had been progressing for all these 150 years, due to which, since the 19th century the bank has occupied a good position to reap real benefits from the rapid GDP growth of the Chinese economy. However, the financial break down that occurred in the last decade, increased regulatory activities and led to higher scrutiny of the financial transactions taking place all over the world. In 2012, the HSBC bank faced strong regulation by the international organizations. Public scrutiny can potentially cost the bank many billion dollars. This paper presents a marketing plan for the bank framed according to the SOSTAC structure. Organizational structure Organiz ational structure refers to the relationships that are established in any firm. It refers to the hierarchical structure within the organization. Hierarchy is created with the organization by way of the rules set down for the functionalities and responsibilities at different levels of the organization and also the way in which the relationships are maintained among the employees at different levels. ... This structure describes the level of communication among the employees working at the same level within the organization (Vos and Schoemaker, 2005). High level of transparency in communication among the employees creates a number of benefits; easier flow of communication among employees, minimal bureaucracy and most importantly a pleasant working atmosphere. Increased transparency allows the employees to work with a relaxed mood. Furthermore, HSBC promotes a very informal work culture within the company. This culture allows employees to solve any kind of discrepancy to be solved informally. It also motivates employees and reduces job related stress. In turn it helps the organization to improve overall employee performance. Mission, vision and values HSBC bank has the vision of becoming â€Å"the leading international Bank† (HSBC, 2013b) in the world. The organization has certain missions that it follows in order to achieve their vision. The mission of the bank is to set up a network among all its customers by offering them top ranked service. The bank is growth oriented and all activities of the bank follow the growth objective of the organization. Precisely, the mission of the bank is â€Å"connecting customers to opportunities† (HSBC, 2013b). The bank wants to play a fulfilling role in helping people realize their dreams and achieve their ambition. By successfully following this mission the organization can enable its clients to make more profitable business. It helps economies to thrive and make prosperous economic activities. This is the purpose with which the bank operates. Organizational values The company holds high value regarding its own principles and also about the values of the clients that engage in business

Thursday, August 22, 2019

Case study 4 Assignment Example | Topics and Well Written Essays - 250 words

Case study 4 - Assignment Example cool, fun and objects that can be used for reading different materials, Apple was able to appeal to a segment of consumers who were ready to spend lots of money for such features. The three most important factors of success in Apple are investment on innovation, strategic market segmentation and emphasis on customer satisfaction. First, Apple spends 2.5 billion on developers to create a range of applications for is products. Second, Apple focuses on one section of the market by fulfilling their needs and requirements, and in turn they do not hesitate to spend on the products. Last, Apple practices excellent customer service, in addition to a variety of quality products and services, which satisfies needs of customers and earns their loyalty. Steve Jobs played a crucial role in steering the company to heights of success, and the company can respond by upholding his strategies. I think the company will still be successful because the growth and prosperity lies not only on leadership, but also on strategic ventures, which Steve Jobs left behind. Therefore, I would be willing to invest in Apple because the company still maintains her strategic decisions and market ventures. Microsoft was not able to achieve success like Apple because of failure to design attractive products, focus on the general market instead of creating differentiation with a profitable segment, lack of strategic leadership and decision making, poor innovation and customer satisfaction and finally, lack of product

The social and historical influence Essay Example for Free

The social and historical influence Essay Look at the significance of chapter 5 to the novel as a way to focus on the relevance and effect of the writers use of language to describe setting and characters and what it shows about the social and historical influence? This novel is about an extremely intelligent doctor called Victor Frankenstein who used his knowledge of science to find a way of keeping people alive. Mary Shelleys plot must have been influenced by the changes that were happening around her in Britain during the early stages of the industrial revolution. Scientists at this time were investigating the meaning of life. They were using corpses in experiments. She appears to have a great understanding of the scientific discoveries of that time. Behind the writing is a deeper meaning. Mary Shelley is trying to show us how an addiction to anything is not natural and very often dangerous to our well-being and to our social and mental health. Chapter 5 is a crucial moment because this is when the monster is brought to life. Mary Shelleys opening sentence of chapter 5 is Dr Frankenstein telling us It was a dreary night of November that I beheld the accomplishment of my toils. He goes on to say It was already one in the morning; the rain pattered dismally against the panes. The dismal setting contrasts with Dr Frankensteins expectation and anxiety he is feeling just moments before his creation is brought to life. This setting and events causes us to be alarmed and scared because we start imagining what it would be like to be in his position. He then tells us The monsters dull yellow eye opens and it begins to breathe. We would think that he is pleased that he has succeeded in what he set out to do but he straight away is horrified at his creation and what it has turned out to look like. He expresses his negative feelings many times, such as breathless horror and disgust filled my heart and the demonical corpse to which I had so miserably given life. This shows us that the social influence of looks was as great then as it still is in 2009. Dr Frankenstein is distraught about how his creation looks rather than thinking about the feat he has just achieved, I find it puzzling that he is shocked by its image only after the creature has been brought to life, even though he could have seen what it would look like when it was on the operating table. I believe that this shows Dr Frankenstein was so caught up by the science involved with making this being that he was blind to the obvious This is backed by him saying I had selected his features as beautiful this shows us he genuinely thought it was beautiful when it was being made. This chapter shows us that social influences are similar to that we experience today. Through out the novel Mary Shelly uses language to change the atmosphere. This is most present in chapter 5 in which the monster is born and Dr Frankensteins mood changes from a rather exited one to one of bitter disappointment and then again to cowardice before becoming rather animated at the arrival of Clerval. There is much emphasis on description. Dr Frankenstein describes in great detail the evening, the monster and his changing feelings towards his project. His language is often overdramatic and emotional. At length lassitude succeeded to tumult I had before endured, and I threw myself on the bed in my clothes, endeavouring to seek a few moments of forgetfulness, here he is using descriptive words that would not commonly be used today, this reminds us that this novel was written in the 1800s. Shelley writing style is very catching. I believe this is because she isnt writing as a watcher but as a character. This gives us the best view because we know the characters feeling as well as knowing what they dont, this can leave the reader frustrated, worried, scared, and hopeful such as at the end of chapter five when Clerval arrives and Victor has his nervous breakdown. This is evident when victor imagines what could happen. Clerval asks My dear Victor, what, for Gods sake, is the matter? Do not laugh in that manner. How ill are you! What is the cause to all this? To which Victor replies Do not ask me, He can tell. Oh save me! Save me! All this while Frankenstein had been fighting an invisible monster, had Clerval known like us about the monster many people could have been saved. In conclusion I think Mary Shelley produced a novel that was frighteningly believable. I believe that this book shows us that social influence is massive and Shelley believed that one-day man would hold the meaning to life not God. Show preview only The above preview is unformatted text This student written piece of work is one of many that can be found in our GCSE Mary Shelley section.

Wednesday, August 21, 2019

Evolution Of The Principle Of Comparative Advantage Economics Essay

Evolution Of The Principle Of Comparative Advantage Economics Essay From the early 19th century, new outlooks on trade theory have influenced how countries have engaged in production. One of the most significant developments in this area was that of comparative advantage. Comparative advantage refers to the ability of a country to produce one good at a lower opportunity cost than another. Comparative Advantage argues that all countries will gain from trade, even those that are relatively inefficient in the production of goods. All countries will gain, even those with an absolute disadvantage in the production of all goods, as opposed to with Absolute Advantage, which refers to the ability of a country to produce one good at a lower opportunity cost than another. In this essay, I intend to discuss how the theory of comparative advantage has come into being, from its inception in the early 1800s, through the neo classical period and into the modern era. This discussion will look at the variations on the theory proposed by some of the leading economists in the field of international trade, and how they viewed and expanded upon the original law of comparative advantage. In looking at how the law has developed over the past two centuries, my aim is to show the principles uses in describing how international trade is conducted to this day. In the latter sections of the essay, I will refer to empirical evidence that tests if comparative advantage predicts accurately patterns of international trade. Comparative Advantage Adam Smith illustrated an early understanding of the benefits that could be gained by focusing on the production of goods that the population was most efficient at producing: If a foreign country can supply us with a commodity cheaper than we ourselves can make it, better buy it of them with some part of the produce of our own industry, employed in a way in which we have some advantage (Smith,1776,295). This idea demonstrated Smiths understanding of the concept of absolute advantage, whereby gain is realised in exchange between two men who are superior in the production of one good. The principle of comparative advantage was first presented in the work of Robert Torrens in his 1815 Essay on the External Corn Trade, where Torrens discussed Absolute Advantage in substantial detail and explained how it was beneficial for a country to engage in trade for a commodity even if the host country could produce the same good at a lower actual cost than the country it was trading with. However, it is David Ricardo who is widely credited with the first complete formulation of the theory of comparative advantage in 1817. Ricardo recognised that absolute advantage was only a limited version of a more general theory. His early understanding of the theory of comparative advantage is displayed in the quote: Two men can both make shoes and hats, and one is superior to the other in both employments; but in making hats he can only exceed his competitor by one-fifth or 20 per cent; and in making shoes he can excel him by one-third or 33 per cent: will it not be in the interest of both that the superior man should employ himself exclusively in making shoes, and the inferior man in making hats? (Ricardo,1817, p136). The assumptions in his reasoning can be seen in Kemp Okawas review of the formulation of comparative advantage, where they set out a model in which both countries are initially autarkical, then subsequently open up to a free trade environment, that all countries have at their disposal the potential to produce all possible commodities, and that in a state each country involved is able to consume all of these commodities. (2006,468). John Aldrich was recorded as saying, Torrens, Ricardo and Mill all made contributions to the discovery of comparative advantage, not by a major multiple discovery but through a sequence of insights and arguments (Aldrich, 2004, 379). James Mill studied and subsequently ratified Ricardos view on the existence and viability of comparative advantage in 1821 when he said When two men have more than they need, it will be a great accommodation to both if they can perform an exchange of a part of the food of the one for a part of the cloth of the other, and so in other cases (1821,63). In his treatment of the principle, he provided one of the clearest explanations and examinations of the workings of comparative advantage, rectifying much of the ambiguity of Ricardos exposition. His work enhanced the status of the principle of comparative advantage in economic circles by illustrating its viability through the use of numerous numerical examples. John Stuart Mill, son of James Mill, studied and subsequently made refinements to the theorem introduced by his father. Through his work, comparative advantage gained more universal acceptance as an explanation of the benefits of trade in the mid 19th century. He was responsible for the rational reconstruction of Ricardo in which the labour cost coefficients were interpreted as the amounts used in each unit of a good produced rather than Ricardos labour cost of producing the amounts contained in a typical trading bundle'(Ruffin,2002,727-748). Some of Mills most prominent work in the field of comparative advantage can be seen in his 1844 Theory of international values which aided the economic community to come to a fuller understanding and appreciation of the centrality of comparative cost in trade theory (Gomes,2003). In 1930, Gottfried Haberler of the neo-classical school of economics provided a modern interpretation of the theory of comparative advantage which generalised and separated it from David Ricardos labour theory of value, helping to form the foundations of modern trade theory. Haberler believed that it was possible to reformulate the theory in such a way that its analytical value and all conclusions drawn from it are preserved, rendering it at the same time entirely independent of the labor theory of value (Bernhofen,2005,998). His work indicated that comparative advantage is about resource allocation, and adapted it into a more general principle that accommodated non-linear production frontiers. Kemp and Okawa state that Haberler indicated that the relative opportunity costs of producing determines both the direction of free international trade and the manner in which gains from this trade are shared by trading partners (2006,1). The next significant progression in the development of the theory was through the work of two Swedish economists Eli Heckscher and Bertil Ohlin. Their theory examined the reasons behind the differences in comparative costs. The Heckscher-Ohlin model introduced new ideas which differed from the classical approach. Factors of production are taken into account for the first time, of which the two primary ones were labour and land (Eicher, Mutti Turnovsky,2009,68). The theory explains how countries of similar technological levels can trade, how trade affects the distribution of wealth in the economy and how growth in an economy affects trade. Their model was based on two assumptions. Firstly; that countries would no longer differ in terms of technology, but rather by their endowment of factors of production. This meant that countries would be concerned with relative differences in labour and capital abundances compared to their trading partner. The second assumption was that goods differ by the factors of production they require. They explained that the more abundant a factor of production was, the greater the likelihood that it would be cheaper to produce their specialised goods and hence, the opportunity cost of producing goods which were reliant on this factor would be lower in other words, that the source of comparative advantage resided in the factor endowments of a country (Viner,1937). This implies that countries would have a comparative advantage in producing goods that their abundant factor of production. For example, countries with an abundant supply of labour would reap the greatest benefits by focusing their specialism on labour intensive products. The benefits of the H-O theory compared to the theory of comparative advantage were that: it offered; a better means of explaining observed trade patterns, the ability to develop implications about how trade affects wages and returns on capital, it shows the economic growth on trade and it offers a more thorough explanation of political groups on trade. A further development of H-O theory was the Stolper-Samuelson theorem which shows that the owners of scarce/abundant factors are disadvantaged/benefited when an economy opens up for trade and specializes in the production of the good that is intensive in its use of the abundant factor a discovery that was beneficial in the understanding of the politics behind free trade and protectionism. The theory states that during increase in the price of an abundant factor and the fall in the price of the scarce factor, and that the owners of the abundant factor will find their incomes rise the owners of the scarce resource will see their real incomes fall. Rogoff states that their paper was the first to demonstrate the Heckscher-Ohlin theorem in a two good, two country, two factor (labour and capital) model. The H-O theorem shows that with identical technologies at home and abroad, the country with the larger endowment of labour relative to capital should export the labour intensive good. This advancement of the theory aided the thinking about trade between countries with widely different capital-labour ratios. (Rogoff,2005,8). Chipman and Inoue state that for their theory, the following assumptions are made: 1. All trade takes place in a free trade environment, with no transport costs attached. 2. The factors of production, labour and capital, are freely mobile between industries within countries, while at the same time being immobile between countries. 3. The production functions neoclassical and constant over time. 4. The endowment of labour in each country is constant over the two periods (2001,2). Contemporary research by economists such as Helpman Krugman (1985) adapts traditional comparative advantage theory by relaxing some of the assumptions that underlie the contemporary specification of the principle, such as economy of scales and product differentiation. Nowadays, the comparative advantage theory can be further developed by including new aspects, such as specialization, technological differences and aspects of game theory (Tian, 2008). Comparative advantage may appear to be somewhat paradoxical, in the sense that it states that, under a certain set of conditions, a country should produce and export a good that its workforce is not particularly skilled at producing when compared directly to the workforce of another country. However, it holds true when it is explained that when two countries who each hold a comparative advantage in a particular good engage in trade with one another, trade between these nations raises both of their real incomes, on the condition that there is a relative gap between the costs of the same types of products in production by the countries engaging in trade. Ricardos model shows that, if a country wants to maximise gain, it must strive to fully employ all of its resources. It should then allocate its resources to each these resources to its comparative advantage industries, and subsequently, it should aim to operate in a free trade environment, which will benefit all trading partners invol ved. It can be seen how comparative advantage is still a useful and important concept in explaining international trade. Jones and Neary conferred their opinion on the ongoing validity of the theory: While the principle of comparative advantage may thus be defended as a basic explanation of trade patterns, it is not a primitive explanation, since it assumes rather than explains inter-country differences in autarkic relative prices (Reinert, Rajan Glass,2009,199). Revealed comparative advantage is an index devised by Bella Balassa used to calculate the relative advantage or disadvantage a country may have in a specific class or category of goods or services. This advantage can be assessed through analysing trade flows. The index attempts to uncover a revealed comparative advantage by assessing the countrys specialism in exports in relation to others. It is a highly useful means of assessing how useful Comparative advantage is in explaining contemporary trade patterns. A large number of empirical tests of comparative advantage have been undertaken to test the theory of comparative advantage. MacDougall tested the hypothesis that the export ratios of two countries to a third market were a function of labour productivity ratios of the two countries in question. The results were supportive of the Ricardian model, and his work demonstrated that trade between the United States and the United Kingdom in 1937 followed Ricardos prediction. CONCLUSION Throughout this essay, it can be seen how the ideas forged in the original theory of comparative advantage have eminently formed a large part of the basis for understanding how international trade is conducted today. Since its advent, attaining a comparative advantage has been heavily reliant on recognising and exploiting the natural resources and competencies that are present within a country. Even to this day, countries specialise their economies depending on the factors of production that enable them to produce most efficiently, all the while recognising that holding a comparative advantage is a cornerstone of effective trade practices. In the modern era and most likely in the coming years, comparative advantage is likely to continue to become an increasingly more man made factor, with the utilisation of new technologies resulting in the likelihood of significantly increasing production efficiency, and thus affecting the areas on which a country holds an absolute and comparative advantage. Although the original theory of comparative advantage may not subscribe to the current economic environment, it is still a relevant means of determining the most beneficial trading strategy for a countrys economy. Adaptations to the theory since its inception have facilitated the continued utilisation of the idea in the current climate. According to Gale, the changes that have taken place over time are a product of globalisation, for example, new trade barriers and changes in agricultural policy have caused a decrease in some countries manufacturing prowess and has resulted in a subsequent reduction in its comparative advantage (2002,27). The current trend of globalization means that the assumptions associated with comparative advantage are becoming increasingly more difficult to apply.  Ã‚  Despite this, it is still a relevant means of describing international trade patterns today and the ways in which a country can best exploit its natural endowment of resources. To reinforce this point, Paul Samuelson has stated that comparative advantage is the only law of economics which can stand comparison with the laws generated by hard sciences. Modern conditions may cloud our law but, suitably qualified, it still holds (Gray,2000,316). Through my research into the growth of comparative advantage from its inception, I believe that the concept still aptly demonstrates the fundamental importance of the effects, determinants and nature of international trade. Bibliography Aldrich J, Journal of the History of Economic Thought, Volume 26, Number 3, September 2004 (pg 396) (26, 3, 379-399) Bernhofen, Daniel M. (2005), Gottfried Haberlers 1930 Reformulation of Comparative Advantage in Retrospect,   Review of International Economics; Nov2005, Vol. 13 Issue 5, p997-1000, 4p Calhoun, Craig and Gerteis, Joseph (2007) Classical Sociological Theory, Blackwell Publishing Chipman, John S and Inoue, Tadashi (2001), Intertemporal Comparative Advantage, I *(pg. 2) http://www.econ.umn.edu/~jchipman/econ8402f05/INTERTMP.PDF Eicher, Theo S., Mutti John H. and Turnovsky Michelle H (2009), International Economics, Routledge; 1 edition, (pg. 68) Faulkner, David and Segal-Horn, Susan (2004), The economics of international comparative advantage in the modern world, European Business Journal; 2004 1st Quarter, Vol. 16 Issue 1, p20-31, 12p. Gale, Fred (2002) Chinas Food and Agriculture: Issues for the 21st Century / AIB-775, Economic Research Service/USDA (pg27) Gomes, Leonard (2003), The economics and ideology of free trade: a historical review, Edward Elgar Publishing Ltd Gray, H (2000) A Review of Maneschi, Andrea, Comparative Advantage in International Trade: A Historical Perspective, International Trade Journal; Fall2000, Vol. 14 Issue 3, p315-320, 6p Kemp, Murray C., and Okawa Masayuki (2006), The Torrens-Ricardo principal of Comparative Advantage: An Extension Review of International Economics, Vol. 14, No. 3, pp. 466-477, August 2006 Maneschi, Andrea (1998) Comparative advantage in international trade: a historical perspective (pg52) Mill, James (1821), Elements of Political Economy, London: Henry G. Bohn   chapter III, pg63 Reinert, Kenneth A., Rajan, Ramkishen S. and Glass, Amy Jocelyn (2009), The Princeton encyclopedia of the world economy, Vol 2, Princeton University Press Rogoff, Kenneth (2005), Paul Samuelsons Contributions to International Economics Harvard University, pg 8 http://www.economics.harvard.edu/files/faculty/51_Samuelson.pdf Ruffin, Roy J. (2002) History of Political Economy; Winter2002, Vol. 34 Issue 4, p727-748, 22p Smith Adam (1776) An Inquiry into the Nature and Causes of the Wealth of Nations, Hackett Publishing Company Inc. Book IV, Chapter III (IV.3.33) The evolution of the comparative advantage argument for free trade. http://www.econ.ku.dk/kgp/doc/Lectfrms/evolution%20of%20comparative%20advantage.pdf Tian, Yiqian (2008), A New Idea about Ricardos Comparative Advantage Theory on Condition of Multi-Commodity and Multi-Country International Journal of Business and Management Vol.3, No. 12, December 2008 Viner, Jacob (1937), Studies in the Theory of International Trade, New York: Harper and Brothers Publishers, Chapter VIII Introduction In the course of this essay, I intend to outline the development of the principle of the quantity theory of money, from its initial inception in the 16th century right up to the current outlook on the theory in the 21st century. Subsequently I hope to outline the theorys importance as a catalyst for the development of monetarism in the 20th century, and outline how monetarism has progressed since that point in time. The quantity theory of money provides a means of answering the question what gives money value? We know that intrinsically, a bank note is a valueless piece of paper and ink, and that its perceived value stems from the quantity of it in supply. Due to the value of money being variable, a change in money demand or supply will yield a change in the value of money and in the price level. The more money that is in circulation means that each individual bill becomes worth less. This will result in it taking more bills to purchase goods and services, and as a result, price level will increase accordingly. The quantity theory of money states that the value of money is based on the amount of money in the economy that the nominal money supply is a function of the equivalent changes in price levels as it relates to the demand for money necessary to meet the needs of current transactions. For example, in Ireland, according to the theory, when the central bank increases the money supply, the value of money falls and the price level increases. Main body The theory states that a one-time change in the stock of money has no lasting effect on real variables but will lead to a proportionate change in the money price of good. In other words, it declares that moneys value or purchasing power varies inversely with its quantity. To this day, there exists prevalent academic discussion as to who developed the theory. The first possible statement of the quantity theory of money originated in the work of Nicholaus Copernicus In 1526, when Copernicus wrote a study on the value of money, Monetae cudendae ratio, in which he noted the increase in prices following the import of gold and silver from the new world. He expressed the findings of his studies into the value of money, and in this work, he formulated a version of the quantity theory of money. Copernicus observed that the value of money would fall if it was issued to excessive quantities, to the point where it was almost valueless. Volckart notes that Money can lose its value through excessive abundance, if so much silver is coined as to heighten peoples demand for silver bullion. For in this way, the coinages estimation vanishes when it cannot buy as much silver as the money itself contains. The solution is to mint no more coinage until it recovers its par value (1997,433). Jean Bodin took a different stance in the middle of the sixteenth century. In 1568, he drew attention to the influx of gold and silver into Spain, and consequently the rest of Europe, from the Americas. He argued that the price level had risen along with the stock of bullion available for monetary purposes and was able to draw a conclusion about the link between these events. John Locke accepted this idea and stated the Quantity Theory of Money as a general rule, that if the supply of money increased, the prices of all goods will rise. If money supply fell and the prices of goods fell, than the prices of foreign goods would rise relative to domestic goods both of which will keep us poor (Locke, 1692). The first concise statement about the existence of a quantity theory was that made by David Hume in 1752. His theory stated that the general level of prices depended upon the quantity of money currently in circulation. Where coin is in greater plenty; as a greater quantity of it is required to represent the same quantity of goods; it can have no effect, either good or bad that great plenty of money is rather disadvantageous, by raising the price of every kind of labour. (Hume, 1752, Pg 15) He also outlined the relationship between supply of money and prices All augmentation (of gold and silver) has no other effect than to heighten the price of labour and commodities; and even this variation is little more than that of a name (Hume, 1752, 296-7). Alfred Marshalls version of the quantity theory was an attempt to give microeconomic underpinnings to the macroeconomic theory that prices and the quantity of money varied directly. He did this by elaborating a theory of household and firm behaviour and integrating it with the macroeconomic question with the macroeconomic question of the general level of prices to explain the demand for money. Marshall reasoned that households and firms would desire to hold in cash balances a fraction of their money income In the late nineteenth and early twentieth centuries, two versions of the theory competed. One advanced by the American economist Irving Fisher, treated the theory as a complete and self-contained explanation of price level. The other, propounded by the Swedish economist Knut Wicksell, saw it as part of a broader model in which the difference between market and natural rates of interest jointly determine bank money and price level changes. Fisher, in particular spent considerable effort in discussing the temporary effects during the period of transition separately from the permanent or ultimate effects (which) follow after a new equilibrium is established if, indeed, such a condition as equilibrium may be said ever to be established (Fisher,1911,p55-6). In this statement, he finds that the quantity theory will not hold true strictly during transition periods. His work was a forerunner in what would later become known as monetarism. He attempted to take the classical schools equation of exchange and convert it into a general theory of price and price level. The contrasts between the two approaches were striking. Fishers version was consistently quantity theoretic throughout and focused on the classical propositions of neutrality, money-to-price causality, and independence of money supply and demand. By contrast, Wicksells version contained certain elements seemingly at odds with the theory. These elements included a real shock explanation of monetary and price movements, the absence of currency in the hypothetical extreme case of a pure credit economy, and the identity between deposit supply and demand at all price levels in that same pure credit case rendering prices indeterminate. Wicksell tried to develop a theory of money that explained fluctuations in income as well as fluctuations in price levels. He argued that the quantity theory of money failed to explain why the monetary demand for goods exceeds or falls short of the supply of goods in given conditions. The quantify theory fell into disrepute in the 1930s, in part because it seemed at the time that the theory could not explain the Great Depression, and partly because of the publication in 1936 of Keyness theory. Although some economists continued to advocate the quantity theory, many economists became Keynesians and simply viewed the quantity theory as a historical curiosity. Only in the mid and late 1950s did the quantity theory once again emerge as a plausible rival to the Keynesian theory. There were several reasons for the revival. Contrary to the prediction of many Keynesians, upon the conclusion of World War II, the American economy did not revert to the depressed conditions of the 1930s, but instead underwent inflation. Secondly, one of the benefits of the Keynesian revolution had been its demonstration that by manipulating expenditures and taxes, governments can keep the economy close to full employment. In fact, it emerged that there were serious political as well as economic difficulties in actually changing government expenditures and tax rates in this ways, and that Keynesian theory in this area was less useful than it had been thought originally. However the resurgence of the quantity theory should not be attributed merely to impersonal historical events. It is also due to the fact that several influential economists advocated this theory. Don Patinkin of Hebrew University restated the quantity theory in a rigorous way that avoids many of the crudities that infested earlier expositions. Milton Friedman, of the University of Chicago was influential in providing a framework that allowed one to test empirically the proposition that changes in the quantity of money dominate changes in income. Moreover Friedman and Anna Schwartz of the National Bureau of Economic Research argued in a lengthy study that the experience of the Great Depression should be interpreted as confirming the prediction of the quantity theory rather than that of Keynesian theory. Subsequently they showed that in both the United States and Britain, longer run movements in nominal income were highly correlated with movements in the money stock. Despite the resurgence of the Quantity Theory in the 1970s and early 1980s it is still far from universally accepted by economists. Controversies about the theorys validity and applicability still exist, featuring similar questions and themes regarding the Quantity Theory of Money that have arisen since the 18th century. These include the definition of money, the relationship between correlation and causation, and the transmission mechanism. Controversy has continued because of the technical difficulty of sorting out the direction of causation running between money and prices, and because ideological concerns about the viability of market mechanisms are at stake. The first instance of Monetarism stems from the ideas of Irving Fisher. The ideas that produced the quantity theory of money go back to the time of David Hume, and arguably earlier. However, the equation of exchange and the transformation of the quantity theory of money into a tool for making quantitative analyses and predictions of the price level, inflation, and interest rates were due to the contributions of Irving Fisher. The theory provides a theoretical basis for monetarism, and there is empirical evidence to show that the quantity theory does operate. For example, as the Spanish brough gold back from the new world, the money supply increased in their native Spain. In line with the theory, prices rose because there was no corresponding increased in the transactions demand for money which is a function of an increase in output. This initial formulation of monetarism fell short on the question of understanding business cycle fluctuations in employment and output. Due to a flaws and a lack of sophistication of this first form of monetarism, some economists became disillusioned with monetarist analysis. One of these economists, John Maynard Keynes, stated that the quantity-theoretic analysis was of little use expanded on these initial contributions. Many economists agreed with Keyness evaluation of monetarism, most notably Milton Friedman. According to Friedman, there was a belief in the value provided by the quantity theory of money, the quantity theory of money provides the best way of understanding monetary behaviour (1971, 2-3), and that substantial changes in prices and nominal income are almost invariably the result of changes in the nominal supply of money (Friedman, 1968, 434). Following this, came the emergence of the Old Chicago Monetarism of Viner, Simons and Knight. This form of Monetarism emphasised the variability of velocity and its potential correlation with the rate of inflation. In economic policy they blamed monetary forces that caused deflation as the source of depression. According to Viner, in order to remedy economic depression, use of large scale stimulative monetary expansion, large government deficits or policies which encouraged deflation, should be balanced. The exponents of Old Chicago Monetarism did not believe that the velocity of money, in other words the rate at which money is exchanged from one transaction to another, was stable. They also did not believe that control of the money supply was straightforward or that the velocity of money was stable, because inflation lowered and deflation raised the opportunity cost of holding real balances. Classic monetarism emerged from Old Chicago Monetarism. It was described by Friedman in 1953, as well as in the works of Brunner (1968) and Brunner and Meltzer (1972). Classic Monetarism contained elements of institutional reform, analytical thinking and views on the political economy. J. Bradford De Long discusses how classic monetarism contained empirical demonstrations which showed that money demand functions could retain stability under the most extreme hyperinflationary conditions. It contained studies which analysed the limits imposed on stabilization policy by lags of policy instruments and also the belief that the natural rate of unemployment is close to the average rate of unemployment. (2000, 83-94). Political Monetarism argued not that velocity could be made stable if monetary shocks were avoided, but that velocity was in fact already stable. As a result, money stock emerged as a sufficient statistic for forecasting nominal demand. Political Monetarism argued that the central bank controlled shifts in the money supply. As a result, the view was taken that everything that went wrong in the macroeconomy was a direct result of the central bank failing to make the money supply grow at the appropriate rate. Political Monetarism concluded that any policy that does not affect the qu

Tuesday, August 20, 2019

Effect of Structural Pounding During Seismic Events

Effect of Structural Pounding During Seismic Events Abstract This project entitled aims at the investigation of the effect of structural pounding to the dynamic response of structures subject to strong ground motions. In many cases structural pounding during earthquake may result in considerable and incalculable damages. It usually need to be accounted for in the case of adjacent structures, bridges, base isolated buildings, industrial and port facilities, and in ground pipelines. The phenomenon of that impact force pounding has been noted by researchers and engineers over the past several decades. As we see through dull historical strokes and performance, in different investigations of past and recent earthquakes damage have illustrated several cases of pounding damage such as those that have occurred in the Imperial Valley (May 18, 1940), the Sequenay earthquake in Canada (1988), Kasai Maison (1991), the Cairo earthquake (1992), the Northridge earthquake (1994), California (1994), Kobe, Japan (1995) Turkey (1999), Taiwan (1999) and Bhuj, Ce ntral Western India (2001). Some of the most memorable seismic events were in the 1972 Managua earthquake, when the five-storey Grant Hotel suffered a complete collapse, also in the 1964 Alaska earthquake, the 14-storey Anchorage Westwood hotel pounded against its low rise ballroom and the most recently extent of pounding in Mexico City in 1985 confirmed this as a major problem. Those all evidences have continued to illustrate the annihilation of earthquakes, with devastation of engineered in both buildings and bridges structures. Amongst the feasible structural destructions, seismic produced pounding has been frequently distinguished in numerous earthquakes, as a result this phenomenon plays a key role to the structures. As engineers, we have a responsibility to prevent it or take the necessary steps to mitigate it for the future constructions by considering the properties that affect and led pounding to occur. In order to examine the effect of the various parameters associated wit h pounding forces on the dynamic response of a seismically excited structure, a number of simulations and parametric studies have been performed, using SAP2000. By more precise investigations that have been done from professional earthquake investigators and engineers pounding produces acceleration and shear at various story levels. Also, significantly depends on the gap size between superstructure segments, which we will examine later on in the project. The main aim of the project is to conduct a detailed investigation on pounding-involved response structure during a seismic event as well as observed the structural behaviour as the result of ground motion excitation by examine the properties that affect pounding and determine the solutions and the mitigations that we have to take into account before we construct a structure in order to avoid future disasters. INTRODUCTION 1.1 Seismic Pounding effect (Overview) Looking throughout the time, investigations and observations of the effects of historical earthquakes have demonstrated that many structures are susceptible to significant damage which may lead to collapse. Numerous devastating earthquakes have hit various seismically active regions. Some investigations that have been followed after those seismic events are distinguished fact providing that, an earthquake within the range of six is capable of creating and generating incalculable and irreversible damages, of both buildings and bridges. Those seismic losses have further consequences, most likely to present economical problem to the community hit. The main target of most seismic excitations are, the primary frequencies of rigid buildings between the ranges of low to medium height, resulting by this in significant accumulations of soil acceleration. Also, addition to this is the causing the presence of the inevitable enduring seismic loads in engineered structures, creating inflexible re sponses. In recent years it becomes more urgent need to minimize seismic damage not only to avoid structures failures but especially in crucial building facilities such as hospitals, telecommunications etc. as well as the protection of the critical equipment that is accommodated by those buildings. (a)barrier rail damage (Northridge earthquake 1994) (b)Connector collapse (Northridge earthquake 1994) In seismically active areas the phenomenon of pounding may need to be accounted for, in the case of closely spaced structures to avoid extensive damages and human losses. The phenomenon of that impact force-pounding has been noted by earthquake investigators over the past several decades when the presence of pounding occurred into an extent. Looking throughout the time, some historical performance of pounding has been denoted, different investigations of past and recent earthquakes damage have illustrated several cases of pounding damage such as those that have occurred in the Imperial Valley (May 18, 1940), California (1994) the Northridge earthquake, Kobe, Japan (1995) and etc. in both engineered structures, buildings and bridges. One of the most remarkable example of pounding-involved destruction resulted from interactions between the Olive View Hospital main building and one of its independently standing stairway towers during the San Fernando earthquake of 1971. The extent of po unding was recently observed in Mexico City in 1985, which then it follows the most recent one in Central Western India (2001). Considerable pounding was observed at sites over 90 km from the epicentre thus indicating the possible catastrophic damage that may occur during future earthquakes having closer epicentres. Is remarkable to denote that pounding of adjacent buildings could have defective damage such as adjacent structures with different dynamic characteristics which vibrate out of phase and there is inadequate separation gap or energy diffusion system to board the relative moderate motions of adjacent buildings. (a)Collapse of a department store building (Northridge earthquake 1994) (b)Collapse of the first story of a wooden residential building (Northridge earthquake 1994) Several researchers considered the topic of pounding between adjacent buildings (Anagnostopoulos 1988; Maison Kasai, 1990; Papadramakis et al, 1996) with proving or deriving mathematical expression in order to evaluate and calculate the pounding force, by using experimental procedures. But few people have actually addressed the topic of pounding between adjacent buildings (Tsai, 1997; Malhotra, 1997; Matsagar Jangid, 2003; Komodromos et al 2007) for which the behaviour and the requirements differ from the conventional structures. Likewise, those projects are limited especially to the study and investigation of pounding between adjacent buildings and based isolated buildings without investigating the case of conflict with neighbouring buildings and the resulting of great deformations of the superstructure. In the past engineers couldnt prevent the pounding due to some factors such as the past seismic codes did not give explicit guidance, because of this and due to particular economical factors and considerations, that are concerning the maximum land usage requirements, especially in the high density populated areas of cities pounding was unavoidable. Due to that, we are able to identify and investigate many buildings in global system which are already been built in contact or overmuch close to another that could easily cause them to suffer from pounding damage in future earthquake strikes. A large rupture is controvertible from both aspects. The overcrowded construction system in many cities complements a dominant apprehension for seismic pounding damage. For these major reasons, it has been comprehensively acquired that pounding is a disastrous phenomenon that should be anticipated or mitigated. Acceleration range will guidance in many cases to quake activities which are appreciably h igher than designed by the design codes that have been used up to now. The most affordable and easy active way for mitigating pounding effects and diminishing pounding damage, is to consider enough separation gap size between close adjacent structures, this causing difficulties to be accomplished, owing to the detailing engineered work that supposed to be done and the high cost of land in this present time. A flipside to the seismic separation gap precaution in the construction design is to reduce the effect or pounding force through devaluating lateral motion, some researchers involved in extent with lateral ground motions due to pounding such as (Kasaiet al. 1996, Abdullah et a.2001, Jankowski et al 2000, Ruangrassamee Kawashima 2003, Kawashima Shoji 2000). This procedure can be accomplished by joining adjacent structures at critical locations of the supports so that their motion could be in-phase with one another or by lessening the pounding buildings damping capacity by means of passive structural control of energy dissipation system. 1.2 Pounding force and impact element Various impact elements are usually used to illustrate the pounding between adjoining construction buildings or bridge structures. Pounding between two conflicting structures, is often simulates by using contact force-based impact models such as the linear spring, Kelvin-Voigt element and Hertz contact model element, and additionally the restitution momentum-based stereo mechanical method. (a) (b) (c) Figure 1.2.1 shows the pounding problem in: (a) bridge structures [1] S. Mithikimar and R. DesRoches 2006; (b) adjacent buildings with link elements [2] V. Annasaheb Matsagar and R. Shyam Jangid 2005; (c) adjacent building with gap size structures [1] S. Mithikimar and R. DesRoches 2006; Also another view of pounding effect beyond that in buildings is on the bridges. Many damages during strong earthquakes have occurred in bridge due to pounding between the girders when the gap is not sufficient. From many experimental studies that have been made showed that pounding damage of a bridge can have severe after-effects as it has been observed in many major earthquakes, such as the 1994 Northridge earthquake etc. As we can see from our daily routine bridges belong to one of the important lifeline systems, their proper function play major role in both our life and in the culture, especially after a devastating earthquake in order to survive and/or recovery. According to some studies [3] Chouw and Hao (2003) and [4] Hai SUI et al. (2004) showed that gap size in the bridges plays the major key role for a bridge to survive under a pounding impact force. The examined the gap size and the outcomes showed that a smaller gap size can expect larger pounding force; therefore the possibility of damage of bridge decks is higher. So on in general designs a small gap should be avoided, if is possible. Moreover according to their experiment the results showed that friction device can decrease pounding impact force that works in different earthquakes. a) Multiple-pier bridge model [4] H. SU, et al 2004; b) Two Single degree of freedom model [4] H. SU, et al 2004; An adequate gap size can contribute to the reduction of pounding effect, but nevertheless in real life the gap size for the designs is unavoidable and due to the limited space that we have to build the design the gap size end up to has smaller values. And thus we resort to other solutions in order to reduce the pounding effect, such as the friction device and bumpers (steel spring with viscous damper). Moreover friction device is much more practical and effective than bumpers. Bumpers can avoid the immediate damage but they cannot reduce the pounding force between the bridge girders, in the other hand friction device can be applied to any earthquake and also is less sensitive to various ground movements. Linear spring element The linear spring element is the easiest and simplest contact element that used to model impact. When the gap between the adjoining structures adjournments, the spring take effect and is presentational of the force established in the meanwhile of impact force. According to Maison Kasai [5] (1992) have used this model widely, to study further analyse pounding between adjacent buildings. Nonetheless, the linear spring cannot resolve the energy dissipation during impact. The linear spring element illustrated in Figure 1.2.3(a). The Kelvin-Voigt Element The Kelvin-Voigt element can be described by a linear spring in parallel with a damper, as depicted in Figure 1.2.3(b), this model has been used in some studies [6] Anagnostopoulos, 1988; [7] Anagnostopoulos and Spiliopoulos, 1992; [8] Jankowski 2005; The linear spring illustrates the force during impact and the damper accounts for the energy dissipation during impact and is mostly used. The damping coefficient (ck) can be related to the coefficient of restitution (e), by equating the energy dissipations during impact, following the form of equations below: Where, and Kk is the stiffness of the contact spring, and m1, m2 are the masses of the colliding bodies. Hertz contact law Additionally, a non linear spring based on Hertz contact law can be used to model impact, as depicted in Figure 1.2.3(c). Nonetheless, the Hertz contact law is a characteristic representing of the static contact between elastic bodies and fails to contain energy loss during impact. The impact force can be expressed in the form of the equation below: Where R is the impact stiffness parameter that depends on the material properties of the colliding structures and the contact surface geometry, g is the at-rest separation and n is the Hertz coefficient. The use of the Hertz contact law has an intuitive appeal in modelling pounding, since one would expect the contact area between the colliding structures to increase as the contact force increases, leading to a non-linear stiffness described by the Hertz coefficient n which typically is taken ad 1.5. Several analysts have adopted this approach, including [9] Davis 1992; [10] Pantelides and Ma 1998; [11] Chau and Wei 2001; and [3] Chau et al. 2003; More, for pounding simulation we can also meet the Hertzdamp model, which is a contact model based on the Hertz contact law and using a non linear hysteresis damper. According to experimental theories, for low peak ground acceleration levels, Hertz model produces sufficing results and the Hertzdamp model can be used in advance for moderate and high peak ground acceleration levels (PGA). The contact element approach has its limitations, with the exact value of spring stiffness to be used, being unclear. Uncertainty in the impact stiffness arises from the unknown geometry of the impact surfaces, uncertain material properties under loading and variable impact velocities. The contact spring stiffness is typically taken as the in plane axial stiffness of the colliding structure (Maison and Kasai, 1990). Another reasonable estimate is twenty times the stiffness of the stiffer structure [6] Anagnostopoulos, 1988; However, using a very stiff spring can lead to numerical convergence difficulties and unrealistically high impact forces. The solution difficulties arise from the large changes in stiffness upon impact or contact loss, thus resulting in large unbalanced forces affecting the stability of the assembled equations of motion. (a) Linear spring element (b) Kelvin Voigt Element (c) Hertz non-linear spring element Figure 1.2.3: Various impact models and their contact force relations [12] Thomas G.Mezger 2006; 1.3 Method of Seismic Analysis 1.3.1 Non-linear Dynamic Analysis Non-linear Dynamic analysis involves step-by step in time integration of the non-linear governing equations of motion, a powerful analysis that can evaluate any given seismic event motion. An earthquake accelerogram is correlated and the consistent response-history of a structural model during seismic events is evaluated. Computer softwares have been designed for these kinds of purposes. Sap can utilized a non-linear dynamic analysis for both linear elastic and non-linear inelastic material response, using step by step integration methods. Is a suitable computer program that is able to evaluate and analyze the response of a two-dimensional and a three-dimensional non-linear structure taking as an input the accelerogram component of an Earthquake? This program will be used to analyse our structural model and to produce a real time of time-history displacement. In a nonlinear dynamic procedure the building model followed static procedures incorporating directly the inelastic material r esponse using in general finite elements. Because this program is using step-by step integration method of analysis the response of the structure, is one of the most sophisticated analysis procedure for predicting forces and displacements under seismic input. However, the calculated response can be very sensitive to the characteristics of the individual ground motion used as seismic input; therefore several time-history analyses are required using different ground motion records. The main value of nonlinear dynamic procedures has the objective to simulate the behaviour of a building structure in detail. 1.4 Main Objectives of this project The main focus of this project is the development of an analytical model that pounding force will present based on the classical impact theory by using parametric study to identify the most important parameters that affecting pounding. Those factors that give arise to that impact force, therefore investigate of the different practical types of structures that pounding can be occurred. The main objective and scope of this study are, to explore the global response of buildings structures when the pounding effects take place under seismic events, therefore to review the main outcomes of the literature and how the impact theory come across to the practical cases. Create a structural modelling and perform a non linear time history analysis on it. Examine the realistic model of pounding that we will create if it satisfies the properties in order for the structure to work. Determine the relative importance of the dynamic characteristics of pounding. Dynamic analysis will be carried out on the model structure to observe the displacement of the structure due to earthquake excitation. When we examine the main structure we are mainly concerned with displacement, velocity and acceleration, the general dynamic behaviour of the structure under the action of dynamic loads such as earthquake lateral loads. For the purpose of the project appropriate computer software will be used for its purposes (e.g. SAP2000). Creation and versatile of the model, accomplishment of the analysis, and checking and breakthrough of the design must be all done through this interface. Graphical displays of the results, including the real-time of time-history displacements will be easily produced by the use of that software. At the end of that modelling analysis by gathering all the necessary and useful outcomes and explored in deep the main parameters derived by this, the conclusion and results of what we have to adopt as engineering before retrofitting a structure. The appropriate structural parameters are the separation gap size between adjacent structures (storey mass, structural stiffness and yield strength etc.), the dynamic behaviour of a damped multi-degree of freedom bridge system separated by an expansion joint, considering the limited width of clearance around a seismically isolated buildings, that pounding can cause high over stresses when the colliding buildings have different height, periods or masses and the isolators in bridge structures are effective in mitigating the induced seismic forces, cable restrainers etc. Engineers should adopt those realistic facts before they construct new structures in order to succeed future sustainability of the structures and avoiding by this the impact phenomenon of pounding. Accomplish to mitigate the phenomenon of pounding in order to prevent future collisions and/or engineering disasters when seismic events occur. REVIEW OF LITERATURE 2.1 Practical Cases Pounding-impact force generated by earthquakes between different analytical structure models may provoke extensive damage and in general most of the times the result of that force is not pleasant, it may lead the structure to a total collision as it can be seen from different practical cases. Pounding problem is phenomenon that has been observed during earthquakes and in accordance to ground motions, and has been extensively investigated by various researchers that have used a variety of impact analytical models. Because of the importance of what pounding will have as a result of different engineering structures, attracted the attention of several scientists and analyzers? This absorption is a consequence fact of a plenty growing amount of evidence, which can be found in reports and journals, which have been created after dominant exceeding earthquakes. Demonstrating, the power of that certain impact force which may cause considerable damage. The conclusions and results of successive series of various numerical, integrated analytical and experimental studies have been conducted using individual structural models and administering different models of practical cases confirm that pounding, due to constraining additional impact forces, may result in damage as well as significantly increase the structural response. Moreover, there are many practical case histories of engineered buildings with different dynamic properties and characteristics, which have been constructed under the old earthquake resistant design codes. Analogous conditions concern also bridge constructions. When a structure is under earthquake vibrations will move according to ground motions. These vibrations can be entirely exaggerated, creating at the same time stresses and deformations throughout the structure. Evaluation of methods can be carry out in engineering practise to estimate the parameters that give a rise to pounding. The accuracy and the ability of computational appliance have increased a lot this century by helping us evaluate the seismic structural response of structure, a variety of softwares computing programs have been designed for those purposes, and can accomplished to calculate the dynamic seismic response of a structure which help engineers mitigate pounding effects in structure by avoiding future disaster s . Linear and nonlinear models are realistic pounding models that have been used for studying the performance of a structural system under the mode of structural pounding effect under seismic events. Significance to notice in seismically active areas the serious hazard that pounding can cause and in what practical cases does it occurs by review of some critical and enlightened journals and reports, according to history performance of an exceeding major earthquakes. Also a time history analysis is a dynamic tool for the investigation of a structural seismic enforcement. Because of all the above reasons, investigations have been carried out on pounding mitigation in order to improve the seismic response. 2.1.1 Linear and non-linear pounding of structural systems Pantellides and Ma [13] examined by experimental procedures, the dynamic response of a damped single degree-of-freedom structural model during a seismic event. They analysed the structural behaviour of SDF with both elastic and inelastic structural impact response by using realistic parameters for the pounding model in numerical calculations of the earthquake response. The method of analysis that they used can be used to examine pounding in both buildings and bridges. In order to accomplished to evaluate the effects that concerning pounding force during earthquake in structures, they made a comparison between linear and non-linear models. In the non-linear pounding model they produced results that showed the one-sided pounding model produces more dangerous effects than the two-sided. In their analysis they derived a mathematical equation that concerns the impact force effects in order to represent pounding model for both elastic and inelastic structures. A realistic pounding element was used for this studying and numerical simulations have demonstrated that pounding impact behaviour is not responsive to the values of the stiffness parameter. Furthermore, their experimental results for both elastic and inelastic structures in order to balance damping levels have showed that the higher deformation occurred in the elastic model. According to some observations that have been made the values of pounding force is relatively small in the inelastic structures in comparison to the elastic structures. The value codes of moderate the damping levels are controlled as compared to the actual seismic separation gap size found through the analysis of SDF structural model. The value of seismic gap is decreased considerably as the damping capacity of the pounding structural model is increased. Jankowski [14], addressed to an extent of a non-linear modelling due of earthquake that generated pounding of structural buildings, by deriving the essential fundamental mathematical expressions, involving the function and the applications of the non-linear analysis. By analysing various earthquake records, he derived appropriate mathematical expressions showing the limitation and the feasibility of a non-linear model, in anticipating values for a seismic pounding gap size as well as values for mass, elastic stiffness and damping coefficients between buildings. In his analysis of two inadequately separated buildings with different dynamic characteristics, modelled by elastoplastic multi-degree-of-freedom lumped mass models are used to simulate the functioning structural behaviour and non-linear viscoelastic impact specificity elements are applied to a model collision. The results of the study demonstrate that pounding has an indicative impact on the behaviour of structural buildings, and furthermore the results that he derived confirm the performance of the non-linear, viscoelastic model which endures to simulate the pounding phenomenon more accurately. 2.1.2 Seismic Pounding Effects between adjacent buildings In these last decades, the pounding phenomenon between closely spaced building structures can be a serious hazard especially in seismically active areas with strong ground motion. Because of that critical fact a beneficial awareness of pounding response on engineer structures and numerical formulas for calculating building separation gap size based on linear or analogous linear methods have been introduced. Abdel Raheem [14] established and achieved a tool for the inelastic analysis of seismic pounding effect between buildings. He carried out a parametric study on buildings pounding response as well as proper seismic hazard mitigation practice for adjacent buildings. Three categories of recorded earthquake excitation were used for input. He studied the effect of impact using linear and nonlinear contact force model for different separation distances and compared with nominal model without pounding consideration. Therefore the results of these studies lean on the stimulation characteristics and the relationship between the buildings fundamental period. Furthermore because pounding produces acceleration and shear in various story levels that are greater than those from the no pounding case. Westermo [16] suggested, in order improving the earthquake response of structures without adequate in-between space of the structures, to linking buildings by beams, which can carry the forces between the structures and thus annihilating collisions. Anagnostopoulos [6] analysed the effect of pounding for buildings under strong ground motions by a simplified single-degree-of-freedom (SDOF) model. Miller and Fatemi [17] explored in to an extent the phenomenon of pounding-impact force, of adjacent buildings subjected to harmonic motions by the vibroimpact concept. Maison and Kasai [18] modelled the buildings as multiple-degree-of-freedom systems and analysed the response of structural pounding with different types of idealizations. Papadrakakis et al. [19] studied the pounding response of two or more close separated buildings based on the Lagrange multiplier approach by which the geometric compatibility conditions due to proximity are constrained. A three-dimensional model developed for the simulation of the pounding behaviour of adjacent buildings is presented by Papadrakakis et al. [20]. In the evaluation of building separation, Jeng et al. [18] estimated the minimum separation distance required to avoid pounding of adjacent buildings by the spectral difference (SPD) method. Kasai et al. [4] extended Jengs results and proposed a simplified rule to predict the inelastic vibration phase of buildings based on the numerical results of dynamic time-history analyses. Anagnostopoulos and Spiliopoulos [7] examined the behaviour of common pounding between adjacent buildings in city blocks to several strong earthquakes. In the study, the buildings were idealized as lumped-mass, shear beam type, multi-degree-of-freedom (MDOF) systems with bilinear force deformation characteristics and with bases supported on translational and rocking spring dashpots. Collisions between adjacent masses can occur at any level and are simulated by means of viscoelastic impact elements. They used five real earthquake motions to study the effects of the following factors: building configuration and relative size, seismic separation distance and impact element properties. It was found that pounding can cause high over stresses, mainly when the colliding buildings have significantly different heights, periods or masses. They suggest a possibility for introducing a set of conditions into the codes, combined with some special measures, as an alternative to the seismic separati on requirement. Figure 2.1.2-2 on the left there is a finite element mathematical model and on the right shows the elevation view of a 2 different height building with the separation gap size [14] Abdel Raheem 2006; 2.1.3 SEISMIC POUNDING EFFECT AND RESTRAINERS ON SEISMIC RESPONCE OF MULTIPLE-FRAME BRIDGES DesRoches and Muthukumar [22] used analytical illustrations to check out, the factors and the parameters affecting the worldwide reaction and behaviour of a multiple-frame bridge as a result of pounding of adjacent frames. They have conducted parameter studies of one-sided and two-sided pounding, to dispose the effects of frame stiffness ratio, ground motion characteristics, frame yielding, and restrainers on the pounding behaviour of bridge frames. They showed that the addition of restrainers has a minor effect on the one-sided pounding response of highly out-of-phase frames. It is determined that the most important parameters are the frame period ratio and the characteristic period of the ground motion. The current study explores the effect that pounding impact-force and restrainers have on the worldwide appeal of bridge frames in a multi-frame bridge. They used investigations of two-sided pounding using MDOF models, which showed a favourable post impact response for the flexible f rame and a detrimental effect for the stiff frame demand, for all period ratios. The results from both one-sided and two-sided impact reveal that the response of bridge frames due to pounding, irrespective of the ground motion period ratio, thus validating the recommendations suggested by Caltrans. Current recommendations by Caltrans for limitations in frame period ratios to reduce the effects of pounding are evaluated through an example case. The effect of restrainers on the pounding response of bridge frames is evaluated. The results show that restrainers have very little effect on the demands on bridge frames compared with pounding. 2.1.4 GIRDER POUNDING ON BRIDGES Hao and Chouw [23] introduced a new design principle for anticipating Effect of Structural Pounding During Seismic Events Effect of Structural Pounding During Seismic Events Abstract This project entitled aims at the investigation of the effect of structural pounding to the dynamic response of structures subject to strong ground motions. In many cases structural pounding during earthquake may result in considerable and incalculable damages. It usually need to be accounted for in the case of adjacent structures, bridges, base isolated buildings, industrial and port facilities, and in ground pipelines. The phenomenon of that impact force pounding has been noted by researchers and engineers over the past several decades. As we see through dull historical strokes and performance, in different investigations of past and recent earthquakes damage have illustrated several cases of pounding damage such as those that have occurred in the Imperial Valley (May 18, 1940), the Sequenay earthquake in Canada (1988), Kasai Maison (1991), the Cairo earthquake (1992), the Northridge earthquake (1994), California (1994), Kobe, Japan (1995) Turkey (1999), Taiwan (1999) and Bhuj, Ce ntral Western India (2001). Some of the most memorable seismic events were in the 1972 Managua earthquake, when the five-storey Grant Hotel suffered a complete collapse, also in the 1964 Alaska earthquake, the 14-storey Anchorage Westwood hotel pounded against its low rise ballroom and the most recently extent of pounding in Mexico City in 1985 confirmed this as a major problem. Those all evidences have continued to illustrate the annihilation of earthquakes, with devastation of engineered in both buildings and bridges structures. Amongst the feasible structural destructions, seismic produced pounding has been frequently distinguished in numerous earthquakes, as a result this phenomenon plays a key role to the structures. As engineers, we have a responsibility to prevent it or take the necessary steps to mitigate it for the future constructions by considering the properties that affect and led pounding to occur. In order to examine the effect of the various parameters associated wit h pounding forces on the dynamic response of a seismically excited structure, a number of simulations and parametric studies have been performed, using SAP2000. By more precise investigations that have been done from professional earthquake investigators and engineers pounding produces acceleration and shear at various story levels. Also, significantly depends on the gap size between superstructure segments, which we will examine later on in the project. The main aim of the project is to conduct a detailed investigation on pounding-involved response structure during a seismic event as well as observed the structural behaviour as the result of ground motion excitation by examine the properties that affect pounding and determine the solutions and the mitigations that we have to take into account before we construct a structure in order to avoid future disasters. INTRODUCTION 1.1 Seismic Pounding effect (Overview) Looking throughout the time, investigations and observations of the effects of historical earthquakes have demonstrated that many structures are susceptible to significant damage which may lead to collapse. Numerous devastating earthquakes have hit various seismically active regions. Some investigations that have been followed after those seismic events are distinguished fact providing that, an earthquake within the range of six is capable of creating and generating incalculable and irreversible damages, of both buildings and bridges. Those seismic losses have further consequences, most likely to present economical problem to the community hit. The main target of most seismic excitations are, the primary frequencies of rigid buildings between the ranges of low to medium height, resulting by this in significant accumulations of soil acceleration. Also, addition to this is the causing the presence of the inevitable enduring seismic loads in engineered structures, creating inflexible re sponses. In recent years it becomes more urgent need to minimize seismic damage not only to avoid structures failures but especially in crucial building facilities such as hospitals, telecommunications etc. as well as the protection of the critical equipment that is accommodated by those buildings. (a)barrier rail damage (Northridge earthquake 1994) (b)Connector collapse (Northridge earthquake 1994) In seismically active areas the phenomenon of pounding may need to be accounted for, in the case of closely spaced structures to avoid extensive damages and human losses. The phenomenon of that impact force-pounding has been noted by earthquake investigators over the past several decades when the presence of pounding occurred into an extent. Looking throughout the time, some historical performance of pounding has been denoted, different investigations of past and recent earthquakes damage have illustrated several cases of pounding damage such as those that have occurred in the Imperial Valley (May 18, 1940), California (1994) the Northridge earthquake, Kobe, Japan (1995) and etc. in both engineered structures, buildings and bridges. One of the most remarkable example of pounding-involved destruction resulted from interactions between the Olive View Hospital main building and one of its independently standing stairway towers during the San Fernando earthquake of 1971. The extent of po unding was recently observed in Mexico City in 1985, which then it follows the most recent one in Central Western India (2001). Considerable pounding was observed at sites over 90 km from the epicentre thus indicating the possible catastrophic damage that may occur during future earthquakes having closer epicentres. Is remarkable to denote that pounding of adjacent buildings could have defective damage such as adjacent structures with different dynamic characteristics which vibrate out of phase and there is inadequate separation gap or energy diffusion system to board the relative moderate motions of adjacent buildings. (a)Collapse of a department store building (Northridge earthquake 1994) (b)Collapse of the first story of a wooden residential building (Northridge earthquake 1994) Several researchers considered the topic of pounding between adjacent buildings (Anagnostopoulos 1988; Maison Kasai, 1990; Papadramakis et al, 1996) with proving or deriving mathematical expression in order to evaluate and calculate the pounding force, by using experimental procedures. But few people have actually addressed the topic of pounding between adjacent buildings (Tsai, 1997; Malhotra, 1997; Matsagar Jangid, 2003; Komodromos et al 2007) for which the behaviour and the requirements differ from the conventional structures. Likewise, those projects are limited especially to the study and investigation of pounding between adjacent buildings and based isolated buildings without investigating the case of conflict with neighbouring buildings and the resulting of great deformations of the superstructure. In the past engineers couldnt prevent the pounding due to some factors such as the past seismic codes did not give explicit guidance, because of this and due to particular economical factors and considerations, that are concerning the maximum land usage requirements, especially in the high density populated areas of cities pounding was unavoidable. Due to that, we are able to identify and investigate many buildings in global system which are already been built in contact or overmuch close to another that could easily cause them to suffer from pounding damage in future earthquake strikes. A large rupture is controvertible from both aspects. The overcrowded construction system in many cities complements a dominant apprehension for seismic pounding damage. For these major reasons, it has been comprehensively acquired that pounding is a disastrous phenomenon that should be anticipated or mitigated. Acceleration range will guidance in many cases to quake activities which are appreciably h igher than designed by the design codes that have been used up to now. The most affordable and easy active way for mitigating pounding effects and diminishing pounding damage, is to consider enough separation gap size between close adjacent structures, this causing difficulties to be accomplished, owing to the detailing engineered work that supposed to be done and the high cost of land in this present time. A flipside to the seismic separation gap precaution in the construction design is to reduce the effect or pounding force through devaluating lateral motion, some researchers involved in extent with lateral ground motions due to pounding such as (Kasaiet al. 1996, Abdullah et a.2001, Jankowski et al 2000, Ruangrassamee Kawashima 2003, Kawashima Shoji 2000). This procedure can be accomplished by joining adjacent structures at critical locations of the supports so that their motion could be in-phase with one another or by lessening the pounding buildings damping capacity by means of passive structural control of energy dissipation system. 1.2 Pounding force and impact element Various impact elements are usually used to illustrate the pounding between adjoining construction buildings or bridge structures. Pounding between two conflicting structures, is often simulates by using contact force-based impact models such as the linear spring, Kelvin-Voigt element and Hertz contact model element, and additionally the restitution momentum-based stereo mechanical method. (a) (b) (c) Figure 1.2.1 shows the pounding problem in: (a) bridge structures [1] S. Mithikimar and R. DesRoches 2006; (b) adjacent buildings with link elements [2] V. Annasaheb Matsagar and R. Shyam Jangid 2005; (c) adjacent building with gap size structures [1] S. Mithikimar and R. DesRoches 2006; Also another view of pounding effect beyond that in buildings is on the bridges. Many damages during strong earthquakes have occurred in bridge due to pounding between the girders when the gap is not sufficient. From many experimental studies that have been made showed that pounding damage of a bridge can have severe after-effects as it has been observed in many major earthquakes, such as the 1994 Northridge earthquake etc. As we can see from our daily routine bridges belong to one of the important lifeline systems, their proper function play major role in both our life and in the culture, especially after a devastating earthquake in order to survive and/or recovery. According to some studies [3] Chouw and Hao (2003) and [4] Hai SUI et al. (2004) showed that gap size in the bridges plays the major key role for a bridge to survive under a pounding impact force. The examined the gap size and the outcomes showed that a smaller gap size can expect larger pounding force; therefore the possibility of damage of bridge decks is higher. So on in general designs a small gap should be avoided, if is possible. Moreover according to their experiment the results showed that friction device can decrease pounding impact force that works in different earthquakes. a) Multiple-pier bridge model [4] H. SU, et al 2004; b) Two Single degree of freedom model [4] H. SU, et al 2004; An adequate gap size can contribute to the reduction of pounding effect, but nevertheless in real life the gap size for the designs is unavoidable and due to the limited space that we have to build the design the gap size end up to has smaller values. And thus we resort to other solutions in order to reduce the pounding effect, such as the friction device and bumpers (steel spring with viscous damper). Moreover friction device is much more practical and effective than bumpers. Bumpers can avoid the immediate damage but they cannot reduce the pounding force between the bridge girders, in the other hand friction device can be applied to any earthquake and also is less sensitive to various ground movements. Linear spring element The linear spring element is the easiest and simplest contact element that used to model impact. When the gap between the adjoining structures adjournments, the spring take effect and is presentational of the force established in the meanwhile of impact force. According to Maison Kasai [5] (1992) have used this model widely, to study further analyse pounding between adjacent buildings. Nonetheless, the linear spring cannot resolve the energy dissipation during impact. The linear spring element illustrated in Figure 1.2.3(a). The Kelvin-Voigt Element The Kelvin-Voigt element can be described by a linear spring in parallel with a damper, as depicted in Figure 1.2.3(b), this model has been used in some studies [6] Anagnostopoulos, 1988; [7] Anagnostopoulos and Spiliopoulos, 1992; [8] Jankowski 2005; The linear spring illustrates the force during impact and the damper accounts for the energy dissipation during impact and is mostly used. The damping coefficient (ck) can be related to the coefficient of restitution (e), by equating the energy dissipations during impact, following the form of equations below: Where, and Kk is the stiffness of the contact spring, and m1, m2 are the masses of the colliding bodies. Hertz contact law Additionally, a non linear spring based on Hertz contact law can be used to model impact, as depicted in Figure 1.2.3(c). Nonetheless, the Hertz contact law is a characteristic representing of the static contact between elastic bodies and fails to contain energy loss during impact. The impact force can be expressed in the form of the equation below: Where R is the impact stiffness parameter that depends on the material properties of the colliding structures and the contact surface geometry, g is the at-rest separation and n is the Hertz coefficient. The use of the Hertz contact law has an intuitive appeal in modelling pounding, since one would expect the contact area between the colliding structures to increase as the contact force increases, leading to a non-linear stiffness described by the Hertz coefficient n which typically is taken ad 1.5. Several analysts have adopted this approach, including [9] Davis 1992; [10] Pantelides and Ma 1998; [11] Chau and Wei 2001; and [3] Chau et al. 2003; More, for pounding simulation we can also meet the Hertzdamp model, which is a contact model based on the Hertz contact law and using a non linear hysteresis damper. According to experimental theories, for low peak ground acceleration levels, Hertz model produces sufficing results and the Hertzdamp model can be used in advance for moderate and high peak ground acceleration levels (PGA). The contact element approach has its limitations, with the exact value of spring stiffness to be used, being unclear. Uncertainty in the impact stiffness arises from the unknown geometry of the impact surfaces, uncertain material properties under loading and variable impact velocities. The contact spring stiffness is typically taken as the in plane axial stiffness of the colliding structure (Maison and Kasai, 1990). Another reasonable estimate is twenty times the stiffness of the stiffer structure [6] Anagnostopoulos, 1988; However, using a very stiff spring can lead to numerical convergence difficulties and unrealistically high impact forces. The solution difficulties arise from the large changes in stiffness upon impact or contact loss, thus resulting in large unbalanced forces affecting the stability of the assembled equations of motion. (a) Linear spring element (b) Kelvin Voigt Element (c) Hertz non-linear spring element Figure 1.2.3: Various impact models and their contact force relations [12] Thomas G.Mezger 2006; 1.3 Method of Seismic Analysis 1.3.1 Non-linear Dynamic Analysis Non-linear Dynamic analysis involves step-by step in time integration of the non-linear governing equations of motion, a powerful analysis that can evaluate any given seismic event motion. An earthquake accelerogram is correlated and the consistent response-history of a structural model during seismic events is evaluated. Computer softwares have been designed for these kinds of purposes. Sap can utilized a non-linear dynamic analysis for both linear elastic and non-linear inelastic material response, using step by step integration methods. Is a suitable computer program that is able to evaluate and analyze the response of a two-dimensional and a three-dimensional non-linear structure taking as an input the accelerogram component of an Earthquake? This program will be used to analyse our structural model and to produce a real time of time-history displacement. In a nonlinear dynamic procedure the building model followed static procedures incorporating directly the inelastic material r esponse using in general finite elements. Because this program is using step-by step integration method of analysis the response of the structure, is one of the most sophisticated analysis procedure for predicting forces and displacements under seismic input. However, the calculated response can be very sensitive to the characteristics of the individual ground motion used as seismic input; therefore several time-history analyses are required using different ground motion records. The main value of nonlinear dynamic procedures has the objective to simulate the behaviour of a building structure in detail. 1.4 Main Objectives of this project The main focus of this project is the development of an analytical model that pounding force will present based on the classical impact theory by using parametric study to identify the most important parameters that affecting pounding. Those factors that give arise to that impact force, therefore investigate of the different practical types of structures that pounding can be occurred. The main objective and scope of this study are, to explore the global response of buildings structures when the pounding effects take place under seismic events, therefore to review the main outcomes of the literature and how the impact theory come across to the practical cases. Create a structural modelling and perform a non linear time history analysis on it. Examine the realistic model of pounding that we will create if it satisfies the properties in order for the structure to work. Determine the relative importance of the dynamic characteristics of pounding. Dynamic analysis will be carried out on the model structure to observe the displacement of the structure due to earthquake excitation. When we examine the main structure we are mainly concerned with displacement, velocity and acceleration, the general dynamic behaviour of the structure under the action of dynamic loads such as earthquake lateral loads. For the purpose of the project appropriate computer software will be used for its purposes (e.g. SAP2000). Creation and versatile of the model, accomplishment of the analysis, and checking and breakthrough of the design must be all done through this interface. Graphical displays of the results, including the real-time of time-history displacements will be easily produced by the use of that software. At the end of that modelling analysis by gathering all the necessary and useful outcomes and explored in deep the main parameters derived by this, the conclusion and results of what we have to adopt as engineering before retrofitting a structure. The appropriate structural parameters are the separation gap size between adjacent structures (storey mass, structural stiffness and yield strength etc.), the dynamic behaviour of a damped multi-degree of freedom bridge system separated by an expansion joint, considering the limited width of clearance around a seismically isolated buildings, that pounding can cause high over stresses when the colliding buildings have different height, periods or masses and the isolators in bridge structures are effective in mitigating the induced seismic forces, cable restrainers etc. Engineers should adopt those realistic facts before they construct new structures in order to succeed future sustainability of the structures and avoiding by this the impact phenomenon of pounding. Accomplish to mitigate the phenomenon of pounding in order to prevent future collisions and/or engineering disasters when seismic events occur. REVIEW OF LITERATURE 2.1 Practical Cases Pounding-impact force generated by earthquakes between different analytical structure models may provoke extensive damage and in general most of the times the result of that force is not pleasant, it may lead the structure to a total collision as it can be seen from different practical cases. Pounding problem is phenomenon that has been observed during earthquakes and in accordance to ground motions, and has been extensively investigated by various researchers that have used a variety of impact analytical models. Because of the importance of what pounding will have as a result of different engineering structures, attracted the attention of several scientists and analyzers? This absorption is a consequence fact of a plenty growing amount of evidence, which can be found in reports and journals, which have been created after dominant exceeding earthquakes. Demonstrating, the power of that certain impact force which may cause considerable damage. The conclusions and results of successive series of various numerical, integrated analytical and experimental studies have been conducted using individual structural models and administering different models of practical cases confirm that pounding, due to constraining additional impact forces, may result in damage as well as significantly increase the structural response. Moreover, there are many practical case histories of engineered buildings with different dynamic properties and characteristics, which have been constructed under the old earthquake resistant design codes. Analogous conditions concern also bridge constructions. When a structure is under earthquake vibrations will move according to ground motions. These vibrations can be entirely exaggerated, creating at the same time stresses and deformations throughout the structure. Evaluation of methods can be carry out in engineering practise to estimate the parameters that give a rise to pounding. The accuracy and the ability of computational appliance have increased a lot this century by helping us evaluate the seismic structural response of structure, a variety of softwares computing programs have been designed for those purposes, and can accomplished to calculate the dynamic seismic response of a structure which help engineers mitigate pounding effects in structure by avoiding future disaster s . Linear and nonlinear models are realistic pounding models that have been used for studying the performance of a structural system under the mode of structural pounding effect under seismic events. Significance to notice in seismically active areas the serious hazard that pounding can cause and in what practical cases does it occurs by review of some critical and enlightened journals and reports, according to history performance of an exceeding major earthquakes. Also a time history analysis is a dynamic tool for the investigation of a structural seismic enforcement. Because of all the above reasons, investigations have been carried out on pounding mitigation in order to improve the seismic response. 2.1.1 Linear and non-linear pounding of structural systems Pantellides and Ma [13] examined by experimental procedures, the dynamic response of a damped single degree-of-freedom structural model during a seismic event. They analysed the structural behaviour of SDF with both elastic and inelastic structural impact response by using realistic parameters for the pounding model in numerical calculations of the earthquake response. The method of analysis that they used can be used to examine pounding in both buildings and bridges. In order to accomplished to evaluate the effects that concerning pounding force during earthquake in structures, they made a comparison between linear and non-linear models. In the non-linear pounding model they produced results that showed the one-sided pounding model produces more dangerous effects than the two-sided. In their analysis they derived a mathematical equation that concerns the impact force effects in order to represent pounding model for both elastic and inelastic structures. A realistic pounding element was used for this studying and numerical simulations have demonstrated that pounding impact behaviour is not responsive to the values of the stiffness parameter. Furthermore, their experimental results for both elastic and inelastic structures in order to balance damping levels have showed that the higher deformation occurred in the elastic model. According to some observations that have been made the values of pounding force is relatively small in the inelastic structures in comparison to the elastic structures. The value codes of moderate the damping levels are controlled as compared to the actual seismic separation gap size found through the analysis of SDF structural model. The value of seismic gap is decreased considerably as the damping capacity of the pounding structural model is increased. Jankowski [14], addressed to an extent of a non-linear modelling due of earthquake that generated pounding of structural buildings, by deriving the essential fundamental mathematical expressions, involving the function and the applications of the non-linear analysis. By analysing various earthquake records, he derived appropriate mathematical expressions showing the limitation and the feasibility of a non-linear model, in anticipating values for a seismic pounding gap size as well as values for mass, elastic stiffness and damping coefficients between buildings. In his analysis of two inadequately separated buildings with different dynamic characteristics, modelled by elastoplastic multi-degree-of-freedom lumped mass models are used to simulate the functioning structural behaviour and non-linear viscoelastic impact specificity elements are applied to a model collision. The results of the study demonstrate that pounding has an indicative impact on the behaviour of structural buildings, and furthermore the results that he derived confirm the performance of the non-linear, viscoelastic model which endures to simulate the pounding phenomenon more accurately. 2.1.2 Seismic Pounding Effects between adjacent buildings In these last decades, the pounding phenomenon between closely spaced building structures can be a serious hazard especially in seismically active areas with strong ground motion. Because of that critical fact a beneficial awareness of pounding response on engineer structures and numerical formulas for calculating building separation gap size based on linear or analogous linear methods have been introduced. Abdel Raheem [14] established and achieved a tool for the inelastic analysis of seismic pounding effect between buildings. He carried out a parametric study on buildings pounding response as well as proper seismic hazard mitigation practice for adjacent buildings. Three categories of recorded earthquake excitation were used for input. He studied the effect of impact using linear and nonlinear contact force model for different separation distances and compared with nominal model without pounding consideration. Therefore the results of these studies lean on the stimulation characteristics and the relationship between the buildings fundamental period. Furthermore because pounding produces acceleration and shear in various story levels that are greater than those from the no pounding case. Westermo [16] suggested, in order improving the earthquake response of structures without adequate in-between space of the structures, to linking buildings by beams, which can carry the forces between the structures and thus annihilating collisions. Anagnostopoulos [6] analysed the effect of pounding for buildings under strong ground motions by a simplified single-degree-of-freedom (SDOF) model. Miller and Fatemi [17] explored in to an extent the phenomenon of pounding-impact force, of adjacent buildings subjected to harmonic motions by the vibroimpact concept. Maison and Kasai [18] modelled the buildings as multiple-degree-of-freedom systems and analysed the response of structural pounding with different types of idealizations. Papadrakakis et al. [19] studied the pounding response of two or more close separated buildings based on the Lagrange multiplier approach by which the geometric compatibility conditions due to proximity are constrained. A three-dimensional model developed for the simulation of the pounding behaviour of adjacent buildings is presented by Papadrakakis et al. [20]. In the evaluation of building separation, Jeng et al. [18] estimated the minimum separation distance required to avoid pounding of adjacent buildings by the spectral difference (SPD) method. Kasai et al. [4] extended Jengs results and proposed a simplified rule to predict the inelastic vibration phase of buildings based on the numerical results of dynamic time-history analyses. Anagnostopoulos and Spiliopoulos [7] examined the behaviour of common pounding between adjacent buildings in city blocks to several strong earthquakes. In the study, the buildings were idealized as lumped-mass, shear beam type, multi-degree-of-freedom (MDOF) systems with bilinear force deformation characteristics and with bases supported on translational and rocking spring dashpots. Collisions between adjacent masses can occur at any level and are simulated by means of viscoelastic impact elements. They used five real earthquake motions to study the effects of the following factors: building configuration and relative size, seismic separation distance and impact element properties. It was found that pounding can cause high over stresses, mainly when the colliding buildings have significantly different heights, periods or masses. They suggest a possibility for introducing a set of conditions into the codes, combined with some special measures, as an alternative to the seismic separati on requirement. Figure 2.1.2-2 on the left there is a finite element mathematical model and on the right shows the elevation view of a 2 different height building with the separation gap size [14] Abdel Raheem 2006; 2.1.3 SEISMIC POUNDING EFFECT AND RESTRAINERS ON SEISMIC RESPONCE OF MULTIPLE-FRAME BRIDGES DesRoches and Muthukumar [22] used analytical illustrations to check out, the factors and the parameters affecting the worldwide reaction and behaviour of a multiple-frame bridge as a result of pounding of adjacent frames. They have conducted parameter studies of one-sided and two-sided pounding, to dispose the effects of frame stiffness ratio, ground motion characteristics, frame yielding, and restrainers on the pounding behaviour of bridge frames. They showed that the addition of restrainers has a minor effect on the one-sided pounding response of highly out-of-phase frames. It is determined that the most important parameters are the frame period ratio and the characteristic period of the ground motion. The current study explores the effect that pounding impact-force and restrainers have on the worldwide appeal of bridge frames in a multi-frame bridge. They used investigations of two-sided pounding using MDOF models, which showed a favourable post impact response for the flexible f rame and a detrimental effect for the stiff frame demand, for all period ratios. The results from both one-sided and two-sided impact reveal that the response of bridge frames due to pounding, irrespective of the ground motion period ratio, thus validating the recommendations suggested by Caltrans. Current recommendations by Caltrans for limitations in frame period ratios to reduce the effects of pounding are evaluated through an example case. The effect of restrainers on the pounding response of bridge frames is evaluated. The results show that restrainers have very little effect on the demands on bridge frames compared with pounding. 2.1.4 GIRDER POUNDING ON BRIDGES Hao and Chouw [23] introduced a new design principle for anticipating