Wednesday, October 30, 2019

Retirement Issues Essay Example | Topics and Well Written Essays - 3750 words

The essay "Retirement Issues" intends to overview the causes behind retirement planning and to offer suggestions to the younger generation on how to build a resilient retirement plan, one that avoids the gaps earlier generations left when framing theirs. The term unforeseen is itself stochastic or probabilistic in nature: there is no definite way to measure all the unanticipated events that might require added financial assistance and thereby extra precautionary savings. Hence, at the end of the day, it is all a series of assumptions and, to some extent, careful gambling in the hope that the dice will fall on the expected number. If the dice fall otherwise, then all the dreams associated with retirement life shatter like a glass house struck by a stone. Hope does not die, though; people keep believing that their planning for retirement life is on the right track, and then out of the blue an emergency pops up, plunging the individual and his family into an ocean of despair. By the time the individual realizes something is wrong with his planning, it is too late. The most ironic thing is that this can happen owing to human error or purely by accident. This points to the fact that even someone whose retirement planning is flawless may face the brunt of unforeseen events just like anyone else. It is now easy to understand why a retirement plan is both important and difficult to optimize.

Monday, October 28, 2019

Afghanistan War Essay Example for Free

Afghanistan War Essay Afghanistan has been at war with the U.S. mainly because the Taliban refused to follow through with the commands that the U.S. gave them, as well as because of 'The Three Phases' and the planned attack in 2001, but chiefly because the U.S. wants the mineral resources found in Afghanistan. The Taliban refused to undertake three simple tasks: shutting down the terrorist training camps, giving up the Al-Qaeda leaders, and returning all American and foreign citizens, which is part of the reason the U.S. went to war with Afghanistan in 2001. The 'Three Phases' ran from 1987 to the present: the first phase was to topple the Taliban and destroy all terrorist camps, from 1987-1997; the second phase was to defeat the Taliban military and rebuild core institutions in the Afghan state, from 1997 to September 11, 2001; and the third phase was a turn to counterinsurgency doctrine due to increased military troop presence, from 2001 to the present. America was planning an attack on Afghanistan to start off the third phase, but what it didn't know was that Afghanistan was planning an attack on the Twin Towers to get back at the U.S. for what it had done in past years. The main reason for the war in Afghanistan was to obtain most of Afghanistan's mineral resources, which are very valuable and cost a great deal of money.

Afghanistan and the U.S.A. are at war because the Taliban refused to carry out the commands that the president of the U.S. (Mr. Bush) gave them. More than two weeks before October 7th, 2001, the Taliban refused to shut down their terrorist camps, give up their leaders, and return all American and foreign citizens. Even though the U.S. is at war with Afghanistan, President Bush set up an arrangement so that Afghan people who were suffering from starvation and medical issues could be helped with drops of food, medical aid, and clean drinking water, so that the Afghan population could survive, and so that they would know what America can do when other countries are in a time of need. On Sunday, October 7th, 2001, George W. Bush said, "The military action is a part of our campaign against terrorism… We will win this conflict by the patient accumulation of successes, by meeting a series of challenges with determination and will and purpose" (Bush). This quote states that America will do everything in its power to get revenge on Osama bin Laden (Jerry Robinson), the person President Bush believed was behind the attack on the Twin Towers. Barack Obama dramatically increased the military troop presence in Afghanistan, due to the 'Three Phases', in order to have a larger force to protect the population from Taliban attacks. The Three Phases consist of: 1) toppling the Taliban; 2) defeating the Taliban military and rebuilding core institutions of the Afghan state; and 3) a turn to counterinsurgency doctrine due to the increasing numbers of military troops in Afghanistan (Witte). Phase one lasted from 1979 to 1989, when Soviet troops were withdrawn. Phase two lasted from 1989 to 2001, when the forces the United States and its allies had trained and armed fought each other in complex coalitions for control of Afghanistan. The third phase has lasted from September 11, 2001 to the present. During phase three, on September 22, 2001, the United Arab Emirates and later Saudi Arabia withdrew their recognition of the Taliban as the legal government of Afghanistan, leaving neighboring Pakistan as the only remaining country with diplomatic ties. To some extent, most of the terrorist camps in Afghanistan were destroyed, and the government was ousted. Also, the Taliban surrendered within two months, much more quickly than expected.

The Taliban and al-Qaeda began to regroup in 2003, after the United States shifted its military efforts to fighting the war in Iraq, and attacks on U.S. and NATO troops have continued since. The overall aim now is to ensure a stable Afghanistan that is no longer a hotbed for terrorist organisations. This all happened due to the Three Phases.

Saturday, October 26, 2019

Essay --

"You see the images that the public is demanding. Why more reality-based TV? You'd think that after the first Survivor it would have gone away, but it hasn't. The public demands it because they get all caught up in the personal stories, and want to see more and more." Montel Williams tells his guest audience how the press always wants to get up close and personal in people's business. As a star and an MS patient, Williams knows exactly what he is talking about. Born Montel Brian Anthony Williams on July 3, 1959 in Baltimore, Maryland, Williams was bound for greatness. Growing up, Williams was already a star to everyone. In high school he was class president his junior and senior years. He was an athlete, a musician, and all around a great student. He was well known throughout his community and was always active in county-wide government issues that affected all the students. As he got older, people still knew his name and his stardom kept advancing. After graduating from high school in 1974, Williams enlisted in the U.S. Marine Corps. Impressed with Montel's strength and leadership, his superiors requested that he be placed in the Naval Academy Preparatory School at Newport, Rhode Island. Later he was accepted to the U.S. Naval Academy at Annapolis, and that is where this star's real battle begins. The well-known Montel Williams was hit with devastating news that would later change his life drastically and forever. Although the now famous talk show host of The Montel Williams Show, the movie star, and the award winner has had to live with and manage multiple sclerosis for a good portion of his life, he has overcome many adversities such as Hollywood shame, pain, and the fear of giving up. Before graduating in ... ...has done. Every goal that was set was accomplished, and every thought of defeat was pushed aside. Montel Williams is a true fighter who refused to let any of his adversities hold him back.
He has never given up, and still today he informs his audience and the public about this disease and makes them aware that they are not immune to it. Williams has made it through the recognition, the pain, the press, and the suffering. He has become one of the world's most well-known and accomplished stars and philanthropists. He has dedicated his life to helping others and informing them about multiple sclerosis. He knows the heartaches and the pain these people have been through, go through, and will go through. He knows what these patients need to fight back and win. He knows because he is a fighter himself: he defeats his illness every day and, in his eyes, reigns victorious.

Thursday, October 24, 2019

Models of Instructional Design

In the article "Reclaiming Instructional Design," Merrill et al. (1996) highlight the significant relationship between the science of instruction and the technology of instructional design (ID). They argue that science and instructional design, along with the application and production of technology, are closely associated with each other. They also highlight the role of instructional design in the development and improvement of learning processes and outcomes, since instructional design follows the scientific bases and strategies found in the existing literature on technology and education. The International Board of Standards for Training, Performance and Instruction (IBSTPI; 2003) provides a code of ethical standards for instructional designers in order to ensure a good working environment and good working conditions with the company and other people in the workplace. This paper presents the concepts, theories, and components of instructional design, including its relationship with learning theories, and the tasks and skills required of instructional designers as they contribute to positive learning outcomes through the use of technology.

Instructional design (ID) has been thought of as a variation or modification of the concept of educational technology, which evolved in the United States in the 1950s (Peters, 1967). It is associated with modes of artistic production, and it is considered a mode of producing or developing instruction, a specific means of cultural transmission, and a way of organizing learning processes in the educational arena (Dijkstra, Schott, Seel, & Tennyson, 1997, p. 27). Instructional design, as perceived by Dijkstra et al. (1997, p. 28), is in some ways different from educational technology because: (1) it involves different learning cultures from different "pedagogies" and sciences (Reigeluth, 1996); (2) "it reaches beyond the isolated 'culture-free' concepts by thoroughly analyzing the contexts into which the units are embedded" (Jencks, 1975); and (3) it integrates any of the different modes of production whose products are the outcome of open-ended structures that promote self-directed learning processes.

It is assumed that in instructional design the conditions of learning should be appropriate to the learning outcomes, problem-solving, and assessment activities (Jonassen, 2004, p. 146). Instructional design also differentiates the design process from the production process. According to Gentry (1994), designing instruction is the more important of the two, for it involves the identification and development of objectives, activities, and evaluation protocols to promote learning, while the production process focuses on the creation and design of tangible products such as videotapes, posters, booklets, and worksheets as the outcome of the overall instructional design.

Learning theories are often confused with instructional design theories. However, a theory of learning can be differentiated from an instructional design theory in that the former is descriptive (it describes how learning occurs), while the latter offers direct guidance for effectively helping people in learning and development, which may include cognitive, emotional, social, physical, and spiritual aspects (Reigeluth, 1983, p. 5). Contemporary learning theory holds the view that "ideas have little, if any, meaning unless and until they are embedded in some authentic context" (Spiro et al., 1987, cited in Jonassen, 2004, p. 102). Instruction needs to be clear, specific, and detailed in explaining particular contexts instead of teaching abstract rules and principles, which are usually difficult to understand.
This way, learned concepts are more easily retained, more generative and meaningful, and more broadly and accurately transferred. Schema theory, like the theory of human development, is one of the learning theories. It suggests that "new knowledge is acquired by accretion into an existing schema, by tuning that schema when minor inconsistencies emerge, and by restructuring that schema when major inconsistencies arise" (Rumelhart & Norman, 1978, cited in Reigeluth, 1983, p. 12). This means that the learner can better understand a new concept when there is already existing knowledge related to it.

Instructional design theories, on the other hand, do not describe what goes on inside a learner's head when learning occurs. Instead, they describe specific events outside the learner, which can be more directly and easily applied in solving problems. An important characteristic of instructional design theories is that they are design or goal oriented. ID theories are not like descriptive theories, which are used for prediction or explanation (Reigeluth, 1983, p. 7). Although instructional design theories are more directly practical, theories of learning remain important in education, since it is important for instructional designers also to know theories of learning and human development (Winn, 1997, p. 37): they are the foundation for understanding how instructional design theory works to help educators invent new and efficient instructional methods (Reigeluth, 1983, p. 13; Dijkstra et al., 1997, pp. 55-56).

Two components of instructional design theories are (1) methods of instruction, those that are used in facilitating human learning and development, and (2) situations, those aspects of the context that influence the selection of methods and determine whether particular methods should be used. This second component proposes that "one method may work best in one situation, while another may work best in a different situation" (Reigeluth, 1983, p. 8). ID methods are also considered componential, because each of them has different components or features that can be used or applied in different ways and at different times (Reigeluth, 1983, p. 10). It is therefore important to apply methods only when they are appropriate or needed in a particular instance.

Instructional designers are called to use a deductive method of instruction, analyzing and sequencing concepts based on importance, complexity, or specificity. They should also integrate and review concepts, since elaboration and repetition can help learners better understand the lessons to be learned (Reigeluth, 1983; Reigeluth & Darwazeh, 1982, cited in Dijkstra et al., 1997, p. 9). They are also required to repeat the process of decontextualizing a knowledge resource and recontextualizing that knowledge for its intended use (p. 24).

Modern classroom teachers, as instructional designers (Dick & Carey, 1978), should have at least a basic understanding of instructional media production in order to work effectively, regardless of the extent or frequency of their participation (Brown, 2004, p. 265). Milheim and Osciak (2001, p. 355) contend that the instructional designer's task is to use various instructional methods to achieve instructional goals. Howard Gardner's theory of multiple intelligences (Gardner, 1993) may be considered when planning specific instructional activities, and traditional instructional strategies may be integrated to effectively cater to different learning environments, resources, and students. Zhang (2001) asserts that taking individual differences into consideration can help ID produce a desirable outcome; thus, motivation and the recognition of the psychological characteristics of each learner are also important. According to Winn (1997, pp.
39–41), instructional designers should focus their attention on the mechanisms by which decisions are made, instead of getting involved in direct instructional decision-making. They are also required to use instructional strategies that mesh with cognitive theory and to regularly track students' learning in all aspects of development.

In conclusion, instructional design, as a scientific process that involves the process and production of technology, can be used to help learners become more effective not only at understanding concepts but also at making decisions logically and applying what they have learned efficiently. Successful use and implementation of ID requires instructional designers' or teachers' capability to use teaching and assessment methods that are appropriate to the situation, time, resources, and students' abilities and individual differences.

References

Brown, A. (2004). Building blocks for information architects: Teaching digital media production within an instructional design program. Journal of Educational Multimedia and Hypermedia, 13(3), 265+.

Dick, W., & Carey, L. (2001). The systematic design of instruction: Origins of systematically designed instruction. In D. P. Ely & T. Plomp (Eds.), Classic writings on instructional technology 2 (pp. 71-80). Englewood, CO: Libraries Unlimited.

Dijkstra, S., Schott, F., Seel, N. M., & Tennyson, R. D. (1997). Instructional design: International perspectives 1. Mahwah, NJ: Lawrence Erlbaum Associates.

Gentry, C. G. (1994). Introduction to instructional development: Process and technique. Cited in Brown, A. (2004), Building blocks for information architects: Teaching digital media production within an instructional design program. Journal of Educational Multimedia and Hypermedia, 13(3), 265.

Jencks, C. (1975). The rise of post-modern architecture (pp. 17-34). Cited in Dijkstra et al. (1997), Instructional design: International perspectives 1 (p. 28). Mahwah, NJ: Lawrence Erlbaum Associates.

Jonassen, D. H. (Ed.). (2004). Learning to solve problems: An instructional design guide. San Francisco: Pfeiffer.

Milheim, W. D., & Osciak, S. Y. (2001). Multiple intelligence and the design of web-based instruction. International Journal of Instructional Media, 28(4), 355+.

Peters, O. (1967). Das Fernstudium an Universitäten und Hochschulen: Didaktische Struktur und vergleichende Interpretation. Cited in Dijkstra et al. (1997), Instructional design: International perspectives 1 (p. 27). Mahwah, NJ: Lawrence Erlbaum Associates.

Reigeluth, C. M. (1983). Instructional design: What is it and why is it? In C. M. Reigeluth (Ed.), Instructional design theories and models (pp. 279-333). Hillsdale, NJ: Lawrence Erlbaum Associates.

Reigeluth, C. M. (1996). A new paradigm of ISD? Educational Technology (pp. 13-20). Cited in Dijkstra et al. (1997), Instructional design: International perspectives 1 (p. 28). Mahwah, NJ: Lawrence Erlbaum Associates.

Reigeluth, C. M., & Darwazeh, A. N. (1982). The elaboration theory's procedures for designing instruction: A conceptual approach. Journal of Instructional Development, 5, 22-32.

Reigeluth, C. M. (Ed.). (1983). Instructional design theories and models: A new paradigm of instructional theory 2. Hillsdale, NJ: Lawrence Erlbaum Associates.

Spiro, R. J., et al. (1987). Knowledge acquisition for application: Cognitive flexibility and transfer in complex content domains. Cited in Jonassen, D. H. (Ed.) (2004), Learning to solve problems: An instructional design guide (p. 102). San Francisco: Pfeiffer.

Winn, W. (1997). Advantages of a theory-building curriculum in instructional technology. Educational Technology, 37(1), 34–41.

Zhang, J. X. (2001). Cultural diversity in instructional design. International Journal of Instructional Media, 28(3), 299.

Wednesday, October 23, 2019

Econometrics

People management at the Seafood Restaurant, Padstow. Background From a humble beginning, Rick and Jill Stein established a small seafood restaurant on the harbor side in Padstow in 1975. The business has expanded to include a number of different food establishments at different price points which appeal to a wide client group, with all but one of the sites based in Padstow. The reputation of the business for quality of food and service, coupled with Rick's high-profile TV appearances, has ensured Padstow's place on the map in respect of 'destination dining'.

Culture Rick and Jill remain at the head of the business and, with no external shareholders, retain a strong, personal position in terms of the culture and development of the business. More recently, their son Jack, who is only 33, has been appointed as Executive Chef. As with many owner-led organizations, the culture of the organization continues to reflect the tolerant, generous, family-spirited ethos of the initial, much smaller business. With expansion and increasing headcount, however, this culture can become tested and more challenging to maintain. There is a need to develop some policies in order to ensure a degree of consistency in how people are managed and to set out the behaviors expected from employees at all levels in the course of their work. This needs to be achieved in such a way that the culture of the business is retained.

Staffing needs and employee constituency Staffing needs reflect the seasonal peaks and troughs of the restaurant business: in the busy season, weekly takings will be six times takings in the quieter season. Headcount needs to rise and fall accordingly. The seasonal maximum headcount is just under 400 employees, with a requirement for around 100 fewer employees out of season.
This reduction is achieved through 'natural wastage', as many of the seasonal employees are either students or non-students who return to the business year on year specifically to work in the busy season. Therefore, whilst 150 leavers per annum appears at first to be a very high level of staff turnover, this is typical for the hospitality industry and very much fits the needs of those workers who join, leave, and often return the following year. Reflecting the high numbers of students who work seasonally, the age profile of the business is young: 40% of employees are under 24 years of age. The growth of the business has meant that, for those who join initially as seasonal workers and then express an interest in a longer-term role with the business, this is often possible. The business is able to recruit new employees as required without the use of recruitment agencies, thereby avoiding costly agency fees.

The business remained highly profitable through the recession; however, a number of cost factors led to a reduced profit forecast for 2012: these included capital investment, a programme of upgrading premises, and food and fuel inflation. Additionally, the payroll of the business had increased over time to reflect the growth of the business. In effect, it appears there was no development plan per se. Rick Stein is quoted as saying: Little did Jill and I know when we opened a small seafood bistro on the harbor side in Padstow in 1975, with red checked tablecloths and candles in Chianti bottles, that the business would grow into four restaurants, 40 bedrooms, 3 shops, a cookery school and a pub. We did not have a master plan.
It just happened… We just wanted people to stay here for a little while knowing they could eat differently every day… Despite the unstructured approach to business development plans, turnover among permanent staff is low, and the owners are keen to reward employees with a yearly increment. As passing the increased costs on to customers would have been counterproductive, the logical approach was to consider operational costs and to rethink staffing. In many organizations, this would involve potential redundancies. Rick and Jill did not want to make any employee redundant, and so the HR function set about considering other approaches to making reductions in payroll expenditure.

Location and community Relations with the local, close-knit community are very important to the business, which is a major employer in the area. Further expansion could include opening restaurants in other locations: this would bring a fresh set of challenges to the business, not least in respect of people management.

Additional information: 1. Organizational structure 2. Map of Padstow showing names and locations of Rick and Jill Stein's businesses

Task: For each question below, you should show that you have considered theoretical perspectives, legal requirements, commercial needs and potential responses from the workforce to come up with balanced solutions, and demonstrate that you are aware of any associated risks.

Assignment questions: 1. Identify the current strategic approach to managing people in this organization, taking into account advantages and disadvantages. Your answer should include a discussion of how this strategic approach is likely to impact on operational people management issues (for example, recruitment, performance management, staff benefits, absence management, discipline and grievance). If any changes are required, which approach would you recommend? 2.
With the expansion of the business, it has been prudent to consider the development of some policies in order to ensure a degree of consistency in how people are managed. 2.1 Which people management policy would you recommend is implemented as the highest priority in the business? 2.2 Justify your recommendation. 2.3 Outline the aims and key elements of the policy. 2.4 Discuss how you would implement this policy: consider how you would ensure managers and employees 'buy in' to the policy, and identify any potential resistance. 3. At the Seafood Restaurant, Rick and Jill did not want to make any employee redundant, and so the HR function set about considering other approaches to making reductions in payroll expenditure. 3.1 Discuss the benefits to the business of avoiding redundancies. 3.2 Discuss the possible approaches to reducing payroll expenditure; consider the merits and drawbacks of each approach and identify which you would recommend. 4. Padstow has been home to the Seafood Restaurant for a considerable length of time, and expansion has occurred within the locality. If the business were to expand to another location, what would be the people management considerations in respect of: 4.1 Recruitment 4.2 Employee communication 4.
3 Consistency of culture across the business

Assignment 1: Assessment Criteria (rated Excellent / Very good / Could be better)

Theoretical knowledge and critical understanding (30 marks available): Evidence of a critical understanding of relevant theories, models and frameworks that inform the situation described by the case study; demonstrates clear understanding of key arguments, debates and contemporary issues/ideas relating to people management; work is informed by clear reference to appropriate literature.

Application of theoretical knowledge/research to practice: Perspectives, arguments, models and frameworks from the literature are clearly applied to the case study scenario; issues of practical and, where relevant, strategic importance for the organization are clearly identified and addressed; practices described in the case study are critically analysed and evaluated through the use and application of relevant academic literature.

Written communication and presentation: Referencing/citations follow Harvard protocol; work is written clearly, using appropriate style and language; spelling, grammar and layout are to a professional standard; material is clearly and effectively organized to provide a highly structured, logical and coherent set of arguments; conclusions and recommendations follow logically and are realistic in the context of the scenario.

Format requirements: Please see below.

ASSIGNMENT 2: REFLECTIVE JOURNAL Individual reflective journal (30%): You must also produce an individual personal and reflective journal, which demonstrates that you understand the role and value of reflection for individual development. You should also consider what you have learned on the module, and how it builds on your previous knowledge and experience. You must demonstrate through your reflections how and what you are learning on the module, and reflect on how your skills, ideas and attitudes to people management are developing.
This will include identifying any gaps in your existing knowledge or skills and how you plan to work to develop them. You will be encouraged to reflect on a weekly basis and to produce regular entries in your journal, enabling you to build this assignment as the module progresses. Your completed journal is likely to be approximately 1500 words in length.

Task: You are required to reflect on the learning on this module and produce a journal. This should be written in report format, critically reflecting on what you have learnt and identifying areas for development.

Details: Final report word count: 1500 words (minus daily logs). The deadlines: see above. This is an individual assignment. Your work should contain:

1. A clear introduction, introducing the report and your ideas about people management, with a brief comment on your knowledge and skills in relation to this.

2. A brief discussion of your skills at the beginning of the module, to include: a. Your views of your own strengths and weaknesses as a potential manager working with people. b. How you will use your opportunities to minimize/overcome weaknesses and potential threats. c. Remember to identify development areas.

3. A section on what you learnt from the module in terms of skills/knowledge, and perhaps how your self-concept has been challenged as a result of participating in activities on the module. a. In addition, you will need to identify consequences of your learning for the future. b. What does this learning mean for your career development? Has it got any relevance?

4. Evidence of action planning for future development. An indication of a short/medium/long-term development plan is essential. It is important that you also comment on how you will work on your weaknesses and your measures of success.

5. Regular entries reflecting on your learning on the module.
You should aim to reflect on each day, on a daily basis, either on the lecture content or on seminar activities, and you need to have at least 8 entries in addition to your introduction and conclusion (weekly reflective logs must be put in the appendices as evidence to support the contents of your report).

6. A complete list of references used.

Assignment 2: Assessment Criteria

Demonstration of your ability to use reflective writing to: 1. Create a focus for your learning 2. Describe and evaluate your learning 3. Make sense of your learning experiences 4. Demonstrate an understanding of the value of reflection

Applying your learning: 1. Identify consequences of your learning for the future 2. Application of learning experiences to your personal/professional development 3. Evidence of action planning for future development

Structure and presentation: 20 marks

Tuesday, October 22, 2019

Epic Of Gilgamesh Essays - Gilgamesh, Group TAC, Shotaro Ishinomori

Epic Of Gilgamesh "But then I ask the question: How many men must die before we can really have a free and true and peaceful society? How long will it take? If we can catch the spirit, and the true meaning of this experience, I believe that this nation can be transformed into a society of love, of justice, peace, and brotherhood where all men can really be brothers." -Reverend Dr. Martin Luther King, Jr.

Since the beginning of early civilization, differences in races and cultures have been a part of society. Along with these differences, there evolved a hatred against what was not considered the norm. For many years, prejudice, especially in the form of racism, has sparked many hate crimes and wars. Over generations, people have devised strategies to combat these injustices in the most effective ways possible, whether through civil or violent forms of protest. August Wilson's Pulitzer Prize-winning play, The Piano Lesson, is set in the early 1930s, at a time when racism was spreading like wildfire. The play takes a close look at two dynamically different approaches to overcoming prejudice in America. Although their strategies differ greatly, Berniece and Boy Willie both find ways to combat the problems associated with living in a racist culture.

Slavery is still fresh in the minds of many blacks and whites during the '30s, and so are many harsh feelings. Berniece and Boy Willie tackle the racism of their time in the same ways their parents did. Berniece's personality is very similar to her mother's, Mama Ola's. She chooses to avoid conflicts over racism whenever possible, even if it means keeping quiet about subjects that should be addressed. She finds it easier to lay low than to create a situation. Berniece views the history of the piano with the same disdain and sorrow that her mother held for so many years.
In one of many heated arguments with Boy Willie, Berniece says, "Mama Ola polished over this piano with her tears for seventeen years...seventeen years' worth of cold nights and an empty bed. For what? A piano?...To get even with somebody....and what did it ever lead to? More killing and more thieving." When Boy Willie speaks, one can almost hear the vigor and determination of his father Papa Boy Charles's voice. He, much like his father, believes in the theory: by whatever means necessary. Boy Willie is willing to do whatever it takes and remove whoever stands in his way, and that includes getting rid of any white man that poses a threat to his dreams. Boy Willie is very proud that his father gave his life to steal the piano, with the carvings of his family's history, from Sutter, the man who enslaved his great-grandmother and his grandfather. Papa Boy Charles believed that his family would always be slaves as long as Sutter still had ownership of the piano. Boy Willie tells Berniece that she should tell her daughter, Maretha, about the story behind the piano so that she can be proud of her grandfather: "You ought to mark down on the calendar the day that Papa Boy Charles brought that piano into the house...throw a party...have a celebration." Although their points of view are similar to their parents', they are very opposed in their strategies for dealing with racism. At a time when racism is at its peak due to unresolved issues on both sides, the future for blacks in America seems bleak. Although slavery has ended, brutal attacks against blacks still exist and many are worse off financially than they were as slaves. Berniece looks at her lifestyle from a realist's point of view with little optimism. She sees no chance of growth for blacks and expresses this when she says, "I'm going to tell her [Maretha] the truth...you at the bottom with the rest of us...that's just where she living."
Berniece believes that blacks are at the bottom of life and may never overcome their situation. Although she believes that blacks can find success, she feels that success is limited to the boundaries into which blacks are born. She follows the idea that some blacks refer to as

Monday, October 21, 2019

Free Essays on The Vietnam War

THE VIETNAM WAR At last, after thirteen years of fighting between North Vietnam and the allies of South Vietnam, the Paris Peace Talks treaty was finally signed and the war was ended. The peace treaty created a compromise to reunite the two Vietnams and allowed the United States to withdraw its military forces. This ended the conflict between Vietnam and the United States. The United States was drawn into the war based on economic interests and the need of the previously controlling colonial power, France, to secure the rubber and banana plantations for South Vietnam's businessmen. Also, oil from offshore drilling was located close to the port of Saigon, which was used to supply oil tankers. The S.E.A.T.O. (South East Asia Treaty Organization) treaties joined Australia, Japan, New Zealand, South Korea, Thailand, the Philippines, and the United States as allies of the Republic of South Vietnam. The division of Vietnam into two separate nations gave the allies little choice about whom to support. South Vietnam's resources and the port of Saigon would remain a free republic as long as the allies could defend them from domination by the industrialized nation of North Vietnam. North Vietnam was supported by the U.S.S.R. and supplied with weapons by the Soviet bloc. Led by Ho Chi Minh, a nationalist schooled in Russia, the North Vietnamese armies stood in opposition to the South Vietnamese government. After the fall of Saigon, the city was renamed Ho Chi Minh City and still has that name today. Another ally of the North Vietnamese government was the weather and terrain: vast jungles, swamps, rice paddies, and mountains made travel slow to impossible for the large U.S. military machine. Monsoon season virtually stopped all American actions for about ninety days a year. Typhoons were always a threat to the South Vietnamese allies, grounding flights and stopping highway supplies. Heat and humidity of ...

Sunday, October 20, 2019

Zygorhiza Facts and Figures

Zygorhiza Facts and Figures Name: Zygorhiza (Greek for "yoke root"); pronounced ZIE-go-RYE-za Habitat: Shores of North America Historical Epoch: Late Eocene (40-35 million years ago) Size and Weight: About 20 feet long and one ton Diet: Fish and squid Distinguishing Characteristics: Long, narrow body; long head About Zygorhiza Like its fellow prehistoric whale Dorudon, Zygorhiza was closely related to the monstrous Basilosaurus, but differed from both of its cetacean cousins in that it had an unusually sleek, narrow body and a long head perched on a short neck. Strangest of all, Zygorhiza's front flippers were hinged at the elbows, a hint that this prehistoric whale may have lumbered up onto land to give birth to its young. By the way, along with Basilosaurus, Zygorhiza is the state fossil of Mississippi; the skeleton at the Mississippi Museum of Natural Science is affectionately known as "Ziggy."

Saturday, October 19, 2019

Operations Management (Flow Charts) Case Study Example | Topics and Well Written Essays - 1250 words

Operations Management (Flow Charts) - Case Study Example The surgeon uses staples to divide the stomach into upper and bottom sections. The upper section is usually smaller while the bottom section is larger (Klein 86). The smaller upper section is where food flows after eating. The smaller upper section, also called the pouch, is comparable to the size of a walnut. This section holds about a single ounce of food. The second procedure in this surgery is called the bypass. During this step, the surgeon connects the jejunum to a small hole in the patient's pouch. The eaten food will flow from the pouch to the small intestines. This will enable the patient to absorb fewer calories. Bypass surgery can be carried out in two ways. In open surgery, the surgeon makes a surgical cut to open the belly. The bypass is done by working on the patient's small intestines, stomach, and other parts. Alternatively, the surgeon might use a tiny camera referred to as a laparoscope (Apple, Lock, and Peebles 76). This process is termed laparoscopy; the camera is put into the patient's body. In laparoscopy, the surgeon makes small cuts in the patient's belly. Then he passes the camera through one of the cuts. The camera is linked to the video monitor in the operating room. The surgeon keeps track of the belly on the screen. The surgeon then uses surgical instruments to carry out the bypass. The process can be represented in the form of a flow chart as shown below. 2. The minimum time the patient stays in the hospital before being discharged after paying cash is four days. The average time for those using insurance is about two weeks. Subsequently, a patient undergoing laparoscopic surgery takes only two days. When the patient pays cash for the bariatric surgery, it will save the patient the stress of going through counseling and various tests. Paying cash will also save the patient the agony of proving to the surgeon that he has tried other means of weight loss.
Consequently, it reduces the patient's stress of waiting for half a year before the procedure. Therefore, paying cash is something that the patient needs to consider (Apple, Lock, and Peebles 76). When surgery is paid for in cash, the patient is given the option of choosing the surgeon to carry out the surgery. It does not involve longer procedures like insurance does. When the patient pays by cash, he normally spends one to three days in the hospital. When a patient undergoes laparoscopy, he stays in the hospital for two to three days. When patients undergo this procedure, they recover faster and return to normal in two weeks' time (McGowan and Chopra 89). The hernia rate in open surgery is reduced significantly. Therefore, the patients who pay cash are better off based on the procedural types to select from. Paying cash enables the patient to choose the location for the surgery and the kind of surgeon to be attended by. Dealing with insurance is always frustrating, but most insurance companies have realized that covering bariatric procedures makes financial sense (Apple, Lock, and Peebles 54). Paying cash enables the patient to have surgery almost immediately, and discharge is also sooner. The patient does not run the risk of being turned down due to coverage issues. There are reported cases of turn-downs from insurance companies in the last minutes before surgery. 6. Assuming the patients get treatment under an insurance cover and go for open surgery, the bariatric center will make $945,000: Number of

Marketing project Research Paper Example | Topics and Well Written Essays - 750 words

Marketing project - Research Paper Example Also, another figure worth mentioning is the Egyptian Wael Mhgoub, who will be running a coffee branch in Dubai. It is the bargaining power of our consumers which plays a vital role in establishing our desirability from a customer point of view. The UAE environment comprises guaranteed customers for any specialty coffee industry. The bargaining strength of consumers is proportional to their ability to bring down prices and bargain for the best-quality services and products. UAE customers are quite capable of pitting rival business firms against one another. This was one of the many considerations made before Starbucks ventured into the UAE. Here in the UAE, there is a vast population with the financial capability, since Starbucks products don't come cheap (Miller, 2009). Starbucks is the most expansive and leading coffeehouse. Starbucks Corporation is a multibillion-dollar international coffeehouse chain, and it is listed on the New York Stock Exchange (NYSE), where its shares are traded globally. This corporation has 17,133 stores in 49 countries, 87 of these stores being based in the UAE. Starbucks' headquarters are based in Seattle, Washington, USA. Starbucks is the market leader of the coffee market in the world, and in the UAE it is the leading chain of coffee cafes. Starbucks is known for its exceptionally high-quality service, and customers highly commend them, since they are happy with the excellent service. Customer feedback shows that customers are satisfied with the quality as well as the taste of the coffee. Based on customer feedback, the UAE loves our coffee brands. They are impressed with the wide range of coffee brands we offer. Starbucks is in over 40 countries in the world, and in the UAE alone we have 92 branches. Starbucks as a brand alone sells due to its high brand awareness and its globally known high-quality coffee. Since it is a multibillion dollar

Friday, October 18, 2019

Educational Technology Proposal Research Example | Topics and Well Written Essays - 1500 words

Educational Technology - Research Proposal Example It is known as the "Little School Across the River." Lafarque Elementary's enrollment is roughly 736 students, with a population consisting of 75% white students and 25% black students, in grades ranging from pre-kindergarten to sixth grade. As the only 6th grade math teacher, I teach a total of 98 students in four 90-minute blocks. My students are seated in groups of four in order to incorporate cooperative learning, as well as to complete group assignments. I have pertinent math material posted throughout my classroom as well. I have 15 special needs students (1 IEP and 14 IAPs). A certified special education teacher comes into my room daily for a total of 30 minutes to assist as needed. The reason for choosing my classes for the study is to see how technology can improve their ability to learn and comprehend 6th grade mathematical concepts and practices. Their strength is, hopefully, the cooperative learning that has been instituted in the classroom from Day 1. I hoped that by using cooperative learning the students would be able to help one another in the learning of math. This grouping of four also allows me to teach them through group assignments. Their challenge is the ability to get along with one another while attempting to complete the group assignments or help one another through cooperative learning. ... However, the research shows the positives heavily outweigh the negative aspects when it comes to using technology in the classroom. Many of the sources found for this research rely on the same underlying studies. Most of those particular studies show us that technology affects students' ability to learn in positive ways.
On the website Education World there is an article, entitled "Technology in Schools: Does It Make a Difference?", written by Glori Chaika back in 1999, which was originally from the website TechnicalSchool.org but placed on this site in 2006. This article opens with the information that the Clinton administration, back in 1998, set aside an additional $25 million for integrating technology into the schools and instructing teachers in the use of technology for the classroom. Furthermore, this article quotes Darla Waldrop, a junior-high computer lab coordinator in Louisiana. She states, "Children who don't do anything in class will work if it's on the computer. It takes that 'I'm not working for an authority figure' out of it. They're working at their own pace, and they love the multimedia effect." This article also tells what makes some programs more successful than others, and it gives pro-technology resources for schools. Best Evidence Encyclopedia released a booklet in July of this past year that was written by two members of Johns Hopkins University, Alan C.K. Cheung and Robert Slavin. This booklet was entitled The Effectiveness of Educational Technology Applications for Enhancing Mathematics Achievement in K-12 Classrooms: A Meta-Analysis. This meta-analysis shows us that technology in mathematics classrooms helps the students

Business Ethics and Leadership- Whistle-Blowing Essay

Business Ethics and Leadership- Whistle-Blowing - Essay Example To do this effectively, one also has to have a strong sense of leadership, vision and determination to continue moving forward with the ethical values which one believes are correct. Examining the several perspectives of whistle-blowing in an organization can also determine the ethical legitimacy that is a part of this, as well as how it takes a specific sense of leadership to follow through with the situation. The Institution of Ethical Decision Making The current concept of ethics within businesses is now recognized as an institution. This is so because corporations are expected to follow through with a code of conduct that assists with doing what is right and fair for employees and the general public. The institution began with the ethics scandal associated with Enron and the complexities which came from the financial situation and deceptions which occurred. This was followed by several parties believing that a framework needed to be followed within corporations, specifically one which would create programs, guidelines and practices that would hold various companies liable for the actions which they were supposed to follow. The defined elements of this institution are based on cultural and social expectations, the relationship to politics, and upholding standards in real-life situations which occur. By examining and contributing to these various expectations in the right manner, there is the ability to uphold the expectations through the performance of the company and the results which the public is able to look into (Ferrell, Fraedrich, p. 15). The framework which has been built with the institution of ethics is followed with the understanding of moral problems and how this creates specific responses from employees and to the public.
The main response through the institution is based on ethical management, meaning that a company has to make specific promises to the community and follow through with these. More importantly, practices that would cause harm to employees or the public are supposed to be prevented and held to specific standards. While there are certain issues which don't carry a difference between right and wrong, others are determined by the harm which they may cause, which becomes the basis for the standard business practices which are to be followed. While each business is able to uphold the standards and practices, there is also a direct association with others holding corporations accountable for actions which may become public at any time (Geva, p. 133). Ethics and Whistle-Blowing The concept of whistle-blowing is able to move up through an organization because ethical standards have been violated. These ethics are based on the institutional standards that are upheld by an organization and which are expected by the public. If there are violations against the employees, organization or the public, then an individual has the right to point fingers at those responsible. Whistle-blowing takes place when an individual decides to point out the faults of a company, specifically with a focus on illegal, immoral or illegitimate practices that are taking place in the organization. It is expected that the response to the whistle-blowing will be a large amount of publicity as well as mediation which takes place to resolve the issue. It is further expected that there will be sets of questions which are asked pertaining to

Thursday, October 17, 2019

Management Information and Communication System Essay - 1

Management Information and Communication System - Essay Example In order to achieve the constant supply of raw materials and the supply of goods and services to the consumers, a business firm should ensure that its supply chain management systems are effective. In this case, only an effective supply chain management system can enhance the firm's responsiveness to its customers' needs and the utilisation of its resources. In effect, the supply chain management system enables a firm's coordination during the processes of planning, production, and logistics with the suppliers. Business Benefits of Supply Chain Management Systems A business should be able to evaluate the status of its supplies and resources while maintaining an inventory system along the supply chain. Bowersox, Closs and Cooper (2010, p. 133) called this visibility, which is the ability of a business to track its resources and inventory along the supply chain while evaluating and managing any information regarding the resources and inventory. In effect, supply chain management systems benefit a business by using the information in the supply chain to plan against any potential problems along the supply chain. Consequently, the evaluation of these problems enables businesses to manage any potential risks, which enhances the responsiveness of a business towards its consumers' needs. ... In addition, a business will benefit by planning for the consumers' constraints such as transportation and storage capacities, the raw materials required, and the amount a firm should produce in order to meet the consumers' demands. Supply Chain Management Systems and Coordination of Planning, Production, and Logistics with Suppliers As earlier indicated, one business benefit of supply chain management systems to a firm is the ability of a firm to remain responsive to its consumers' demand.
In effect, an effective system will enhance a firm's planning of its production to meet the market demand, which is the process of demand management. Bowersox, Closs and Cooper (2010, p. 133) noted, "Demand management develops the forecast that drives anticipatory supply chain processes." The importance of the "anticipatory supply chain processes" in a business firm is to establish the amount of products to produce and the raw materials required in the production of the products. In effect, an organisation maintains steady contact with the suppliers of raw materials based on the firm's projections and the stock available. A supply chain management system enables a business to identify the goods that require production in a firm. In this way, a firm will be able to balance between its ability in terms of resources available and the manufacturing stock. It is important to point out that these resources include the most significant resource of human capital. Bowersox, Closs, and Cooper (2010, p. 135) called this product planning and noted, "It uses the statement of requirements obtained from demand management in conjunction with

Write an analysis of an author's works, first discussing the author's Research Paper

Write an analysis of an author's works, first discussing the author's life to put the author in an accurate time and place relevant to your analysis of his or her work - Research Paper Example The works of Faulkner, Hughes and Poe represent a typical middle-class American family in the early 19th century struggling to handle financial challenges. The compositions reflect a time when society was reeling from the effects of wars (Miller 3). Despite the differences in the backgrounds of the three writers, they were investigative in their compositions. Hence, people refer to them as gothic writers. Indeed, they pondered the miseries in their societies as presented in their compositions. The authors utilized imagery and symbolism in their creations. This piece analyses the works of Faulkner, Hughes and Poe in relation to their lives. William Faulkner grew up in a humble background in Mississippi, where he joined the military and later rose to the rank of sergeant. The military provided Faulkner an exposure like no other. While working in the forces, he interacted with people from various backgrounds (Aiken 7). At first, it was hard for William to acknowledge the challenges that people were facing in the society. The author got the idea of writing creative compositions from his environment. It became his mission to salvage society from the problems people were facing. His first work was a novel he wrote in 1925 (Aiken 2). The compositions that followed were influenced by the stories he heard from his elders about America's history. He used imagery in his works. He was devoted to informing the audience of the decadence that was going on in the southern states. Hughes, for his part, focused on enlightening American society while undertaking several odd jobs. At the time, he intended to reflect on the challenges that affected blacks in America. According to Miller (8), literature gave Hughes an opportunity to reflect on the challenges that affected blacks in America.
The previous scholars who had written works on

Wednesday, October 16, 2019


Tuesday, October 15, 2019

Statistics Assignment Example | Topics and Well Written Essays - 2250 words

Statistics - Assignment Example In order to calculate the risk, uncertainty estimates are provided by HSBC bank. The mean of uncertain returns for US Super Cars is equal to $2,092,868, and the fixed amount offered by HSBC is $2,150,000. If the uncertain revenues and the amount offered by HSBC are compared, the offer appears to be very beneficial for US Super Cars, as the uncertain revenues are less than the money offered by HSBC; also, by accepting this offer the exchange rate risk will be transferred from US Super Cars to HSBC. In addition, US Super Cars will not have to pay any additional charges (contractual fee etc.) to enter into the contract. Introduction Today's business environment is highly globalized and is truly lacking borders. Businesses have moved beyond domestic and national boundaries to include markets around the globe, which has resulted in the increased interconnectedness of distant localities. The goods produced in one area of the world are consumed in a separate, distant locality. Companies are outsourcing their production to areas where labor or material costs are lower in an effort to earn higher profits. When local competition intensifies, organizations start catering to markets in other countries where competition is less and the market is relatively immature. In such cross-border transactions, organizations are exposed to exchange rate risk which they would not face if they were involved in merely domestic transactions. Moving beyond the boundaries has several positive as well as negative aspects. On the positive side, the market is expanding day by day and the number of customers is increasing in the same proportion; therefore, business sales increase rapidly. Keeping this fact in view, the US Super Car company has taken the step of selling its luxury sports cars in five (5) countries: the United Kingdom, Japan, Canada, South Africa and the United States of America.
While doing business across the world, there is always a risk of fluctuation in the exchange rate that may result in an increase or decrease in the profit margin. There are certain calculations, including but not limited to the standard deviation, mean, average, sum, and profit, which need to be performed in order to cope with exchange rate uncertainty. Mean & Standard Deviation Initially, it is vital to calculate the mean, standard deviation and variance of the selling price to the customer. In order to perform the calculations, it is equally important to convert the local currencies into dollars (as we need to answer in dollars) (AGAInstitute, n.d.). The following table shows the details of the calculated values of the mean, standard deviation and variance. In order to calculate the uncertain revenues in dollars, the dollar-per-local-currency rate provided by HSBC bank is multiplied by the selling price and the quantity. For example, in order to calculate the uncertain revenue generated by sales in the UK, the $1.41/£ mean rate provided by HSBC is multiplied by the selling price in the UK, which is equal to £57,000, and the quantity sold (12). The product obtained is the uncertain revenue generated through sales in the UK. The same process is repeated to get uncertain revenues in each individual foreign country (Japan 1, Japan 2, Canada 1, Canada 2, and South Africa). The total
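The per-country calculation described above is linear in the exchange rate, so it can be sketched in a few lines of Python. This is only an illustrative sketch: the UK figures (the $1.41/£ mean rate, the £57,000 selling price, and the quantity of 12) come from the text, but the exchange-rate standard deviation used below is a hypothetical placeholder, since this excerpt does not state it.

```python
# Sketch of the revenue-uncertainty calculation described above.
# ASSUMPTION: the rate standard deviation (0.04 $/GBP) is a placeholder;
# the excerpt gives only the mean rate.

def uncertain_revenue(rate_mean, rate_std, price_local, quantity):
    """Convert a local-currency sale into expected dollar revenue.

    Because revenue = rate * price * quantity is linear in the exchange
    rate, the mean and standard deviation of revenue scale the same way.
    """
    mean_rev = rate_mean * price_local * quantity
    std_rev = rate_std * price_local * quantity
    return mean_rev, std_rev

# UK example from the text: $1.41/GBP mean rate, GBP 57,000 price, 12 cars.
uk_mean, uk_std = uncertain_revenue(1.41, 0.04, 57_000, 12)
print(round(uk_mean, 2))  # 964440.0

# Decision rule from the summary above: accept HSBC's fixed offer when it
# exceeds the mean of the uncertain revenues ($2,150,000 vs $2,092,868).
accept_offer = 2_150_000 > 2_092_868
print(accept_offer)  # True
```

The same function would be applied to each of the other markets (Japan 1, Japan 2, Canada 1, Canada 2, and South Africa) with their own rates, prices, and quantities, and the per-country means summed to obtain the total uncertain revenue compared against the HSBC offer.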

Monday, October 14, 2019

Child labor Essay Example for Free

Child labor Essay Children are the future of the nation. They are the flowers of our national garden. It is our duty to protect these flowers. Child labour is a socio-economic problem. Child labour is not a new phenomenon in India. From ancient times, children were required to do some work, either at home or in the field, along with their parents. However, we find in the Manusmriti and the Arthashastra that the king made education compulsory for every child, boy or girl. In those days there was a system of trade in children, who were purchased and converted into slaves by some people. Child labour was identified as a major problem in the 19th century, when the first factories were started in the mid-1800s. Legislative measures were first adopted as early as 1881. Since independence there have been several laws and regulations regarding child labour. Child labour has been defined as any work done by children in order to economically benefit their family or themselves, directly or indirectly, at the cost of their physical, mental or social development. The child is the loveliest creation of nature. But it is circumstances which force children into hard labour. They have to earn a livelihood from early childhood, which stops their mental development. The nation suffers a net loss of their capacity as mature adults. Child labour is a global problem. It is more common in underdeveloped countries. Child labour, by and large, is a problem of poor and destitute families, where parents cannot afford the education of their children. They have to depend on the earnings of their children. The prevalence of child labour is a blot on society. It is a national disgrace that millions of children in this country have to spend a major part of their daily routine in hazardous work. The problem of child labour in India is the result of traditional attitudes, urbanisation, industrialisation, migration, lack of education, etc. However, extreme poverty is the main cause of child labour.
According to UNICEF, India has the largest number of working children in the world. Over 90% of them live in rural areas. The participation rates in rural and urban areas are 6.3% and 2.5% respectively. According to a recent report, 17 million children in our country are engaged in earning their livelihood. This constitutes 5% of the total child population of the nation, and about one-third of the total child labourers of the world. In India, working children are engaged in different organised and unorganised sectors, in both rural and urban areas. In the rural sector, children are engaged in field plantations, domestic jobs, forestry, fishing and cottage industry. In the urban sector they are employed in houses, shops, restaurants, small and large industries, transport, communication, garages, etc. In India, working children are also self-employed as newspaper vendors, milk boys, shoeshine boys, rag pickers, rickshaw-pullers, etc. About 78.71% of child workers are engaged in cultivation and agriculture, 6.3% are employed in fishing, hunting and plantation, 8.63% in manufacturing, processing, repairs and household industry, 3.21% in construction, transport, storage, communication and trade, and 3.15% in other services. Child labour is exploited in several ways. The preference for child labour among many employers is mainly due to the fact that it is cheap, safe and without any liability. Many children take up a job just because of the non-availability of schools in their areas; rather than sitting idle, they prefer to go to work. Illiteracy and ignorance of parents is also an important factor, as these parents do not consider child labour an evil. Child labourers have to work more than adult workers and are exploited by their employers. There are several constitutional and legal provisions to protect working children. At present there are 14 major acts and laws that provide legal protection to working children. Notwithstanding this, the evil of child labour is on the increase.
The biggest cause behind its spread is poverty. It cannot be completely eradicated from society unless its root cause is addressed. Child labour perpetuates poverty. Child labour is economically unsound, psychologically disastrous and ethically wrong. It should be strictly banned. The general improvement in the socio-economic conditions of people will result in the gradual elimination of child labour.

Sunday, October 13, 2019

VaR Models in Predicting Equity Market Risk

VaR Models in Predicting Equity Market Risk Chapter 3 Research Design This chapter presents how the proposed VaR models are applied in predicting equity market risk. Basically, the thesis first outlines the collected empirical data. We next focus on verifying the assumptions usually engaged in the VaR models and then identify whether the data characteristics are in line with these assumptions by examining the observed data. Various VaR models are subsequently discussed, beginning with the non-parametric approach (the historical simulation model), followed by the parametric approaches under different distributional assumptions of returns, intentionally combined with the Cornish-Fisher Expansion technique. Finally, backtesting techniques are employed to evaluate the performance of the suggested VaR models. 3.1. Data The data used in the study are financial time series that reflect the daily historical price changes of two single equity index assets: the FTSE 100 index of the UK market and the SP 500 of the US market. Mathematically, instead of the arithmetic return, the paper employs the daily log-return. The full period on which the calculations are based stretches from 05/06/2002 to 22/06/2009 for each index. More precisely, to implement the empirical test, the period is divided into two sub-periods: the first series of empirical data, used for the parameter estimation, spans from 05/06/2002 to 31/07/2007. The rest of the data, between 01/08/2007 and 22/06/2009, is used for predicting VaR figures and backtesting. Note that the latter stage is exactly the recent global financial crisis period, which began in August 2007, peaked dramatically in the closing months of 2008 and subsided significantly by the middle of 2009. Consequently, the study purposely examines the accuracy of the VaR models within this volatile time. 3.1.1.
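The log-return construction can be sketched as follows (a minimal illustration assuming NumPy; the short price series here is hypothetical, not the actual index data):

```python
import numpy as np

# Illustrative closing-price series standing in for the FTSE 100 / SP 500 data.
prices = np.array([100.0, 101.5, 100.8, 102.3, 101.9])

# Daily log-returns: R_t = ln(P_t / P_{t-1}); one fewer observation than prices.
log_returns = np.diff(np.log(prices))

print(len(prices), len(log_returns))
```

Applied to the 1782 FTSE 100 closing prices, the same construction yields the 1781 daily returns used in the empirical analysis.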
FTSE 100 index The FTSE 100 Index is a share index of the 100 most highly capitalised UK companies listed on the London Stock Exchange, which began on 3 January 1984. FTSE 100 companies represent about 81% of the market capitalisation of the whole London Stock Exchange, and the index has become the most widely used UK stock market indicator. In the dissertation, the full data set used for the empirical analysis consists of 1782 observations (1782 working days) of the UK FTSE 100 index covering the period from 05/06/2002 to 22/06/2009. 3.1.2. SP 500 index The SP 500 is a value-weighted index, published since 1957, of the prices of 500 large-cap common stocks actively traded in the United States. The stocks listed on the SP 500 are those of large publicly held companies that trade on either of the two largest American stock exchange operators, NYSE Euronext and NASDAQ OMX. After the Dow Jones Industrial Average, the SP 500 is the most widely followed index of large-cap American stocks. The SP 500 refers not only to the index but also to the 500 companies that have their common stock included in the index, and it is consequently considered a bellwether for the US economy. Similar to the FTSE 100, the data for the SP 500 are observed over the same period, with 1775 observations (1775 working days). 3.2. Data Analysis For the VaR models, one of the most important aspects is the set of assumptions relating to measuring VaR. This section first discusses several VaR assumptions and then examines the characteristics of the collected empirical data. 3.2.1. Assumptions 3.2.1.1. Normality assumption Normal distribution As mentioned in chapter 2, most VaR models assume that the return distribution is normally distributed with mean of 0 and standard deviation of 1 (see figure 3.1). Nonetheless, chapter 2 also shows that the actual returns in most previous empirical investigations do not completely follow the standard distribution.
Figure 3.1: Standard Normal Distribution Skewness The skewness is a measure of the asymmetry of the distribution of the financial time series around its mean. Normally distributed data is symmetric, with skewness of 0. A dataset with either a positive or negative skew deviates from the normal distribution assumption (see figure 3.2). This can cause parametric approaches, such as the Riskmetrics and the symmetric normal-GARCH(1,1) model under the assumption of normally distributed returns, to be less effective if asset returns are heavily skewed. The result can be an overestimation or underestimation of the VaR value, depending on the skew of the underlying asset returns. Figure 3.2: Plot of a positive or negative skew Kurtosis The kurtosis measures the peakedness or flatness of the distribution of a data sample and describes how concentrated the returns are around their mean. A high value of kurtosis means that more of the data's variance comes from extreme deviations. In other words, a high kurtosis means that the asset returns contain more extreme values than modelled by the normal distribution. A positive excess kurtosis is, according to Lee and Lee (2000), called leptokurtic and a negative excess kurtosis is called platykurtic. Normally distributed data has a kurtosis of 3. Figure 3.3: General forms of Kurtosis Jarque-Bera Statistic In statistics, Jarque-Bera (JB) is a test statistic for testing whether a series is normally distributed. In other words, the Jarque-Bera test is a goodness-of-fit measure of departure from normality, based on the sample kurtosis and skewness. The test statistic is defined as JB = (n/6)[S^2 + (K - 3)^2/4], where n is the number of observations, S is the sample skewness and K is the sample kurtosis. For large sample sizes, the test statistic has a Chi-square distribution with two degrees of freedom. Augmented Dickey-Fuller Statistic The Augmented Dickey-Fuller test (ADF) is a test for a unit root in a time series sample.
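The moment-based diagnostics above can be sketched together as follows (an illustrative implementation on simulated data, not the thesis's actual series):

```python
import numpy as np

def jarque_bera(x):
    """Jarque-Bera statistic JB = (n/6) * (S^2 + (K - 3)^2 / 4)."""
    n = len(x)
    d = x - x.mean()
    s2 = (d ** 2).mean()
    S = (d ** 3).mean() / s2 ** 1.5   # sample skewness (0 for symmetric data)
    K = (d ** 4).mean() / s2 ** 2     # sample kurtosis (3 for normal data)
    return n / 6.0 * (S ** 2 + (K - 3.0) ** 2 / 4.0)

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)

# For normal data JB should be small relative to the chi-square(2) critical
# value of 9.21 at the 1% level; fat-tailed returns produce very large values.
print(jarque_bera(x))
```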
It is an augmented version of the Dickey-Fuller test for a larger and more complicated set of time series models. The ADF statistic used in the test is a negative number. The more negative it is, the stronger the rejection of the hypothesis that there is a unit root at some level of confidence. ADF critical values: (1%) -3.4334, (5%) -2.8627, (10%) -2.5674. 3.2.1.2. Homoscedasticity assumption Homoscedasticity refers to the assumption that the dependent variable exhibits similar amounts of variance across the range of values of an independent variable. Figure 3.4: Plot of Homoscedasticity Unfortunately, chapter 2, based on previous empirical studies, confirmed that financial markets usually experience unexpected events and uncertainties in prices (and returns) and exhibit non-constant variance (heteroskedasticity). Indeed, the volatility of financial asset returns changes over time, with periods when volatility is exceptionally high interspersed with periods when volatility is unusually low, namely volatility clustering. It is one of the widely recognised stylised facts (stylised statistical properties of asset returns) which are common to a wide set of financial assets. Volatility clustering reflects that high-volatility events tend to cluster in time. 3.2.1.3. Stationarity assumption According to Cont (2001), the most essential prerequisite of any statistical analysis of market data is the existence of some statistical properties of the data under study which remain constant over time; if not, it is meaningless to try to identify them. One of the hypotheses relating to the invariance of the statistical properties of the return process over time is stationarity. This hypothesis assumes that for any set of time instants t1, ..., tk and any time interval τ, the joint distribution of the returns r(t1), ..., r(tk) is the same as the joint distribution of the returns r(t1 + τ), ..., r(tk + τ).
The Augmented Dickey-Fuller test, in turn, will be used to examine whether the statistical properties of the return series are stationary. 3.2.1.4. Serial independence assumption There are a large number of tests of the randomness of sample data. Autocorrelation plots are one common method of testing for randomness. Autocorrelation is the correlation between the returns at different points in time. It is the same as calculating the correlation between two different time series, except that the same time series is used twice: once in its original form and once lagged one or more time periods. The results can range from +1 to -1. An autocorrelation of +1 represents perfect positive correlation (i.e. an increase seen in one time series leads to a proportionate increase in the other time series), while a value of -1 represents perfect negative correlation (i.e. an increase seen in one time series results in a proportionate decrease in the other time series). In terms of econometrics, the autocorrelation plot will be examined based on the Ljung-Box Q statistic test. However, instead of testing randomness at each distinct lag, it tests the overall randomness based on a number of lags. The Ljung-Box test statistic can be defined as Q = n(n + 2) Σ_{j=1..h} ρ_j^2 / (n - j), where n is the sample size, ρ_j is the sample autocorrelation at lag j, and h is the number of lags being tested. The hypothesis of randomness is rejected if Q > χ²(1 - α, h), where χ²(1 - α, h) is the percent point function (the 1 - α quantile) of the Chi-square distribution with h degrees of freedom and α is the significance level. 3.2.2. Data Characteristics Table 3.1 gives the descriptive statistics for the FTSE 100 and the SP 500 daily stock market prices and returns. Daily returns are computed as logarithmic price relatives: Rt = ln(Pt/Pt-1), where Pt is the closing daily price at time t. Figures 3.5a and 3.5b, 3.6a and 3.6b present the plots of returns and price index over time.
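The Ljung-Box statistic just described can be implemented directly (an illustrative sketch on simulated white noise; the thesis itself computes Q(12) on the index returns):

```python
import numpy as np

def acf(x, lag):
    """Sample autocorrelation at a given lag."""
    d = x - x.mean()
    return (d[lag:] * d[:-lag]).sum() / (d * d).sum()

def ljung_box_q(x, h):
    """Ljung-Box Q = n(n + 2) * sum_{j=1..h} rho_j^2 / (n - j)."""
    n = len(x)
    return n * (n + 2) * sum(acf(x, j) ** 2 / (n - j) for j in range(1, h + 1))

rng = np.random.default_rng(1)
white_noise = rng.standard_normal(2_000)

# Compare against the chi-square critical value with h = 12 degrees of freedom
# (21.03 at the 5% level); white noise should normally fall below it.
q = ljung_box_q(white_noise, 12)
print(q)
```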
Besides, Figures 3.7a and 3.7b, 3.8a and 3.8b illustrate the combination of the frequency distribution of the FTSE 100 and the SP 500 daily return data with a normal distribution curve imposed, spanning from 05/06/2002 through 22/06/2009.

Table 3.1: Diagnostics table of statistical characteristics of the returns of the FTSE 100 index and the SP 500 index between 05/06/2002 and 22/06/2009.

DIAGNOSTICS                        SP 500                       FTSE 100
Number of observations             1774                         1781
Largest return                     10.96%                       9.38%
Smallest return                    -9.47%                       -9.26%
Mean return                        -0.0001                      -0.0001
Variance                           0.0002                       0.0002
Standard deviation                 0.0144                       0.0141
Skewness                           -0.1267                      -0.0978
Excess kurtosis                    9.2431                       7.0322
Jarque-Bera                        694.485***                   2298.153***
Augmented Dickey-Fuller (ADF) [2]  -37.6418                     -45.5849
Q(12)                              20.0983* (autocorr: 0.04)    93.3161*** (autocorr: 0.03)
Q2(12)                             1348.2*** (autocorr: 0.28)   1536.6*** (autocorr: 0.25)
Ratio of SD/mean                   144                          141

Note: 1. *, **, and *** denote significance at the 10%, 5%, and 1% levels, respectively. 2. 95% critical value for the augmented Dickey-Fuller statistic = -3.4158

Figure 3.5a: The FTSE 100 daily returns from 05/06/2002 to 22/06/2009
Figure 3.5b: The SP 500 daily returns from 05/06/2002 to 22/06/2009
Figure 3.6a: The FTSE 100 daily closing prices from 05/06/2002 to 22/06/2009
Figure 3.6b: The SP 500 daily closing prices from 05/06/2002 to 22/06/2009
Figure 3.7a: Histogram showing the FTSE 100 daily returns combined with a normal distribution curve, spanning from 05/06/2002 through 22/06/2009
Figure 3.7b: Histogram showing the SP 500 daily returns combined with a normal distribution curve, spanning from 05/06/2002 through 22/06/2009
Figure 3.8a: Diagram showing the FTSE 100 frequency distribution combined with a normal distribution curve, spanning from 05/06/2002 through 22/06/2009
Figure 3.8b: Diagram showing the SP 500 frequency distribution combined with a normal distribution curve, spanning from 05/06/2002 through 22/06/2009

Table 3.1 shows that the FTSE 100 and the SP 500 average daily returns
are approximately 0 percent, or at least very small compared with the sample standard deviation (the standard deviation is 141 and 144 times the size of the average return for the FTSE 100 and the SP 500, respectively). This is why the mean is often set to zero when modelling daily portfolio returns, which reduces the uncertainty and imprecision of the estimates. In addition, the large standard deviation compared with the mean supports the evidence that daily changes are dominated by randomness, and the small mean can be disregarded in risk measure estimates. Moreover, the paper also employs five statistics which are often used in analysing data, namely the skewness, kurtosis, Jarque-Bera, Augmented Dickey-Fuller (ADF) and Ljung-Box tests, to examine the empirical full period from 05/06/2002 through 22/06/2009. Figures 3.7a and 3.7b demonstrate the histograms of the FTSE 100 and the SP 500 daily return data with the normal distribution imposed. The distribution of both indexes has longer, fatter tails and higher probabilities for extreme events than the normal distribution, in particular on the negative side (negative skewness implying that the distribution has a long left tail). Fatter negative tails mean a higher probability of large losses than the normal distribution would suggest. Each distribution is also more peaked around its mean than the normal distribution. Indeed, the value of the kurtosis is very high (10 and 12 for the FTSE 100 and the SP 500, respectively, compared with 3 for the normal distribution) (also see Figures 3.8a and 3.8b for more details). In other words, the most prominent deviation from the normal distributional assumption is the kurtosis, which can be seen from the middle bars of the histogram rising above the normal distribution. Moreover, it is obvious that outliers still exist, which indicates that excess kurtosis is still present. The Jarque-Bera test rejects normality of returns at the 1% level of significance for both indexes.
So, the samples display the typical financial characteristics: volatility clustering and leptokurtosis. Besides that, the daily returns for both indexes (presented in Figures 3.5a and 3.5b) reveal that volatility occurs in bursts; in particular, the returns were very volatile at the beginning of the examined period, from June 2002 to the middle of June 2003. After remaining stable for about 4 years, the returns of the two well-known stock indexes became highly volatile from July 2007 (when the credit crunch was about to begin) and dramatically peaked from July 2008 to the end of June 2009. Generally, there are two recognised characteristics of the collected daily data. First, extreme outcomes occur more often and are larger than predicted by the normal distribution (fat tails). Second, the size of market movements is not constant over time (conditional volatility). In terms of stationarity, the Augmented Dickey-Fuller test is adopted for the unit root test. The null hypothesis of this test is that there is a unit root (the time series is non-stationary). The alternative hypothesis is that the time series is stationary. If the null hypothesis is rejected, it means that the series is a stationary time series. In this thesis, the paper employs the ADF unit root test including an intercept and a trend term on the returns. The results from the ADF tests indicate that the test statistic for the FTSE 100 and the SP 500 is -45.5849 and -37.6418, respectively. Such values are significantly less than the 95% critical value for the augmented Dickey-Fuller statistic (-3.4158). Therefore, we can reject the unit root null hypothesis and conclude that the daily return series are robustly stationary. Finally, Table 3.1 shows the Ljung-Box test statistics for serial correlation of the return and squared return series for k = 12 lags, denoted by Q(k) and Q2(k), respectively.
The Q(12) statistic is statistically significant, implying the presence of serial correlation in the FTSE 100 and the SP 500 daily return series (first moment dependencies). In other words, the return series exhibit linear dependence. Figure 3.9a: Autocorrelations of the FTSE 100 daily returns for Lags 1 through 100, covering 05/06/2002 to 22/06/2009. Figure 3.9b: Autocorrelations of the SP 500 daily returns for Lags 1 through 100, covering 05/06/2002 to 22/06/2009. Figures 3.9a and 3.9b and the autocorrelation coefficients (presented in Table 3.1) show that the FTSE 100 and the SP 500 daily returns did not display any systematic pattern and that the returns have very little autocorrelation. According to Christoffersen (2003), in this situation we can write: Corr(R(t+1), R(t+1-λ)) ≈ 0, for λ = 1, 2, 3, ..., 100. Therefore, returns are almost impossible to predict from their own past. One note is that, since the mean of the daily returns for both indexes (-0.0001) is not significantly different from zero, the variances of the return series are measured by squared returns. The Ljung-Box Q2 test statistic for the squared returns is much higher, indicating the presence of serial correlation in the squared return series. Figures 3.10a and 3.10b and the autocorrelation coefficients (presented in Table 3.1) also confirm the autocorrelation in the squared returns (variances) for the FTSE 100 and the SP 500 data; more importantly, the variance displays positive correlation with its own past, especially at short lags: Corr(R²(t+1), R²(t+1-λ)) > 0, for λ = 1, 2, 3, ..., 100. Figure 3.10a: Autocorrelations of the FTSE 100 squared daily returns Figure 3.10b: Autocorrelations of the SP 500 squared daily returns 3.3.
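The contrast between the two correlation patterns (near-zero for returns, positive for squared returns) can be illustrated with a simulated GARCH-type series; the parameters 0.1, 0.2 and 0.7 below are illustrative, not estimates from the thesis:

```python
import numpy as np

def acf(x, lag):
    """Sample autocorrelation at a given lag."""
    d = x - x.mean()
    return (d[lag:] * d[:-lag]).sum() / (d * d).sum()

# Simulate a series with volatility clustering: returns themselves are nearly
# uncorrelated, but squared returns (variances) are positively autocorrelated.
rng = np.random.default_rng(2)
n = 5_000
r = np.empty(n)
sigma2 = 1.0
for t in range(n):
    r[t] = np.sqrt(sigma2) * rng.standard_normal()
    sigma2 = 0.1 + 0.2 * r[t] ** 2 + 0.7 * sigma2  # GARCH(1,1)-style recursion

print(acf(r, 1), acf(r ** 2, 1))  # the second value should be clearly positive
```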
Calculation of Value at Risk This section puts much emphasis on how to calculate VaR figures for both single return indexes from the proposed models, including the Historical Simulation, the Riskmetrics, the Normal-GARCH(1,1) (or N-GARCH(1,1)) and the Student-t GARCH(1,1) (or t-GARCH(1,1)) model. Except for the historical simulation model, which does not make any assumptions about the shape of the distribution of the asset returns, the other models have commonly been studied under the assumption that the returns are normally distributed. Based on the previous section examining the data, this assumption is rejected, because the observed extreme outcomes of both single index returns occur more often and are larger than predicted by the normal distribution. Also, the volatility tends to change through time, and periods of high and low volatility tend to cluster together. Consequently, the four proposed VaR models under the normal distribution either have particular limitations or rest on unrealistic assumptions. Specifically, the historical simulation assumes that the historically simulated returns are independently and identically distributed through time. Unfortunately, this assumption is impractical due to the volatility clustering of the empirical data. Similarly, although the Riskmetrics tries to avoid relying on sample observations and makes use of additional information contained in the assumed distribution function, its normal distributional assumption is also unrealistic given the results of examining the collected data. The normal-GARCH(1,1) model and the student-t GARCH(1,1) model, on the other hand, can capture the fat tails and volatility clustering which occur in the observed financial time series data, but their normal distributional assumption of returns is likewise inconsistent with the empirical data.
Despite all these issues, the thesis still uses the four models under the normal distributional assumption of returns in order to compare and evaluate their estimated results against the predicted results based on the student distributional assumption of returns. Besides, since the empirical data exhibit fatter tails than the normal distribution, the essay intentionally employs the Cornish-Fisher Expansion technique to correct the z-value from the normal distribution to account for the fatter tails, and then compares these results with the two sets of results above. Therefore, in this chapter, we purposely calculate VaR by separating these three procedures into three different sections, and the final results will be discussed at length in chapter 4. 3.3.1. Components of VaR measures Throughout the analysis, a holding period of one trading day will be used. For the significance level, various values for the left tail probability level will be considered, ranging from the very conservative level of 1 percent, to the middle level of 2.5 percent, to the less cautious 5 percent. The various VaR models will be estimated using the historical data of the two single return index samples, stretching from 05/06/2002 through 31/07/2007 (consisting of 1305 and 1298 price observations for the FTSE 100 and the SP 500, respectively) for the parameter estimation, and from 01/08/2007 to 22/06/2009 for predicting VaRs and backtesting. One interesting point here is that, since there are few previous empirical studies examining the performance of VaR models during periods of financial crisis, the paper deliberately backtests the validity of the VaR models within the current global financial crisis, from its beginning in August 2007. 3.3.2. Calculation of VaR 3.3.2.1.
Non-parametric approach Historical Simulation As mentioned above, the historical simulation model assumes that the change in market factors from today to tomorrow will be the same as it was some time ago, and therefore VaR is computed from the historical returns distribution. Consequently, we separate this non-parametric approach into its own section. Chapter 2 showed that calculating VaR using the historical simulation model is not mathematically complex, since the measure only requires a reasonable period of historical data. Thus, the first task is to obtain an adequate historical time series for simulating. Many previous studies report that the predicted results of the model are relatively reliable once the window length of data used for simulating daily VaRs is not shorter than 1000 observed days. In this sense, the study is based on a sliding window of the previous 1305 and 1298 price observations (1304 and 1297 return observations) for the FTSE 100 and the SP 500, respectively, spanning from 05/06/2002 through 31/07/2007. We selected this rather than a larger window because adding more historical data means adding older historical data, which could be irrelevant to the future development of the return indexes. After sorting the past returns in ascending order into equally spaced classes, the predicted VaR is determined as the log-return lying at the target percentile; in the thesis, these are the three widely used percentiles of 1%, 2.5% and 5% in the lower tail of the return distribution. The result is a frequency distribution of returns, which is displayed as a histogram, shown in Figures 3.11a and 3.11b below. The vertical axis shows the number of days on which returns fall into the various classes. The red vertical lines in the histogram separate the lowest 1%, 2.5% and 5% of returns from the remaining (99%, 97.5% and 95%) returns.
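The sort-and-pick-percentile procedure described above can be sketched as follows (simulated returns stand in for the FTSE 100 window; the 13th/33rd/65th-lowest indexing for 1304 returns matches the text):

```python
import numpy as np

def historical_var(returns, alpha):
    """Historical-simulation VaR: the return at the alpha-percentile of the
    sorted sample. For 1304 returns this picks the 13th, 33rd and 65th lowest
    return at alpha = 1%, 2.5% and 5%, respectively."""
    sorted_r = np.sort(returns)
    k = round(alpha * len(sorted_r))  # rank of the cut-off return
    return sorted_r[k - 1]

rng = np.random.default_rng(3)
returns = rng.normal(0.0, 0.014, 1304)  # hypothetical stand-in for the window

for alpha in (0.01, 0.025, 0.05):
    print(f"{1 - alpha:.1%} VaR: {historical_var(returns, alpha):.2%}")
```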
For the FTSE 100, since the histogram is drawn from 1304 daily returns, the 99%, 97.5% and 95% daily VaRs are approximately the 13th, 33rd and 65th lowest returns in this dataset, which are -3.2%, -2.28% and -1.67%, respectively, and are roughly marked in the histogram by the red vertical lines. The interpretation is that the VaR gives a number such that there is, say, a 1% chance of losing more than 3.2% of the single asset value tomorrow (on 1st August 2007). The SP 500 VaR figures, on the other hand, are a little smaller than those of the UK stock index, at -2.74%, -2.03% and -1.53% corresponding to the 99%, 97.5% and 95% confidence levels, respectively. Figure 3.11a: Histogram of daily returns of FTSE 100 between 05/06/2002 and 31/07/2007 Figure 3.11b: Histogram of daily returns of SP 500 between 05/06/2002 and 31/07/2007 Following the predicted VaRs on the first day of the predicted period, we continuously calculate VaRs for the forecasting period, covering 01/08/2007 to 22/06/2009. The question of whether the proposed non-parametric model performs accurately in the turbulent period will be discussed at length in chapter 4. 3.3.2.2. Parametric approaches under the normal distributional assumption of returns This section presents how to calculate the daily VaRs using the parametric approaches, including the RiskMetrics, the normal-GARCH(1,1) and the student-t GARCH(1,1), under the normal distributional assumption of returns. The results and the validity of each model during the turbulent period will be considered in depth in chapter 4. 3.3.2.2.1. The RiskMetrics Compared with the historical simulation model, the RiskMetrics, as discussed in chapter 2, does not rely solely on sample observations; instead, it makes use of additional information contained in the normal distribution function. All that is needed is the current estimate of volatility.
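The EWMA recursion behind the RiskMetrics variance can be sketched as follows (simulated returns; the hard-coded z-value 2.3263 is the 1% normal critical value that the Excel function NORMSINV would return, stated here as an approximation):

```python
import numpy as np

LAMBDA = 0.94  # RiskMetrics decay factor suggested for one-day volatility

def riskmetrics_variance(returns, init_var):
    """EWMA recursion: sigma2_t = lambda * sigma2_{t-1} + (1 - lambda) * r_{t-1}^2."""
    var = init_var
    for r in returns:
        var = LAMBDA * var + (1.0 - LAMBDA) * r ** 2
    return var

rng = np.random.default_rng(4)
returns = rng.normal(0.0, 0.014, 1304)  # hypothetical stand-in series

var_today = riskmetrics_variance(returns, returns.var())
z_99 = 2.3263  # approx. 1% left-tail critical value of N(0,1)
print(f"99% one-day VaR: {-z_99 * np.sqrt(var_today):.2%}")
```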
In this sense, we first calculate the daily RiskMetrics variance for both indexes over the parameter estimation period from 05/06/2002 to 31/07/2007, based on the well-known RiskMetrics variance formula (2.9). Specifically, we use the fixed decay factor λ = 0.94 (the RiskMetrics system suggests λ = 0.94 for forecasting one-day volatility). Besides, the other inputs, the squared log-return and the variance of the previous day, are easily calculated. After calculating the daily variance, we then measure VaRs for the forecasting period from 01/08/2007 to 22/06/2009 under the different confidence levels of 99%, 97.5% and 95%, based on the normal VaR formula (2.6), where the critical z-value of the normal distribution at each significance level is simply computed using the Excel function NORMSINV. 3.3.2.2.2. The Normal-GARCH(1,1) model For GARCH models, chapter 2 confirms that the most important task is to estimate the model parameters ω, α and β. These parameters have to be calculated numerically, using the method of maximum likelihood estimation (MLE). In fact, to carry out the MLE, many previous studies use professional econometric software rather than handling the mathematical calculations by hand. In this light, the normal-GARCH(1,1) model is estimated using a well-known econometric tool, STATA, to obtain the model parameters (see Table 3.2 below).

Table 3.2. The parameter statistics of the Normal-GARCH(1,1) model for the FTSE 100 and the SP 500

Normal-GARCH(1,1)*
Parameters                       FTSE 100      SP 500
α (lagged squared return)        0.0955952     0.0555244
β (lagged conditional variance)  0.8907231     0.9289999
ω (constant)                     0.0000012     0.0000011
α + β                            0.9863183     0.9845243
Number of observations           1304          1297
Log likelihood                   4401.63       4386.964

* Note: In this section, we report the results from the Normal-GARCH(1,1) model using the method of maximum likelihood, under the assumption that the errors conditionally follow the normal distribution, with significance level of 5%.
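As an illustrative check (assuming the Table 3.2 rows follow the standard GARCH(1,1) roles of α on the lagged squared return, β on the lagged variance and ω as the constant), the conditional variance recursion and the implied long-run volatility can be sketched as:

```python
import numpy as np

# FTSE 100 estimates from Table 3.2, read in the standard GARCH(1,1) notation.
OMEGA, ALPHA, BETA = 0.0000012, 0.0955952, 0.8907231

def garch_variance(returns, init_var):
    """Conditional variance recursion:
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}."""
    var = init_var
    path = []
    for r in returns:
        var = OMEGA + ALPHA * r ** 2 + BETA * var
        path.append(var)
    return np.array(path)

# Long-run daily standard deviation sqrt(omega / (1 - alpha - beta)),
# which should be close to the 0.94% figure quoted in the text.
long_run_sd = np.sqrt(OMEGA / (1.0 - ALPHA - BETA))
print(f"{long_run_sd:.2%}")
```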
According to Table 3.2, the coefficients on the lagged squared returns (α) for both indexes are positive, indicating that strong ARCH effects are apparent in both financial markets. Also, the coefficients on the lagged conditional variance (β) are significantly positive and less than one, indicating that the impact of 'old' news on volatility is significant. The magnitude of β is especially high (around 0.89-0.93), indicating a long memory in the variance. The estimate of ω was 1.2E-06 for the FTSE 100 and 1.1E-06 for the SP 500, implying a long-run standard deviation of daily market returns of about 0.94% and 0.84%, respectively. The log-likelihood for this model was 4401.63 and 4386.964 for the FTSE 100 and the SP 500, correspondingly. The log-likelihood ratios rejected the hypothesis of normality very strongly. After calculating the model parameters, we begin measuring the conditional variance (volatility) for the parameter estimation period, covering 05/06/2002 to 31/07/2007, based on the conditional variance formula (2.11), where the squared log-return and the conditional variance of the previous day enter the recursion. We then measure the predicted daily VaRs for the forecasting period from 01/08/2007 to 22/06/2009 under confidence levels of 99%, 97.5% and 95% using the normal VaR formula (2.6). Again, the critical z-value of the normal distribution under significance levels of 1%, 2.5% and 5% is computed using the Excel function NORMSINV. 3.3.2.2.3. The Student-t GARCH(1,1) model Different from the Normal-GARCH(1,1) approach, this model assumes that the volatility (or the errors of the returns) follows the Student-t distribution. In fact, many previous studies have suggested that using the symmetric GARCH(1,1) model with the volatility following the Student-t distribution is more accurate than with the normal distribution when examining financial time series.
Accordingly, the paper additionally employs the Student-t GARCH(1,1) approach to measure VaRs. In this section, we use this model under the normal distributional assumption of returns. The first step is to estimate the model parameters using the method of maximum likelihood estimation, obtained with STATA (see Table 3.3).

Table 3.3. The parameter statistics of the Student-t GARCH(1,1) model for the FTSE 100 and the SP 500

Student-t GARCH(1,1)*
Parameters                       FTSE 100      SP 500
α (lagged squared return)        0.0926120     0.0569293
β (lagged conditional variance)  0.8946485     0.9354794
ω (constant)                     0.0000011     0.0000006
α + β                            0.9872605     0.9924087
Number of observations           1304          1297
Log likelihood                   4406.50       4399.24

* Note: In this section, we report the results from the Student-t GARCH(1,1) model using the method of maximum likelihood, under the assumption that the errors conditionally follow the student distribution, with significance level of 5%.

Table 3.3 identifies the same characteristics of the student-t GARCH(1,1) model parameters as the normal-GARCH(1,1) approach. Specifically, the results for α show that evidently strong ARCH effects were present in the UK and US financial markets during the parameter estimation period, spanning 05/06/2002 to 31/07/2007. Moreover, as Floros (2008) mentioned, there was also a considerable impact of 'old' news on volatility as well as a long memory in the variance. We then follow similar steps as in calculating VaRs using the normal-GARCH(1,1) model. 3.3.2.3. Parametric approaches under the normal distributional assumption of returns modified by the Cornish-Fisher Expansion technique Section 3.3.2.2 measured the VaRs using the parametric approaches under the assumption that the returns are normally distributed. Regardless of their results and performance, it is clear that this assumption is impractical, given that the collected empirical data exhibit fatter tails than the normal distribution.
Consequently, in this section the study intentionally employs the Cornish-Fisher Expansion (CFE) technique to correct the z-value from the assumption of the normal distribution to significantly account for fatter tails. Again, the question of whether the proposed models achieved powerfully within the recent damage time will be assessed in length in the chapter 4. 3.3.2.3.1. The CFE-modified RiskMetrics Similar VaR Models in Predicting Equity Market Risk VaR Models in Predicting Equity Market Risk Chapter 3 Research Design This chapter represents how to apply proposed VaR models in predicting equity market risk. Basically, the thesis first outlines the collected empirical data. We next focus on verifying assumptions usually engaged in the VaR models and then identifying whether the data characteristics are in line with these assumptions through examining the observed data. Various VaR models are subsequently discussed, beginning with the non-parametric approach (the historical simulation model) and followed by the parametric approaches under different distributional assumptions of returns and intentionally with the combination of the Cornish-Fisher Expansion technique. Finally, backtesting techniques are employed to value the performance of the suggested VaR models. 3.1. Data The data used in the study are financial time series that reflect the daily historical price changes for two single equity index assets, including the FTSE 100 index of the UK market and the SP 500 of the US market. Mathematically, instead of using the arithmetic return, the paper employs the daily log-returns. The full period, which the calculations are based on, stretches from 05/06/2002 to 22/06/2009 for each single index. More precisely, to implement the empirical test, the period will be divided separately into two sub-periods: the first series of empirical data, which are used to make the parameter estimation, spans from 05/06/2002 to 31/07/2007. 
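As a minimal sketch of these two steps (the file loading and the date labels are illustrative assumptions, not taken from the thesis), the daily log-return computation and the estimation/forecast split can be written as:

```python
import numpy as np
import pandas as pd

def log_returns(prices: pd.Series) -> pd.Series:
    """Daily log-returns r_t = ln(P_t / P_{t-1})."""
    return np.log(prices / prices.shift(1)).dropna()

def split_sample(returns: pd.Series):
    """Split a date-indexed return series into the estimation window
    (up to 31/07/2007) and the forecasting/backtesting window
    (from 01/08/2007 onwards)."""
    estimation = returns.loc[:"2007-07-31"]
    forecast = returns.loc["2007-08-01":]
    return estimation, forecast
```

The split relies on pandas date-string slicing over a sorted DatetimeIndex, so the same two lines reproduce the thesis's 05/06/2002-31/07/2007 and 01/08/2007-22/06/2009 sub-periods for any index series loaded with a proper date index.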
The rest of the data, between 01/08/2007 and 22/06/2009, is used for predicting VaR figures and for backtesting. Note that this latter stage coincides exactly with the current global financial crisis, which began in August 2007, peaked dramatically in the closing months of 2008 and subsided noticeably by the middle of 2009. Consequently, the study deliberately examines the accuracy of the VaR models within this volatile time. 3.1.1. FTSE 100 index. The FTSE 100 Index, launched on 3 January 1984, is a share index of the 100 most highly capitalised UK companies listed on the London Stock Exchange. FTSE 100 companies represent about 81% of the market capitalisation of the whole London Stock Exchange, making the index the most widely used UK stock market indicator. In the dissertation, the full data set used for the empirical analysis consists of 1782 observations (1782 working days) of the UK FTSE 100 index covering the period from 05/06/2002 to 22/06/2009. 3.1.2. SP 500 index. The SP 500 is a value-weighted index, published since 1957, of the prices of 500 large-cap common stocks actively traded in the United States. The stocks listed on the SP 500 are those of large publicly held companies that trade on either of the two largest American stock exchanges, NYSE Euronext and NASDAQ OMX. After the Dow Jones Industrial Average, the SP 500 is the most widely followed index of large-cap American stocks. The SP 500 refers not only to the index but also to the 500 companies whose common stock is included in it, and it is consequently considered a bellwether for the US economy. Similar to the FTSE 100, the data for the SP 500 are observed over the same period, with 1775 observations (1775 working days). 3.2. Data Analysis. For VaR models, one of the most important aspects is the set of assumptions involved in measuring VaR. This section first discusses several VaR assumptions and then examines the characteristics of the collected empirical data. 3.2.1.
Assumptions 3.2.1.1. Normality assumption. Normal distribution: as mentioned in chapter 2, most VaR models assume that the return distribution is normally distributed with mean 0 and standard deviation 1 (see figure 3.1). Nonetheless, chapter 2 also shows that actual returns in most previous empirical investigations do not completely follow the normal distribution. Figure 3.1: Standard Normal Distribution. Skewness: the skewness is a measure of the asymmetry of the distribution of the financial time series around its mean. Normally distributed data are symmetric, with skewness of 0. A dataset with either a positive or negative skew deviates from the normal distribution assumption (see figure 3.2). This can make parametric approaches, such as the RiskMetrics and the symmetric normal-GARCH(1,1) model under the assumption of normally distributed returns, less effective if asset returns are heavily skewed. The result can be an overestimation or underestimation of the VaR value, depending on the skew of the underlying asset returns. Figure 3.2: Plot of a positive or negative skew. Kurtosis: the kurtosis measures the peakedness or flatness of the distribution of a data sample and describes how concentrated the returns are around their mean. A high value of kurtosis means that more of the data's variance comes from extreme deviations; in other words, the asset returns contain more extreme values than the normal distribution would model. According to Lee and Lee (2000), positive excess kurtosis is called leptokurtic and negative excess kurtosis is called platykurtic. Normally distributed data have a kurtosis of 3. Figure 3.3: General forms of Kurtosis. Jarque-Bera Statistic: in statistics, the Jarque-Bera (JB) statistic tests whether a series is normally distributed.
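A sketch of computing these moment statistics with scipy (the fat-tailed toy series is an illustrative assumption, not the thesis data):

```python
import numpy as np
from scipy import stats

# Toy fat-tailed "returns": Student-t with df = 5, scaled to ~1% daily moves.
rng = np.random.default_rng(0)
returns = 0.01 * rng.standard_t(df=5, size=2000)

skew = stats.skew(returns)             # ~0 for a symmetric distribution
excess_kurt = stats.kurtosis(returns)  # Fisher convention: normal -> 0
kurt = excess_kurt + 3.0               # Pearson convention: normal -> 3
```

Note the two conventions: scipy's `kurtosis` reports excess kurtosis (normal = 0), while the thesis quotes the Pearson form (normal = 3), so the two differ by exactly 3.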
In other words, the Jarque-Bera test is a goodness-of-fit measure of departure from normality, based on the sample kurtosis and skewness. The test statistic is defined as JB = (n/6)[S^2 + (K - 3)^2/4], where n is the number of observations, S is the sample skewness and K is the sample kurtosis. For large sample sizes, the test statistic has a Chi-square distribution with two degrees of freedom. Augmented Dickey-Fuller Statistic: the Augmented Dickey-Fuller test (ADF) is a test for a unit root in a time series sample. It is an augmented version of the Dickey-Fuller test for a larger and more complicated set of time series models. The ADF statistic used in the test is a negative number; the more negative it is, the stronger the rejection of the hypothesis that there is a unit root at some level of confidence. ADF critical values: (1%) -3.4334, (5%) -2.8627, (10%) -2.5674. 3.2.1.2. Homoscedasticity assumption. Homoscedasticity refers to the assumption that the dependent variable exhibits similar amounts of variance across the range of values of an independent variable. Figure 3.4: Plot of Homoscedasticity. Unfortunately, chapter 2, based on previous empirical studies, confirmed that financial markets usually experience unexpected events and uncertainties in prices (and returns), and exhibit non-constant variance (heteroskedasticity). Indeed, the volatility of financial asset returns changes over time, with periods of exceptionally high volatility interspersed with periods of unusually low volatility, namely volatility clustering. It is one of the widely recognised stylised facts (stylised statistical properties of asset returns) common to a broad set of financial assets: high-volatility events tend to cluster in time. 3.2.1.3.
Stationarity assumption. According to Cont (2001), the most essential prerequisite of any statistical analysis of market data is the existence of some statistical properties of the data under study that remain constant over time; if not, it is meaningless to try to identify them. One of the hypotheses relating to the invariance of the statistical properties of the return process in time is stationarity. This hypothesis assumes that for any set of time instants t_1, ..., t_k and any time interval tau, the joint distribution of the returns r(t_1), ..., r(t_k) is the same as the joint distribution of the returns r(t_1 + tau), ..., r(t_k + tau). The Augmented Dickey-Fuller test, in turn, will be used to examine the stationarity of the return series. 3.2.1.4. Serial independence assumption. There are a large number of tests of randomness of sample data; autocorrelation plots are one common method. Autocorrelation is the correlation between the returns at different points in time. It is the same as calculating the correlation between two different time series, except that the same time series is used twice: once in its original form and once lagged by one or more time periods. The results can range from +1 to -1. An autocorrelation of +1 represents perfect positive correlation (an increase seen in one time series leads to a proportionate increase in the other), while a value of -1 represents perfect negative correlation (an increase seen in one time series results in a proportionate decrease in the other). In econometric terms, the autocorrelation plot will be examined with the Ljung-Box Q statistic, which, instead of testing randomness at each distinct lag, tests the overall randomness based on a number of lags. The Ljung-Box test statistic is Q = n(n + 2) * sum_{j=1..h} [rho_j^2 / (n - j)], where n is the sample size, rho_j is the sample autocorrelation at lag j, and h is the number of lags being tested.
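A minimal numpy sketch of this Q statistic, written directly from the formula above:

```python
import numpy as np
from scipy.stats import chi2

def ljung_box_q(x, h: int) -> float:
    """Ljung-Box statistic Q = n(n+2) * sum_{j=1..h} rho_j^2 / (n - j)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    denom = np.sum(xc ** 2)
    q = 0.0
    for j in range(1, h + 1):
        rho_j = np.sum(xc[j:] * xc[:-j]) / denom  # sample autocorrelation at lag j
        q += rho_j ** 2 / (n - j)
    return n * (n + 2) * q

# Randomness is rejected at level alpha when Q exceeds the chi-square
# (1 - alpha) quantile with h degrees of freedom:
critical_99 = chi2.ppf(0.99, df=12)
```

A strongly autocorrelated series (e.g. a smooth sinusoid) produces a Q far above the critical value, while white noise typically stays below it; statsmodels' `acorr_ljungbox` offers the same test ready-made.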
The hypothesis of randomness is rejected if Q exceeds the (1 - alpha) quantile of the Chi-square distribution with h degrees of freedom, where alpha is the significance level. 3.2.2. Data Characteristics. Table 3.1 gives the descriptive statistics for the FTSE 100 and the SP 500 daily stock market prices and returns. Daily returns are computed as logarithmic price relatives: Rt = ln(Pt/Pt-1), where Pt is the closing daily price at time t. Figures 3.5a and 3.5b, 3.6a and 3.6b present the plots of returns and price index over time. Besides, Figures 3.7a and 3.7b, 3.8a and 3.8b illustrate the frequency distribution of the FTSE 100 and the SP 500 daily return data with a normal distribution curve imposed, spanning from 05/06/2002 through 22/06/2009.

Table 3.1: Diagnostics table of statistical characteristics of the returns of the FTSE 100 index and the SP 500 index between 05/06/2002 and 22/06/2009.

DIAGNOSTICS                        SP 500                       FTSE 100
Number of observations             1774                         1781
Largest return                     10.96%                       9.38%
Smallest return                    -9.47%                       -9.26%
Mean return                        -0.0001                      -0.0001
Variance                           0.0002                       0.0002
Standard Deviation                 0.0144                       0.0141
Skewness                           -0.1267                      -0.0978
Excess Kurtosis                    9.2431                       7.0322
Jarque-Bera                        694.485***                   2298.153***
Augmented Dickey-Fuller (ADF) (2)  -37.6418                     -45.5849
Q(12)                              20.0983* (autocorr: 0.04)    93.3161*** (autocorr: 0.03)
Q2(12)                             1348.2*** (autocorr: 0.28)   1536.6*** (autocorr: 0.25)
Ratio of SD/mean                   144                          141

Notes: 1. *, **, and *** denote significance at the 10%, 5%, and 1% levels, respectively. 2. 95% critical value for the augmented Dickey-Fuller statistic = -3.4158.

Figure 3.5a: The FTSE 100 daily returns from 05/06/2002 to 22/06/2009. Figure 3.5b: The SP 500 daily returns from 05/06/2002 to 22/06/2009. Figure 3.6a: The FTSE 100 daily closing prices from 05/06/2002 to 22/06/2009. Figure 3.6b: The SP 500 daily closing prices from 05/06/2002 to 22/06/2009. Figure 3.7a: Histogram of the FTSE 100 daily returns with a normal distribution curve imposed, spanning from 05/06/2002 through 22/06/2009. Figure 3.7b: Histogram of the SP 500 daily returns with a normal distribution curve imposed, spanning from 05/06/2002 through 22/06/2009. Figure 3.8a: Frequency distribution of the FTSE 100 combined with a normal distribution curve, spanning from 05/06/2002 through 22/06/2009. Figure 3.8b: Frequency distribution of the SP 500 combined with a normal distribution curve, spanning from 05/06/2002 through 22/06/2009. Table 3.1 shows that the FTSE 100 and the SP 500 average daily returns are approximately 0 percent, or at least very small compared with the sample standard deviation (the standard deviation is 141 and 144 times the size of the average return for the FTSE 100 and the SP 500, respectively). This is why the mean is often set to zero when modelling daily portfolio returns, which reduces the uncertainty and imprecision of the estimates. In addition, the large standard deviation relative to the mean supports the view that daily changes are dominated by randomness, so the small mean can be disregarded in risk measure estimates. Moreover, the paper also employs five statistics often used in analysing data, namely Skewness, Kurtosis, Jarque-Bera, Augmented Dickey-Fuller (ADF) and the Ljung-Box test, to examine the empirical full period from 05/06/2002 through 22/06/2009. Figures 3.7a and 3.7b demonstrate the histograms of the FTSE 100 and the SP 500 daily return data with the normal distribution imposed.
The distributions of both indexes have longer, fatter tails and higher probabilities for extreme events than the normal distribution, particularly on the negative side (negative skewness implying that the distribution has a long left tail). Fatter negative tails mean a higher probability of large losses than the normal distribution would suggest. Each distribution is also more peaked around its mean than the normal distribution; indeed, the value of kurtosis is very high (about 10 and 12 for the FTSE 100 and the SP 500, respectively, compared with 3 for the normal distribution; see also Figures 3.8a and 3.8b for more details). In other words, the most prominent deviation from the normal distributional assumption is the kurtosis, which can be seen from the middle bars of the histogram rising above the normal curve. Moreover, it is obvious that outliers still exist, indicating that excess kurtosis is present. The Jarque-Bera test rejects normality of returns at the 1% level of significance for both indexes. The samples therefore exhibit the typical financial characteristics of volatility clustering and leptokurtosis. Besides, the daily returns of both indexes (presented in Figures 3.5a and 3.5b) reveal that volatility occurs in bursts; in particular, the returns were very volatile at the beginning of the examined period, from June 2002 to the middle of June 2003. After remaining stable for about four years, the returns of these two well-known stock indexes became highly volatile from July 2007 (when the credit crunch was about to begin) and peaked dramatically from July 2008 to the end of June 2009. Generally, there are two recognised characteristics of the collected daily data. First, extreme outcomes occur more often and are larger than predicted by the normal distribution (fat tails). Second, the size of market movements is not constant over time (conditional volatility). In terms of stationarity, the Augmented Dickey-Fuller test is adopted for the unit root test.
The null hypothesis of this test is that there is a unit root (the time series is non-stationary); the alternative hypothesis is that the time series is stationary. If the null hypothesis is rejected, the series is stationary. In this thesis, the paper employs the ADF unit root test including an intercept and a trend term on the returns. The results from the ADF tests indicate that the test statistics for the FTSE 100 and the SP 500 are -45.5849 and -37.6418, respectively. Such values are significantly less than the 95% critical value of the augmented Dickey-Fuller statistic (-3.4158). Therefore, we can reject the unit root null hypothesis and conclude that the daily return series are robustly stationary. Finally, Table 3.1 shows the Ljung-Box test statistics for serial correlation of the return and squared return series for k = 12 lags, denoted by Q(k) and Q2(k), respectively. The Q(12) statistic is statistically significant, implying the presence of serial correlation in the FTSE 100 and the SP 500 daily return series (first-moment dependencies). In other words, the return series exhibit linear dependence. Figure 3.9a: Autocorrelations of the FTSE 100 daily returns for lags 1 through 100, covering 05/06/2002 to 22/06/2009. Figure 3.9b: Autocorrelations of the SP 500 daily returns for lags 1 through 100, covering 05/06/2002 to 22/06/2009. Figures 3.9a and 3.9b and the autocorrelation coefficients (presented in Table 3.1) show that the FTSE 100 and the SP 500 daily returns did not display any systematic pattern and have very little autocorrelation. According to Christoffersen (2003), in this situation we can write: Corr(R(t+1), R(t+1-λ)) ≈ 0, for λ = 1, 2, 3, ..., 100. Therefore, returns are almost impossible to predict from their own past. One note is that since the mean of daily returns for both indexes (-0.0001) is not significantly different from zero, the variances of the return series are measured by squared returns.
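A sketch of this contrast between uncorrelated returns and correlated squared returns (the GARCH-style simulated series and its coefficients are illustrative assumptions):

```python
import numpy as np

def autocorr(x, lag: int) -> float:
    """Sample autocorrelation of x at the given lag."""
    xc = np.asarray(x, dtype=float) - np.mean(x)
    return float(np.sum(xc[lag:] * xc[:-lag]) / np.sum(xc ** 2))

# Toy volatility-clustering series: tomorrow's variance depends on today's shock.
rng = np.random.default_rng(7)
n = 3000
r = np.empty(n)
sigma2 = 1e-4
for t in range(n):
    r[t] = rng.normal(0.0, np.sqrt(sigma2))
    sigma2 = 1e-6 + 0.1 * r[t] ** 2 + 0.85 * sigma2

rho_r = autocorr(r, 1)        # near 0: the returns themselves look unpredictable
rho_r2 = autocorr(r ** 2, 1)  # positive: variance clusters in time
```

This reproduces the chapter's point: Corr(R(t+1), R(t+1-λ)) ≈ 0 while Corr(R²(t+1), R²(t+1-λ)) > 0 at short lags.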
The Ljung-Box Q2 test statistic for the squared returns is much higher, indicating the presence of serial correlation in the squared return series. Figures 3.10a and 3.10b and the autocorrelation coefficients (presented in Table 3.1) also confirm the autocorrelation in squared returns (variances) for the FTSE 100 and the SP 500 data; more importantly, variance displays positive correlation with its own past, especially at short lags: Corr(R²(t+1), R²(t+1-λ)) > 0, for λ = 1, 2, 3, ..., 100. Figure 3.10a: Autocorrelations of the FTSE 100 squared daily returns. Figure 3.10b: Autocorrelations of the SP 500 squared daily returns. 3.3. Calculation of Value at Risk. This section puts much emphasis on how to calculate VaR figures for both single return indexes from the proposed models: the Historical Simulation, the RiskMetrics, the Normal-GARCH(1,1) (or N-GARCH(1,1)) and the Student-t GARCH(1,1) (or t-GARCH(1,1)) model. Except for the historical simulation model, which does not make any assumptions about the shape of the distribution of the asset returns, the others have commonly been studied under the assumption that the returns are normally distributed. Based on the previous section examining the data, this assumption is rejected because observed extreme outcomes of both single index returns occur more often and are larger than predicted by the normal distribution. Also, the volatility tends to change through time, and periods of high and low volatility tend to cluster together. Consequently, the four proposed VaR models under the normal distribution either have particular limitations or are unrealistic. Specifically, the historical simulation assumes that the historically simulated returns are independently and identically distributed through time. Unfortunately, this assumption is impractical due to the volatility clustering of the empirical data.
Similarly, although the RiskMetrics approach tries to avoid relying on sample observations and makes use of additional information contained in the assumed distribution function, its normal distributional assumption is also unrealistic given the results of examining the collected data. The normal-GARCH(1,1) and the student-t GARCH(1,1) models, on the other hand, can capture the fat tails and volatility clustering that occur in the observed financial time series data, but their normal distributional assumption for returns is likewise implausible compared with the empirical data. Despite all this, the thesis still uses the four models under the normal distributional assumption of returns in order to compare and evaluate their estimated results against the predicted results based on the student distributional assumption of returns. Besides, since the empirical data experience fatter tails than the normal distribution, the thesis intentionally employs the Cornish-Fisher Expansion technique to correct the z-value of the normal distribution to account for fatter tails, and then compares these results with the two sets of results above. Therefore, in this chapter we purposely separate these three procedures into three different sections, and the final results will be discussed at length in chapter 4. 3.3.1. Components of VaR measures. Throughout the analysis, a holding period of one trading day will be used. For the significance level, various values of the left tail probability will be considered, ranging from the very conservative level of 1 percent through 2.5 percent to the less cautious 5 percent. The various VaR models will be estimated using the historical data of the two single return index samples: from 05/06/2002 through 31/07/2007 (consisting of 1305 and 1298 price observations for the FTSE 100 and the SP 500, respectively) for parameter estimation, and from 01/08/2007 to 22/06/2009 for predicting VaRs and backtesting.
One interesting point here is that, since there are few previous empirical studies examining the performance of VaR models during periods of financial crisis, the paper deliberately backtests the validity of the VaR models within the current global financial crisis, from its beginning in August 2007. 3.3.2. Calculation of VaR. 3.3.2.1. Non-parametric approach: Historical Simulation. As mentioned above, the historical simulation model pretends that the change in market factors from today to tomorrow will be the same as it was some time ago, and therefore it is computed from the historical returns distribution. Consequently, we treat this non-parametric approach in its own section. Chapter 2 showed that calculating VaR using the historical simulation model is not mathematically complex, since the measure only requires a reasonable period of historical data. Thus, the first task is to obtain an adequate historical time series for simulating. Many previous studies report that the model's predictions are relatively reliable once the window length of data used for simulating daily VaRs is not shorter than 1000 observed days. In this sense, the study is based on a sliding window of the previous 1305 and 1298 price observations (1304 and 1297 return observations) for the FTSE 100 and the SP 500, respectively, spanning from 05/06/2002 through 31/07/2007. We have selected this rather than a larger window since adding more historical data means adding older historical data, which could be irrelevant to the future development of the index returns. After sorting the past returns in ascending order into equally spaced classes, the predicted VaR is determined as the log-return lying at the target percentile; in the thesis, these are the three widely used percentiles of the 1%, 2.5% and 5% lower tail of the return distribution.
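A minimal sketch of this percentile rule (the exact order-statistic convention, k = int(alpha * n), is an assumption consistent with the 13th-lowest-of-1304 figure used for the 1% tail):

```python
import numpy as np

def historical_var(returns, tail_prob: float) -> float:
    """Historical-simulation VaR: the k-th lowest return in the window,
    with k = int(tail_prob * n), e.g. int(0.01 * 1304) = 13."""
    r = np.sort(np.asarray(returns, dtype=float))  # ascending order
    k = int(tail_prob * len(r))
    return r[k - 1]
```

Applied to a 1304-return FTSE 100 window, `historical_var(window, 0.01)` picks the 13th lowest return as the 99% daily VaR; other order-statistic or interpolation conventions (e.g. `np.percentile`) would give slightly different figures.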
The result is a frequency distribution of returns, which is displayed as a histogram in Figures 3.11a and 3.11b below. The vertical axis shows the number of days on which returns fall into the various classes. The red vertical lines in the histogram separate the lowest 1%, 2.5% and 5% returns from the remaining (99%, 97.5% and 95%) returns. For the FTSE 100, since the histogram is drawn from 1304 daily returns, the 99%, 97.5% and 95% daily VaRs are approximately the 13th, 33rd and 65th lowest returns in this dataset, which are -3.2%, -2.28% and -1.67%, respectively, and are roughly marked in the histogram by the red vertical lines. The interpretation is that the VaR gives a number such that there is, say, a 1% chance of losing more than 3.2% of the single asset's value tomorrow (on 1st August 2007). The SP 500 VaR figures, on the other hand, are a little smaller than those of the UK stock index, at -2.74%, -2.03% and -1.53% for the 99%, 97.5% and 95% confidence levels, respectively. Figure 3.11a: Histogram of daily returns of the FTSE 100 between 05/06/2002 and 31/07/2007. Figure 3.11b: Histogram of daily returns of the SP 500 between 05/06/2002 and 31/07/2007. Following the predicted VaRs on the first day of the forecasting period, we continuously calculate VaRs for the period from 01/08/2007 to 22/06/2009. The question of whether the proposed non-parametric model performs accurately in this turbulent period will be discussed at length in chapter 4. 3.3.2.2. Parametric approaches under the normal distributional assumption of returns. This section presents how to calculate the daily VaRs using the parametric approaches, namely the RiskMetrics, the normal-GARCH(1,1) and the student-t GARCH(1,1), under the normal distributional assumption of returns. The results and the validity of each model during the turbulent period will be considered in depth in chapter 4. 3.3.2.2.1.
The RiskMetrics. Compared with the historical simulation model, the RiskMetrics approach, as discussed in chapter 2, does not rely solely on sample observations; instead, it makes use of additional information contained in the normal distribution function. All that is needed is the current estimate of volatility. In this sense, we first calculate the daily RiskMetrics variance for both indexes across the parameter estimation period from 05/06/2002 to 31/07/2007, based on the well-known RiskMetrics variance formula (2.9). Specifically, we use the fixed decay factor λ = 0.94 (the RiskMetrics system suggests λ = 0.94 for forecasting one-day volatility). The other inputs, the squared log-return and the variance of the previous day, are easily calculated. After calculating the daily variance, we measure VaRs for the forecasting period from 01/08/2007 to 22/06/2009 under the confidence levels of 99%, 97.5% and 95%, based on the normal VaR formula (2.6), where the critical z-value of the normal distribution at each significance level is simply computed using the Excel function NORMSINV. 3.3.2.2.2. The Normal-GARCH(1,1) model. For GARCH models, chapter 2 confirms that the most important step is to estimate the model parameters ω, α and β. These parameters have to be estimated numerically, using the method of maximum likelihood estimation (MLE). In fact, in order to maximise the likelihood function, many previous studies rely on professional econometric software rather than hand-coding the mathematical calculations. Accordingly, the normal-GARCH(1,1) model is estimated using a well-known econometric tool, STATA (see Table 3.2 below). Table 3.2.
The parameter statistics of the Normal-GARCH(1,1) model for the FTSE 100 and the SP 500.

Normal-GARCH(1,1)*
Parameter                  FTSE 100       SP 500
α                          0.0955952      0.0555244
β                          0.8907231      0.9289999
ω                          0.0000012      0.0000011
α + β                      0.9863183      0.9845243
Number of Observations     1304           1297
Log likelihood             4401.63        4386.964

* Note: In this section, we report the results from the Normal-GARCH(1,1) model estimated by the method of maximum likelihood, under the assumption that the errors conditionally follow the normal distribution, with a significance level of 5%.

According to Table 3.2, the coefficients of the lagged squared returns (α) for both indexes are positive, indicating that strong ARCH effects are apparent in both financial markets. Also, the coefficients of the lagged conditional variance (β) are significantly positive and less than one, indicating that the impact of 'old' news on volatility is significant. The magnitude of β is especially high (around 0.89 to 0.93), indicating a long memory in the variance. The estimate of ω was 1.2E-06 for the FTSE 100 and 1.1E-06 for the SP 500, implying a long-run standard deviation of the daily market return of about 0.94% and 0.84%, respectively. The log-likelihood of this model was 4401.63 for the FTSE 100 and 4386.964 for the SP 500. The log-likelihood ratios rejected the hypothesis of normality very strongly. After estimating the model parameters, we measure the conditional variance (volatility) for the parameter estimation period from 05/06/2002 to 31/07/2007 based on the conditional variance formula (2.11), in which the squared log-return and the conditional variance of the previous day enter the recursion. We then measure the predicted daily VaRs for the forecasting period from 01/08/2007 to 22/06/2009 under the confidence levels of 99%, 97.5% and 95% using the normal VaR formula (2.6).
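As a sketch of this recursion and the normal VaR step (Python in place of STATA/Excel; the parameter values are the FTSE 100 estimates from Table 3.2, and `norm.ppf` plays the role of NORMSINV):

```python
import numpy as np
from scipy.stats import norm

# Fitted Normal-GARCH(1,1) parameters for the FTSE 100 (Table 3.2)
alpha, beta, omega = 0.0955952, 0.8907231, 0.0000012

def garch_variance(returns, omega, alpha, beta):
    """Conditional variance recursion
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1},
    started from the long-run variance omega / (1 - alpha - beta)."""
    sigma2 = np.empty(len(returns) + 1)
    sigma2[0] = omega / (1.0 - alpha - beta)
    for t, r in enumerate(returns):
        sigma2[t + 1] = omega + alpha * r ** 2 + beta * sigma2[t]
    return sigma2

def normal_var(sigma, level=0.99):
    """One-day normal VaR (as a negative return): z * sigma."""
    return norm.ppf(1.0 - level) * sigma

long_run_sd = np.sqrt(omega / (1.0 - alpha - beta))  # ~0.94% daily, as quoted
```

A quick consistency check: with these parameters the implied long-run daily standard deviation is about 0.94%, matching the figure reported in the text.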
Again, the critical z-value of the normal distribution under the significance levels of 1%, 2.5% and 5% is simply computed using the Excel function NORMSINV. 3.3.2.2.3. The Student-t GARCH(1,1) model. Different from the Normal-GARCH(1,1) approach, this model assumes that the volatility (or the errors of the returns) follows the Student-t distribution. In fact, many previous studies suggested that the symmetric GARCH(1,1) model with the volatility following the Student-t distribution is more accurate than with the Normal distribution when examining financial time series. Accordingly, the paper additionally employs the Student-t GARCH(1,1) approach to measure VaRs. In this section, we use this model under the normal distributional assumption of returns. The first step is to estimate the model parameters using the method of maximum likelihood estimation, obtained with STATA (see Table 3.3).

Table 3.3. The parameter statistics of the Student-t GARCH(1,1) model for the FTSE 100 and the SP 500.

Student-t GARCH(1,1)*
Parameter                  FTSE 100       SP 500
α                          0.0926120      0.0569293
β                          0.8946485      0.9354794
ω                          0.0000011      0.0000006
α + β                      0.9872605      0.9924087
Number of Observations     1304           1297
Log likelihood             4406.50        4399.24

* Note: In this section, we report the results from the Student-t GARCH(1,1) model estimated by the method of maximum likelihood, under the assumption that the errors conditionally follow the Student-t distribution, with a significance level of 5%.

Table 3.3 identifies the same characteristics of the student-t GARCH(1,1) model parameters as the normal-GARCH(1,1) approach. Specifically, the estimates of α and β show that evidently strong ARCH effects occurred in the UK and US financial markets during the parameter estimation period from 05/06/2002 to 31/07/2007. Moreover, as Floros (2008) mentioned, there was also a considerable impact of 'old' news on volatility, as well as a long memory in the variance.
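If a Student-t critical value is wanted in place of the normal z-value, one common sketch (an assumption here, not necessarily the thesis's exact formula) rescales the t quantile so the distribution has unit variance:

```python
from math import sqrt
from scipy.stats import t, norm

def t_z_value(alpha: float, df: float) -> float:
    """Left-tail critical value of a unit-variance Student-t:
    t.ppf(alpha, df) * sqrt((df - 2) / df), defined for df > 2."""
    return t.ppf(alpha, df) * sqrt((df - 2.0) / df)
```

The fatter the tails (the smaller df), the larger this 1% critical value in absolute terms compared with the normal NORMSINV(0.01) of about -2.326, so the Student-t version produces more conservative VaRs at high confidence levels.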
We then follow similar steps as when calculating VaRs with the normal-GARCH(1,1) model. 3.3.2.3. Parametric approaches under the normal distributional assumption of returns modified by the Cornish-Fisher Expansion technique. Section 3.3.2.2 measured the VaRs using the parametric approaches under the assumption that the returns are normally distributed. Regardless of their results and performance, it is clear that this assumption is impractical, since the collected empirical data exhibit fatter tails than the normal distribution. Consequently, in this section the study intentionally employs the Cornish-Fisher Expansion (CFE) technique to correct the z-value of the normal distribution to account for fatter tails. Again, the question of whether the proposed models performed well within the recent crisis period will be assessed at length in chapter 4. 3.3.2.3.1. The CFE-modified RiskMetrics Similar