Monday, September 30, 2019

An Information Communications Technology Solutions Essay

Unified communications (UC) is a commonly used term for the integration of disparate communications systems, media, devices and applications. This potentially includes the integration of fixed and mobile voice, e-mail, instant messaging, desktop and advanced business applications, Internet Protocol (IP)-PBX, voice over IP (VoIP), presence, voice-mail, fax, audio, video and web conferencing, unified messaging, unified voicemail, and whiteboarding into a single environment offering the user a more complete but simpler and more effective experience. Gartner states, “The largest single value of UC is its ability to reduce ‘human latency’ in business processes.” Unified Messaging (or UM) is the integration of different streams of communication (e-mail, SMS, fax, voice, video, etc.) into a single, unified ‘message store’, accessible from a variety of different devices. Unified Messaging was expected by many in the consumer telecommunications industry to be a popular product, first augmenting and eventually replacing voicemail. However, UM was slow to gain consumer acceptance, and UM vendors such as Comverse were badly hit when the slowdown in the telecommunications industry in 2001 made carriers wary of spending large amounts of money on technology with little proven consumer demand.

Role of UM in Present Scenario

Today, UM solutions are increasingly accepted in the corporate environment. The aim of deploying UM solutions is generally to enhance and improve business processes as well as services. UM solutions targeting professional end-user customers integrate communications processes into the existing IT infrastructure, i.e. into CRM, ERP and mail systems (e.g. Phoenixnet PH, Microsoft Exchange, Lotus Notes, SAP, etc.). Unified communications is sometimes confused with unified messaging, but it is distinct. Unified communications refers to the real-time delivery of communications based on the preferred method and location of the recipient; unified messaging systems cull messages from several sources (such as e-mail, voice mail, etc.), but hold those messages for retrieval at a later time. Unified messaging focuses on allowing users to access voice, e-mail, fax and other mixed media from a single mailbox independent of the access device. Multimedia services include messages of mixed media types such as video, sound clips, and pictures, and include communication via short message services (SMS).

Components of unified communications

Unified communications can include a variety of elements, such as instant messaging, telephony, video, email, voicemail, and short message services, all of which could be brought into real time and coordinated. The concept of presence is also a factor – knowing where one’s intended recipients are and if they are available, in real time – and is itself a key component of unified communications. To put it simply, unified communications integrates all the systems that a user might already be using and helps those systems work together in real time. For example, unified communications technology could allow a user to seamlessly collaborate with another person on a project, even if the two users are in separate locations. The user could quickly locate the necessary person by accessing an interactive directory, engage in a text messaging session, and then escalate the session to a voice call, or even a video call – all within minutes. In another example, an employee receives a call from a customer who wants answers.
Unified communications could enable that worker to access a real-time list of available expert colleagues, then make a call that would reach the necessary person, enabling the employee to answer the customer faster, and eliminating rounds of back-and-forth emails and phone-tag. The examples in the previous paragraph primarily describe “personal productivity” enhancements that tend to benefit the individual user. While such benefits can be important, enterprises are finding that they can achieve even greater impact by using unified communications capabilities to transform business processes. This is achieved by integrating UC functionality directly into the business applications using development tools provided by many of the suppliers. Instead of the individual user invoking the UC functionality to, say, find an appropriate resource, the workflow or process application automatically identifies the resource at the point in the business activity where one is needed. When used in this manner, the concept of “presence” often changes. Most people associate presence with IM “buddy lists” – the status of individuals is identified. But, in many business process applications, what is important is finding someone with a certain skill. In these environments, presence will identify available skills or capabilities. This “business process” approach to integrating UC functionality can result in bottom-line benefits that are an order of magnitude greater than those achievable by personal productivity methods alone. According to the International Engineering Consortium, unified communications is an industry term used to describe all forms of call and multimedia/cross-media message-management functions controlled by an individual user for both business and social purposes. This includes any enterprise informational or transactional application process that emulates a human user and uses a single, content-independent personal messaging channel (mailbox) for contact access. The essence of communication is breaking down barriers. In its simplest form, the telephone breaks distance and time barriers so that people can communicate in real time or near real time when they are not together. There are now many other barriers to be overcome. People can use many different devices to communicate (wireless phones, personal digital assistants [PDA], personal computers [PC], thin clients, etc.), and there are now new forms of communication as well, such as instant messaging. The goal of unified communications involves breaking down these barriers so that people using different modes of communication, different media, and different devices can still communicate to anyone, anywhere, at any time. Unified communications (UC) encompasses several communication systems or models including unified messaging, collaboration, and interaction systems; real-time and near real-time communications; and transactional applications.

Sunday, September 29, 2019

Pallister Case Study Essay

Background and Problem
Palliser Furniture Ltd. is Canada's second largest furniture company. It currently has production facilities in Canada, Mexico, and Indonesia. Due to increasing competitive pressures from Asia, Palliser Furniture must decide whether to expand into the Chinese market, and if so, through which entry strategy.

SWOT ANALYSIS
Internal Analysis (Firm)
Strengths:
1. Brand name recognition: Palliser has high brand name recognition, especially domestically in Canada where the majority of its revenues are generated, and it is known for its innovation, high quality, and contemporary design.
2. It has recruited product managers and designers from across the world, including Sweden, Hong Kong, and Italy.
3. The new distribution channel through dealer-owned stores was very successful.
Weaknesses:
1. Employee layoffs at the Winnipeg factory. Downsizing activities such as this often decrease employee morale, affect employees' perception of job security, and increase turnover rates.
External Analysis (Industry)
Opportunities:
1. China's total furniture output value was $20 billion and accounted for 10 per cent of world total furniture output value.
2. China's furniture exports were growing at an annual rate of over 30 per cent.
3. China could offer Palliser lower labor costs and high-quality workers. Along with minimal income tax and social costs, this gives China a solid competitive position.
4. Producing the same product in China was up to 30 per cent cheaper compared to North America.
5. China offered cheaper supplies, including leather, wood, foam, and packaging.
Threats:
1. Increased competition: American, Japanese, and Italian firms had established factories in China, creating strong competition for the same skilled employees.
2. Chinese language and cultural barriers.
Industry Attractiveness: China has made significant progress in the furniture market and will likely continue to see further growth due to its low labor costs and low tariffs, making this a very attractive market for Palliser.

Strategic Alternatives
A) Maintain the status quo (do not invest in China). Pro: simple. Con: lost market potential and possible cost savings.
B) Enter the Chinese market through subcontracting with another firm. Pro: lower involvement, requires less financial commitment, and reduces risk. Con: potential conflict or inability to meet delivery dates, etc.
C) Expand Palliser's relationship with China through foreign direct investment (wholly owned). Pro: cheaper labor, and allows Palliser to focus on a cost leadership strategy. Con: higher risk, more involvement required.

Recommendation/Implementation
In order for Palliser Furniture to remain competitive, it is critical for the company to invest and expand into China immediately. Palliser should manufacture its motion products in this market, due to the possible savings of $130 per product, and identify the most effective market distribution channels in order to better achieve its cost leadership strategy. However, before entering this market Palliser should conduct a thorough industry analysis in order to understand any potential barriers, such as China's laws and regulations, shipping, tax structure, and supply of infrastructure, in order to prevent any future problems (as experienced in Mexico).

References
Paperadepts. (2011). Pallister furniture, S.W.O.T. analysis. Retrieved from: http://www.paperadepts.com/paper/Pallister-furniture-S.W.O.T.-analysis-185519.html
Writework. (2005). Pallister furniture, S.W.O.T. analysis.
Retrieved from: http://www.writework.com/essay/pallister-furniture-s-w-o-t-analysis

Saturday, September 28, 2019

Supply and Demand Simulation Assignment Research Paper

Supply and Demand Simulation Assignment - Research Paper Example The shift in the demand curve in the simulation may take place because of any determinant other than price; therefore, there may be a shift in the demand curve due to the availability of a supermarket or grocery store near the apartment, because if people are not able to find a store for daily needs near their homes, that would not encourage them to buy the apartment. Furthermore, if there is a change in the prices or the quality of the Oakridge Builders’ homes, then consumers may buy those and not the Goodlife apartments. This shift would reduce the number of homes being sold to families, and thus the curve would go below the equilibrium price level. The supply curve would mostly shift due to a technological innovation, and thus if the Company is able to bring about some technological innovation in its homes, that is, make them more digitalized and have proper security systems installed, then consumers will be interested in purchasing them. This shift will cause an increase in the supply of homes to consumers and thus push the market above the equilibrium price level, due to consumers wanting more homes and the possible lack of an equal amount of supply. From the simulation, supply and demand can be understood as follows: taking products made by Apple and Microsoft, they may be similar in terms of usage but are different in terms of technological innovation. In the same way, Goodlife apartments appeal to families more than the retail homes from Oakridge, thus providing clear competition for Goodlife to dominate the market just as Apple does, even though it produces more expensive products, because it has a certain unique selling point. The concepts of microeconomics help in understanding the factors that affect supply and demand shifts on the equilibrium price and quantity, as they address shifts at an individual level; for example, if in a household an individual had to choose between buying tea or coffee as a preferred beverage, the prices of the two would affect his personal choice. Furthermore, if there was a shortage of supply of one of them, he would go for the other, and similarly, if there was an increase in the price of one, he would choose the other as a substitute. This would cause the demand and supply curves to move up and down, affecting the equilibrium price levels as per the quantities. The concepts of macroeconomics, on the other hand, refer to aggregate demand and aggregate supply, which operate at a market level, taking into account the personal needs and choices of all the consumers in a given area. Thus, from the point of view of households as well as firms, the factors that affect the shifts in demand and supply curves in macroeconomics may be understood by looking at the aggregate equilibrium price and quantity levels. Price elasticity helps in understanding how an individual’s demand can be lowered or increased by fixing a certain price for a particular commodity. As seen in the simulation, when the prices for the apartments are lowered, the demand for the same will be higher. At a higher price, however, the demand will remain consistent for the group of people belonging to the category that can afford the apartment. At this point, the supply of the number of apartments is not taken into consideration to determine where the price of a single apartment will be set. The main

Friday, September 27, 2019

Stone Mountain Ga. and surrounding area Essay Example | Topics and Well Written Essays - 2000 words

Stone Mountain Ga. and surrounding area - Essay Example It has the world’s largest exposed mass of granite and the third largest monolith in the world. The Stone Mountain in Northern Georgia boasts a mysterious history with a lot of unanswered questions. Despite that, Stone Mountain is known today for its beauty and exquisite bas relief. Three figures from the Confederate States of America have been carved here: Stonewall Jackson, Robert E. Lee and Jefferson Davis. Stone Mountain is host to the Stone Mountain Park, which is the major tourist attraction at the site. In addition to that, it plays a major role in Georgia’s ecosystem as well as its economy. Thesis Statement: A detailed research into the geological formation, history and economic value of Stone Mountain.
1. Formation of Stone Mountain
2. Most prevalent rock types
2.1 Granite Rock
2.2 Gneiss Rock
3. How old is Stone Mountain?
4. Birth of Stone Mountain
5. Plate tectonics relative to the creation of Stone Mountain
6. Weathering in Stone Mountain
6.1 Physical Weathering
6.2 Chemical Weathering
6.3 Biological Weathering
6.4 Analysis of Weathering in Stone Mountain
7. Types of Rocks in Stone Mountain
8. The Georgia Piedmont
9. Resources in Stone Mountain
1.0 Formation of Stone Mountain
Georgia’s geologic formation is extremely fascinating and is suspected to have covered a billion-year period. Influenced by different formations and erosions from mountain ranges and geologic events such as severe climatic changes, volcanic eruptions and flooding, Georgia’s geology still sparks mysterious questions. The compilation of these geologic events has led to the formation of a historical landmark known today as Stone Mountain. With reference to Larry Worthy’s article ‘Stone Mountain Natural History’ (exclusively for About North Georgia, 1994-2011), Stone Mountain at its highest point stands a mighty 1683 feet above sea level and sits on the western edge of a large belt of Lithonia Gneiss granite, although the younger intrusive granite that comprises the mountain is entirely different from Lithonia granite. Commonly referred to as a granite dome monadnock, Stone Mountain’s development disseminated through several counties and provides a significant amount of bas relief. The formation of Stone Mountain is still pondered by many geologists with a lot of unanswered questions. However, based on reviewed literature it is safe to say that water, desert-like conditions and glacial features played a vital role in its formation. First, the Stone Mountain in Georgia was formed during the last stages of the Alleghenian Orogeny, which also created the Appalachian Mountains. Technically speaking, the ‘stress’ and ‘pressure’ from the Alleghenian Orogeny caused huge uplifts of land in the Northern Georgia region to form mountains. As far as water impacting Stone Mountain’s formation goes, many geologists believe that the Piedmont was higher than the mountain at one point, and as millions of years passed the water slowly eroded, leaving so much of the Stone Mountain granite exposed. On the other hand, in the researcher’s opinion, its formative exposure could be due to heat and pressure inside the earth alongside the divergent occurrences of plate tectonic processes. In addition to that, the desert-like conditions in the area help to define the mineral composition of the different rock types found in the region.
2.0 Most Prevalent Rock Types on Stone Mountain
Rocks from Stone Mountain belong to the three major classifications

Thursday, September 26, 2019

Lipstick sales in the Recession Thesis Example | Topics and Well Written Essays - 4250 words

Lipstick sales in the Recession - Thesis Example Although women consider lipstick an essential, they purchase it as a luxury to give themselves satisfaction. The results of the research have been supported by the economic theories of Keynes, the income effect, the lipstick index and the elasticity of income theories. The results defy the substitution effect and the basic economic theory of demand and supply. An interesting point is that lipstick sales are being depended upon to assess recession; however, the trend of buying lipstick is changing and women are substituting lip gloss for it. Thus the question arises as to how much the lipstick index can be relied upon, or whether the theory should be revised to include a certain pool of cosmetics for the theory to be more dependable. Recession has hit the global market, and economic conditions are deteriorating in both the developed and developing countries. The unemployment rate is increasing and the purchasing power of consumers is shrinking (CBS News 2008). Consumers now have to make choices and switch to cheaper commodities. Most cannot afford to purchase the luxuries they could afford in pre-recession times. Thus the overall prices of all goods are increasing due to inflation, and according to the law of economics the demand for all products should decrease (CBS News 2008). However, as per Keynes (2009), there are certain products whose demand and supply rise in recession, which runs against this economic law. Lipstick is one of these products, as its demand rises with the recession and the increase in prices. The research focuses on why the demand for lipstick rises even with the increase in prices. So much so, that the sales of lipstick are used to indicate recession patterns and to know whether recession has set in or not. The greater the sales, the deeper the recession. In the makeup industry the lipstick index is ardently used to see the recession progress. At the same time, it is also

Wednesday, September 25, 2019

Budget report Essay Example | Topics and Well Written Essays - 1250 words

Budget report - Essay Example Throughout the term, my General Merchandise expenses were valued at $314.20. This figure accounted for 7.9% of the total amount I spent. I decided to break down this percentage according to the amounts I spent each month, that is, 12.1% ($192.88) in February, 7.6% ($103.00) in March, and 3.9% ($18.32) in April. In this category, I have realized that I can make a considerable improvement. There are no variations in the amounts I spent weekly on general merchandise. However, my expenditure shows that I spent less in March ($103.00) compared to February ($192.88), and even less in April ($18.32). I spent more in February because I had to travel to Georgia to visit my cousin, who was newly admitted to the Georgia Institute of Technology. I spent a lot of money on air tickets and on buying gifts and a few treats. I believe that the expenses of the following months will not escalate, save for a few emergencies that may arise. For groceries, my expenses accounted for 15.20% of the total expenditure. This was valued at $603.63. I had to spend a lot on groceries in March, a total of $272.78 (reflecting 45.19% of the total amount spent on groceries for the term), because I fell ill and the doctors recommended taking more than four meals a day to cater for energy losses. April’s grocery expenses were much less since I had begun cutting back on my budget. Grocery expenses were $151.99 (25.18%) in April and $178.86 (29.63%) in February. I can cut grocery expenses by a further 10% in May because I have identified the stores that sell similar products at lower prices. Restaurant/Bar expenses accounted for $871.81, reflecting 21.94% of the overall expenditure. This category included buying drinks for the occasional parties we hold in our favorite restaurant, purchasing party cups, hiring rooms when it is too late to reach home, and buying burgers and pizza at least once a week. It is

Tuesday, September 24, 2019

Teenage Pregnancy Annotated Bibliography Example | Topics and Well Written Essays - 500 words

Teenage Pregnancy - Annotated Bibliography Example One of the first main points that should be covered in this paper is to highlight what exactly the outcomes would be if somebody were not to get an abortion. Kate Kerzinke outlined an effective paper on this topic in 2003 in the New York Times. The author identifies that there is a cycle associated with teenage pregnancy: a teenage girl gets pregnant, leaves her formal education, becomes dependent on welfare and in turn raises a child who herself becomes a teenage mother and then repeats the cycle. Whilst this paper does not necessarily come directly from an academic journal, it was published in the New York Times, which is a fairly reputable publication. Moreover, there may be a concern that the information is not the most current; however, for the purposes of this paper the source will be used to discuss socioeconomic problems and not demographic trends, so the information remains up to date. The next source that will be examined is a paper that highlights where demographic trends are heading in regard to teenage abortion. Bielski (2010) was an excellent source for this information. His work, which appeared in the Globe and Mail, identified that the abortion rate dropped by approximately 36.9% in Canada. Although the focus of this research was on the Canadian market, one could make the argument that Canada and the United States are nations that are not totally unlike each other. Moreover, the article appeared in a newspaper and not an academic journal, but much like the New York Times, the Globe and Mail is a respected publication and as such is heavily scrutinized by the Canadian public.

Monday, September 23, 2019

Article analyze Assignment Example | Topics and Well Written Essays - 1000 words

Article analyze - Assignment Example It was during this same year that he was sent on a diplomatic mission to the court of Sultan Suleiman the Magnificent to iron out the tension and creases that were erupting between the Sultan and Ferdinand of Habsburg. Busbecq served as the ambassador to the court of the Sultan from the year 1555 to 1562. In these years, Busbecq wrote four letters in Latin to his fellow diplomat at Habsburg in which he gave complete details of his travels and his stay in the Ottoman Empire. His letters were highly important because he highlighted the Janissaries. The letters of Busbecq are important because they highlight the goodness and strength of the Ottoman Empire and the sturdiness of the Janissaries in comparison to the Christian soldiers. During his stay at the Sultan’s court, Busbecq first met the Janissaries at Buda. The Janissaries were essentially the infantry portion of the royal guard of the Sultan. According to Busbecq, these guards were stationed everywhere to maintain peace and order throughout the cities. Further, he noted in his chronicles that this infantry also provided protection to the Christians and the Jews from the attacks of other races at all times. In his letters, Busbecq first described the attire of the Janissaries. He described them as wearing robes that reached down to their ankles. Further, their heads were covered by cowl-like headgear that flapped along their necks. On Busbecq’s first encounter with them in Paris, he was filled with awe. This was because he had never met soldiers so well disciplined and so courteous. He even described an incident of their courteousness in his letters, in which he recounts that in Paris the infantry had come up to him. He remembers their sense of etiquette: the soldiers had placed flowers in his hand and had receded quickly without showing their backs to him. Busbecq was highly amused by such a gesture because he

Sunday, September 22, 2019

Girl Interrupted Movie Review Example | Topics and Well Written Essays - 500 words

Girl Interrupted - Movie Review Example As the story progresses, Susanna became attached to Lisa, who influenced her to cause trouble with the other patients. There was even a point where Susanna rejected the idea that she wasn't sick, as her boyfriend had told her, because she relied so much on Lisa. Susanna only came to realize how dangerous Lisa's personality was after Daisy killed herself and Lisa showed no mercy. Lisa even attacked Susanna and threatened to kill herself, too. At the end of the movie, Susanna was released from the institution. She left a remarkable line: "Crazy isn't about being broken, or swallowing a dark secret. It's you, or me, amplified...". According to a study conducted by the World Health Organization (2010), depression, anxiety, psychological distress, sexual violence, domestic violence and escalating rates of substance use affect women to a greater extent than men across different countries and different settings.

Saturday, September 21, 2019

Obesity in African American Culture Essay Example for Free

Obesity in African American Culture Essay Obesity has more than just a physical effect on the body. Obesity also greatly affects the mental and emotional parts of the body as well. Although you cannot directly correlate mental and emotional health to obesity, you can see that its effects do in fact play a role in the mental and emotional health of an obese person. While the effects of obesity do indeed reach all races, it is easy to see that mental and emotional problems from obesity are present in the African American culture. Depression, anxiety, and discrimination are all results that are caused by obesity in the African American community.

Many people are familiar with depression, whether it be a friend or family member that went through it or they themselves went through it. “Depression is a state of low mood and aversion to activity that can have a negative effect on a person's thoughts, behavior, feelings, world view and physical well-being” (Salmans 1997). African American obesity has a close tie with depression in African American people. When people are self-conscious about their weight they may think that people look down on them for this. This would cause them to think less of themselves or believe that others are better than them. In turn it can cause the obese African American to have a bad view of themselves, other people, and the world in general. This is exactly what depression is. You can see that depression can be caused by obesity in the African American culture.

Anxiety is another emotional distress many people are familiar with. Anxiety is known as “the displeasing feeling of fear and concern” (Davison 2008). Many people have felt the effects of anxiety in their own lives, whether it is before an important test, a speech in front of many people, or the big game; many people feel anxiety. Looking only at anxiety caused by obesity in African American people is a different situation. Anxiety or nervousness before a big event is common and in many ways healthy because it motivates us to do the very best we can. Anxiety in African Americans because of obesity is not healthy; in fact it can be dangerous and destructive. By feeling displeased and concerned about their weight, African Americans can struggle all throughout life to overcome these feelings. It could limit their goals and overall make them settle for less than they really can do. Anxiety due to obesity in the African American community is not healthy and can severely constrain someone’s life.

Discrimination in the African American community has always been a problem throughout history. Slavery is a very obvious product of discrimination. Taking a more specific look at discrimination against the African American community because of obesity is a different situation. When people discriminate against African Americans because of their weight, it seriously limits their chances of succeeding in life. It could be in the workplace or at school. By placing these barriers we are limiting the ability of the African American community and hurting their chances of having a successful and meaningful life. These mental and emotional effects of obesity in the African American community are unfair and wrong. People should not be judged on their weight.

Davison, Gerald C. (2008). Abnormal Psychology. Toronto: Veronica Visentin. p. 154. ISBN 978-0-470-84072-6.
Salmans, Sandra (1997). Depression: Questions You Have – Answers You Need. Peoples Medical Society. ISBN 978-1-882606-14-6.

Friday, September 20, 2019

Algorithms for pre-processing and processing stages of x-ray images

Algorithms for pre-processing and processing stages of x-ray images

1.1 Introduction
This chapter presents algorithms for the pre-processing and processing stages of both cervical and lumbar vertebrae x-ray images. The pre-processing stage here is the process of locating and enhancing the spine region of interest in the x-ray image, whereas the processing stage includes the shape boundary representation and segmentation algorithms on which feature vector extraction and morphometric measurement are based. In this research the spine vertebrae are introduced and the objectives of the segmentation algorithm are discussed. Then various general segmentation approaches, including those based on shape boundary extraction, are discussed and applied to our spinal x-ray image collection. The current approach is introduced with a flow diagram, and then the individual blocks of the segmentation process are taken up and discussed in detail.

1.2 Image Acquisition
A digital archive of 17,000 cervical and lumbar spine x-ray images from the second National Health and Nutrition Examination Survey (NHANES II) is maintained by the Lister Hill National Center for Biomedical Communications in the National Library of Medicine (NLM) at the National Institutes of Health (NIH). Among these 17,000 images, approximately 10,000 are cervical spine x-rays and 7,000 are lumbar x-rays. Text data (including gender, age, symptom, etc.) are associated with each image. This collection has long been suggested to be very valuable for research into the prevalence of osteoarthritis and musculoskeletal diseases. It is a goal of intramural researchers to develop a biomedical information resource useful to medical researchers and educators. Figure 3.1 shows two sample images from the database. Spine x-ray images generally have low contrast and poor image quality. They do not provide meaningful information in terms of texture or color. The pathologies found on these spine x-ray images that are of interest to medical researchers are generally expressed along the vertebral boundary.

1.3 Proposed segmentation scheme
The main stages of the proposed process are shown in Figure 3.2, followed by a detailed review of the methods applied to our spinal images, and can be listed as follows:
a. Pre-processing stage: includes image acquisition, region localization (RL) and region localization enhancement.
b. Shape boundary representation and segmentation stage: includes active shape model (ASM) segmentation based on two shape boundary representations, the 9-anatomical-point representation and the B-spline representation.
c. Feature extraction stage: includes feature extraction based on shape feature vectors and morphometric measurement-invariant features for indexing.
d. Classification and similarity matching stage: includes the feature model classifier and similarity matching for diagnosis and retrieval.

1.4 Pre-processing stage
1.4.1 Spine region localization
Region localization (RL) refers to the estimation of boundaries within the image that enclose objects of interest at a coarse level of precision. RL is important for assisting human experts in rapid image display and review (independent of its use in initializing a segmentation process). For example, with an algorithm that can rapidly, and with high probability, identify the spine region with a marked line passing through it, this region of interest can be automatically zoomed on the display even though the location and orientation of the spine may vary appreciably in these images.
This algorithm assumes that a line passing through the maximum amount of bone structure in the image will lie over a large part of the spine area. Figure 3.3 shows the region localization (RL) selection for both cervical and lumbar images.

1.4.2 Enhancement approach
Image enhancement is a significant part of AVFAS recognition systems. Changes in lighting conditions produce a dramatic decrease in recognition performance; if an image is low-contrast and dark, we wish to improve its contrast and brightness. Widespread histogram equalization cannot correctly improve all parts of the image: when the original image is irregularly illuminated, some details in the resulting image will remain too bright or too dark. Typically, digitized x-ray images are corrupted by additive noise, and de-noising can improve the visibility of some structures in medical x-ray images, thus improving the performance of computer-assisted segmentation algorithms. However, image enhancement algorithms generally amplify noise [17, 18]. Therefore, higher de-noising performance is important in obtaining images with high visual quality, and for that reason different enhancement techniques were implemented.

i. Adaptive histogram-based equalization (Filter 1)
Adaptive histogram-based equalization (AHE) can be applied to aid in the viewing of key cervical and lumbar vertebrae features, and it is an excellent contrast enhancement method for medical images and other initially non-visual images. In medical imaging its automatic operation and effective presentation of all contrast available in the image data make it a competitor of the standard contrast enhancement methods. The goal of using adaptive histogram equalization is to obtain a uniform histogram for the output image, so that an optimal overall contrast is perceived. However, the features of interest in an image might need enhancement locally. Adaptive histogram equalization computes the histogram of a local window centred at a given pixel to determine the mapping for that pixel, which provides local contrast enhancement. However, the enhancement is so strong that two major problems can arise: noise amplification in flat regions of the image and ring artifacts at strong edges [12, 13]. Histogram equalization maps the input image's intensity values so that the histogram of the resulting image has an approximately uniform distribution [9-11]. The histogram of a digital image with gray levels in the range [0, L-1] is a discrete function h(r_k) = n_k, where r_k is the k-th gray level, n_k is the number of pixels in the image with that gray level, n is the total number of pixels in the image, and k = 0, 1, 2, ..., L-1. Normalizing by n, p(r_k) = n_k / n gives an estimate of the probability of occurrence of gray level r_k. The local contrast of the object in the image is increased by applying histogram equalization, especially when the image data are represented by close contrast values. Through this adjustment the intensity can be better distributed over the histogram; this allows areas of lower local contrast to gain a higher contrast without affecting the global contrast.

ii. Adaptive contrast enhancement
The idea is to enhance contrast locally by analyzing local grey-level differences while taking into account the mean grey level. First we apply local adaptive contrast enhancement. Parameters are set to amplify local features and diminish mean brightness in order to obtain a higher-contrast resulting image. After that we apply histogram equalization.
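A minimal sketch of the adaptive equalization step is given below, assuming OpenCV is available; the function name, clip limit, and tile size are illustrative choices and not the exact parameters used in this work. Contrast-limited AHE (CLAHE) is shown because it tempers the noise amplification in flat regions noted above.

```python
import cv2
import numpy as np

def enhance_spine_roi(roi: np.ndarray, clip_limit: float = 2.0,
                      tile_grid: tuple = (8, 8)) -> np.ndarray:
    """Contrast-limited adaptive histogram equalization (CLAHE) of a
    grayscale spine region of interest. The clip limit bounds how much
    each local histogram can be equalized, reducing noise amplification."""
    # Rescale the ROI to 8-bit range, since CLAHE expects uint8/uint16 input.
    roi_8u = cv2.normalize(roi, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    return clahe.apply(roi_8u)
```

The enhanced region can then be passed on to the gamma adjustment and shape segmentation stages described next.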
Adaptive gamma value (gamma correction)
The gamma correction operation performs a nonlinear brightness adjustment. Brightness for darker pixels is increased, while it is almost unchanged for bright pixels. As a result, more details become visible.

1.5 Shape boundary segmentation
The shape boundary segmentation presented in this work is a hierarchical segmentation algorithm tailored to the segmentation of cervical and lumbar vertebrae in digitized x-ray images. The algorithm employs both shape boundary representation schemes, the 9-anatomical-point representation (9-APR) and the B-spline representation (B-SR), to obtain a suitable initialization for the segmentation stage, which utilizes active shape models (ASMs) proposed by Cootes et al. The advantage of using ASMs in medical image segmentation applications is that, rather than creating models that are purely data driven, ASMs gain a priori knowledge through a thorough observation of the shape variation across a training set.

1.5.1 Shape boundary representation
Shape is an important characteristic for describing pertinent pathologies in various types of medical image, and it poses particular challenges regarding vertebra boundary segmentation in spine x-ray images. It was realized that the shape representation method would need to serve the dual purpose of providing a rich description of the vertebra shape while being acceptable to the end-user community consisting of medical professionals. In order to model the spinal vertebra shape, we represent it in terms of a set of points chosen to lie around the boundary; this must be done for each shape at the training stage, and the labelling of the points is important. Two representation schemes have been used at this stage to describe a vertebra boundary shape in terms of a list of points.

i. 9-anatomical-point representation (9-APR)
We obtained segmentation data created by medical experts at an early stage of our segmentation work; the purpose of this task was to acquire reference data as a guideline for validating vertebra segmentation algorithms. These data consisted of (x, y) coordinates for specific geometric locations on the vertebrae; a maximum of nine anatomical points per vertebra, assigned and marked by a board-certified radiologist, that are indicative of the pathology and were found to be consistently and reliably detectable, were collected. Figure 3.7 below shows the points, which were placed manually on each vertebra and which are of interest to medical researchers. Points 1, 3, 4, and 6 are indicative of the four corners of the vertebral body as seen in a projective sagittal view. Points 4 and 3 mark the upper and lower posterior corners of the vertebra, respectively; points 6 and 1 mark the upper and lower anterior corners of the vertebra, respectively. Points 5 and 2 are the medians along the upper and lower vertebra edges in the sagittal view; point 8 is the median along the anterior vertical edge of the vertebra in the sagittal view. Note that points 7 and 9 mark the upper and lower anterior osteophytes, so if osteophyte(s) are not present on the vertebra, then these points (7, 9) coincide with points 6 and 1, respectively.

ii. B-spline representation (B-SR)
Representation of curves using piecewise polynomial interpolation is widely used in computer graphics. B-splines are piecewise polynomial curves whose shape is closely related to their control polygon, a chain of vertices giving a polygonal representation of the curve.
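As a concrete illustration of this representation, the sketch below fits a closed cubic B-spline through the ordered anatomical landmarks and resamples it densely. It assumes SciPy is available; the function name and the 72-point sampling density are illustrative choices, not the exact parameterization used in this work.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def bspline_boundary(landmarks: np.ndarray, n_samples: int = 72) -> np.ndarray:
    """Fit a closed cubic B-spline through ordered boundary landmarks
    (N x 2 array) and resample it at n_samples evenly spaced parameter
    values, giving a smooth, dense vertebra boundary."""
    # Repeat the first landmark so the periodic spline closes exactly.
    pts = np.vstack([landmarks, landmarks[:1]])
    # s=0 interpolates the landmarks exactly; per=True makes the curve periodic.
    tck, _ = splprep([pts[:, 0], pts[:, 1]], s=0.0, k=3, per=True)
    u = np.linspace(0.0, 1.0, n_samples, endpoint=False)
    xs, ys = splev(u, tck)
    return np.column_stack([xs, ys])
```

Resampling the interpolated curve at equal parameter steps yields the evenly distributed boundary points that are convenient for initializing the ASM search described above.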
B-splines of the third order (cubic B-splines) are most common because this is the lowest order that can represent changes of curvature. The advantage of using B-spline techniques in this research is to enhance the 9-anatomical-point representation. B-spline curves require more information (i.e., the degree of the curve and a knot vector) and a more complex theory than Bézier curves, but they have more advantages that offset this shortcoming:
* A B-spline curve can be a Bézier curve.
* B-spline curves satisfy all the important properties that Bézier curves have.
* B-spline curves provide more control flexibility than Bézier curves.
* The degree of a B-spline curve is separated from the number of control points. More precisely [Ref], we can use lower-degree curves and still maintain a large number of control points, and we can change the position of a control point without globally changing the shape of the whole curve (the local modification property).
Since B-spline curves satisfy the strong convex hull property, they provide finer shape control. Moreover, there are other techniques for designing and editing the shape of a curve, such as changing knots. The B-spline is a generalization of the Bézier curve [Ref]: let a knot vector T = (t_0, t_1, ..., t_m) be defined, where T is a nondecreasing sequence with t_i in [0, 1], and define control points P_0, ..., P_n. Define the degree as p = m - n - 1. The knots t_{p+1}, ..., t_{m-p-1} are called internal knots.

1.5.2 Modelling shape variations
In an ASM, an object shape is represented by a set of landmark points and requires a good initialization of the object's pose in an image (i.e., location, size, and angle of rotation); therefore, we used the two representation schemes (9-APR and B-SR) in our proposed segmentation technique to create this initialization. Several instances of the same object class are included in a training set, and in order to model the variations we need to align the set of shapes.

i. Training set
In order to build a model that is flexible enough to cover the most typical variations of vertebrae, a sufficiently large training set has to be used. For the purpose of the investigation reported in this work, we locate the shape (by eye), and it is important that the two representation schemes are accurately located and that there is an exact correspondence between labels in different instances of training shapes. In this research a set of 1100 vertebrae, covering both cervical (400 vertebrae) and lumbar (710 vertebrae) examples, has been used.

ii. Aligning training shapes
The model that will be used to describe a shape and its typical appearances is based on the variations of the spatial position of each landmark point within the training set. Each point will thus have a certain distribution in the image space, and therefore the shape model is referred to as a Point Distribution Model (PDM). In order to obtain the PDM, we use the two shape representations to align the shapes and, finally, to summarize the landmark variations in a compact form. In what follows, these steps are described in some detail. We achieve the required alignment by scaling, rotating and translating the training shapes so that they correspond as closely as possible.

1.7 Shape boundary indexing
The shape analysis described here relates the statistical analysis of vertebra shapes to shape similarity matching and recognition. Three schemes of shape analysis are implemented at this stage. The first scheme is shape analysis based on feature vector extraction, which includes statistical shape features (SSF) and Gabor wavelet features (GWF).
The second scheme is shape analysis based on morphometric measurement, which includes the angles measurement index (AMI) and the intra-bone ratio measurement (IBRM). The last is analysis based on similarity matching. The index output from each analysis is used as input to the classifier systems; these schemes are described below. A feature vector is an n-dimensional vector of numerical features that represents an object shape. The statistical models captured from the active shape model and the Gabor wavelet filter bank require a numerical representation of the vertebra shape based on both boundary shape representations (the 9-anatomical-point model and the B-spline curve), since such representations facilitate processing and statistical analysis. The figure below shows a schematic pattern recognition system based on feature vectors.

1.7.1 Statistical shape features (SSF)
Each vertebra in the training set, when aligned, can be represented by a single point in a 2n-dimensional space (eq. 2). Thus a set of N example shapes gives, for each shape boundary representation, a cloud of N points in this 2n-dimensional space. We assume that these points lie within some region of the space, which we call the Allowable Shape Domain, and that the points give an indication of the shape and size of this region. Every 2n-D point within this domain gives a set of landmarks whose shape is broadly similar to that of those in the original training set. Thus by moving about the Allowable Shape Domain we can generate new shapes in a systematic way. The approach given below attempts to model the shape of this cloud in the high-dimensional space and hence to capture the relationship between the positions of the individual landmark points.

1.7.2 Gabor wavelet features (GWF)
The objective of this stage is to explore the feasibility of using Gabor wavelet-constructed spatial filters to extract feature vectors from the shape boundary of cervical and lumbar vertebrae, and to use these extracted feature vectors to train and test different classifiers. To evaluate the robustness of the method, many filter and mask size analyses were carried out to select the suitable Gabor mask to be convolved with the two extracted vertebra shape boundaries. In order to briefly describe Gabor wavelets and provide a rationale for this stage of work, the Short Time Fourier Transform (STFT) and the Gabor transform need to be explained first. The Fourier transform is a fundamental tool of classical signal analysis.

i. Gabor wavelet filter bank
The Gabor wavelet function used in this research for AO feature extraction was the same as that used by Naghdy (1996), where the different choices of frequency j and orientation construct a set of filters.

ii. Filter frequency and mask size analysis
As the frequency of the sinusoid changes, the window size changes. Figures 3.28, 3.29, 3.30 and 3.32 show the real and imaginary parts of eight two-dimensional wavelet filters. When j is changed from 1 to 4, the sinusoid frequency is reduced whereas the Gaussian window size increases. In comparison, for the Gabor transform, the Gaussian window size remains the same.

iii. Convolution of the vertebral region with the filter bank
The elementary Gabor wavelet functions were used to construct spatial domain filters. Each filter was made of a pair of filters, which were the real and imaginary parts of the complex sinusoid. This pair was convolved with the green channel signal of the texture image separately.
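The following sketch shows one way to build such a real/imaginary filter pair and convolve it with a grayscale patch. The kernel form, the sigma value, the kernel size, and the mean-magnitude aggregation are generic assumptions; the exact Naghdy (1996) formulation and the aggregation actually used in this work are not reproduced here.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(freq: float, theta: float, sigma: float, size: int = 31) -> np.ndarray:
    """Complex 2-D Gabor kernel: a Gaussian envelope modulating a complex
    sinusoid of spatial frequency `freq` at orientation `theta` (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_rot = x * np.cos(theta) + y * np.sin(theta)     # coordinate along the wave direction
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.exp(2j * np.pi * freq * x_rot)

def gabor_features(patch: np.ndarray, freqs, thetas, sigma: float = 4.0) -> np.ndarray:
    """Convolve a grayscale patch with every (frequency, orientation) pair;
    keep the mean magnitude of the complex response as one feature value."""
    feats = []
    for f in freqs:
        for t in thetas:
            k = gabor_kernel(f, t, sigma)
            re = convolve2d(patch, np.real(k), mode='same')   # real filter mask
            im = convolve2d(patch, np.imag(k), mode='same')   # imaginary filter mask
            feats.append(np.mean(np.hypot(re, im)))           # modulation (magnitude)
    return np.asarray(feats)
```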
The reason for choosing the green channel for the convolution was that the green channel was found to have the best texture quality, which means the best contrast level between plants and soil, among the red, blue and MExG channels. This scenario is absolutely sensor dependent and may not be the case for other sensors. For one frequency level, the filtering output was the modulation of the average of the convolution outputs from the real and imaginary filter masks over all convolved pixels in the green channel image.

iv. Gabor wavelet filter bank block diagram

1.8 Shape boundary morphometric measurement
1.8.1 Morphometric measurement-invariant features
For efficient image retrieval, it is important that the pathological features of interest be detected with high accuracy. In this stage of the Automatic Vertebral Fracture Assessment System (AVFAS) techniques, new morphometric measurement-invariant features were investigated for the detection of anterior osteophytes in lumbar and cervical vertebrae. The goal in this stage of work is to investigate a measurement algorithm with high accuracy that avoids complex calculation. Two approaches to morphometric measurement-invariant features were developed, based on:
1) Angles invariant features (A-IF)
2) Intra-distance ratio invariant features (ID-IF)
The results of this morphometric geometric calculation produce two indices, based on angle and distance measurements, that can be used to distinguish between the anterior osteophyte classes and their severity, and that are implemented as input to the classifier algorithm. The figure below shows the block diagram of the shape analysis based on the morphometric technique.

Stage 1: AO detection
Two classification schemes for anterior osteophytes were established by a medical expert to evaluate the accuracy of the PSM algorithm. The first is Macnab's classification, established by Macnab and his coworkers in 1956 on radiological and pathological bases [6, 7]. Two types of osteophytes are adopted from Macnab's classification: claw and traction, as shown in Figure 1. Their visual characteristics are:
1. A claw spur rises from the vertebral rim and curves toward the adjacent disk. It is often triangular in shape and curved at the tips.
2. A traction spur protrudes horizontally, is moderately thick, does not curve at the tips, and never extends across the intervertebral disk space.
The second classification is a grading system which was defined by the medical expert, consistent with reasonable criteria for assigning severity levels to anterior osteophytes (AO). The three grades of AO are slight, moderate, and severe, also shown in Table 1. Their visual characteristics are:
1. Slight grade includes normal, where the corner angles on the vertebral boundary are approximately right angles. It may have a slight protuberance, where the tip of the osteophyte is round and no narrowing is observed at the base of the protuberance.
2. Moderate grade is characterized by an evident protuberance from the ideal horizontal or vertical edge of the vertebra. The bounding edges of the AO form an angle of at least 45 degrees, and the osteophyte has a relatively wider base than in the severe grade.
3. Severe grade is characterized by the presence of a hook; the angle is less than 45 degrees with a narrow base, or the osteophyte protrudes far (about 1/3 of the length of the horizontal border) from the normal (ideal 90 degree) vertebral corner.
Angles invariant features (A-IF): We explore three main angles for measurement that capture the differences between the AO classes from the 9-anatomical-landmark model. The figure below shows the angles of interest selected, which will be used next as input for our classifier system to make decisions: (a) Turning Angle, (b) Intra-Distance Across the Shape.

Turn Angle (TA)
To capture the characteristics of shape in local regions, we use two different features. The first is the Turn Angle (TA). The Turn Angle is also called the Turning Angle or Bent Angle. It is defined as follows [3]: if the points on the polygon are ordered in the counterclockwise direction, and the polygon is traversed in this direction, the Turn Angle is the angle between the direction vector for the current polygon segment and the next one; the sense of the Turn Angle is calculated such that a clockwise turn gives a negative angle whereas a counterclockwise turn gives a positive angle. Figure 3(a) shows an example. For an arbitrary shape, the Turn Angle feature can be calculated from the approximating polygon for that shape. The Turn Angle for a polygon with n vertices is simply a vector in R^n. For example, if the vertebra is represented as a polygon with 72 vertices (our sparse representation), the Turn Angle is a 72-element vector. If the polygon has the concept of an initial vertex, similarity computation is straightforward, e.g., with a Euclidean metric. If there is no initial vertex, similarity between two shapes may be computed by a combinatorial comparison of distances between possibly-matching sets of vertices. This computation may be optimized by dynamic programming.

Intra-distance ratio invariant features (ID-IF): Distance across the shape (DAS) [4] is another local shape feature. DAS is defined, for each vertex P in a polygon, as the length of the angle bisector at P, measured as the line segment from P to the intersecting side of the polygon. For example, the interior bisector of angle ∠P2P3P4 in Figure 3(b) intersects the contour at point I3. The length of P3I3 is the DAS at point P3. If the bisector intersects the shape multiple times, the distance to the closest intersection is used. Similarly to the Turn Angle, if we represent the vertebra shape as a polygon with 72 sample points, the DAS feature may be calculated on those 72 points.
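A minimal sketch of the Turn Angle computation for a closed, counterclockwise-ordered contour follows; the function name is illustrative, and the DAS feature (bisector-polygon intersection) is omitted for brevity.

```python
import numpy as np

def turn_angles(polygon: np.ndarray) -> np.ndarray:
    """Signed turn angle (radians) at every vertex of a closed polygon whose
    vertices are ordered counterclockwise: positive for a counterclockwise
    (left) turn, negative for a clockwise (right) turn."""
    prev_edge = polygon - np.roll(polygon, 1, axis=0)    # edge arriving at each vertex
    next_edge = np.roll(polygon, -1, axis=0) - polygon   # edge leaving each vertex
    # Signed angle between the incoming and outgoing edge directions.
    cross = prev_edge[:, 0] * next_edge[:, 1] - prev_edge[:, 1] * next_edge[:, 0]
    dot = np.sum(prev_edge * next_edge, axis=1)
    return np.arctan2(cross, dot)
```

Applied to the 72-point sparse contour, this yields the 72-element TA vector described above.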
Here, V is the vertical angle, calculated between points 7-8-9; H is the horizontal angle, calculated between points 1-2-3; and C is the corner angle, calculated between points 8-9-1. The angle at the middle point B of each triple (A, B, C) is computed from the three point coordinates as theta = arccos( (BA . BC) / (|BA| |BC|) ), where BA and BC are the vectors from B to A and from B to C.

1.9 Operation
Step 1: Calculate the horizontal angle H from points 1-2-3.
Step 2: Calculate the vertical angle V from points 7-8-9.
Step 3: Calculate the corner angle C from points 8-9-1.
Step 4: Build the rule base and evaluate the result by visual inspection.

Intra-Distance Ratio Measurement (I-DRM)
The intra-bone ratio is another morphometric measurement; it was explored based on distances across the shape. Here we focused on the following: the posterior height, the distance calculated between points 3-4; the medial height, the distance calculated between points 5-2; the anterior height, the distance calculated between points 1-6; and the distance calculated between point 8 and mp, where mp is the midpoint between points 3-4. The midpoint (mp) coordinates are calculated as mp = ((x_3 + x_4)/2, (y_3 + y_4)/2), where (x_3, y_3) are the coordinates of point 3 and (x_4, y_4) are the coordinates of point 4. Given two points (x_1, y_1) and (x_2, y_2), the distance between them is given by the formula d = sqrt((x_2 - x_1)^2 + (y_2 - y_1)^2). The normal vertebra was estimated to have approximately equal posterior, medial, and anterior heights. Based on this estimation by an expert radiologist, we develop another rule-based decision system that can correctly classify normal and abnormal bone.

Stage 2: AO location
Detection of the AO position allows us to determine its location as either an upper or a lower AO. The position of the AO is determined by a simple calculation, also based on angles.

Stage 3: Disc space narrowing (DSN)
Stage 4
Stage 5: Subluxation/Spondylolisthesis

Segmentation and Pre-processing
The vertebra shapes were segmented using an active contours method modified to constrain evolving contour points to follow orthogonal curves [18], to avoid convergence to a self-intersecting solution contour at vertebra corners [9]. The solution contours have 36 points. Nine of these 36 points were distinguished as geometrical or anatomical reference points, with relative locations that are approximately constant across the vertebra shapes. The nine points, shown in Figure 2, were either manually marked by experts, or extracted automatically or semi-automatically by specialized algorithms [9]. For the current work, we preprocess these segmented shapes by curve smoothing (to reduce noise), fitting (for smoothness), interpolation, and re-sampling (for a larger number of evenly distributed points) to obtain the final shape contour description. The curve fitting and interpolation are done with the natural cubic spline algorithm. Then the shape contour is resampled by equal arc length sampling. Finally, the whole vertebra shape is represented by two boundary point sets with different resolutions. The dense sampling set contains 180 points, with the superior and the inferior anterior corners represented by 60 points each. The sparse sampling set contains 72 points, with the superior and the inferior anterior corners represented by 25 points each.
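As a closing illustration of the morphometric features described in Sections 1.8 and 1.9, the sketch below computes the three angles (H, V, C), the three vertebral heights, and the point-8-to-midpoint distance from the nine anatomical landmarks. The dictionary layout and function names are assumptions for illustration only, and the rule thresholds of the rule-based classifier are not reproduced.

```python
import numpy as np

def angle_at(a, b, c) -> float:
    """Interior angle (degrees) at point b, formed by segments b-a and b-c."""
    ba, bc = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos_ang = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return float(np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0))))

def morphometrics(p: dict) -> dict:
    """Angle and height features from the nine anatomical points, keyed 1-9
    as in Figure 3.7, where p[i] is an (x, y) pair."""
    def dist(i, j):
        return float(np.linalg.norm(np.asarray(p[i]) - np.asarray(p[j])))
    mp_34 = (np.asarray(p[3]) + np.asarray(p[4])) / 2.0   # midpoint of posterior edge 3-4
    return {
        "H": angle_at(p[1], p[2], p[3]),   # horizontal angle, vertex at point 2
        "V": angle_at(p[7], p[8], p[9]),   # vertical angle, vertex at point 8
        "C": angle_at(p[8], p[9], p[1]),   # corner angle, vertex at point 9
        "posterior_height": dist(3, 4),
        "medial_height": dist(5, 2),
        "anterior_height": dist(1, 6),
        "d_point8_to_mp": float(np.linalg.norm(np.asarray(p[8]) - mp_34)),
    }
```

In a roughly normal vertebra the three height values come out approximately equal, which is the property the rule-based decision system above exploits.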

Thursday, September 19, 2019

A Comparison of Online Universities to Traditional Universities Essay

Online Universities and Traditional Universities Traditional universities are a wonderful way to study for students who have the time and patience to deal with teachers and classmates. In contrast, online universities are the ideal way to study for students who do not have the time to go to school and those who enjoy working at their own pace. Some students enjoy traditional universities while others prefer online universities. If someone chooses a traditional university and then realizes that he or she is unable to keep up with his or her schedule and class work, this student might decide to try an online university. Ultimately, everyone chooses which route works best for him or her. Traditional universities require students to attend class in person on a daily basis. One adva...

Wednesday, September 18, 2019

Robert Frost's Design Essay

Robert Frost's Design Robert Frost's "Design" is a meditation on human attempts to see order in the universe--and human failures at perceiving the order that is actually present in nature. The speaker of the poem perceives what he takes to be a significant coincidence, then speculates on what the coincidence might mean, or whether it means anything at all. However, he fails to see that there is a very good reason for the coincidence he spots, and the "design" of nature that it implies is quite different from anything he suggests.

Design by Robert Frost

I found a dimpled spider, fat and white,
On a white heal-all, holding up a moth
Like a white piece of rigid satin cloth--
Assorted characters of death and blight
Mixed ready to begin the morning right,
Like the ingredients of a witches' broth--
A snow-drop spider, a flower like a froth,
And dead wings carried like a paper kite.

What had that flower to do with being white,
The wayside blue and innocent heal-all?
What brought the kindred spider to that height,
Then steered the white moth thither in the night?
What but design of darkness to appall?--
If design govern in a thing so small.

The starting point for the speaker's thinking is what he perceives to be a coincidence: a white spider sits on a white flower holding up a white moth. The coincidence is even more striking because heal-alls are usually blue. In Western culture, the color white usually symbolizes goodness, purity, and innocence. The language of the poem suggests these connotative links: the spider is "dimpled" as well as "fat and white," like a newborn baby. The moth's wings are like a "white piece of rigid satin cloth," like a bridal dress (or perhaps the lining of a c... ...er would be attracted to a white flower because it would offer some concealment from prey. There is indeed a "design" at work, but it is not a "design of darkness"; it is simply the order of nature. The existence of such a design leaves open the question of whether God exists. An atheist would take the explanation above as evidence that there are rational explanations for natural processes, and that there is no need to invoke the concept of God to explain how the universe works. In other writings, Frost does appear to profess belief in God (albeit belief of a complex kind). The focus of "Design," then, is not ultimately the existence or absence of God, but rather the tendency of humans to engage in what John Ruskin called the "pathetic fallacy"--the act of reading oneself into nature. The first act of responsible belief, Frost implies, is seeing nature as it is.

Tuesday, September 17, 2019

An Analysis of Rebellion in George Orwell's 1984 Essay

As a new society unfolds, so do new values and authority. In 1984, George Orwell presents a futuristic vision of the power of government as well as its social conventions. Primarily, Orwell uses Winston Smith to exhibit the effects that government control can have on morality. Winston lives in Oceania, where "The Party" exploits its complete power by controlling people emotionally and mentally. However, this disturbs Winston, who subsequently challenges The Party and is provoked into becoming a rebel. He recognizes that he is at the point of no return; consequently, he marches blindly ahead in the hope of defeating The Party. However, Winston's defiant nature is quickly extinguished after he is caught and tormented for committing subversive acts. The once rebellious Winston is then forever changed, as he becomes a loyal subject of Big Brother. Winston's challenge of Oceania's imposed values and beliefs demonstrates humanity's need for, and subsequent pursuit of, freedom. In Oceania, The Party is seen as the ultimate power; it imposes its authority and fear over its citizens with the use of technology. From the street corners to Winston's living room, the telescreens are used to monitor the thoughts and actions of the people. "It was even conceivable that they watched everybody all the time. But at any rate they could plug in your wire whenever they wanted to. You had to live - did live, from habit that became instinct - in the assumption that every sound you made was overheard, and, except in darkness, every movement scrutinised." (Orwell 5). By not knowing which move is being watched or which words are being listened to, all privacy and freedom of speech are eliminated from their daily lives. The telescreens are used as a source of control and power rather than communication. They also display propaganda from the Ministry of Truth to support the Party's actions and power. The Party also uses the media as a tool for manipulation. Posters, slogans, and advertisements display messages such as "BIG BROTHER IS WATCHING YOU" and "WAR IS PEACE; FREEDOM IS SLAVERY; IGNORANCE IS STRENGTH". These slogans, in addition to presenting Big Brother as a symbolic figure, work together to complete the manipulation and control of the citizens. However, altering the history and memory of Oceania also enforces political control. History books opportunely reflect the Party's ideology, which forbids individuals from keeping mementos such as photographs and documents from their past. As a result, the citizens have vague memories of their past and willingly believe whatever the Party tells them. "Who controls the past controls the future. Who controls the present controls the past" (Orwell 32). By controlling the past, The Party ensures that it controls the future, and through false history, the psychological independence of the people is controlled. By stealing people's privacy, manipulating and manoeuvring their lives, and presenting altered history, the Party is able to exploit its power. Winston, a man with a conscience and a sense of right and wrong, has no choice; he must fight for his beliefs. Big Brother, a symbolic figure of power, agitates Winston's morality. Although a member of the Party, he disagrees with the conventions of The Party. At first, Winston demonstrates his defiance using a diary as a secure place to keep his thoughts. "DOWN WITH BIG BROTHER" (Orwell 20). Here, Winston expresses his feelings about the Party.
He is aware that having or expressing thoughts against Big Brother is viewed as a thoughtcrime in Oceania; nevertheless, he also knows that he cannot sit back and accept their philosophy. "Thoughtcrime does not entail death: thoughtcrime IS death" (Orwell 30). Winston has the good sense to be extremely careful when writing in his diary; he is paranoid about being caught and places himself away from the telescreen, where he hopes he will not be detected. This action demonstrates his unwillingness to simply accept the party line and the government's control. Another equally serious offence against the Party is his love affair with Julia. Well aware of the Party's stand on pleasurable sexual activity, Winston nevertheless cannot and does not suppress his desire for her. He also discovers that he is not the only one with these forbidden feelings. "That was above all what he wanted to hear. Not merely the love of one person, but the animal instinct, the simple undifferentiated desire: that was the force that would tear the Party to pieces" (Orwell 132). With the knowledge that he is not alone in this battle, Winston is even more committed and empowered to continue his defiance against the system. He recognizes that he must act cautiously and, in order to continue his affair without being caught, Winston rents a room above Mr. Charrington's shop. Another subversive act is Winston's communication with O'Brien, a leader in the Party. Winston bases his trust in O'Brien on the voices in his dreams, the eye contact between them during hate meetings, and the moment when O'Brien turns off his telescreen as the two meet. "Between himself and O'Brien, and of the impulse he sometimes felt, simply to walk into O'Brien's presence, announce that he was the enemy of the Party and demand his help" (Orwell 159). Trustingly, Winston reveals his views to O'Brien, hoping that in the future others will also join in the defeat of the Party. O'Brien convinces Winston that he is a member of the Brotherhood; Winston eagerly joins. The authority the Party enforces over Oceania's citizens seizes Winston's morality and gives him the courage to increase the momentum of his rebellious acts. Unfortunately, a power far greater than his is watching his every move. As Winston continues his treasonous acts, he realizes there is no way out; his optimism for a better future has him stride blindly into shark-infested waters. Winston realizes that by writing in his diary it is only a matter of time before the Thought Police capture him. "Whether he wrote DOWN WITH BIG BROTHER, or whether he refrained from writing it, made no difference. Whether he went on with the diary, or whether he did not go on with it, made no difference" (Orwell 21). Intellectually, Winston realizes that he will probably get caught, but he cannot turn back. His affair with Julia boosts his ego, and so he continues with the hope that other rebels will unite with him against the Party. Unfortunately, in his dream of defeating Big Brother, Winston becomes careless, and his acts against the Party take him down a dangerous path, leading him into torturous consequences. Winston allows himself to take unnecessary risks, such as trusting O'Brien. Unknowingly, the room he rents above Mr. Charrington's shop to meet with Julia is under surveillance. Mr. Charrington, a member of the Thought Police, uses the telescreen to capture Winston's sexual affair with Julia. As a member of the Thought Police, it is his duty to turn them in, and he does.
He has the two arrested, and they are sent to the Ministry. Winston's carelessness now comes back to haunt him. In his eagerness to find others who loathe the system, he trusted O'Brien, who led him to believe that he shared his hatred for Big Brother. However, Winston soon learns that O'Brien's intentions are quite different. When Winston is caught, O'Brien visits him to "help" him through this wretchedness. However, Winston's misplaced trust is exploited when O'Brien preys on his biggest fear. He is taken to Room 101, where he is tortured both physically and mentally with his ultimate revulsion: rats. Winston's fortitude collapses, changing his perspective. "...it was all right, everything was all right. The struggle was finished. He had won the victory over himself. He loved Big Brother." (Orwell 311). Winston's physical pain and mental anguish help him to now embrace the unquestionable power and wisdom of the Party. The irony is evident: Winston's determination to defeat Big Brother is defeated... by Big Brother. In Winston's pursuit of independent thought, he struggles against the absolute power of the Party, thus demonstrating the battle between him and his government. In Oceania, the Party controls the people physically, mentally and emotionally in order to maintain its supremacy. However, the Party's abusive power subverts Winston's morality, aggravating him into rebellion. Once started, Winston realizes that he cannot turn back from his revolt, even though, intellectually, he acknowledges that his battle could cost him far more than his freedom. He is driven to continue. Winston's fervour for change comes to an immediate halt after he is caught and punished for his disloyalty to the Party. A man forever changed becomes a loyal supporter of Big Brother. Orwell's 1984 is a frightening journey of a man's fight for freedom of thought and expression. In 1948, when the book was written, it was considered a futuristic view of society. Today, many of the events have already become a reality. Big Brother is indeed watching!

Works Cited
Orwell, George. 1984. New York: Penguin, 1964.

Monday, September 16, 2019

Essay Writing

Sarva Shiksha Abhiyan is a quantitative success: IIM study

There are some good tidings for the Union Human Resource Development Ministry from its flagship enterprise, the Sarva Shiksha Abhiyan (SSA), to universalise elementary education. A study conducted by the Indian Institute of Management, Ahmedabad (IIM-A), has found that the SSA has met with considerable success quantitatively if not qualitatively. While quality remains an area of concern, the SSA has been able to bridge the enrolment, retention and achievement gaps between the sexes and among social groups. According to the IIM-A study titled 'Shiksha Sangam: Innovations under the SSA,' the out-of-school population had come down from 28.5 per cent of the six-to-14 year age group in 2001 to 6.94 per cent by the end of 2005. Dropout rates at the primary level stand at about 12 per cent, and 190 of the 400 districts were showing a declining trend in 2005-2006. The SSA has been able to bring Scheduled Castes and Scheduled Tribes (SC/STs) — weak points in earlier efforts to universalise elementary education — into the educational mainstream.

Greater share

The share of SC/ST children at the primary level in 2004-2005 was actually greater than their respective proportion of the population: 20.73 per cent in the case of SCs against a population share of 16.2 per cent, and 10.69 per cent against a population share of 8. per cent. The gender gap in enrolment now stands at 4.2 percentage points at the primary level and 8.8 percentage points at the upper primary level. In 2005-2006, there were only 22 districts (of the 400 for which data was available) where the gender gap was more than 10 percentage points at the primary level. However, the success rate on this count at the upper primary level is not so good, as 82 districts have reported a gap of more than 15 percentage points.

Sunday, September 15, 2019

Volumetric Vinegar Analysis

Experiment 9 and 10: Volumetric/Vinegar Analysis

Abstract: The goal of the experiment was to determine both the molar concentration of NaOH and the standard mole ratio of the NaOH solution. In order to find the concentration of the NaOH solution, volumetric analysis was used. In volumetric analysis, a titration mechanism was utilized in order to follow the reaction of the base with KHC8H4O4, also known as KHP. Phenolphthalein, the indicator used in this experiment, helped in figuring out at exactly what point neutralization occurred. The indicator turns the solution a bright pink color once neutralization has occurred. In experiment 10, the average molarity of NaOH found in experiment 9 was used in order to find out whether the vinegar used in the experiment contained around the same percent mass of acetic acid that is found in regular vinegar. The experimental mass of NaOH used was 1.0425 grams, and the molarity of NaOH was found to be 0.089 mol/L of NaOH. Towards the conclusion of the experiment, the average percent mass of acetic acid was calculated and found to be 1.695%. Regular household vinegar's average percent mass of acetic acid usually ranges from 4 to 5%. Based on the percent mass of acetic acid obtained in the experiment, the vinegar used in experiment 10 was clearly not household vinegar. The hypothesis for this experiment was: if the average percent mass of acetic acid ranged between 4 and 5%, then it is household vinegar. However, due to the results from the experiments conducted, this hypothesis was rejected. In order to obtain the results the groups were searching for, titration was used in both experiments. The method of titration involves the measurement of KHP and NaOH. Afterwards, the volumetric analysis was carried out, with the indicator included. The experiment starts by recording the measurements of KHP. The indicator was added later on, and then the titration began with the NaOH solution. It was apparent once the solution was neutralized because the indicator caused the solution to turn bright pink. The experiment also required the use of volumetric mass in order to find the percent mass of acetic acid in vinegar. The mass of vinegar is then titrated to the indicator endpoint with the sodium hydroxide solution. In order to find the average percent mass of acetic acid in vinegar, the NaOH concentration found in experiment 9 was used together with the known volume of NaOH.

Materials: Please refer to Experiments 9 and 10 on pages 127-136 and 137-142 of Laboratory Manual for Principles of General Chemistry, 9th Edition, by J. A. Beran. The only deviation performed during this experiment was the two to three extra drops of the indicator phenolphthalein added in order to distinguish the titration point.

Results: Experiment 9

Data:
Table 1: Measurement | Trial 1 | Trial 2
Mass of KHC8H4O4 (g) | 0.509 g | 0.501 g
Buret reading of NaOH (mL) | 28.3 mL | 26.7 mL
Table 1 shows the measurements recorded for experiment 9, volumetric analysis.

Table 2: Calculations | Trial 1 | Trial 2
Moles of KHC8H4O4 (mol) | 0.000303 | 0.0002485
Volume of NaOH dispensed (L) | 0.0034 | 0.0032
Molar concentration of NaOH (mol/L) | 0.089 | 0.089
Table 2 shows the calculations derived from experiment 9, volumetric analysis.

Calculations:
Moles of KHC8H4O4 (x 1 mol KHC8H4O4 / molar mass KHC8H4O4): 0.089 mol/L NaOH x 0.0034 L = 0.000303 moles NaOH; 0.089 mol/L NaOH x 0.0032 L = 0.0002485 moles NaOH
Volume of NaOH dispensed (mL): buret reading of NaOH = 28.3 mL, 26.7 mL
Molar concentration of NaOH: 2.45 x 10^-3 mol OH- / 0.0275 L NaOH = 0.089 M NaOH

Results: Experiment 10

Table 3: Measurement | Trial 1 | Trial 2
Mass of vinegar (g) | 1.048 g | 1.037 g
Buret reading of NaOH (mL) | 3.4 mL | 3.2 mL
Table 3 shows the measurements recorded for experiment 10, vinegar analysis.

Table 4: Calculations | Trial 1 | Trial 2
Volume of NaOH used (mL) (L) | 3.4 (0.0034) | 3.2 (0.0032)
Molar concentration of NaOH (mol/L) (given) | 0.089 | 0.089
Mass of acetic acid in vinegar (g) | 0.0182 | 0.0171
Mass of vinegar (g) | 1.048 g | 1.037 g
Avg. percent mass of acetic acid in vinegar (%) | 1.695% |
Table 4 shows the calculations derived from experiment 10, vinegar analysis.

Calculations:
1. Molar concentration of NaOH (mol/L): given (0.1 M solution)
2. Mass of acetic acid in vinegar (g) = moles of acetic acid (mol) x molar mass of acetic acid (g/mol): 3.026 x 10^-4 moles of acetic acid x 60.05 g/mol = 0.0182 g; 2.848 x 10^-4 moles of acetic acid x 60.05 g/mol = 0.0171 g
3. Avg. percent mass of acetic acid in vinegar (%): (1.65% + 1.74%) / 2 = 1.695%

Discussion: The experiment began by adding NaOH to the mixture of deionized water and KHP in the beaker. The H+ ion found in KHP reacted with the OH- ions found in the NaOH solution, even as more of the NaOH continued to be added to the mixture. Once there was an excess of NaOH, there were no longer any H+ ions left to react. As a result, the extra OH- ions from the NaOH solution activated the indicator and made the solution turn pink. It was imperative that the solution be mixed the correct way. If it was not mixed correctly, the results of the experiment would be inaccurate. If the reading had proven to be inaccurate because of that mistake, the volume of the NaOH solution mixed with the KHP would eventually be neutralized to a point where the numbers in the results would be far off. Two trials were done in this experiment in order to ensure that this mistake did not happen and that the volume of NaOH was found. Once the solution had finally been neutralized, the moles of KHP were found and turned out to be equal to the moles of NaOH. This information allowed the molarity to be found. The average molarity of the NaOH had been found in experiment 9; it was 0.089 M. Experiments 9 and 10 have similar traits because both of them involve titration. The titration was used in order to find the number of moles of acetic acid in the vinegar solution that was used. The normal amount of acetic acid found in household vinegar is between 4 and 5%. The experiments helped determine that household vinegar was definitely not the vinegar being used, since the acetic acid amount found was 1.695%.

Conclusion: The hypothesis was proven in the first experiment because the base, NaOH, did end up neutralizing KHP's acid. The indicator turned the solution pink; therefore the hypothesis in the first experiment was not rejected. The molarity values for NaOH were very close: the molarity that was given was 0.1 M, and the molarity found in the experiment was 0.089 M. The hypothesis for the second experiment was "If the average percent mass of acetic acid ranged between 4 and 5%, then the vinegar being used for the experiment was household vinegar."
† However, since the average percent mass of acetic acid resulted as 1. 695%, which was lower than household vinegar; this caused the hypothesis to be rejected. Works Cited Beran, Jo A. Laboratory Manual for Principles of General Chemistry. Hoboken, NJ: Wiley, 2011. Print. Tro, Nivaldo J. Principles of Chemistry: A Molecular Approach. Upper Saddle River, NJ: Prentice Hall, 2010. Print. Volumetric Vinegar Analysis Experiment 9 and 10: Volumetric/Vinegar Analysis Abstract: The goal of the experiment that was conducted was to figure out both the molar concentration of NaOH and the standard mole ratio of the NaOH solution. In order to find the concentration of the NaOH solution, volumetric analysis was used. In volumetric analysis, a titration mechanism was utilized in order to find the reaction that the base will end up having with KHC8H4O4. , also known as KHP. Phenolphthalein, which is the indicator that was used in this experiment, assisted in figuring out at exactly what point was there neutralization.The indicator turns the solution into a bright pink color once neutralization has occurred. In experiment 10, the average molarity of NaOH that was found in experiment nine was used in order to find out if the vinegar that was being used in the experiment contained around the same percent mass of acetic acid that is found in regular vinegar. The experimental value of NaOH that was used was 1. 0 425 grams and the molarity of NaOH was found to be 0. 089 m/L of NaOH. Towards the conclusion of the experiment, the average percent mass of acetic acid was calculated and found to be 1. 695%.Regular house hold vinegar’s average percent mass of acetic acid usually ranges to 4-5%. Based on the percent mass of acetic acid obtained in the experiment, the vinegar that was used in experiment 10 was clearly not house hold vinegar. The hypothesis for this experiment was, if the average percent mass of acetic acid ranged between 4-5%, then it is house hold vinegar. However, due to the results from the experiments conducted, this hypothesis was rejected. In order to obtain the results that the groups were searching for, titration was used in both experiments to find the answer.The method of titration involves the measurement of KHP and NaOH. Afterwards, the volumetric analysis was used, with the indicator included. The experiment starts by finding the measurements of KHP. The indicato r was added later on, and then the titration began with the NaOH solution. It was apparent once the solution was neutralized because the indicator caused the solution to turn bright pink. The experiment also required the utilization of volumetric mass in order to find the percent mass of acetic acid in vinegar.The mass of vinegar is then titrated along with the indicator endpoint with the sodium hydroxide solution. In order to find the average acetic percent mass of vinegar, the concentration found in NaOH in experiment 9 was utilized together with the known volume of NaOH. Materials: Please refer to Experiment 9 and 10 on pages 127-136 and 137-142, of Laboratory Manual for Principles of General chemistry 9th Edition by J. A. Beran. The only deviation that was performed during this experiment was the two to three extra drops of the indicator phenolphthalein in order to distinguish a titration point.Results: Experiment 9: Data: |Table 1: Measurement |Trial 1 |Trial 2 | |Mass of KHC8H 4O4. (g) |. 509 g |. 501 g | |Buret Reading of NaOH (mL) |28. 3 mL |26. 
7 mL | Table 1 shows the measurements recorded for experiment 9, volumetric analysis Table 2: Calculations |Trial 1 |Trial 2 | |Moles of KHC8H4O4 (mol) |. 000303 |. 0002485 | |Volume of NaOH Dispensed (L) |. 0034 |. 0032 | |Molar Concentration of NaOH (mol/L) |. 089 |. 089 | Table 2 shows the calculations derived from experiment 9, volumetric analysis Calculations:Moles of KHC8H4O4 x 1 mol KHC8H4O4/ Molar Mass KHC8H4O4: 0. 089 m/L NaOH x 0. 0034 L= . 000303 moles NaOH 0. 089 m/L NaOH x 0. 0032 L= 0. 0002485 NaOH Volume of NaOH Dispensed (mL): Buret Reading of NaOH= 28. 3 mL, 26. 7 mL Molar Concentration Concentration of NaOH: 2. 45 x 10 -3 mol OH-/. 0275 L NaOH = 0. 089 M/L NaOH Results: Experiment 10 |Table 3: Measurement |Trial 1 |Trial 2 | |Mass of Vinegar (g) |1. 048 g |1. 37 g | |Buret Reading of NaOH (mL) |3. 4 mL |3. 2 mL | Table 3 shows the measurements recorded for experiment 10, vinegar analysis |Table 4: Calculations |Trial 1 |Trial 2 | |Volume of NaOH Used (mL)(L) |3. 4(. 0034) |3. 2(. 0032) | |Molar Concentration of NaOH (mol/L) (given) |0. 089 |0. 89 | |Molar Mass of Acetic Acid (g/mol) |. 0182 |. 0171 | |Mass of Acetic Acid in Vinegar (g) |1. 048 g |1. 037 g | |Avg. Percent Mass of Acetic Acid in Vinegar (%) |1. 695% | | Table 4 shows the calculations derived from experiment 10, vinegar analysis. Calculations: 1. Molar Concentration of NaOH (mol/L) Given (. M Solution) 2. Mass of Acetic Acid in Vinegar (g): Moles of Acetic Acid (mol) x Molar Mass of Acetic Acid (g/mol): 3. 026 x 10 -4moles of acetic acid x 60. 05 g/mol= . 0182 g 2. 848 x 10 -4moles of acetic acid x 60. 05 g/mol= . 0171 g 3. Avg. Percent Mass of Acid in Vinegar (%): 1. 65%+1. 74%/2= 1. 695% Discussion: The experiment began by adding NaOH to the mixture of deionized water and KHP in the beaker. The H+ ion that is found in KHP, reacted to the OH- ions that are found in the NaOH solution, even as more of the Na OH continued to be added into the mixture.When there turned out to be an abundance of NaOH, there were no longer any H+ to be added to KHP. As a solution, the extra OH-ions were found in the NaOH solution was used to make the indicator activate and make the solution turn pink. It was imperative that the solution be mixed the correct way. If it was not mixed the correct way, the results from the experiment will be inaccurate. If the reading had proven to be inaccurate because of that mistake, the volume of the NaOH solution mixed with the KHP will eventually get neutralized to a point where the numbers in the results would be very off.Two trials were done in this experiment in order to ensure that that mistake never happened and the volume of NaOH was found. Once the solution had finally been able to neutralize, the moles of the KHP were found and ended up being equal to the moles of NaOH. This information allowed for the molarity to be found. The average molarity that was in NaOH ha d been found in experiment 9, it was . 089 M. Both experiments 9 and 10 seemed to have similar traits because both of them involved titration. The titration was used in order to find the number of moles that was found in the acetic acid of the vinegar solution that was used.The normal amount of acetic acid found in household vinegar is between 4-5%. The experiments helped determine that household vinegar was definitely not the vinegar that was being used since the acetic amount that was found was 1. 695%. Conclusion The hypothesis was proven in the first experiment because the base of NaOH did end up neutralizing KHP’s acids. 
The indicator turned the solution pink; therefore the hypothesis in the first experiment was not rejected. The experiment involving the molarity of NaOH was very close in numbers. The molarity that was given was . 1 M, and the molarity that was found in the experiment was . 89 M. The hypothesis for the second experiment was â€Å"If the average percent mass of acetic acid ranged between 4-5%, then the vinegar that was being used for the experiment was household vinegar. † However, since the average percent mass of acetic acid resulted as 1. 695%, which was lower than household vinegar; this caused the hypothesis to be rejected. Works Cited Beran, Jo A. Laboratory Manual for Principles of General Chemistry. Hoboken, NJ: Wiley, 2011. Print. Tro, Nivaldo J. Principles of Chemistry: A Molecular Approach. Upper Saddle River, NJ: Prentice Hall, 2010. Print.
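As a quick sanity check of the titration arithmetic reported above, the standard stoichiometry can be scripted. This sketch is not part of the original lab report: it assumes a one-to-one mole ratio at the endpoint, uses molar masses of 204.22 g/mol for KHP and 60.05 g/mol for acetic acid, and plugs in sample numbers of the same magnitude as Trial 1.

```python
# A quick numerical check of the titration arithmetic reported above.
KHP_MOLAR_MASS = 204.22      # g/mol, KHC8H4O4 (KHP)
ACETIC_MOLAR_MASS = 60.05    # g/mol, acetic acid

def naoh_molarity(khp_mass_g, naoh_volume_l):
    """Molarity of NaOH from a KHP standardization titration.
    At the endpoint, moles NaOH = moles KHP = mass / molar mass."""
    moles_khp = khp_mass_g / KHP_MOLAR_MASS
    return moles_khp / naoh_volume_l

def acetic_percent_mass(naoh_molarity_m, naoh_volume_l, vinegar_mass_g):
    """Percent by mass of acetic acid in a vinegar sample.
    Moles of acetic acid = moles of NaOH delivered at the endpoint."""
    moles_acid = naoh_molarity_m * naoh_volume_l
    acid_mass = moles_acid * ACETIC_MOLAR_MASS
    return 100 * acid_mass / vinegar_mass_g

# Numbers of the same magnitude as Trial 1 above
# (0.509 g KHP, 28.3 mL NaOH; 1.048 g vinegar, 3.4 mL NaOH):
molarity = naoh_molarity(0.509, 0.0283)                  # ~0.088 M
percent = acetic_percent_mass(molarity, 0.0034, 1.048)   # ~1.7 %
print(round(molarity, 3), round(percent, 2))
```

Run with the Trial 1 figures, this lands near the 0.089 M and roughly 1.7% values reported in the tables, which is consistent with the conclusion that the sample was far weaker than ordinary 4-5% household vinegar.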

Saturday, September 14, 2019

Early to Bed

Early to Bed
April 17th 2013

Section 1

It's not unusual to hear the floorboards creak, the toilet flush, and the sound of the first shoe drop to the floor from your neighbor at 1 a.m. in your apartment, and you may be one of those neighbors yourself. Nowadays many people stay up late, especially people who have variable sleep schedules, such as university students. University students usually change their sleep schedules due to studying, working for a living, or socializing (e.g. alcohol and caffeine consumption). Staying up late usually leads to insufficient sleep, and this situation is prevalent among university students. According to a survey by Leon C. Lack, Ph.D., in the journal article "Delayed Sleep and Sleep Loss in University Students," "A sample of 211 university first-year psychology students... accounted for about 50% of the total enrollment in the course... about 50% of the sample complained of insufficient sleep and estimated needing about half an hour more sleep on the average to feel rested." (Lack, 2010) Moreover, the author also noted the link between staying up late and insufficient sleep: "Delayed sleep pattern presumably arises from a delay in their endogenous biological rhythms that creates difficulty in falling asleep early enough to get sufficient sleep before necessary weekday morning awakening." (Lack, 2010) Both delayed sleep and insufficient sleep can cause serious health issues, and they also affect one's working productivity. Based on the journal article "Pathways to adolescent health sleep regulation and behavior" by Ronald E. Dahl, M.D., "There is mounting evidence that sleep deprivation has its greatest negative effects on the control of behavior, emotion, and attention... the most obvious direct health consequences of insufficient sleep are high-risk behaviors associated with substance abuse and automobile accidents." (Dahl, 2002) Delayed sleep may disrupt one's circadian rhythm and further lead to delayed sleep phase disorder. Insufficient sleep may cause emotional fluctuation, which further affects your social relationships, since being tired usually means being grumpy. Students usually think that they are more productive at night; however, the truth is the opposite. Humans aren't used to staying up late: in the optimal situation, based on an article from CNN Health, "we rise in the morning and after about 16 hours of wakefulness we are sleepy and we go to bed and sleep for eight hours" (Shives, 2010). Staying up simply means we use our brains intensively even when our brains are ready for a rest. During the weekdays, delayed sleep and insufficient sleep make us feel tired in the daytime, and it is difficult for students to concentrate in classes, which further affects their academic performance. The benefits of sleeping early are obvious. Going to bed early helps us maintain the order of our circadian rhythm and ensures the quality of our sleep at night. Based on Dahl's article, "Sleep appears to be particularly important during periods of brain maturation." (Dahl, 2002) Sleep is the process of restoring our brain; with enough of it we are more productive, concentrated, and confident in our work during the daytime. Sleeping early also means we can have more time in the morning. Changing and maintaining a sleep schedule is a continuous process.
It is impossible to accomplish all the changes overnight. In order to successfully switch a sleep schedule to the optimal situation, we should be aware of the health issues derived from delayed sleep, identify a target behavior through personal research, set achievable and incremental goals as time goes by, and finally reward our success.

Section 2

As a junior-year university student at business school, both my academic and personal life have been busy, and being productive is one of the major factors that lets me survive. I often stayed up late to get work done, since I thought sacrificing sleep created more time for work and then I could keep abreast of my schedule. However, things went contrary to my wishes. First of all, staying up shortened my sleep, which led to insufficient sleep time. Then I had to use coffee to fight fatigue and tiredness, but my productivity still stayed low during classes. In order to catch up on what I missed during the classes, I had to spend more time studying outside of class. After I finished all my homework, it was usually around 1 a.m., but the lingering effect of caffeine kept me awake at that time. My daily life was a vicious spiral, and I found that my body's reactions slowed down physically and mentally, my mood was under the weather, and it even affected the relationship with my girlfriend. Therefore, the main reason I've chosen to sleep early is to increase my productivity and get rid of fatigue and tiredness without caffeine. In order to optimize my sleep schedule, I organized a three-stage target schedule: in the first stage (3/30 to 4/15), I went to bed at 12:00 a.m. and woke up as usual; in the second stage (4/15 to 4/30), I went to bed at 12:00 a.m. and woke up half an hour earlier than usual; in the third stage (after 4/30), I go to bed at 11:00 p.m. and wake up one hour early. Half a month has passed, and even though I am only in the second stage, I do have some progress that benefits my daily life. Setting a fixed time to go to bed forces me to manage my time more effectively. Most importantly, sleeping early gives me more energy in the daytime, and now I can keep my brain working without caffeine even though I wake up half an hour earlier than before. My productivity is improving, and the biggest change is that I can keep myself on the same page with the professor in lectures, simply because I have enough energy to think more and interact mentally. Nevertheless, things won't change overnight, and I do encounter some difficulties during my behavior change. So far, the biggest challenge has been my subconscious habit of staying up. During weekdays, as long as my schedule gets crowded, I have the urge to delay my sleep time out of habit even when those tasks are not urgent; on weekends, parties are attractive to me and most of them last until late at night. To be honest, I have failed to meet my short-term goal three times so far. Reaching my ultimate goal is not easy, and I am implementing some strategies to hopefully keep myself on the right track. First of all, I believe separating my plan into three short-term stages makes it a continuous improvement that is easier to accomplish and encourages me to proceed; second, finding a change agent is important. My girlfriend is my change agent, and she has helped me act closely in line with the criteria I set. One advantage of choosing my girlfriend as the agent is that I have to listen to her orders because I do not want to piss her off.
Even though I have not reached my ultimate goal yet, some potential long-term benefits can already be observed. First and foremost, I will be more productive in my academic performance. Sleeping early provides my body with an optimal circadian rhythm, which gives me high-quality and sufficient sleep at night. Consequently, I will have abundant energy to handle my busy university life. Moreover, sufficient energy will enable me to balance my academic life and personal life more reasonably, and then I will have great passion to maintain my relationship with my girlfriend and my social network. Last but not least, sufficient sleep will give me a healthy life, which will be the utmost foundation for my body's health in my future life.

Section 3

Reviewing my journal entries for the past half month, in sum, I did follow my short-term stage targets on weekdays. Meeting the short-term target in each stage is easier on weekdays because my class schedule is relatively fixed. Nevertheless, meeting the target on weekends has been the difficult part. As I mentioned in the last section, attending weekend parties that last late knocked me off my planned track. Moreover, since I was used to staying up late for a long time, sometimes I still consider staying up a way to relax. As for the emotional side of the process, at the very beginning I even felt anxious when I went to bed without completing my tasks as usual, and this emotion hindered me from falling asleep. Fortunately, as I reorganized my task priorities to fit my early-to-bed plan, that anxious emotion has no longer been a problem. Below is a snapshot of my tracking chart. Cells with yellow filling indicate weekend days, and times in red font indicate failed fulfillments.

Works Cited
Lack, L. C. (2010). Delayed Sleep and Sleep Loss in University Students. Journal of American College Health, 105.
Dahl, R. E. (2002). Pathways to adolescent health sleep regulation and behavior. Journal of Adolescent Health, 10-11.
Shives, L. (2010, 11 30). Get Some Sleep: Are you a night owl? Here's why. Retrieved 4 17, 2013, from CNN Health: http://thechart.blogs.cnn.com/2010/11/30/get-some-sleep-night-owl-its-a-real-condition/

Evaluation Of Investment Alternatives Essay

Introduction - Capital budgeting

A critical role of a financial manager is the evaluation of capital projects. This is a very important task because the money involved in such activities is significant, and the benefit or loss derived from them will highly influence the financial performance of the whole organisation (Brockington R. B. 1996, p 102). Indeed, Nobel laureates Modigliani and Miller suggested in their theory of capital structure that the value of a company is not affected by its gearing, but that the primary factor influencing such value is the investment in wealth-creating projects (Pike R. et al. 1999, p 557 and 577).

1.1 Evaluation of plans if their risk equals that of the firm

1.1.1 Net Present Value Method

PLAN X (€'000)
Details | 0 | 1 | 2 | 3 | 4 | 5
Initial Investment | (2,700) | | | | |
Cash Flows | | 470 | 610 | 950 | 970 | 1,500
Net Cash Inflow/(Outflow) | (2,700) | 470 | 610 | 950 | 970 | 1,500
12% Discount Factor | 1.0000 | 0.89286 | 0.79719 | 0.71178 | 0.63552 | 0.56743
Present Value | (2,700) | 419.64 | 486.29 | 676.19 | 616.45 | 851.15
Net Present Value: €349,720

PLAN Y (€'000)
Details | 0 | 1 | 2 | 3 | 4 | 5
Initial Investment | (2,100) | | | | |
Cash Flows | | 380 | 700 | 800 | 600 | 1,200
Net Cash Inflow/(Outflow) | (2,100) | 380 | 700 | 800 | 600 | 1,200
12% Discount Factor | 1.0000 | 0.89286 | 0.79719 | 0.71178 | 0.63552 | 0.56743
Present Value | (2,100) | 339.29 | 558.03 | 569.42 | 381.31 | 680.92
Net Present Value: €428,970

Source: Drury C. 1996, p 389.

1.1.2 Internal Rate of Return Method

PLAN X (€)
Year | Net Cash Inflow/(Outflow) | Discount Factor 16% | Discount Factor 17% | Present Value 16% | Present Value 17%
0 | (2,700,000) | 1.0000 | 1.0000 | (2,700,000) | (2,700,000)
1 | 470,000 | 0.86207 | 0.85470 | 405,172.90 | 401,709.00
2 | 610,000 | 0.74316 | 0.73051 | 453,327.60 | 445,611.10
3 | 950,000 | 0.64066 | 0.62437 | 608,627.00 | 593,151.50
4 | 970,000 | 0.55229 | 0.53365 | 535,721.30 | 517,640.50
5 | 1,500,000 | 0.47611 | 0.45611 | 714,165.00 | 684,165.00
Net Present Value | | | | 17,014 | (57,723)

PLAN Y (€)
Year | Net Cash Inflow/(Outflow) | Discount Factor 18% | Discount Factor 19% | Present Value 18% | Present Value 19%
0 | (2,100,000) | 1.0000 | 1.0000 | (2,100,000) | (2,100,000)
1 | 380,000 | 0.84746 | 0.84034 | 322,034.80 | 319,329.20
2 | 700,000 | 0.71818 | 0.70616 | 502,726.00 | 494,312.00
3 | 800,000 | 0.60863 | 0.59342 | 486,904.00 | 474,736.00
4 | 600,000 | 0.51579 | 0.49867 | 309,474.00 | 299,202.00
5 | 1,200,000 | 0.43711 | 0.41905 | 524,532.00 | 502,860.00
Net Present Value | | | | 45,670.80 | (9,560.80)

Source: Horngren T. C. et al. 1997, p 785-787.

1.1.3 Evaluation of projects

Plan Y is more financially feasible under both methods. The net present value of Plan Y is €79,250 [€428,970 - €349,720] higher than that of Plan X. The internal rate of return of Plan Y is also 2.61% higher than that of the other plan, indicating a higher margin of safety in case the expected cash flows are not achieved (Randall H. 1996, p 446).
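The NPV and IRR figures above can be reproduced with a short script. This is an illustrative sketch, not part of the original essay: it assumes end-of-year cash flows (in €'000, taken from the tables) and finds the IRR by simple bisection.

```python
# A quick check of the NPV and IRR figures above (amounts in €'000).
plan_x = [-2700, 470, 610, 950, 970, 1500]
plan_y = [-2100, 380, 700, 800, 600, 1200]

def npv(rate, cash_flows):
    """Net present value: discount each year's flow back to year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=1.0, tol=1e-6):
    """Internal rate of return by bisection; assumes NPV changes sign on [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(npv(0.12, plan_x), 2))   # ~349.72 -> €349,720
print(round(npv(0.12, plan_y), 2))   # ~428.97 -> €428,970
print(round(irr(plan_x) * 100, 2))   # a little above 16%, matching table signs
print(round(irr(plan_y) * 100, 2))   # a little above 18%, matching table signs
```

The IRR output sits between the two trial rates in each IRR table above (positive NPV at the lower rate, negative at the higher), which is exactly what the interpolation in the essay relies on.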
1.2 Examination of plans at different risk profiles

1.2.1 Net Present Value Method

PLAN X (€'000)
Details | 0 | 1 | 2 | 3 | 4 | 5
Initial Investment | (2,700) | | | | |
Cash Flows | | 470 | 610 | 950 | 970 | 1,500
Net Cash Inflow/(Outflow) | (2,700) | 470 | 610 | 950 | 970 | 1,500
13% Discount Factor | 1.0000 | 0.88496 | 0.78315 | 0.69305 | 0.61332 | 0.54276
Present Value | (2,700) | 415.931 | 477.722 | 658.398 | 594.920 | 814.140
Net Present Value: €261,111

PLAN Y (€'000)
Details | 0 | 1 | 2 | 3 | 4 | 5
Initial Investment | (2,100) | | | | |
Cash Flows | | 380 | 700 | 800 | 600 | 1,200
Net Cash Inflow/(Outflow) | (2,100) | 380 | 700 | 800 | 600 | 1,200
15% Discount Factor | 1.0000 | 0.86957 | 0.75614 | 0.65752 | 0.57175 | 0.49718
Present Value | (2,100) | 330.437 | 529.298 | 526.016 | 343.050 | 596.616
Net Present Value: €225,417

Source: Hirschey M. et al. 1995, p 799.

1.2.2 Comparison of decisions at different risk rates

When the discount rate of each project is considered instead of the overall rate of the company, the financial viability of Plan Y diminishes, because this plan is the riskier project and hence a higher discount rate is chosen. The process of discounting arises from the time-value of money principle, and the higher the discount rate, the lower the present value of the cash flows generated by the project (Pike R. et al. 1999, p 66 & 67). In this case, Plan Y is no longer the optimal project, because Plan X's net present value exceeds that of Plan Y by €35,694 (€261,111 - €225,417).

1.3 Analysis of real option data for plans

1.3.1 Net Present Value Method

PLAN X (€'000)
Details | 0 | 1 | 2 | 3
Initial Investment | (2,700) | | |
Cash Flows | | 470 | 610 | 950
Net Cash Inflow/(Outflow) | (2,700) | 470 | 610 | 950
13% Discount Factor | 1.0000 | 0.88496 | 0.78315 | 0.69305
Present Value | (2,700) | 415.931 | 477.722 | 658.398
Net Present Value: -€1,147,949 + (€100,000 x 25%) = -€1,122,949

PLAN Y (€'000)
Details | 0 | 1 | 2 | 3 | 4 | 5
Initial Investment | (2,100) | | | | |
Cash Flows | | 380 | 700 | 800 | 600 | 1,200
Net Cash Inflow/(Outflow) | (2,100) | 380 | 700 | 800 | 600 | 1,200
15% Discount Factor | 1.0000 | 0.86957 | 0.75614 | 0.65752 | 0.57175 | 0.49718
Present Value | (2,100) | 330.437 | 529.298 | 526.016 | 343.050 | 596.616
Net Present Value: €225,417 + (€500,000 x 20%) = €325,417

Source: Lucey T. 2003, p 416.

1.3.2 Comparison of real option plans with original plans

If we consider and apply the real options available, Plan Y becomes the best project, contrary to the conclusion noted in sub-section 1.2.2. It is also worth noting that applying the real option to Plan X is not financially viable, because we still end up with a negative net present value. If we compare the net present value of Plan Y under the real options scheme with the net present value of Plan X, we can deduce that the Plan Y real options project is more feasible than the other plan, since its net present value is €64,306 higher [€325,417 - €261,111].

1.4 Effect of Capital Rationing

Capital rationing is an absolute restriction on the amount of finance available for a project, irrespective of cost. This should not be confused with scarcity of economic resources. Capital rationing is sometimes applied to projects even though the organisation possesses or can attain the available finance.
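The risk-adjusted and real-option figures in sections 1.2 and 1.3 follow the same mechanics, with the option value modelled as a payoff multiplied by its probability and added to the base NPV. The sketch below is illustrative only; the 13%/15% rates, the three-year cut-off for Plan X, and the option payoffs are taken from the tables above.

```python
# Sketch of the risk-adjusted and real-option NPVs above (amounts in €'000).
def npv(rate, cash_flows):
    """Net present value with end-of-year discounting (same helper as before)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

plan_x = [-2700, 470, 610, 950, 970, 1500]
plan_y = [-2100, 380, 700, 800, 600, 1200]

# Project-specific discount rates: 13% for Plan X, 15% for Plan Y.
npv_x_13 = npv(0.13, plan_x)    # ~261.1 -> €261,111
npv_y_15 = npv(0.15, plan_y)    # ~225.4 -> €225,417

# Real options as presented in the tables: Plan X is cut off after year 3 and
# gains a 25% chance of a €100,000 payoff; Plan Y keeps its full life and
# gains a 20% chance of a €500,000 payoff.
npv_x_option = npv(0.13, plan_x[:4]) + 100 * 0.25   # ~-1,122.9 -> -€1,122,949
npv_y_option = npv_y_15 + 500 * 0.20                # ~325.4 -> €325,417

print(round(npv_x_13, 1), round(npv_y_15, 1))
print(round(npv_x_option, 1), round(npv_y_option, 1))
```

This makes the reversal in section 1.3.2 easy to see: the probability-weighted payoff is nowhere near enough to rescue the truncated Plan X, while it lifts Plan Y above Plan X's risk-adjusted NPV.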
For example, capital rationing may be imposed on the amount of debt an organisation can take on in order to limit the gearing of the firm (Brockington R. B. 1996, p 151). When conditions of capital rationing are imposed, there is the possibility that the optimal project is not selected. Therefore, yes, capital rationing may affect the selection of Plan X or Plan Y. For example, the firm might adopt a capital rationing rule stating that the initial investment cannot exceed €2,000,000 because of its effect on gearing; under such conditions, neither plan would be selected by the firm. Another example of capital rationing that would affect the project choice is if management decided to restrict expansion of the factory because they fear that control over employees may be lost, negatively affecting their relationship with and control of staff. In this case Plan X would be excluded, even though it is the optimal project as noted in sub-section 1.2.2, and the available choice would be Plan Y.

1.5 Financial instruments available for private companies

The alternative financial instruments that the firm can use, apart from shares, are:
- Corporate bonds and debentures;
- An overdraft facility from the bank;
- A bank loan;
- Venture capital; and
- Leasing.

1.5.1 Advantages and disadvantages of corporate bonds/debentures

The advantages related to corporate bonds are (E*Trade Financial website):
- Corporate bonds are usually lent for a longer period of time (Veale R. S. 2000, p 155).
- Interest payments on bonds are tax deductible.
- Interest rates on corporate bonds are frequently lower than those of banks.
- The percentage ownership of shareholders is not weakened by the issue of corporate bonds or debentures (Veale R. S. 2000, p 156).

The disadvantages encountered with corporate bonds are:
- The obligation of interest on the firm's cash flow, which increases the risk of bankruptcy during periods of financial problems.
- Upon maturity, the company has to pay back the full amount of the bond.

1.5.2 Advantages and disadvantages of a bank overdraft facility

A bank overdraft facility can provide the following benefits (tutor2u website):
- It allows flexibility of finance. The company can increase the overdraft facility within acceptable limits.
- Interest is only charged on the amount used and is tax deductible.
- The percentage ownership of shareholders is not diluted by taking an overdraft facility.

The disadvantages imposed by an overdraft facility are (tutor2u website):
- Rates of interest are higher than those of bank loans.
- Money due is repayable on demand.
- The facility limit can be changed by the bank at its discretion.
- It is usually used only for short-term borrowing.

1.5.3 Advantages and disadvantages of bank loans

These are the advantages derived from bank loans (tutor2u website):
- The loan is repaid in regular payments, thus allowing better cash management.
- Lower interest is charged than on a bank overdraft.
- The percentage ownership of shareholders is not diluted by taking a bank loan.
- Large amounts can be borrowed as long-term finance.

The limitations of this type of finance are (tutor2u website):
- Interest has to be paid by a specified date.
- It is less flexible than an overdraft facility.

1.5.4 Advantages and disadvantages of venture capital

The advantages of venture capital are (Business Link website):
- Proficient management expertise is obtained if the venture capitalists get involved in the firm's operations.
- Large sums of finance can be obtained from venture capital.
The disadvantages incurred by using such a medium of finance are (Business Link website):
- Detailed financial reporting, such as business plans and financial estimates, is required.
- Legal and accountancy fees are incurred in the negotiation process.
- The firm requires a proven track record to obtain such finance.
- High returns are frequently expected by venture capitalists.

1.5.5 Advantages and disadvantages of leasing

The advantages obtained from leasing are (Enterprise Financial Solutions website):
- It provides 100% financing of the asset.
- There is no need for credit lines with banks and other depository institutions, which are hard to obtain.
- Minimal paperwork is required to acquire a lease.
- It acts as a hedge against inflation.
- Flexible payments are allowed in leasing.
- Interest on leasing is not subject to increases like bank overdrafts.

The disadvantages encountered through lease finance are (Auto Leasing Software Lease Tips website):
- The organisation is committed to the entire validity period of the lease.
- High amounts of insurance coverage are frequently demanded in leases.
- The firm has no ownership of the asset it is using in the project's operations.

References:

Auto Leasing Software Lease Tips. Disadvantages of leasing (online). Available from: http://www.autoleasingsoftware.com/LeaseTips/Disadvantages.htm (Accessed 13th March 2007).
Brockington R. B. (1996). Financial Management. Sixth Edition. London: DB Publications.
Business Link. Equity Finance (online). Available from: http://www.businesslink.gov.uk/bdotg/action/detail?type=RESOURCES&itemId=1075081582 (Accessed 13th March 2007).
Drury C. (1996). Management and Cost Accounting. Fourth Edition. London: Thomson Business Press.
Enterprise Financial Solutions. Advantages of leasing (online). Available from: http://www.efsolutionsinc.com/Advantages_of_leasing.htm (Accessed 13th March 2007).
E*Trade Financial. Corporate Bonds Overview (online). Available from: https://us.etrade.com/e/t/kc/KnowArticle?topicId=13200&groupId=8722&articleId=8723 (Accessed 13th March 2007).
Hirschey M.; Pappas L. J. (1995). Fundamentals of Managerial Economics. Fifth Edition. Orlando: The Dryden Press.
Horngren T. C.; Foster G.; Srikant M. D. (1997). Cost Accounting - A Managerial Emphasis. Ninth Edition. London: Prentice-Hall International (UK) Limited.
Lucey T. (2003). Management Accounting. Fifth Edition. Great Britain: Biddles Ltd.
Pike R.; Neale B. (1999). Corporate Finance and Investment. Third Edition. London: Prentice-Hall International (UK) Limited.
Randall H. (1999). A Level Accounting. Third Edition. Great Britain: Ashford Colour Press Ltd.
Tutor2u. Bank Loans and Overdrafts (online). Available from: http://www.tutor2u.net/business/gcse/finance_bank_loans_overdrafts.htm (Accessed 13th March 2007).
Veale R. S. (2000). Stocks, Bonds, Options and Futures. Second Edition. United States of America: New York Institute of Finance.