Thursday, October 31, 2019

Information System Briefing Essay Example

It is crucial to involve stakeholders in selecting and acquiring the planned information system to ensure participative decision-making. We intend to engage in intensive planning before we embark on the purchasing process. Planning will involve searching for information about the anticipated information system, assigning purchase tasks to the implementation team, setting task priorities, and putting in place reminders of the project timeline. Planning also involves an analysis of the information and workflows that are critical to developing the system architectures needed to meet the needs of the business (Evelyn, 2010). Our organization wants to purchase the system best suited to enhancing the quality of service delivery. The selection process will draw on the results of properly conducted research. All nurses and caregivers will be required to contribute to the selection process; our organization operates on democratic foundations to prevent negative reactions from users. After selection, the system will be purchased from the vendor who wins the tender. The installation of the new system will follow a parallel conversion strategy. According to Evelyn (2010), there is usually an overlap period during which both the old and the new information systems run concurrently. Parallel implementation means operating the new and old systems side by side until the new system proves reliable (Hovenga, 2010); only then is the old system discontinued. Evaluating the information system will be an ongoing process of testing its performance and determining whether it meets predetermined standards. The information collected will provide a foundation for refining performance requirements and creating performance tests.

Tuesday, October 29, 2019

The Market of IKEA in China Essay Example

4.1.1 Product rule
4.1.2 Propaganda approach
4.2 IKEA stores have a different location principle in China
4.2.1 City selection
4.2.2 Position selection
4.3 IKEA tries to lower the price to fit Chinese consumption levels
4.3.1 Build more processing factories
4.4 The difficulties and challenges for IKEA in China
4.4.1 Domestic furniture firms are the main competitors of IKEA
4.4.2 IKEA's products are often imitated by others in China
4.4.3 IKEA's joint ventures with local firms at the early stage of operating in China
4.4.4 IKEA developed slowly in China
4.5 The Chinese furniture market has huge potential
5 Conclusion, Limitations and Recommendations
5.1 Conclusion
5.2 Limitations
5.2.1 The credibility of Chinese media sources
5.2.2 The short history of statistics collection in China
5.2.3 On-demand information not available
5.2.4 Regional restrictions
5.3 Recommendations
5.3.1 Appropriate investigations of the market of IKEA in other countries
5.3.2 Choosing a similar brand as a case study to add to this paper
5.3.3 Adding research on IKEA's development planning in China
References
Bibliography
Appendix I: Reflection
Appendix II: Dissertation Log
Appendix III: Dissertation Proposal
Appendix IV: Ethics Form

1 Introduction

With the rapid growth and development of China, especially after 1987, the economy and the quality of life of Chinese people have improved dramatically. Once people have enough money for food and clothing, they shift their interest to other areas, such as furnishing, to make their lives more colourful. Owing to this increasing demand for furniture, more and more furniture companies are operating in China. Nowadays, there are a large number of furniture companies... IKEA is a Swedish home furnishing retailer. Since 1973, when IKEA opened its first store outside Scandinavia, in Switzerland, it has pursued business activities in foreign markets. Nowadays, IKEA has more than 292 stores in 37 countries, although it had no operations in Asia until about ten years ago. At present, IKEA is trying to expand into many other overseas markets. International, cross-cultural business has grown steadily since the onset of globalization. The majority of countries are trying to attract as much foreign direct investment as possible, and international companies are therefore finding immense opportunities in overseas markets. As an internationalisation strategy, IKEA initially selected neighbouring countries to move into, because these countries share a similar language and traditional culture with Sweden. Following that, IKEA moved into non-neighbouring countries to diversify its market share. It should be noted that culture plays an important role in the internationalization of business: expanding into countries with a similar culture is easier than expanding into countries with a different culture. IKEA aims to offer a better everyday life for many people. The company's mission is to offer a wide range of well-designed, functional home furnishing products at affordable prices (Sandelands, 2009). Earlier, IKEA products were highly expensive, and hence only the elite groups in society were major customers of IKEA.

Sunday, October 27, 2019

Evaluation of Individual Stock and Sector Level

Is there any method of asset allocation within a stock portfolio that can repeatedly, and over time, outperform a passive index (a buy-and-hold strategy)? The objective of this study is to compare strategies that have been used over recent decades by academics and professionals alike, and to expand on that work by creating a real-time portfolio at the end of year t and observing the portfolio's behaviour during the next year (t+1). This portfolio, unlike those created in previous studies, is not limited to the study of individual stocks, but instead gives importance to sector allocation. In addition, this study implements a long-short strategy in those assets: with the same overall exposure to the market, will a long-short strategy that depends on financial metrics exhibit a better risk-adjusted return than a 100% long strategy? In other words, are financial metrics capable not only of detecting undervalued stocks, but also of detecting overpriced ones? Sector allocation is an especially important factor: previous studies have failed to consider sector allocation before searching for the best-ranked assets within each sector, jumping instead directly to stock selection. One of the most common stock-picking strategies in the asset management industry today relies on first choosing sectors that, from a macro perspective, are expected to outperform the market; from this stance, analysts proceed to evaluate specific stocks to choose potential winners. Despite this being the industry standard, only a scarce number of studies use common ratios between sectors to analyse and devise allocation strategies that are first sector-based and then based on individual stocks. The objective of this project, therefore, is to focus on a set of financial metrics, both at the individual stock level and at the sector level, to examine whether there is a positive relationship between these ratios and alpha creation. To achieve this, a portfolio will be constructed and rebalanced yearly according to previous end-of-year data. Several traditionally appraised financial measures, such as the P/E ratio, the free-cash-flow-to-enterprise-value ratio and the book-to-market ratio, will be employed, as will certain profitability ratios drawn from the income statement, such as gross profit, operating profit and EBITDA. The reasoning behind using measures higher up the income statement is that, in comparison to net income, they are less affected by an individual company's accounting choices. In fact, revenue and the other profit measures described are more consistent year to year than net income, which provides the rationale that these measures are better able to predict future cash flows and, consequently, next year's performance. Forward-looking measures, such as analysts' consensus recommendations and forward EPS, will also be utilised and tested. Starting from the hypothesis that analysts conduct an exhaustive analysis of the financial data at year end t to predict performance during year t+1, the accuracy of their forecasts will be tested against the financial measures and valuation metrics existing at year end t. Data will be extracted from a set of databases comprising Compustat, CRSP and I/B/E/S.
Fundamental data will be extracted from end-of-fiscal-year filings, allowing a time lag of one quarter (3 months) for data release before a portfolio is rebalanced. Hence, with a fiscal year ending in December of year t, a lag in the release of data will always exist, and portfolios will be rebalanced at the end of the first quarter of year t+1. Monthly returns for every stock will then be compounded over the following 12 months. Each year's universe of stocks will be ranked by the different valuation metrics to construct a portfolio at year end t, in order to assess the portfolio's performance during t+1. Three different sets of portfolios will be constructed for each financial metric each year and, within each set, two strategies will be implemented. For the stocks-only portfolio, only those stocks ranking in the upper quintile (top 20%) will be used each year. For the sector-only portfolio, a market-capitalisation-weighted average of each sector will be calculated, and the portfolio will be formed from the top 20% of sectors in any given year. For the sector-and-stocks portfolio, both criteria will be applied: the portfolio is formed from the top-quintile stocks within the top-quintile sectors each year. As a whole, this constitutes a long strategy, buying those stocks on a value-weighted basis each year. In the long-short strategy, the bottom quintile of each respective category will be shorted, and the proceeds used to buy an extra 30% of the top quintile of stocks. Using a long-short strategy will help examine the feasibility of using these ratios to recognise overvalued stocks as well as undervalued ones, and of using this information to construct a more profitable portfolio. A 130/30 long-short strategy is used, where a 150/50 strategy or other proportions could also have been tested; the 130/30 strategy is chosen following the creation and later popularisation of 130/30 mutual funds and investment vehicles. This choice stems from an initial study suggesting that 130/30 was the optimal proportion of long-short positions in a portfolio, even though no empirical evidence has been found that a 130/30 strategy maximises alpha. Given its popularity and position as an industry standard, the analysis proceeds with this strategy. Ultimately, performance attribution and portfolio statistics will be calculated, such as average return, total payoff, standard deviation, Sharpe ratio and alpha according to the Fama-French three-factor model, correcting for the small-minus-big and high-minus-low book-to-market factors (Fama and French, 1992). This will help provide a clear and concise indication of which ratios perform best under each strategy and at each level (sector and stock).
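To make the portfolio formation concrete, the following is a minimal Python sketch of the quintile ranking, value weighting and 130/30 overlay described above. It is an illustration only: the DataFrame layout and the column names (`mktcap`, the metric column) are assumptions, not the study's actual code.

```python
import pandas as pd

def form_portfolio(df: pd.DataFrame, metric: str, long_short: bool = False) -> pd.Series:
    """Return portfolio weights for one year's universe, ranked on `metric`.

    df holds one row per stock, with the ranking ratio in column `metric`
    and market capitalisation at year end t in column 'mktcap' (both
    hypothetical names).
    """
    ranks = df[metric].rank(pct=True)             # percentile rank of the ratio
    top = df[ranks >= 0.8]                        # upper quintile (top 20%)
    long_w = top['mktcap'] / top['mktcap'].sum()  # value-weighted long book

    if not long_short:
        return long_w                             # plain long-only strategy

    bottom = df[ranks <= 0.2]                     # lower quintile, to be shorted
    short_w = bottom['mktcap'] / bottom['mktcap'].sum()
    # 130/30 overlay: short 30% of capital, reinvest proceeds in the long book;
    # net exposure stays at 1.0 while gross exposure rises to 1.6
    return pd.concat([1.3 * long_w, -0.3 * short_w])
```

Yearly rebalancing then amounts to calling this function once per year on the end-of-year cross-section for each metric under study.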
Literature Review

Re-emphasising the importance of sector-level asset allocation strategies, particularly at a time when performance attribution analysis in the financial industry stresses the contribution of relative sector weightings in portfolios, it is surprising that existing studies underscore the importance of certain ratios or fundamental data for stocks while offering no method for identifying undervalued sectors. Previous studies by Bunn and Shiller (2014) construct a 140-year regression series based on the relationship between the earnings of different sectors and their yields, creating a CAPE (Cyclically Adjusted Price Earnings) index that identifies sectors with upside potential. Their research indicates that market sectors show price mismatches that can be exploited; according to them, the CAPE index is capable of outperforming the market by an average of 4%. The objective of this project is therefore to expand on their results by examining a number of other ratios and financial fundamentals, particularly those related to profitability, and to investigate whether these, at both the individual and the sector level, are capable of forming a portfolio that outperforms the broader index and a buy-and-hold investment strategy. Gray and Vogel (2012) try to identify not only the ratios able to predict higher-performing stocks, but also those in the lower ranges; this implies detecting not only what the financial investment world knows as value stocks, but also overvalued growth stocks. According to their research, some measures are more efficient than others at providing insight into which stocks are overpriced; Gray and Vogel (2012) conclude that EBITDA/EV and GP/EV are the metrics best able to identify overvalued stocks. The results in this dissertation agree that the GP/EV ratio is useful for identifying overvalued stocks and is hence a good metric for building long-short strategies, but they also single out free cash flow/EV as a favourite, on a risk-adjusted basis, for implementing a long-short strategy at the stock level. The results of the present study show that stocks exhibiting a low FCF/EV experience low returns, demonstrating an ability to identify overvalued stocks. Such a contradiction might be explained by differences in the universe of stocks used or, more specifically, by the use of a lag for data release, which corrects the assumption that results are available to the public at the end of fiscal year t. This lag is introduced by Hughen and Strauss (2015) in their comparable study of profitability ratios in portfolio allocation. The analysis in this project goes beyond the implications of the Gray and Vogel (2012) study and develops a portfolio strategy that buys stocks exhibiting strong ratios, complemented by a 130/30 strategy that short-sells stocks exhibiting poor ratios and proportionally over-buys those that exhibit healthy ones. As Miller (2001) shows, overvaluation of stocks is far more common, and of greater absolute magnitude, than undervaluation, which supports the rationale for this work. However, care should be taken when dealing with long-short strategies: as Michaud (1993) suggests, costs stemming from short sales in a portfolio can prove quite significant. Jacobs and Levy (1993), on the other hand, argue that these costs are not much higher than for a long-only portfolio, and are well below those charged by active management. Professionals and practitioners alike have historically depended on several fundamental and financial measures to assist in the portfolio selection process. Perhaps the most famous is the price-to-earnings ratio (P/E), along with the ratio of earnings before interest, taxes, depreciation and amortisation (EBITDA) to total enterprise value.
Fama and French (1992) argue that the book-to-market ratio perhaps most accurately explains the cross-section of stock returns, and they later include it in their three-factor model. In our approach, we include these traditional metrics while also relying on profitability measures, such as gross profit/EV, introduced by Novy-Marx (2010), and operating profit divided by market value, as presented in Fama and French's (2015) five-factor model. Ball et al. (2015) confirm the suggestion in Novy-Marx's (2010) paper, namely that there is a very strong cross-sectional relation between gross profit and future returns, regardless of the financial leverage or structure of the firm, by constructing portfolios of highly profitable firms as ranked by gross profit/enterprise value. Novy-Marx (2010) concluded that because gross profit is the measure of profit least affected by accounting choices in the income statement, it yields a clear and normalised comparison between different companies. However, Ball et al. (2015) argue that gross profit is not significantly superior to net income (earnings) over an extended time period; after analysing other financial measures, they conclude that operating profit, as a percentage of market value, does offer a significantly higher alpha. This project therefore continues with the aforementioned financial metrics and focuses on sector and stock selection to create an annually rebalanced real-time portfolio. Hughen and Strauss (2015) use different financial measures to construct portfolios at the sector, stock, and combined stock-and-sector levels. The present study complements and verifies the conclusions of Hughen and Strauss (2015) regarding the superiority of profitability measures over traditional valuation measures such as P/E and book-to-market at all three levels, and extends their research by looking at forward-looking measures and by taking a value-weighted, rather than equal-weighted, approach to sector allocation. Assuming sectors to be equally weighted across the portfolio, rather than a function of the market value of their components, contradicts the notion of constructing a value-weighted portfolio: their portfolios at the stock level are value-weighted, whilst at the sector level they weight each sector in the top quintile equally. This is a counterintuitive approach, and this paper tackles that limitation by weighting sectors according to their components' market capitalisation, making periodic rebalances within the year unnecessary and increasing operational efficiency in a real-life practical situation. It should be mentioned that the universe of stocks used in this study pertains to the S&P 500, which by definition is a market-weighted index. The project finds some discrepancies with respect to Hughen and Strauss's paper, in particular surrounding the performance of the free cash flow ratio. A possible explanation is that this study states free cash flow as a percentage of total enterprise value, whilst Hughen and Strauss (2015) compute it as a percentage of market value. The approach taken here results in a much higher risk-adjusted return for the ratio, as measured by the Sharpe ratio, both for long strategies and for identifying overvalued stocks. The value-weighted sector aggregation is sketched below.
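The following Python sketch shows one way to build the sector-level ratios described above, as a market-cap weighted average of constituent ratios. The column names ('sector', 'mktcap') are hypothetical placeholders, assumed for illustration.

```python
import pandas as pd

def sector_ratios(df: pd.DataFrame, metric: str) -> pd.Series:
    """Aggregate a stock-level ratio to sector level as the market-cap
    weighted average of each sector's constituents (columns 'sector' and
    'mktcap' are hypothetical names)."""
    weighted_sum = (df[metric] * df['mktcap']).groupby(df['sector']).sum()
    sector_cap = df.groupby('sector')['mktcap'].sum()
    # higher ratio first, ready for quintile selection of sectors
    return (weighted_sum / sector_cap).sort_values(ascending=False)
```

Because the weights are the constituents' own market capitalisations, the resulting sector ranking stays consistent with a value-weighted portfolio and requires no intra-year rebalancing.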
In their research on different financial ratios, Loughran and Wellman (2011) find that EBITDA over enterprise value offers superior performance to a predefined buy-and-hold benchmark. Their analysis, which covers data from 1963 to 2009, holds that EBITDA/EV possesses a highly significant regression coefficient with respect to future performance. Gray and Vogel (2012) confirm this hypothesis in their research on different financial metrics, analysing a 30-year period starting in 1980. This paper confirms that, at the stocks-only level, EBITDA along with gross profit, both measured as a percentage of enterprise value, offer the highest risk-adjusted returns. For the analysis at the combined sector and stock level, however, EBITDA fails to show the same accuracy as in the stock-only analysis; the present study therefore builds on the findings of previous studies by providing a more thorough examination at the sector level. Gray and Vogel (2012) extended their research further by considering periods of economic crisis, in order to identify which financial ratio is most appropriate during high-volatility economic downturns. However, they were unable to conclude which ratio can identify winners or losers during periods of financial distress, because none behaves in a systematic manner during selected periods of extreme economic contraction. In their study of different economic coefficients and measures, Welch and Goyal (2007) conclude that the relationship between sector-level performance and macroeconomic industrial data is unstable and, at most, random. With that in mind, the focus of this paper is instead on building sector data as a market-weighted average of the individual, company-level ratios and forecasts. Each constituent's fundamental metrics at year end will be used, through a ranking system, to position the allocation of each asset for the next year. This means the analysis is based on each stock's financial information at year end t, later aggregated into sector-level ratios and metrics, and not on macroeconomic or sector-level data which, according to Welch and Goyal (2007), provide no significant cross-relation with future performance.

Theory Development

Although the focus of previous literature is on attributing portfolio performance to the different ratios and metrics used, the objective of this paper is to examine whether these same metrics (mainly traditional measures, forward-looking estimates and profitability ratios) are able to exploit sector- and stock-level mispricing and generate real-time winner portfolios. Given the availability of forward estimates in the I/B/E/S database, the period from 1990 to 2016 is examined in this paper. The choice of time period is not random; rather, it is chosen from the start to ensure consistency of data across the analysis and throughout all the variables used. On the limitations of an extended data period, Gray and Vogel's (2012) work is an exemplary illustration: they use a 30-year period, starting in 1980, to evaluate which financial metric can predict future performance.
They complement their analysis of fundamental metrics by looking at analysts' estimates and consensus forecasts, recognising the lack of certain information in the early years of their timeframe. As in the work of Graham and Dodd (1934), we are interested in how normalising the different ratios and fundamentals changes our results. According to their studies, normalising, or averaging, these financial metrics over a certain period improves predictions compared with a single annual estimate; by their analysis, the normalisation window should be 7 to 10 years. Anderson and Brooks (2006) more recently confirmed this in a study of the P/E metric, which we also use in our analysis, but inverted (Earnings/Market Value). According to their study, based on the U.K. market, using an 8-year average of this ratio instead of the previous year's figure results in a 6% improvement in returns, since it filters out the noise in earnings. Following these analyses, our study will also cover ratios normalised over a series of years, concentrating on the S&P 500 universe of stocks, to confirm whether this hypothesis holds in our analysis.

Data

V.I Evaluation Metrics

This paper focuses on three categories of data inputs. The accounting and financial research world offers an abundant choice of variables and measures to assess a firm's valuation, so an initial differentiation between these variables should be made in order to establish the model.

Traditional Metrics

To start with, we look at the long-standing traditional metrics that have long been appraised by professionals in the financial industry: the inverse of the P/E ratio, given as Earnings over the Market Value of the firm; Book-to-Market Value; and Free Cash Flow to Enterprise Value. These ratios, introduced decades back at the origins of value investing by Graham and Dodd (1934), show mixed results in the existing literature. Including these long-favoured measures in this research will prove useful when comparing them with the other measures.

Earnings/Market Value: Earnings will be computed following Fama and French's (2001) approach: Earnings = Earnings Before Extraordinary Items − Preferred Dividends + Income Statement Deferred Taxes.

Book Value/Market Value: Book Value will again be calculated as Fama and French (2001) propose: Book Value = Stockholders' Equity − Preferred Stock.

Free Cash Flow/Enterprise Value: Analogous to Novy-Marx's (2010) work, we compute free cash flow as FCF = Net Income + Depreciation and Amortisation − Change in Working Capital − Capital Expenditures. Enterprise Value also needs to be calculated; following Loughran and Wellman (2011), we compute it as EV = Market Value + Short-term Debt + Long-term Debt + Preferred Stock Value − Cash and Short-term Investments. The enterprise value variable is used again in multiple valuation measures.

Profitability Metrics

Profitability measures as reported in the income statement will also be used as valuation methods. The focus will be Gross Profit, EBITDA and Operating Profit.
EBITDA and Gross Profit will be computed as a percentage of total enterprise value, as suggested by the work of Gray and Vogel (2012), whilst Operating Profit will be taken as a percentage of Market Value. We then expand on this and compute an average of these three profitability measures, in order to analyse whether a composite metric can detect the cross-relation between fundamentals and future returns. The reasoning behind using an average of the three measures stems from the work of Hughen and Strauss (2015), who find that the composite measure is less sensitive to changes in firm structure across and within sectors, as well as providing more information than any single variable. This implies that the average measure is less affected by differences in financial leverage across sectors, which results in a more standardised comparison between firms in different sectors.

Gross Profit/Enterprise Value: Once again following Novy-Marx (2010), we compute every year's gross profit as Gross Profit = Revenue − Cost of Goods Sold.

Operating Profit/Market Value: Operating Profit, as defined in the income statement, will be used for this metric.

EBITDA/Enterprise Value: EBITDA (Earnings Before Interest, Taxes, Depreciation and Amortisation) is calculated as the simple sum of operating and non-operating income: EBITDA = Operating Income before Depreciation + Non-Operating Income.

Profitability Average: An equally weighted average of the three profitability ratios.

The reason for selecting profitability measures higher up the income statement, rather than focusing solely on the inverse P/E ratio (Earnings/MV) or expectations of forward earnings, is that the higher up the income statement we go, the more consistent the data prove to be year on year: figures are more normalised and suffer fewer variations, which could explain why they make better predictors, filtering out excessive noise. According to Dichev et al. (2013), profitability metrics are more persistent than earnings and forecast future performance more accurately than net income. Earnings data are affected by accounting choices, whereas gross profit and operating income suffer fewer such distortions.

Forward Estimates

Analysing past fundamental data will not be the only proxy used to rebalance the portfolio: analysts' stock recommendations will also be evaluated. Two sets of forward data will be used. First, the consensus forecast of next fiscal year's EPS, divided by the current market value of each firm, taken as the average of each analyst's estimates made during the fourth quarter of year t for year t+1. Second, the consensus mean analyst recommendation from the fourth quarter of year t for year t+1: recommendations are rankings from 1 to 5, with 1 signalling a strong buy and 5 a strong sell, and the mean is taken over the different analysts' recommendations existing at that time for each individual stock.
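The metric definitions above translate directly into code. Below is a minimal Python sketch computing the traditional and profitability ratios, plus the composite, for a single firm-year; the field names are hypothetical placeholders derived from the definitions in the text, not actual Compustat item codes.

```python
def fundamental_ratios(f: dict) -> dict:
    """Compute the valuation ratios defined above from one firm-year of
    fundamentals. Field names are hypothetical placeholders; the formulas
    follow the definitions given in the text."""
    ev = (f['market_value'] + f['short_term_debt'] + f['long_term_debt']
          + f['preferred_stock'] - f['cash_and_st_investments'])
    fcf = (f['net_income'] + f['depreciation_amortisation']
           - f['working_capital_change'] - f['capex'])
    gross_profit = f['revenue'] - f['cogs']
    ebitda = f['op_income_before_dep'] + f['non_operating_income']

    r = {
        'E/MV':      f['earnings'] / f['market_value'],   # inverse P/E
        'B/M':       f['book_value'] / f['market_value'],
        'FCF/EV':    fcf / ev,
        'GP/EV':     gross_profit / ev,
        'EBITDA/EV': ebitda / ev,
        'OP/MV':     f['operating_profit'] / f['market_value'],
    }
    # equally weighted composite of the three profitability ratios
    r['prof_avg'] = (r['GP/EV'] + r['EBITDA/EV'] + r['OP/MV']) / 3
    return r
```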
V.II Data Criteria and Universe

To ensure a minimum amount of liquidity in the analysis, we pick the historical constituents of the S&P 500 Index as our universe of stocks. This ensures the analysis is not driven by the performance of smaller-capitalisation firms, for which data might not be readily available; moreover, as the analysis involves implementing a long-short strategy, doing so with large-capitalisation stocks is much easier in practice. Therefore, every year the constituents in the portfolio are updated to reflect changes in the overall index, so that the universe of stocks closely replicates the S&P 500 Index on a yearly basis. The constituents as of 1990 are extracted first and updated every year thereafter. The analysis is then limited to companies with a positive market capitalisation as of December of year t, and to companies with at least 2 years of data, in order to perform the whole analysis on a consistent universe of stocks. To conduct the analysis across sectors in a more uniform manner, certain companies were removed from the universe: REITs, utilities and financial firms, as classified by CRSP. From this, a benchmark is constructed from the new universe of stocks, that is, all those fulfilling the above criteria. The benchmark is a value-weighted portfolio of all the stocks for a given year, rebalanced yearly at the end of the previous year (December 31st); being a market-value-weighted portfolio comprising most S&P 500 stocks, it should closely resemble the S&P 500 Index. Comparing the quarterly performance of the benchmark and the index over the analysis period from 1990 to 2015, and running a corresponding regression, the two correlate with a coefficient of 99.17%. The universe of stocks therefore bears a close similarity to the index, although the payoff at the end of the period differs: the benchmark provides a payoff of $11.13 for a $1 investment (1,113%) made at the start of the period in 1990, while the S&P 500 Index returns a payoff of $8.86 (886%). These figures assume complete reinvestment of capital and a compounded growth rate.

V.V Model

Year t+1 is defined as the year for which the portfolio's performance will be monitored, and year t as the year from which the fundamental data used to estimate performance will be extracted. As most US companies have a fiscal year corresponding to the calendar year, the model retrieves end-of-year fundamental data for these companies, corresponding to December of year t, allows for a data-release lag, and then computes the portfolio. The lag is introduced because companies do not disclose their annual financial statements until the quarter after their fiscal year end; historically, this usually happens within two months. Taking this into account, the model allows for a lag of one quarter, so that information is treated as publicly available at each point in time. Denoting t.(x) as the xth quarter of year t, and t+1.(x) as the xth quarter of year t+1, this implies extracting fundamental data as of t.(4), allowing for a lag in data release during t+1.(1), and constructing the portfolio at t+1.(2); performance is then measured over the following year. This model so far deals only with companies that disclose their end-of-year information by the end of the calendar year, so a provision must be made for the proportionally small, but still significant, number of companies whose annual results are released at a different date. Hughen and Strauss (2015) tackled this issue by rebalancing their portfolio quarterly, but they recognised the limitations of using quarterly results rather than normalising their ratios and profitability measures using annual ones. The timing convention and return compounding used here are sketched below.
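A small Python sketch of the timing convention and return compounding just described, assuming a December fiscal year end; the function names are illustrative only.

```python
import pandas as pd

def formation_window(fy_end: pd.Timestamp) -> tuple:
    """Holding window under the one-quarter release lag: fundamentals dated
    t.(4) are treated as public by the end of t+1.(1), so the portfolio is
    formed at the start of t+1.(2) and held for 12 months."""
    start = fy_end + pd.DateOffset(months=3)   # end of Q1 of year t+1
    return start, start + pd.DateOffset(months=12)

def annual_return(monthly_returns: pd.Series) -> float:
    """Compound 12 monthly portfolio returns into the year's total return."""
    return float((1 + monthly_returns).prod() - 1)
```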
Gray and Vogel's (2012) work, by contrast, consists of a portfolio rebalanced annually as of June 30 every year. For firms with fiscal years ending within the last quarter of the previous year or the first quarter of the current year, they use those fundamentals; for companies with fiscal years ending after March 30, they use the previous year's fundamentals. This implies that, no matter when the fiscal year ends, the latest annual filing is always employed to construct their portfolio, even when this filing dates from the second quarter of the previous year. In the present model, the approach is somewhat different. First, a differentiation between the two strategies implemented should be made. Value weighted: these buy-and-hold portfolios are attractive not only because they minimise trading costs, but also because they are simple to implement from an operational perspective. Sector allocation uses SIC classifications, merging the Compustat and CRSP databases, and delisting returns are accounted for.

References

Anderson, K. and Brooks, C. (2006). The long-term price-earnings ratio. Journal of Business Finance & Accounting, 33(7-8), pp. 1063-1086.
Ball, R., Gerakos, J., Linnainmaa, J. and Nikolaev, V. (2015). Deflating profitability. Journal of Financial Economics, 117(2), pp. 225-248.
Bunn, O. and Shiller, R. (2014). Changing times, changing values. Cambridge, MA: National Bureau of Economic Research.
Dichev, I., Graham, J., Harvey, C. and Rajgopal, S. (2013). Earnings quality: Evidence from the field. SSRN Electronic Journal.
Fama, E. and French, K. (1992). The cross-section of expected stock returns. The Journal of Finance, 47(2), pp. 427-465.
Fama, E. and French, K. (2001). Disappearing dividends: Changing firm characteristics or lower propensity to pay? Journal of Financial Economics, 60(1), pp. 3-43.
Fama, E. and French, K. (2015). A five-factor asset pricing model. Journal of Financial Economics, 116(1), pp. 1-22.
Graham, B. and Dodd, D. (1934). Security Analysis. New York: McGraw-Hill.
Gray, W. and Vogel, J. (2012). Analyzing valuation measures: A performance horse-race over the past 40 years. SSRN Electronic Journal.
Hughen, J. and Strauss, J. (2015). Portfolio allocations using fundamental ratios: Are profitability measures effective in selecting firms and sectors? SSRN Electronic Journal.
Jacobs, B. and Levy, K. (1993). Long/short equity investing. The Journal of Portfolio Management, 20(1), pp. 52-63.
Loughran, T. and Wellman, J. (2011). New evidence on the relation between the enterprise multiple and average stock returns. Journal of Financial and Quantitative Analysis, 46(6), pp. 1629-1650.
Michaud, R. (1993). Are long-short equity strategies superior? Financial Analysts Journal, 49(6), pp. 44-49.
Miller, E. (2001). Why the low returns to beta and other forms of risk. The Journal of Portfolio Management, 27(2), pp. 40-55.
Novy-Marx, R. (2010). The other side of value. Cambridge, MA: National Bureau of Economic Research.
Welch, I. and Goyal, A. (2007). A comprehensive look at the empirical performance of equity premium prediction. Review of Financial Studies, 21(4), pp. 1455-1508.

Friday, October 25, 2019

Planet Of The Apes Satire Essay

The setting of the movie compared with the setting in the book makes Planet of the Apes one of the greatest satires. In the movie, the setting is Earth in the future, where apes deny and are afraid of the past, whereas the setting in the book is a different planet where apes are civilized and technologically advanced and humans are primitive creatures. The orangutans in the movie try to prevent what happened to the humans from happening to the apes. Orangutans such as Zaius went to great lengths, such as destroying the cave where evidence of the humans' reign was revealed and removing Landon's memory. In the book, the civilization of humans on Earth is equal to, and may even surpass, the civilization of the apes on Soror. The point of view in the book is through Ulysse's mind. He is calm and patient. Taylor in the movie is an impatient, angry man who is never satisfied and is outraged by the fact that apes are running the planet and have locked him up. In the movie, Taylor is a misanthrope who is hot-tempered and disrespectful to the apes; he calls them "Bloody Baboons!" Taylor left Earth to find a better place and ended up where he started. In the book, Ulysse is kind and respectful towards the apes; he was granted citizenship in their civilization and begins to assign apes human features. Ulysse was granted citizenship because of the speech he made before them, which he gave with respect and loyalty towards the apes to gain acceptance. The tones in the boo...

Thursday, October 24, 2019

Public School vs Private School Essay

Education can be considered one of the most important decisions parents make for their children. Why? Because education lays the foundation for future success in life. I personally understood this concept early on; from as far back as I can remember, my parents taught me the value of having an education. I can still remember my mom preaching to my younger brother and me that the only thing one individual cannot take away from another is the knowledge gained in this world. Now, many years later, I find myself in the same position as many parents when it comes to education: I have to decide whether to send my child to a public school or a private institution. How do public schools and private schools compare, and is there much of a difference? Admission standards for public schools and private schools are similar when it comes to placement testing and reviewing previous transcripts from other institutions. Public schools, unlike private schools, are required by law to accept an individual as long as the individual attends a school in the district where they live. Admission to a private school is not regulated by law and is at the discretion of the school administrators, depending on whether the individual has met the requirements. Because private schools are more selective in their admission process, parents tend to base part of their decision on reputation. Private schools with good reputations are challenging to get into because of the high level of competition at the admissions stage. Curriculum is a major influence on a parent's decision whether to send their child to private or public school. Both private and public schools cover basic subjects such as English, Social Studies, Mathematics, and Science. By law, public schools must follow state curriculum standards, and the schools are subject to state standardized academic testing. Private schools have much more freedom in their curriculum, simply because they are not required to teach only basic subjects and are not subjected to state standardized testing. Private schools do test their students, but these tests are based on comprehension and proficiency rather than retention. Since private schools have freedom in their curriculum, they have the opportunity to provide specialized courses and independent study to their students. Cost is the deal breaker in the decision whether to send a child to private school or public school. In both public school and private school, a financial investment is made in a child, but the best way to cut the cost, especially if attending private school is unaffordable, is to send your child to public school. Public schools are financially supported by property taxes in the local area, alongside funding from state and federal government. Unlike public schools, private schools do not receive support from property taxes; they receive funding through fundraising, tuition from the student body, and in some cases partial government funding. Because of the lack of state and federal assistance, the average tuition cost in the United States, according to the National Association of Independent Schools (NAIS), is roughly $17,000 to $50,000 a year. To offset the high cost of tuition, parents should seek out financial aid, financing, and payment plans with the private institution.
The decision to place a child in a public school or a private school is not one that should be made overnight; there is a lot to consider, most importantly the child. As a parent, it is imperative to evaluate the child before placing them in school, because the child has to be placed in a school that is the right fit. For a child who thrives in a smaller group setting or enjoys one-on-one time, I would personally consider private school as an option; if the child enjoys a larger group setting, public school will be the right fit too. Overall, the affordability of public school and the curricular flexibility of a private school are things to be weighed when making the final decision. As a parent, I know that the child's best interest is always in the forefront, whether I decide on a public school or a private school.

Wednesday, October 23, 2019

A Contemporary Critique on Murasaki Shikibu's The Tale of Genji Essay

The Heian court and the social structure it provided are a compelling aspect of Japanese history. The 21st-century reader is intrigued by such an era and its artistic representations because the general norms, collective consciousness, and interpersonal relationships seem to be in clear contrast with the social practices of today. At face value, it appears that Murasaki Shikibu's discontent with the aforementioned characteristics of court life manifested itself within the pages of The Tale of Genji. The acclaimed Russian novelist Vladimir Nabokov once stated, "A masterpiece of fiction is an original world and as such is not likely to fit the world of the reader." Thus, although Murasaki Shikibu's work is deeply rooted in exposing the pretense associated with Heian court social rank, marriage practices, and feminine submissiveness, she managed to create a world for Genji which tested the limits of his emotional threshold and made him, by default, relatable to modern and epic protagonists. Moreover, because the modern audience can at times feel sympathetic toward Genji by relating to his emotional range (i.e. grief through ecstasy) and psychological abnormalities, The Tale of Genji's status as a timeless masterpiece is merited. Had Genji been a detached lover with no emotional and psychological depth, Murasaki Shikibu's work and reputation would not have seen the light of day outside of the court she was heavily critiquing. This essay will compare the qualities depicted in The Tale of Genji with other works that are highly regarded as masterpieces, while shedding light on the differences, which can be seen as a more direct jab at Heian readership. There is a notion in philosophical theory used to show that the 'robber and the robbed' share a mutual existence dictated by past events: their meeting, the robbery, is the climax of their distinct lifelong plots. The idea that humans are simply 'victims of circumstance' applies directly to Genji, as can be seen through his decisions and amorous plight. Through the first few chapters of Murasaki Shikibu's tale, the audience can infer on the surface that the plot revolves around the development of the protagonist's Oedipus complex. Upon meeting young Murasaki, whose resemblance to Fujitsubo was 'astonishing', the author specifies Genji's yearning for Fujitsubo as the reason he was brought to tears (Murasaki 71). Although inquiries into Genji's psychological state are not without merit, the bond between Murasaki's work and Sophocles' Oedipus the King delves much deeper. The growth and development of Genji's character and traits do a bit more than clarify his current actions: should one have started reading at the "Lavender" chapter, they provide insight into a past riddled with complexity. Through Genji's dialogues and decisions regarding dilemmas of the heart, the readership is given a man who, faced with the particular situations Genji had experienced, would most likely act in a similar fashion to Genji. Every act of Oedipus the King paints a picture for the reader of the power that emotional disposition has over Oedipus and his quests, which is not at all unlike Genji himself. After hearing about the crimes he was to commit, how can a reader not feel sympathetic towards his pursuit of independence from the oracle? In similar fashion to this masterpiece, Murasaki utilized the tool of plot reappearance across characters and time settings to give the readership the sense that Genji was predisposed to repeat past deeds.
After the death of Genji's mother, the emperor (Genji's father) was in mourning and grief, seeking to fill the void left behind. After coming across the remarkable beauty of Fujitsubo, and in an effort to bring her in, the emperor stated that he would "treat the girl as one of his daughters", adding that, given Genji's resemblance to her, she "could pass for his mother" (Murasaki 22). As previously mentioned, this father-like quality resurfaced in Genji's admiration for Murasaki. However, a distinct difference is that Genji's paternal instinct is more or less a fabrication resulting from the impediment, put in place by Murasaki's nurse, to his unyielding desire to make her his future lover: "It is you who do not understand. I see how young she is, and I have nothing of the sort in mind. I must again ask you to be witness to the depth and purity of my feelings… How can she bear to live in such a lonely place?" (Murasaki 95). At this point, one can see the psychological abnormalities developed in Genji which were not corrected by his upbringing; in fact, witnessing his father's adamant love for this exact model of beauty might have amplified the effect on his behavior. When Oedipus proclaimed the total expulsion of the murderer who was still unknown (practically accepting Jocasta, his mother, as his wife), the knowledge of the foreseeable future which the readership possessed allowed one to feel the helplessness which Oedipus embodied. Much to the same effect, is the reader supposed to feel that Genji does not have the ability to escape fate? One can infer that Murasaki's response to this is a resounding "No". Although the similarities between the two works are many, the major difference lies in two factors: Oedipus' fate was sealed, and, prior to his emotional endeavor regarding the oracle's prophecy, he was an exemplary combination of leadership and intelligence that suited a king well. Even though Murasaki gave Genji emotional depth, she left out the qualities of critical thought and consideration for others. In doing so, Murasaki left Genji at the mercy of his circumstances without his fate being set in stone, thus continuing the chauvinistic characteristic of male aristocrats of the time. Through her literary prowess, Murasaki subtly but effectively proclaimed that high rank or position did not equate to intellectual superiority, nor did it predetermine that all aristocrats in those positions were fit to rule, as can be witnessed by Genji's preoccupation with his love affairs rather than the further betterment of court reputation or intellect. Another ubiquitously renowned masterpiece with similar sexual deviance from its protagonist is Homer's The Odyssey. Odysseus' journey to return home to his wife is juxtaposed with temptation by utmost beauty, which ultimately leads him to succumb to the latter. In academic circles, the reunion with his wife is seen as one of the most romantic scenes in literary history; yet there seems to be a lack of uproar regarding his adventures with Calypso and Circe. On the other hand (with critical awareness of the social norms of the time), Genji is met with great disdain by the general audience.
In comparison to The Tale of Genji, the similarity lies in the degree of sympathy the protagonists evoke more than in the actual plot: although both characters had multiple extramarital affairs, does Odysseus' long-term physical displacement conjure up a greater forgiveness from the readership, as opposed to Genji's emotional dissatisfaction with his current state of affairs? The fact that Murasaki's work, more specifically her protagonist Genji, is able to invoke an amount of emotional response from contemporary audiences comparable to that of The Odyssey, without relying on 20 years of desolation from its main character, should in itself merit the reputation it has received. In regard to the previously mentioned question, Murasaki would probably be displeased with Odysseus' affairs; although universally accepted justification would never be reached, his unfading love for Penelope goes without question. The major difference between the two protagonists lies in their response to utmost beauty. Facing Calypso, Odysseus admits that she is far more beautiful than his wife, who is a mere mortal, but that he "pines" all his days to see his return to her (Lawall et al. 265). Genji, on the other hand, falls 'prey to the female'. Murasaki's commentary on relationships lies in the deliberate absence of discernment in Genji and his state of being out of touch with reality. Moreover, his narrow focus on beauty does not allow him to see the combination of flaws and qualities that all women possess; thus, his longing for the specific mold of beauty he yearns for, seen in the beginning of Murasaki's work, holds no merit, and true sympathy fails to reach Genji's amorous quest. This notion is exemplified by Murasaki's narration describing Fujitsubo's beauty: "There was no one else quite like her. In that fact was his undoing: he would be less a prey to longing if he could find in her even a trace of the ordinary" (Murasaki 86). Furthermore, Murasaki leaves no indication that Genji exhausted all possibilities in an attempt to make love work with Aoi (or even Rokujo); this would at least add some credibility to his dissatisfaction with them. Instead, he utilizes their unfavorable idiosyncrasies as further incentive for his extramarital adventures. Murasaki expresses her own dissatisfaction with marriage practices and relationships, in essence, by making the case that male aristocrats at court lack the judgment and intellect (which is supposed to be innate to them, given the hierarchical structure of the society at the time) to fully comprehend and appreciate the complexity of women, and lack the consideration to take into account that their actions affect not only themselves. All in all, the clearest insight into Murasaki's critique of Heian structure, rank, and interpersonal outlook comes directly from Genji, and it sounds as if Murasaki intended to state the true intentions of Heian male aristocrats: "I am weak and indecisive by nature myself, and a woman who is quiet and withdrawn and follows the wishes of a man even to the point of letting herself be used has much the greater appeal. A man can shape and mold her as he wishes, and becomes fonder of her all the while" (Murasaki 62). Lastly, Heian-era Japan was not the only male-dominated civilization, and this type of society has not yet disappeared.
The fact that one can use the same critiques Murasaki masterfully made about the Heian court to dispute manifestations of chauvinism in certain aspects of society today solidifies The Tale of Genji as a masterpiece that has stood the test of time.

Tuesday, October 22, 2019

Significance in Literature

An Example in Literature of How an Experience Can Have Significance on a Person's Life

In the short story "Walk Well, My Brother", the author, Farley Mowat, develops the idea that a significant experience can lead to a change in how one individual views another. The story shows us how a person can learn from someone very different from them and be moved by their selflessness into becoming a better person. It also shows us how important it is for people not to judge others for superficial reasons. An individual can learn a lot from people who are very different from them, and I feel that this story was written to illustrate that point. The story tells us about a man named Charlie Lavery, who was twenty-six years old and believed he was capable of taking care of himself no matter what the situation. The story gives us evidence of this when the author says he was "very much of the new elite that believed that any challenge... could be dealt with by good machines in the hands of skilled men." Charlie also had no knowledge of the Arctic or of the people who lived there, because he felt he did not need this knowledge as long as he had his machines. It was this ignorance that led him to feel so disgusted with the natives who lived there, because he did not understand their way of life. When the machines he so greatly relied on were no longer of use, he had no knowledge to fall back on. He was completely dependent on a native girl, Konala, whom he despised when he first met her. His inability to take care of himself forced him to cooperate and to try to understand this person who was so foreign to him. If it weren't for the situation he was in, he would never have made that effort and crossed the barriers he had built between himself and the natives. In this story, Konala never once acts in a selfish manner towards Charlie. She was very ill with tuberculosis and sh ...

Monday, October 21, 2019

Coach's Designer Purses

Introduction

Coach was established in New York in 1941 as a family business. The company has grown steadily since then to become the household name we know today. Capitalizing on its high-quality merchandise, Coach has won many customers from all over the world. The success of Coach has been a result of competent leadership as well as research. Currently, Coach's clientele includes children, women, and men of all races, religions, and nationalities (Sherman). In line with its business objectives, Coach has employed high-profile advertising campaigns that have continued to expand its markets. Celebrity endorsements and fashion experts' recommendations have also played a great role in Coach's success. Coach has a variety of leather products ranging from wallets and purses to backpacks and handbags. Lately, its handbags and purses have turned into big sellers. Coach's designer purses have gained unrivaled popularity among women. Though expensive, their eye-catching designs and gracious colors are a combination no woman can resist. Possessing a Coach designer purse has many advantages. To start with, it is a means of saving money in the end. A Coach designer purse may be expensive to acquire, but it is of high quality and very durable, so you can use it for a long time. Cheap purses, on the other hand, are weak and cannot be of service for long, which means you may have to buy several cheap purses in the life span of only one Coach designer purse. Therefore, purchasing a Coach designer purse saves you the trouble of having to budget for a new one every single time. Secondly, Coach's designer purses never lose their value. Since the products are of high quality, they can serve a customer for a long time without losing their shape, color, or special effects such as texture. This means that if a customer has used the purse for a considerably long time and no longer finds it interesting or fashionable, she may give it to others who cannot afford one. In such a case, she would be adding value to that person's life and would be satisfied that the purse, even in a state she considered not good, still added a smile to someone's face. The purses are also confidence boosters. Human confidence comes from deep inside; it is more about how we perceive ourselves than how others think about us. It is at the point we learn to appreciate ourselves that we gain our confidence (Wan 42). Buying expensive things like Coach's designer purses helps convince us that we deserve only the best. When this feeling moves to other areas like our relationships, financial needs, and career, we can achieve much. Furthermore, a woman's purse or handbag is her best companion when she is lonely and on the road. Another advantage is that they give the impression of high social standing. Humans, no matter how hard they try, will always end up judging the people they meet and interact with in their daily activities. The information people gather from your outward appearance will determine how they behave towards you (Maxwell 145). Even though appearance does not constitute reality, many tend to believe those in possession of designer products are rich. With this impression, one is often given fair and respectable treatment.
Coach's designer purses are unique and classy. Coach's products come in different sizes, shapes, and colors, which gives its clients a variety to choose from. Purchasing a designer purse may be expensive in monetary terms, but the satisfaction that it brings to its buyers and its quality level make it actually worth the money. In fact, the satisfaction derived from owning these products is the main motivation for many buyers. Lastly, Coach's designer purses can be customized to fit customers' preferences at an extra cost. People have different tastes and preferences. In fact, man's ability to make choices is one characteristic that sets him apart from other animals. Unlike readymade purses, whose design customers have no choice over, customization allows a client to add personalized details such as a name, a favorite team, or memorable events to their products. This service is, however, popular mainly among celebrities and high-income earners. There are several disadvantages of Coach's designer purses. The first disadvantage arises from its strength: durability. The traditional designer purses are so strong that they can last for decades. This may lead to an attachment to the product that makes the consumer feel bad about giving it away or throwing it out. The second disadvantage is the difficulty of finding one that completely matches your personality. Every individual has a fashion personality that guides his or her style. A person can be casual, trendy, romantic, or classic (Wan 32). Since our personality is who we truly are, people tend not to let accessories such as purses, watches, and rings clash with their personality. Instead, they search for accessories they consider friendly, which is a hard job. This is because designer products are unique and rare. Coach's designer purses are also very expensive. Highly experienced and costly designers make the unique purses, which drives their prices up. According to Forbes International, an average brand new Coach designer purse goes for $500 (Sherman). This is not little money, as one can do a lot with it. Even though there are places like eBay where one can get relatively cheap purses, the majority still prefer Coach's branches. Additionally, Coach's designer purses may limit their owners when it comes to use. This is mostly because of their uniqueness, which makes it hard to find outfits that match them. Since one may have only a few matching outfits, using them frequently becomes impossible. Another factor that could limit the use of a Coach designer purse is the type of activity or occasion one is attending. Regardless of how much you love your designer purses, you cannot wear them on the wrong occasions. Coach's designer purses may also make you a target for robbers and other criminals. Society considers many people who own designer purses rich. Many criminals who want to reap where they did not sow therefore target such people in the hope of making quick money. This has led to attacks resulting in injury and even death in some instances. Lastly, the dynamism of the fashion industry makes the price of Coach's designer purses unrealistic. The fashion industry is moving very fast (Wan 35). Today, as you retire to your bed for a good night's sleep, you may be in the fast fashion lane, but when you wake up, you will be playing catch-up!
The new design that comes to the market renders the one you just bought outdated. Since women are so conscious of fashion trends, wearing a superseded product is an uphill task. To keep to the fast lane, they choose to dump such items regardless of the cost incurred. This may result in frustration, especially if you strained your budget to purchase the purse. In conclusion, we must accept that even near-perfect products have disadvantages. In making a choice, one has to choose the better option, since there is no perfect product. The disadvantages associated with the consumption of Coach's designer purses notwithstanding, I believe that a Coach designer purse is the best present a woman can give herself in her lifetime. Maxwell, John C. Winning with People: Discover the People Principles That Work for You Every Time. Nashville, Tennessee: Thomas Nelson Inc, 2004. Print. Sherman, Lauren. Forbes International. 24 May 2006. Web. Wan, Gok. How to Dress: Your Complete Style Guide for Every Occasion. London: HarperCollins Publishers, 2010. Print.

Sunday, October 20, 2019

Scientific and Social Definitions of Race

Scientific and Social Definitions of Race It's a common belief that race can be broken down into three categories: Negroid, Mongoloid, and Caucasoid. But according to science, that's not so. While the American concept of race took off in the late 1600s and persists even today, researchers now argue that there's no scientific basis for race. So, what exactly is race, and what are its origins? The Difficulty of Grouping People Into Races According to John H. Relethford, author of The Fundamentals of Biological Anthropology, race "is a group of populations that share some biological characteristics.... These populations differ from other groups of populations according to these characteristics." Scientists can divide some organisms into racial categories more easily than others, such as those which remain isolated from one another in different environments. In contrast, the race concept doesn't work so well with humans. That's because not only do humans live in a wide range of environments, they also travel back and forth between them. As a result, there's a high degree of gene flow among groups of people that makes it hard to organize them into discrete categories. Skin color remains a primary trait Westerners use to place people into racial groups. However, someone of African descent may be the same skin shade as someone of Asian descent. Someone of Asian descent may be the same shade as someone of European descent. Where does one race end and another begin? In addition to skin color, features such as hair texture and face shape have been used to classify people into races. But many groups of people cannot be categorized as Caucasoid, Negroid, or Mongoloid, the defunct terms used for the so-called three races. Take Native Australians, for instance. Although typically dark-skinned, they tend to have curly hair which is often light colored. "On the basis of skin color, we might be tempted to label these people as African, but on the basis of hair and facial shape they might be classified as European," Relethford writes. "One approach has been to create a fourth category, the 'Australoid.'" Why else is grouping people by race difficult? The concept of race posits that more genetic variation exists interracially than intra-racially, when the opposite is true. Only about 10 percent of variation in humans exists between the so-called races. So, how did the concept of race take off in the West, particularly in the United States? The Origins of Race in America The America of the early 17th century was in many ways more progressive in its treatment of blacks than the country would be for decades to come. In the early 1600s, African Americans could trade, take part in court cases, and acquire land. Slavery based on race did not yet exist. "There was really no such thing as race then," explained anthropologist Audrey Smedley, author of Race in North America: Origins of a Worldview, in a 2003 PBS interview. "Although 'race' was used as a categorizing term in the English language, like 'type' or 'sort' or 'kind,' it did not refer to human beings as groups." While race-based slavery wasn't a practice, indentured servitude was. Such servants tended to be overwhelmingly European. Altogether, more Irish people lived in servitude in America than Africans. Plus, when African and European servants lived together, their difference in skin color did not surface as a barrier.
â€Å"They played together, they drank together, they slept together†¦The first mulatto child was born in 1620 (one year after the arrival of the first Africans),† Smedley noted. On many occasions, members of the servant class- European, African and mixed-race- rebelled against the ruling landowners. Fearful that a united servant population would usurp their power, the landowners distinguished Africans from other servants, passing laws that stripped those of African or Native American  descent of rights. During this period, the number of servants from Europe declined, and the number of servants from Africa rose. Africans were skilled in trades such as farming, building, and metalwork that made them desired servants. Before long, Africans were viewed exclusively as slaves and, as a result, sub-human. As for Native Americans, they were regarded with great curiosity by the Europeans, who surmised that they descended from the lost tribes of Israel, explained historian Theda Perdue, author of Mixed Blood Indians: Racial Construction in the Early South, in a PBS interview. This belief meant that Native Americans were essentially the same as Europeans. They’d simply adopted a different way of life because they’d been separated from Europeans, Perdue posits. â€Å"People in the 17th century†¦were more likely to distinguish between Christians and heathens than they were between people of color and people who were white†¦,† Perdue said. Christian conversion could make American Indians fully human, they thought. But as Europeans strove to convert and assimilate Natives, all the while seizing their land, efforts were underway to provide a scientific rationale for Africans’ alleged inferiority to Europeans. In the 1800s, Dr. Samuel Morton argued that physical differences between races could be measured, most notably by brain size. Morton’s successor in this field, Louis Agassiz, began â€Å"arguing that blacks are not only inferior but they’re a separate species altogether,† Smedley said. Wrapping Up Thanks to scientific advances, we can now say definitively that individuals such as Morton and Aggasiz are wrong. Race is fluid and thus difficult to pinpoint scientifically. â€Å"Race is a concept of human minds, not of nature,† Relethford writes. Unfortunately, this view hasn’t completely caught on outside of scientific circles. Still, there are signs times have changed. In 2000, the U.S. Census allowed Americans to identify as multiracial for the first time. With this shift, the nation allowed its citizens to blur the lines between the so-called races, paving the way for a future when such classifications no longer exist.​

Saturday, October 19, 2019

Rhetorical Analysis Paper Essay Example | Topics and Well Written Essays - 1000 words

Rhetorical Analysis Paper - Essay Example "So many Pidgin pessimists," gives a lot of promise. And he's not stopping, now or in the foreseeable future. "Can you come up wit one more positive way of looking at dis piece o'wot Try tink. Right on. Ho, you get 'em. Das how. We get ONE Pidgin optimist in da house" is as convincing as the essay itself; those who oppose the guy will have a difficult time. "In da real world get planny Pidgin prejudice, ah. Dey, da ubiquitous dey, dey is everywea brah; dey say dat da perception is dat da standard english talker is going automatically be perceive fo' be mo' intelligent than da Pidgin talker regardless wot dey talking, jus from HOW dey talking," he complains as he spars with biases. And one imagines a Huck, with slumped shoulders and head cast downward, trying to avoid people in daylight, processing his thoughts in his own world dominated only by an African American slave, Jim, and at times by the more acceptable and lovable Tom Sawyer. It's kine lonely, if one sees through it, so much like the cause Pidgin Guerilla Tonouchi is fighting for. Biases run amok in a global culture where the majority rules, and Tonouchi might strongly be shaking his head as he asserts, "but I no need really look da studies, cuz I can see dis happening insai my classrooms." "Oh Frazier, you're so smart. ..." He recalled the experience of his Oriental parents in the 50s to 60s: "If dey talked Pidgin in school den da teachah would slap 'em wit da ruler. Ka-pow. Ow, ow, ow," up to his own generation: "You gotta enunciate and tell, 'May I please use the restroom.' And if you no tell 'em li'dat, den you gotta hold your shishi, brah." He is standing up: "If I knew den wot I know now, HO, I would've SUED da DOE for da kine cruel and unusual punishment. Million dollah settlement right dea," and he self-deprecates in kine funny, if not yet hilarious, manner. He's shaking his head at the one who "wuz equating talking Pidgin to smoking cigarettes cuz he gotta 'cut back.' If he talk too much Pidgin, den he going get Pidgin cancer and he going DIE, brah. Pua ting. Sad yeah, da tinking," but he's not giving up, nor going away and turning from his cause. In fact, he is facing the challenge head on, as he asked his class, "Try tell me all da tings dat people told you ova da years dat you CANNOT do wit Pidgin." And dis wot dey came up wit:
Dey Say if You Talk Pidgin You No Can . . .
be smart
be important
be successful
be professional
be taken seriously
be one teacher
be one doctor
be one lawyer
be a government worker
be big businessman
be da Pope
be the president
be the wife of the president
Dey say if you talk Pidgin you no can . . .
communicate
eat at fine dining restaurants
enter a beauty pageant (and win)
The list is endless, but he is not stopping. And he is proving the Pidgin detractors wrong: "but I tink so people jus find 'em funny cuz dey know lot of da tings on da list is not true. Bogus li'dat. Why Cuz dey know Pidgin people who eat at fancy restaurants, cuz dey know Pidgin

Friday, October 18, 2019

Pollution Of New Energy Resource Essay Example | Topics and Well Written Essays - 750 words

Pollution Of New Energy Resource - Essay Example With the current issue of global warming, generation of energy from solar power is one of the best options for managing the problem. Use of solar power has a high likelihood of reducing the challenges of global warming affecting the world today. There are different ways in which solar energy can be used, and as a result there exist different forms of solar energy systems. For example, passive solar systems use the absorptive structures of houses to heat water for homes. Active solar energy systems, by contrast, depend on solar collectors to harness solar energy; this type can be used in the generation of electricity, which is then channeled to an electrical grid. It is for this reason that I considered generating energy from a solar bag. Solar bags use the same principle of energy generation from the sun as other solar systems. The solar bag consists of small, thin, durable solar panels made up of photovoltaic cells, which can generate electricity from direct sunlight. As light and energy from the sun hit the solar panel, the cells absorb them, knocking electrons loose so that they flow freely. The electric field created in the photovoltaic cells ensures the electrons flow in one direction, creating an electric current that is harvested by metal contacts attached to the top and bottom of the cells.
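To put rough numbers on this, here is a minimal Python sketch estimating what such a solar bag's panel might deliver; the panel area, cell efficiency, and peak-sun-hour values below are illustrative assumptions, not specifications of any actual product:

# Rough estimate of the energy a small solar-bag panel can harvest.
# All figures below are illustrative assumptions, not measured values.

PANEL_AREA_M2 = 0.05        # assumed panel area on the bag
IRRADIANCE_W_M2 = 1000.0    # standard "peak sun" irradiance
EFFICIENCY = 0.15           # assumed photovoltaic cell efficiency (15%)
SUN_HOURS_PER_DAY = 4.0     # assumed effective peak-sun hours per day

peak_power_w = PANEL_AREA_M2 * IRRADIANCE_W_M2 * EFFICIENCY
daily_energy_wh = peak_power_w * SUN_HOURS_PER_DAY

print(f"Peak output: {peak_power_w:.1f} W")        # 7.5 W
print(f"Daily energy: {daily_energy_wh:.1f} Wh")   # 30.0 Wh

Under these assumptions, a bag-sized panel yields only a few watts, enough to charge small devices rather than power a household.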

Belbin's team-role theory Essay Example | Topics and Well Written Essays - 2500 words - 1

Belbin's team-role theory - Essay Example "The benefit of utilising and understanding Belbin Team Roles is that not only do we learn more about ourselves, but also a lot about our work colleagues and how to get the best out of them" ("Belbin Team-Role Theory," 2011, pgh. 3). In Belbin's theory, his specified team roles help delineate what kind of worker each person is within a group setting at work. It is obvious from the "Belbin Team-Role Summary Sheet" that every individual contributing in a group, whether as a plant, a resource investigator, a coordinator, a shaper, a monitor evaluator, a teamworker, an implementer, a completer finisher, or a specialist, has particular strengths and weaknesses (2011, pp. 1). Teamworkers are people-oriented roles. Teamworkers want to make the flow of the group smooth, and will do anything to be cooperative. In fact, they will go out of their way to make any project operate like a well-oiled machine. Teamworkers are diplomatic. Not only do they avoid friction and drama, but they try to build a team up instead of breaking it down. Typical teamworkers will always try to repair any fractures within the infrastructure of the team. Teamworkers are good to have around because they are beneficial allies in the event that other workers are upset with the manager. Specialists are thought-oriented roles. Specialists are self-starting, dedicated types who evaluate research. Additionally, this person finds specialized information that is difficult to find. The weakness of a specialist.

Government Essay Example | Topics and Well Written Essays - 250 words - 3

Government - Essay Example This would politicize the entire education system, since the appointed education leaders would have to act in favor of the governor's preferences rather than in accordance with the people's desires. An independent state-elected board such as the Texas Board of Education should play the role of overseeing and managing education in the states. This will help bring positive tension into the education system, since the board will carry out its functions independently of political leaders such as governors (Robelen, par. 4). I agree that governors are probably correct when they say that individuals look at them as the leaders of a state's education system. Governors are usually involved in the leadership of the entire state, which implies that the governor acts as the head of a state's education leaders. Hence, he takes part indirectly in the leadership of a state's education. In my opinion, decision making in education must be depoliticized. De-politicization of decision making helps in the implementation of education policies by limiting political influence, which develops the education

Thursday, October 17, 2019

Porter's five forces Case Study Example | Topics and Well Written Essays - 500 words

Porter's five forces - Case Study Example Start-up in the soft drink industry requires high financial capital, labor, marketing, warehousing, and advertising. This situation makes it difficult for a competitor to enter the industry and expand. Additionally, Coca Cola Company has made huge investments in its advertising, which has resulted in brand loyalty among its customers throughout the world. For example, Coca Cola Company's advertising strategy helped it attain a larger market share in 2011 than its rival Pepsi: Coca Cola's market share stood at 41.9%, while Pepsi's was 29.9%. The advantage was driven by the huge investment Coca Cola Company makes in advertising (Russell). The main substitutes for Coca Cola products are bottled water, tea, coffee, and sports drinks. Consumers are currently more concerned about their health. They have begun cutting down on their demand for sodas because of the view that sodas contribute highly to obesity due to their high caloric content. Additionally, tea and coffee are a threat to Coca Cola products because they contain caffeine and customers can decide to buy them instead. This makes the substitute products stronger and a threat to the Coca Cola Company (Russell). Raw materials essential for manufacturing Coca Cola products are basic materials such as sugar, flavor, color, and packaging materials. The suppliers of the above materials have low bargaining power because the materials are easy to obtain and there are many suppliers in the market. However, with the recent inflation in the price of goods, the cost of materials such as sugar and packaging has increased. This affects the profits of Coca Cola Company (Hill and Jones 44). The Coca Cola Company, like any other company, distributes its soft drinks to convenience stores, restaurants, supermarkets, and large grocers for resale directly to customers. They are the buyers of Coca Cola products, and they buy the products in large quantities for

Wednesday, October 16, 2019

Case Study Essay Example | Topics and Well Written Essays - 750 words - 21

Case Study - Essay Example The analysis primarily comprises the situations that result in the decline or death of a brand and the appropriate approaches that can be applied to strengthen the survival of the brand and give it another chance at life (Aaker, 1991). The death or decline of a brand is a complicated issue that at times leads to controversy; a good example is the collapse of the Taurus brand after two decades. This brand by Ford Motors is a good example indicating that the length of time a brand stays in the market cannot be fixed: when the time for a brand to die or decline arrives, the entire process becomes irreversible. A good instance of how complex it is to revive a brand after its collapse is the example of the Harley-Davidson motorcycles. The main reason why it is hard to revive a declining brand is financial losses. However, research shows that it is better to revive a declining or dead brand than to develop a new one, because the process of reviving a brand carries reduced risks and expenses (Sunil & Chiranjeev, 2009). Branding is a technique that for many years has been applied to differentiate products and services from different suppliers. In the current day, the strength of a brand rests on its equity with its consumers, also defined as brand equity. Brand equity refers to the differential effect that consumers' knowledge of a brand has on their response to promotional activities. Today, some of the brands that have managed to maintain very strong brand equity include Coca Cola, HP, and Sony, to name but a few (Aaker, 1991). A brand passes through several stages: the introduction of the brand, its growth, its maturity, and eventually its decline. In addition to these four factors, there are reasons a brand may die or decline, such as an increase in the prices of the brand with no increase in the

Tuesday, October 15, 2019

Academic Project Title Essay Example for Free

Academic Project Title Essay The purpose of this study is to examine the design of public toilets in high-end shopping malls. The study was conducted at a well-known shopping centre in Kuala Lumpur, KLCC. KLCC was chosen because the place has always been a focus of attention both domestically and internationally. KLCC can be described as a luxury business location, hosting well-known stores that sell high-priced, mostly expensive items. The main purpose of this study is thus to evaluate whether the design of these public toilets has a class of its own, and a literature review was conducted based on the goals and objectives of the study. Much of the data was obtained through secondary sources such as books, journals, newspapers, articles, and the internet; most of it was collected from the library at UiTM Shah Alam. The researcher also employed a variety of research methods, namely four: observation, physical measurements at the site location, interviews with experts in toilet design, and the production and distribution of a questionnaire to obtain primary data. After gathering information from all of these methods, the data were collated and analyzed to address the objectives of the study. The data analysis was separated into two sample case studies to obtain the findings. A toilet with a fee of RM 2 per entry in KLCC was used as the case study. All research methods were applied to the review process for primary data based on research on the selected toilet. The researcher also managed to interview a designer who has been involved in toilet design. Apart from distributing the questionnaire sheets to toilet users, measuring and observing the restroom area also took place at the site location.

Monday, October 14, 2019

Allocation of Resources in Cloud Server Using Lopsidedness

Allocation of Resources in Cloud Server Using Lopsidedness B. Selvi, C. Vinola, Dr. R. Ravi
Abstract – Cloud computing plays a vital role in an organization's resource management. A cloud server allows dynamic resource usage based on customer needs and achieves efficient allocation of resources through virtualization technology. This paper addresses a system that uses virtualization to allocate resources dynamically based on demand and saves energy by optimizing the number of servers in use. It introduces the concept of lopsidedness to measure the inequality in the multi-dimensional resource utilization of a server. The aim is to build an efficient resource utilization system that avoids overload and saves energy in the cloud by allocating resources to multiple clients through virtual machine mapping on physical machines; idle PMs can be turned off to save energy.
Index Terms - cloud computing, resource allocation, virtual machine, green computing.
I. Introduction Cloud computing provides services in an efficient manner, dynamically allocating resources to multiple cloud clients at the same time over the network. Nowadays many business organizations use cloud computing because of its advantages in resource management and security management. A cloud computing network is a composite system with a large number of shared resources. These are subject to unpredictable demands and can be affected by outside events beyond the provider's control. Cloud resource allocation management requires complex policies and decisions for multi-objective optimization. It is extremely difficult because of the complexity of the system, which makes it impracticable to maintain accurate global state information, and because the system is subject to continual, unpredictable interactions with its environment. The strategies for cloud resource allocation management associated with the three cloud delivery models, Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS), differ from one another. In all cases, cloud providers are faced with huge, sporadic loads that challenge the claim of cloud elasticity. Virtualization is the single most effective way to decrease IT expenses while boosting effectiveness and agility, not only for large enterprises but for small and mid-budget organizations as well. Virtualization technology has advantages in the following respects: it runs multiple operating systems and applications on a single computer; it consolidates hardware to obtain far higher productivity from fewer servers; it can save 50 percent or more on overall IT costs; and it speeds up and simplifies IT management, maintenance, and the deployment of new applications. The system aims to achieve two goals. First, the capacity of a physical machine (PM) should be enough to satisfy the resource requirements of all virtual machines (VMs) running on it; otherwise, the PM is overloaded and the performance of its VMs degrades. Second, the number of PMs used should be minimized as long as they can still satisfy the demands of all VMs; idle physical machines can be turned off to save energy. There is an intrinsic trade-off between the two goals in the face of changing resource needs of VMs. For overload avoidance, the system should keep the utilization of PMs low to reduce the possibility of overload in case the resource needs of VMs increase later.
For green computing, the system should keep the utilization of PMs reasonably high to make efficient use of their energy. This paper presents the design and implementation of an efficient resource allocation system that balances these two goals. The contributions are as follows: the development of an efficient resource allocation system that avoids overload effectively while minimizing the number of servers in use; the introduction of the concept of "lopsidedness" to measure the uneven utilization of a server (by minimizing lopsidedness, the system improves the overall utilization of servers in the face of multi-dimensional resource constraints); and the implementation of a load prediction algorithm that can capture the future resource usage of applications accurately without looking inside the VMs.
Fig. 1. System Architecture
II. System Overview The architecture of the system is presented in Fig. 1. Each physical machine runs a VMware hypervisor (VMM) that supports VM0 and one or more VMs in the cloud server. Each VM can contain one or more applications residing in it. All physical machines share the same storage space. The mapping of VMs to PMs is maintained by the VMM. An information collector node (ICN), which runs on VM0, collects information about the resource status of the VMs. The virtual machine monitor creates and monitors the virtual machines; CPU scheduling and network usage monitoring are managed by the VMM. We assume that an available sampling technique can measure the working set size of each virtual machine. The information collected at each physical machine is passed to the admin controller (AC). The AC connects with the VM allocator, which is activated periodically and obtains from the ICN the resource demand history and current status of the VMs. The allocator has several components. The indicator predicts the future demands of each virtual machine and the total load of each physical machine. The ICN at each node attempts to satisfy new demands locally by adjusting the resource allocation of VMs sharing the same VMM. The hot spot remover in the VM allocator detects whether the resource utilization of any PM is above the hot point; if so, some VMs running on that PM are migrated to another PM to reduce its load. The cold spot remover identifies PMs whose utilization is below the average utilization (cold point) of actively used PMs; if so, some PMs are turned off to save energy. Finally, the exodus list is passed to the admin controller.
III. The Lopsidedness Algorithm The resource allocation system introduces the concept of lopsidedness to measure the unevenness in the utilization of multiple resources on a server. Let n be the number of resources and let r_i be the utilization of the ith resource. The resource lopsidedness of a server p is defined as lopsidedness(p) = sqrt( sum_{i=1..n} ( r_i / r_avg - 1 )^2 ), where r_avg is the average utilization of all resources in server p. In practice, not all types of resources are performance critical, so only bottleneck resources are considered in the above calculation. By minimizing the lopsidedness, the system can combine different types of workloads nicely and improve the overall utilization of server resources. A. Hot and Cold Points The system executes periodically to evaluate the resource allocation status based on the predicted future resource demands of the VMs. The system defines a server as a hot spot if the utilization of any of its resources is above a hot threshold. This indicates that the server is overloaded and hence some VMs running on it should be migrated away.
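The following minimal Python sketch illustrates the lopsidedness metric and the hot spot test just defined. It is an illustration, not the authors' code, and the threshold values are assumptions chosen only for demonstration:

import math

# Assumed hot thresholds per resource (illustrative values).
HOT = {"cpu": 0.90, "mem": 0.80}

def lopsidedness(util):
    """util maps resource name -> utilization in [0, 1]."""
    avg = sum(util.values()) / len(util)
    # Square root of the summed squared deviations of each resource's
    # utilization ratio from the server's average utilization.
    return math.sqrt(sum((u / avg - 1.0) ** 2 for u in util.values()))

def is_hot(util):
    # A server is a hot spot if any resource exceeds its hot threshold.
    return any(util[r] > HOT[r] for r in HOT)

server = {"cpu": 0.95, "mem": 0.60}
print(lopsidedness(server), is_hot(server))   # ~0.319, True

A balanced server (all resources near the average) has lopsidedness close to zero, which is why the allocator prefers placements that keep the metric small.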
The system defines the temperature of a hot spot p as the square sum of its resource utilization beyond the hot threshold: temperature(p) = sum_{r in R} ( r - r_t )^2, where R is the set of overloaded resources in server p and r_t is the hot threshold for resource r. (Note that only overloaded resources are considered in the calculation.) The temperature of a hot spot reflects its degree of overload. If a server is not a hot spot, its temperature is zero. The system defines a server as a cold spot if the utilizations of all its resources are below a cold threshold. This indicates that the server is mostly idle and a potential candidate to turn off to save energy. However, the system does so only when the average resource utilization of all actively used servers (i.e., APMs) in the system is below a green computing threshold. A server is actively used if it has at least one VM running; otherwise, it is inactive. Finally, the system defines the warm threshold to be a level of resource utilization that is sufficiently high to justify having the server running but not so high as to risk becoming a hot spot in the face of temporary fluctuations in application resource demands. Different types of resources can have different thresholds. For example, the system can define the hot thresholds for CPU and memory resources to be 90 and 80 percent, respectively. Thus a server is a hot spot if either its CPU usage is above 90 percent or its memory usage is above 80 percent. B. Hot Spot Reduction The system sorts the list of hot spots in descending temperature (i.e., it handles the hottest one first). Our goal is to eliminate all hot spots if possible; otherwise, we keep their temperature as low as possible. For each server p, the system first decides which of its VMs should be migrated away. It sorts its list of VMs based on the resulting temperature of the server if each VM were migrated away, aiming to migrate away the VM that can reduce the server's temperature the most. In case of ties, the system selects the VM whose removal can reduce the lopsidedness of the server the most. For each VM in the list, the system checks whether it can find a destination server to accommodate it. The destination must not become a hot spot after accepting this VM. Among all such servers, the system selects the one whose lopsidedness can be reduced the most by accepting this VM. Note that this reduction can be negative, which means the system selects the server whose lopsidedness increases the least. If a destination server is found, the system records the migration of the VM to that server and updates the predicted load of the related servers. Otherwise, the system moves on to the next VM in the list and tries to find a destination server for it. As long as the system can find a destination server for any of its VMs, it considers this run of the algorithm a success and moves on to the next hot spot. Note that each run of the algorithm migrates away at most one VM from the overloaded server. This does not necessarily eliminate the hot spot, but it at least reduces its temperature. If it remains a hot spot in the next decision run, the algorithm will repeat this process. It is possible to design the algorithm so that it can migrate away multiple VMs during each run, but this can add more load on the related servers during a period when they are already overloaded. The system uses the more conservative approach and leaves itself some time to react before initiating additional migrations.
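A sketch of one hot spot reduction pass follows. It reuses HOT, is_hot, and lopsidedness from the earlier sketch and is built on assumed data structures, not the paper's implementation; in particular, predicted_util(s, remove=None, add=None) is a hypothetical helper that returns server s's per-resource utilization after a proposed migration:

def temperature(util):
    # Square sum of utilization beyond the hot threshold (zero if not hot).
    return sum((util[r] - HOT[r]) ** 2 for r in HOT if util[r] > HOT[r])

def reduce_hot_spots(servers, vms_on, predicted_util):
    migrations = []  # (vm, source, destination) decisions for this run
    # Handle the hottest server first.
    hot = sorted((s for s in servers if is_hot(predicted_util(s))),
                 key=lambda s: temperature(predicted_util(s)), reverse=True)
    for s in hot:
        # Prefer the VM whose removal cools the server the most; break
        # ties by the resulting lopsidedness of the server.
        candidates = sorted(vms_on[s], key=lambda v: (
            temperature(predicted_util(s, remove=v)),
            lopsidedness(predicted_util(s, remove=v))))
        for vm in candidates:
            best = None
            for d in servers:
                if d == s or is_hot(predicted_util(d, add=vm)):
                    continue  # destination must not become a hot spot
                delta = (lopsidedness(predicted_util(d, add=vm))
                         - lopsidedness(predicted_util(d)))
                if best is None or delta < best[0]:
                    best = (delta, d)  # smallest lopsidedness increase
            if best is not None:
                migrations.append((vm, s, best[1]))
                break  # migrate at most one VM per hot spot per run
    return migrations

The single-migration-per-run break mirrors the conservative policy described above: each pass relieves pressure slightly and defers further moves to the next decision run.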
IV. System Analysis In the cloud environment, the user submits a request to download a file. This request is stored and processed by the server to respond to the user; the system checks for an appropriate sub-server to assign the task. A job scheduler is a computer application for controlling unattended background program execution; in this module, a job scheduler is created that connects with all servers to perform the user-requested tasks. In User Request Analysis, requests are analyzed by the scheduler before the task is given to the servers. This module helps avoid task overloading by analyzing the nature of the user's request: first it checks the type of file to be downloaded, which can be a text, image, or video file. In Server Load Value, the server load value is identified for job allocation. To reduce overload, different load values are assigned to the server according to the type of file being processed: if the requested file is text, the server is assigned the minimum load value; if it is a video file, the server is assigned a high load value; and if it is an image file, it takes a medium load value. In Server Allocation, the server allocation task takes place. To manage mixed workloads, a job-scheduling algorithm is followed in which load values are assigned dynamically depending on the nature of the request: a server that carried a minimum load value takes a high-load-value job the next time, and a server that carried a high load value takes a minimum-load-value job the next time. The aim is to build an efficient resource utilization system that avoids overload and saves energy in the cloud by allocating resources to multiple clients using virtual machine mapping on physical systems; idle PMs can be turned off to save energy.
Fig. 2. Comparison graph
V. Conclusion This paper presented the design, implementation, and evaluation of an efficient resource allocation system for cloud computing services. The allocation system multiplexes resources by mapping virtual to physical resources based on user demand. The challenge is to reduce the number of active servers during low load without sacrificing performance. The system achieves overload avoidance and saves energy for systems with multi-resource constraints by satisfying new demands locally, adjusting the resource allocation of VMs sharing the same VMM; unused PMs can potentially be turned off to save energy. Future work can improve the prediction algorithm to increase the stability of resource allocation decisions, and we plan to explore AI or control-theoretic approaches to find near-optimal values automatically.
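As a closing illustration of the load-value scheduling described in Section IV, here is a minimal Python sketch; the class name, server ids, and weight values are assumptions for demonstration, not details taken from the paper:

# Assumed load weights per request type (text lightest, video heaviest).
LOAD_VALUE = {"text": 1, "image": 2, "video": 3}

class Scheduler:
    def __init__(self, server_ids):
        # Accumulated load value per server.
        self.load = {s: 0 for s in server_ids}

    def assign(self, file_type):
        # The least-loaded server takes the next job; heavier file types
        # add a larger load value, so a server that just took a video job
        # tends to receive lighter jobs on subsequent requests.
        server = min(self.load, key=self.load.get)
        self.load[server] += LOAD_VALUE[file_type]
        return server

sched = Scheduler(["pm1", "pm2"])
for req in ["video", "text", "image", "text"]:
    print(req, "->", sched.assign(req))

Alternating heavy and light jobs across servers in this way keeps the accumulated load values close together, which is the balancing behavior the section describes.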