MY PAKISTAN

Monday, September 26, 2011

ENRON Scandal Summary

An ENRON Scandal Summary

The ENRON Scandal is considered to be one of the most notorious in American history; this ENRON scandal summary of events is regarded by many historians and economists alike as an unofficial blueprint for a case study on White Collar Crime. White Collar Crime is defined as non-violent, financially motivated criminal activity, typically undertaken by participants with advanced education working in occupations considered prestigious. The following took place in the midst of the ENRON Scandal:

ENRON Scandal Summary: The Deregulation of ENRON

While the term regulation within a commercial and corporate setting typically refers to the government’s ability to regulate and authorize commercial activity and behavior of individual businesses, the ENRON executives applied for – and were subsequently granted – government deregulation. As a result of this deregulation, ENRON executives were permitted to maintain agency over the earnings reports that were released to investors and employees alike.

This agency allowed ENRON’s earnings reports to be extremely skewed – losses were not illustrated in their entirety, prompting more and more investment from investors wishing to partake in what seemed like a profitable company.

ENRON Scandal Summary: Misrepresentation

By misrepresenting earnings reports while continuing to enjoy the revenue provided by investors not privy to the true financial condition of ENRON, the executives of ENRON embezzled funds funneling in from investments while reporting fraudulent earnings to those investors; this not only prompted further investment from current stockholders, but also attracted new investors desiring to enjoy the apparent financial gains of the ENRON corporation.

ENRON Scandal Summary: Fraudulent Energy Crisis

In the year 2000, before the crimes listed in this ENRON Scandal Summary had been discovered, ENRON announced that there was a critical shortage of natural gas supply in California. Because ENRON was a then-widely respected corporation, the general populace was not wary about the validity of these statements.

However, upon retroactive review, many historians and economists suspect that the ENRON executives manufactured this crisis in anticipation of the discovery of the fraud they had committed – although the executives of ENRON were enjoying the funds rendered from investments, the corporation itself was approaching bankruptcy.

ENRON Scandal Summary: Embezzlement

In this ENRON Scandal Summary, the embezzlement undertaken by ENRON executives may be defined as criminal activity involving the unlawful and unethical appropriation of monies and funding by employees; typically, embezzled funds are ones intended for company use that are diverted to personal use. While the ENRON executives were pocketing the investment funds of unsuspecting investors, those funds were being stolen from the company, which contributed to the bankruptcy of the company.

ENRON Scandal Summary: Losses and Consequences

Due to the actions of the ENRON executives, the ENRON Company went bankrupt. The loss sustained by investors exceeded $70 billion. Furthermore, these actions cost both trustees and employees upwards of $2 billion; this total is considered to be the result of misappropriated investments, pension funds, stock options, and savings plans. As a result of the deregulation and the limited liability status of the ENRON Corporation, only a small amount of the money lost was ever returned.

Thursday, September 22, 2011

Business SWOT Analysis - Every Threat is an Opportunity

SWOT Analysis is an abbreviation for Strengths, Weaknesses, Opportunities and Threats. In applying the four SWOT factors, a proper understanding of the differences between them will bring about the maximum benefit.

The Four SWOT Factors are also known as the Internal and External Factors. The Internal factors consist of Strengths and Weaknesses, whereas the External Factors consist of Opportunities and Threats.

In normal practice, the four SWOT factors can be clearly categorised based on the findings. Below are some examples, followed by a simple sketch of how they might be organised:-

Strengths:

Strong financials
Vast customer base
Positive cash flow

Weaknesses:

Long delivery lead time
High inventory
Inconsistent quality

Opportunities:

Export incentives
Acceptance in Middle East countries
Good relations with the trade ministry

Threats:

Escalating costs
Product substitution
Computer virus attack by the year 2000
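
Purely as an illustration (not part of the original SWOT framework), the internal/external split can be recorded in a small data structure; the entries below simply mirror the example lists above.

```python
# Illustrative sketch: organising SWOT findings by internal vs. external factors.
# The entries are the hypothetical examples from this post, not real findings.
swot = {
    "internal": {
        "strengths": ["strong financials", "vast customer base", "positive cash flow"],
        "weaknesses": ["long delivery lead time", "high inventory", "inconsistent quality"],
    },
    "external": {
        "opportunities": ["export incentives", "acceptance in Middle East countries",
                          "good relations with the trade ministry"],
        "threats": ["escalating costs", "product substitution", "Y2K computer virus attack"],
    },
}

# Reviewing each threat as a candidate opportunity reflects the idea in this post
# that every threat may be flipped into an opportunity.
for threat in swot["external"]["threats"]:
    print(f"Threat to review as a possible opportunity: {threat}")
```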

As a case study: is every Threat an Opportunity?
Every Threat is indeed an Opportunity! Take, for example, the Y2K millennium bug: before the year 2000, there was a global fear that computer systems might fail on the first day of the year 2000. This Threat was clearly beyond anyone's control, and it was inevitable. At that time, all organizations had two choices: do something to overcome the computer threat, or do nothing and wait for the worst to happen.

Most of the big and medium-sized organizations in Malaysia that I contacted chose to pay a high cost for an "enhanced" computer system that could supposedly overcome the threat. How was this Y2K threat taken as an opportunity? In fact, the organizations that did something about their computer systems took the opportunity to upgrade or enhance them, improving their inventory systems while overcoming the possible Y2K failures.

As this example shows, you can actually flip a Threat the other way round and turn it into an Opportunity.

Saturday, September 17, 2011

7 Important Principles of Total Quality Management

TQM is a set of management practices applied throughout the organization, geared to ensure the organization consistently meets or exceeds customer requirements. TQM places a strong focus on process measurement and controls as a means of continuous improvement.

Before reading more about TQM, it might be helpful to quickly review the major forms of quality management in an organization.

Total Quality Management (TQM) is an approach that organizations use to improve their internal processes and increase customer satisfaction. When it is properly implemented, this style of management can lead to decreased costs related to corrective or preventative maintenance, better overall performance, and an increased number of happy and loyal customers.

However, TQM is not something that happens overnight. While there are a number of software solutions that will help organizations quickly start to implement a quality management system, there are some underlying philosophies that the company must integrate throughout every department of the company and at every level of management. Whatever other resources you use, you should adopt these seven important principles of Total Quality Management as a foundation for all your activities.

1. Quality can and must be managed

Many companies have wallowed in a repetitive cycle of chaos and customer complaints. They believe that their operations are simply too large to effectively manage the level of quality. The first step in the TQM process, then, is to realize there is a problem and that it can be controlled.

2. Processes, not people, are the problem

If your process is causing problems, it won’t matter how many times you hire new employees or how many training sessions you put them through. Correct the process and then train your people on these new procedures.

3. Don’t treat symptoms, look for the cure

If you just patch over the underlying problems in the process, you will never be able to fully reach your potential. If, for example, your shipping department is falling behind, you may find that it is because of holdups in manufacturing. Go for the source to correct the problem.

4. Every employee is responsible for quality

Everyone in the company, from the workers on the line to the upper management, must realize that they have an important part to play in ensuring high levels of quality in their products and services. Everyone has a customer to delight, and they must all step up and take responsibility for them.

5. Quality must be measurable

A quality management system is only effective when you can quantify the results. You need to see how the process is implemented and if it is having the desired effect. This will help you set your goals for the future and ensure that every department is working toward the same result.
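
To make this concrete, here is a minimal, hypothetical sketch (not taken from the article) of quantifying quality with two common metrics, defect rate and first-pass yield, using made-up inspection counts.

```python
# Hypothetical example of quantifying quality; all figures are assumed.
inspected_units = 1_250   # units inspected this month (assumed)
defective_units = 35      # units that failed inspection (assumed)
reworked_units = 20       # units that passed only after rework (assumed)

defect_rate = defective_units / inspected_units
# First-pass yield: share of units that passed with no rework or repair.
first_pass_yield = (inspected_units - defective_units - reworked_units) / inspected_units

print(f"Defect rate: {defect_rate:.2%}")          # 2.80%
print(f"First-pass yield: {first_pass_yield:.2%}")  # 95.60%
```

Tracking such figures month by month is one way to verify that process changes are having the desired effect.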

6. Quality improvements must be continuous

Total Quality Management is not something that can be done once and then forgotten. It’s not a management “phase” that will end after a problem has been corrected. Real improvements must occur frequently and continually in order to increase customer satisfaction and loyalty.

7. Quality is a long-term investment

Quality management is not a quick fix. You can purchase QMS software that will help you get things started, but you should understand that real results won’t occur immediately. TQM is a long-term investment, and it is designed to help you find long-term success.

Before you start looking for any kind of quality management software, it is important to make sure you are capable of implementing these fundamental principles throughout the company. This kind of management style can be a huge culture change in some companies, and sometimes the shift can come with some growing pains, but if you build on a foundation of quality principles, you will be equipped to make this change and start working toward real long-term success.

Monday, September 12, 2011

RISK ANALYSIS TECHNIQUES

The risk analysis process provides the foundation for the entire recovery planning effort.

There may be some terminology and definition differences related to risk analysis, risk assessment and business impact analysis. Although several definitions are possible and can overlap, for purposes of this article, please consider the following definitions:
•A risk analysis involves identifying the most probable threats to an organization and analyzing the related vulnerabilities of the organization to these threats.
•A risk assessment involves evaluating existing physical and environmental security and controls, and assessing their adequacy relative to the potential threats of the organization.
•A business impact analysis involves identifying the critical business functions within the organization and determining the impact of not performing the business function beyond the maximum acceptable outage. Types of criteria that can be used to evaluate the impact include: customer service, internal operations, legal/statutory and financial.
Most businesses depend heavily on technology and automated systems, and their disruption for even a few days could cause severe financial loss and threaten survival. The continued operations of an organization depend on management’s awareness of potential disasters, their ability to develop a plan to minimize disruptions of mission critical functions, and the capability to recover operations expediently and successfully. The risk analysis process provides the foundation for the entire recovery planning effort.
A primary objective of business recovery planning is to protect the organization in the event that all or part of its operations and/or computer services are rendered unusable. Each functional area of the organization should be analyzed to determine the potential risk and impact related to various disaster threats.

RISK ANALYSIS PROCESS

Regardless of the prevention techniques employed, possible threats that could arise inside or outside the organization need to be assessed. Although the exact nature of potential disasters or their resulting consequences are difficult to determine, it is beneficial to perform a comprehensive risk assessment of all threats that can realistically occur to the organization. Regardless of the type of threat, the goals of business recovery planning are to ensure the safety of customers, employees and other personnel during and following a disaster.
The relative probability of a disaster occurring should be determined. Items to consider in determining the probability of a specific disaster should include, but not be limited to: geographic location, topography of the area, proximity to major sources of power, bodies of water and airports, degree of accessibility to facilities within the organization, history of local utility companies in providing uninterrupted services, history of the area’s susceptibility to natural threats, proximity to major highways which transport hazardous waste and combustible products.
Potential exposures may be classified as natural, technical, or human threats. Examples include:
Natural Threats: internal flooding, external flooding, internal fire, external fire, seismic activity, high winds, snow and ice storms, volcanic eruption, tornado, hurricane, epidemic, tidal wave, typhoon.
Technical Threats: power failure/fluctuation, heating, ventilation or air conditioning failure, malfunction or failure of CPU, failure of system software, failure of application software, telecommunications failure, gas leaks, communications failure, nuclear fallout.
Human Threats: robbery, bomb threats, embezzlement, extortion, burglary, vandalism, terrorism, civil disorder, chemical spill, sabotage, explosion, war, biological contamination, radiation contamination, hazardous waste, vehicle crash, airport proximity, work stoppage (Internal/External), computer crime.
All locations and facilities should be included in the risk analysis. Rather than attempting to determine exact probabilities of each disaster, a general relational rating system of high, medium and low can be used initially to identify the probability of the threat occurring.
The risk analysis also should determine the impact of each type of potential threat on various functions or departments within the organization. The functions or departments will vary by type of organization.
The planning process should identify and measure the likelihood of all potential risks and the impact on the organization if that threat occurred. To do this, each department should be analyzed separately. Although the main computer system may be the single greatest risk, it is not the only important concern. Even in the most automated organizations, some departments may not be computerized or automated at all. In fully automated departments, important records remain outside the system, such as legal files, PC data, software stored on diskettes, or supporting documentation for data entry.
The impact can be rated as:
0 = No impact or interruption in operations.
1 = Noticeable impact; interruption in operations for up to 8 hours.
2 = Damage to equipment and/or facilities; interruption in operations for 8 to 48 hours.
3 = Major damage to equipment and/or facilities; interruption in operations for more than 48 hours; all main office and/or computer center functions must be relocated.
Certain assumptions may be necessary to uniformly apply ratings to each potential threat. Following are typical assumptions that can be used during the risk assessment process:
1. Although impact ratings could range between 1 and 3 for any facility given a specific set of circumstances, ratings applied should reflect anticipated, likely or expected impact on each area.
2. Each potential threat should be assumed to be “localized” to the facility being rated.
3. Although one potential threat could lead to another potential threat (e.g., a hurricane could spawn tornados), no domino effect should be assumed.
4. If the result of the threat would not warrant movement to an alternate site(s), the impact should be rated no higher than a “2.”
5. The risk assessment should be performed by facility.
To measure the potential risks, a weighted point rating system can be used. Each level of probability can be assigned points as follows:

Probability Points
High 10
Medium 5
Low 1

To obtain a weighted risk rating, probability points should be multiplied by the highest impact rating for each facility. For example, if the probability of hurricanes is high (10 points) and the impact rating to a facility is “3” (indicating that a move to alternate facilities would be required), then the weighted risk factor is 30 (10 x 3). Based on this rating method, threats that pose the greatest risk (e.g., 15 points and above) can be identified.
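
As a rough illustration of this rating method (using the scoring above, but with hypothetical threats and ratings for a single facility), the weighted risk factor can be computed as probability points multiplied by the highest impact rating:

```python
# Weighted risk rating = probability points x highest impact rating, per the method above.
# The threats and ratings below are hypothetical examples for one facility.
PROBABILITY_POINTS = {"high": 10, "medium": 5, "low": 1}

# (threat, probability level, highest impact rating 0-3)
facility_threats = [
    ("hurricane", "high", 3),
    ("power failure", "medium", 2),
    ("chemical spill", "low", 3),
]

THRESHOLD = 15  # threats scoring at or above this are treated as the greatest risks

for threat, probability, impact in facility_threats:
    weighted_risk = PROBABILITY_POINTS[probability] * impact
    flag = "greatest risk" if weighted_risk >= THRESHOLD else "lower priority"
    print(f"{threat}: {weighted_risk} points ({flag})")
# hurricane: 30 points (greatest risk)
# power failure: 10 points (lower priority)
# chemical spill: 3 points (lower priority)
```
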
Considerations in analyzing risk include:
1. Investigating the frequency of particular types of disasters (often versus seldom).

2. Determining the degree of predictability of the disaster.

3. Analyzing the speed of onset of the disaster (sudden versus gradual).

4. Determining the amount of forewarning associated with the disaster.

5. Estimating the duration of the disaster.

6. Considering the impact of a disaster based on two scenarios;
a. Vital records are destroyed
b. Vital records are not destroyed.

7. Identifying the consequences of a disaster, such as;
a. Personnel availability
b. Personal injuries
c. Loss of operating capability
d. Loss of assets
e. Facility damage.

8. Determining the existing and required redundancy levels throughout the organization to accommodate critical systems and functions, including;
a. Hardware
b. Information
c. Communication
d. Personnel
e. Services.

9. Estimating potential dollar loss;
a. Increased operating costs
b. Loss of business opportunities
c. Loss of financial management capability
d. Loss of assets
e. Negative media coverage
f. Loss of stockholder confidence
g. Loss of goodwill
h. Loss of income
i. Loss of competitive edge
j. Legal actions.

10. Estimating potential losses for each business function based on the financial and service impact, and the length of time the organization can operate without this business function. The impact of a disaster related to a business function depends on the type of outage that occurs and the time that elapses before normal operations can be resumed.

11. Determining the cost of contingency planning.

DISASTER PREVENTION

Because a goal of business recovery planning is to ensure the safety of personnel and assets during and following a disaster, a critical aspect of the risk analysis process is to identify the preparedness and preventive measures in place at any point in time. Once the potential areas of high exposure to the organization are identified, additional preventative measures can be considered for implementation.
Disaster prevention and preparedness begins at the top of an organization. The attitude of senior management toward security and prevention should permeate the entire organization. Therefore, management’s support of disaster planning can focus attention on good security and prevention techniques and better prepare the organization for the unwelcome and unwanted.
Disaster prevention techniques fall into two categories: procedural prevention and physical prevention.
Procedural prevention relates to activities performed on a day-to-day, month-to-month, or annual basis, relating to security and recovery. Procedural prevention begins with assigning responsibility for overall security of the organization to an individual with adequate competence and authority to meet the challenges. The objective of procedural prevention is to define activities necessary to prevent various types of disasters and ensure that these activities are performed regularly.
Physical prevention and preparedness for disaster begins when a site is constructed. It includes special requirements for building construction, as well as fire protection for various equipment components. Special considerations include: computer area, fire detection and extinguishing systems, record(s) protection, air conditioning, heating and ventilation, electrical supply and UPS systems, emergency procedures, vault storage area(s), and archival systems.

SECURITY AND CONTROL CONSIDERATIONS

Security and controls refer to all the measures adopted within an organization to safeguard assets, ensure the accuracy and reliability of records, and encourage operational efficiency and adherence to prescribed procedures. The system of internal controls also includes the measures adopted to safeguard the computer system.
The nature of internal controls is such that certain control procedures are necessary for a proper execution of other control procedures. This interdependence of control procedures may be significant because certain control objectives that appear to have been achieved may, in fact, not have been achieved because of weaknesses in other control procedures upon which they depend.
Concern over this interdependence of control procedures may be greater with a computerized system than with a manual system because computer operations often have a greater concentration of functions, and certain manual control procedures may depend on automated control procedures, even though that dependence is not readily apparent. Adequate computer internal controls are a vital aspect of an automated system.
Security is an increasing concern because computer systems are increasingly complex. Particular security concerns result from the proliferation of PCs, local area networking, and on-line systems that allow more access to the mainframe and departmental computers. Modern technology provides computer thieves with powerful new electronic safecracking tools.
Computer internal controls are especially important because computer processing can circumvent traditional security and control techniques. There are two types of computer control techniques: (1) general controls that affect all computer systems, and (2) application controls that are unique to specific applications.
Important areas of concern related to general computer internal controls include: organization controls, systems development and maintenance controls, documentation controls, access controls, data and procedural controls, physical security, password security systems, communications security.
Application controls are security techniques that are unique to a specific computer application system. Application controls are classified as: input controls, processing controls, output controls.

INSURANCE CONSIDERATIONS

Adequate insurance coverage is a key consideration when developing a business recovery plan and performing a risk analysis. Having a disaster plan and testing it regularly may not, in itself, lower insurance rates in all circumstances.
However, a good plan can reduce risks and address many concerns of the underwriter, in addition to affecting the cost or availability of the insurance.
Most insurance agencies specializing in business interruption coverage can provide the organization with an estimate of anticipated business interruption costs. Many organizations that have experienced a disaster indicate that their costs were significantly higher than expected in sustaining temporary operations during recovery.
Most business interruption coverages include lost revenues following a disaster. Extra expense coverage includes all additional expenses until normal operations can be resumed. However, coverages differ in the definition of resumption of services. As a part of the risk analysis, these coverages should be discussed in detail with the insurer to determine their adequacy.
To provide adequate proof of loss to an insurance company, the organization may need to contract with a public adjuster who may charge between three and ten percent of recovered assets for the adjustment fee. Asset records become extremely important as the adjustment process takes place.
Types of insurance coverages to be considered may include: computer hardware replacement, extra expense coverage, business interruption coverage, valuable paper and records coverage, errors and omissions coverage, fidelity coverage, media transportation coverage.
With estimates of the costs of these coverages, management can make reasonable decisions on the type and amount of insurance to carry.
These estimates also allow management to determine to what extent the organization should self-insure against certain losses.

RECORDS

Records can be classified in one of the three following categories: vital records, important records, and useful records.
Vital records are irreplaceable. Important records can be obtained or reproduced at considerable expense and only after considerable delay. Useful records would cause inconvenience if lost, but can be replaced without considerable expense.
Vital and important records should be duplicated and stored in an area protected from fire or its effects.
Records kept in the computer room should be minimized and should be stored in closed metal files or cabinets. Records stored outside the computer room should be in fire-resistant file cabinets with fire resistance of at least two hours.
Protection of records also depends on the particular threat that is present. An important consideration is the speed of onset and the amount of time available to act. This could range from gathering papers hastily and exiting quickly to an orderly securing of documents in a vault. Identifying records and information is most critical for ensuring the continuity of operations.
A systematic approach to records management is also an important part of the risk analysis process and business recovery planning. Additional benefits include: reduced storage costs, expedited service, federal and state statutory compliance.
Records should not be retained only as proof of financial transactions, but also to verify compliance with legal and statutory requirements. In addition, businesses must satisfy retention requirements as an organization and employer. These records are used for independent examination and verification of sound business practices.
Federal and state requirements for records retention must be analyzed. Each organization should have its legal counsel approve its own retention schedule. As well as retaining records, the organization should be aware of the specific record salvage procedures to follow for different types of media after a disaster.

CONCLUSION

The risk analysis process is an important aspect of business recovery planning. The probability of a disaster occurring in an organization is highly uncertain. Organizations should also develop written, comprehensive business recovery plans that address all the critical operations and functions of the business.
The plan should include documented and tested procedures, which, if followed, will ensure the ongoing availability of critical resources and continuity of operations.
A business recovery plan, however, is similar to liability insurance. It provides a certain level of comfort in knowing that if a major catastrophe occurs, it will not result in financial disaster for the organization.
Insurance, by itself, does not provide the means to ensure continuity of the organization’s operations, and may not compensate for the incalculable loss of business during the interruption or the business that never returns.

Friday, September 9, 2011

Moving toward zero inventory

With the widespread adoption of technologically advanced planning software and supply-chain communications, the zero-inventory door has opened wider than ever. So why are many distributors and manufacturers still finding it difficult to reach their inventory-management goals?

The concept of zero inventory has been around since the 1980s, and it's simple: Pare your inventory to a minimum and boost your profit margins by eliminating the need for warehousing and other associated expenses. Recent technological advances have made it easier (at least in theory) to execute a zero-inventory strategy.

At the fore is the development and widespread adoption of nimble, sophisticated software systems such as manufacturing resource planning (MRP II), enterprise resource planning (ERP), and advanced planning and scheduling (APS) systems, as well as dedicated supply-chain management software systems. These systems offer manufacturers greater functionality. For example, they can calculate resource-limited schedules, track orders of raw materials and monitor shop-floor production. They also can elevate both the visibility and transparency of internal order-fulfillment and manufacturing processes.
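
For instance, one routine calculation that such planning systems automate is the reorder point for a raw material. The sketch below is purely illustrative: it uses standard textbook formulas and made-up figures, not the behavior of any particular MRP II, ERP, or APS package.

```python
import math

# Illustrative reorder-point calculation with hypothetical figures; real planning
# systems perform this kind of computation across thousands of items.
average_daily_demand = 40   # units per day (assumed)
demand_std_dev = 8          # standard deviation of daily demand (assumed)
lead_time_days = 5          # supplier lead time in days (assumed)
service_level_z = 1.65      # z-score for roughly a 95% service level

# Safety stock buffers against demand variability during the lead time.
safety_stock = service_level_z * demand_std_dev * math.sqrt(lead_time_days)

# Reorder when on-hand inventory falls to expected lead-time demand plus safety stock.
reorder_point = average_daily_demand * lead_time_days + safety_stock

print(f"Safety stock: {safety_stock:.0f} units")   # about 30 units
print(f"Reorder point: {reorder_point:.0f} units")  # about 230 units
```

The point is not the arithmetic itself but that the software keeps these figures current and visible, so less buffer inventory is needed to cover uncertainty.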

A second key technological advance - the development of more-robust security systems - also enables organizations to provide a high level of transparency and insight into inventory-management processes. Manufacturers can grant suppliers access to their planning software, typically via the Internet, with confidence that only appropriate, designated information will be viewed.

As a result of these developments, when a customer places an order, suppliers can immediately see that the manufacturer needs additional raw materials. Or, if a customer cancels an order, suppliers can immediately stop their activity. This increased level of responsiveness and awareness creates better cash flow and increased profitability. It also fosters less adversarial, more partnership-oriented relationships between suppliers and manufacturers. As a result of these technological advances, manufacturers can be more responsive to both their suppliers and their customers - that is, of course, if they follow through with everything else it takes to effectively move toward zero inventory.

Zero inventory: Pie in the sky?
Even after embracing MRP, ERP, APS or other technologies, many companies haven't even scratched the surface when it comes to reaching zero inventory, inventory-management experts agree. That may be because the concept of zero inventory is an unreachable, theoretical ideal.

Steven A. Melnyk, professor of production and operations management in the Eli Broad Graduate School of Management at Michigan State University, prefers to speak of "lean" rather than "zero" inventory. This shift in nomenclature is just the beginning of a systematic approach that Melnyk says any business can use to reduce inventory in a productive, long-lasting way.

"Inventory is not the problem; it is a symptom," says Melnyk, a widely recognized authority and consultant on supply-chain management. "Inventory is the result of waste and variance. The way to reduce inventory is simple: You get rid of waste and you reduce variance."

Such an approach sounds basic, but many companies aren't willing to do what it takes.

"Instead, they jump around from program to program, trying to slash inventory here and there, like a dieter trying to lose weight," Melnyk says. "Just as a dieter can't realistically expect to stop eating for a day, jump on a scale, and magically hit his weight-loss goals, a company can't take a few superficial steps and significantly reduce inventory."

Indeed, experts agree that most companies striving for zero inventory must fundamentally change their manufacturing and handling processes. That requires going well beyond simply investing in the latest software to making key operational changes in order to make real progress.

Low-hanging fruit
"If you attack from the right perspective, there's lots of stuff you can do to reduce inventory," Melnyk says. "This is low-hanging fruit."

Melnyk offers the following tips to get started slashing inventory:

Revisit your customer base. Ensure you understand who your critical customers are and what they value. Then communicate that data throughout your organization so your employees can focus on creating value for key customers instead of trying to satisfy everyone - an unfocused approach that can lead to unnecessary inventory.

Examine your processes and establish a baseline. Before you can address problems, you have to recognize them. Consider the surprising experience of an engine manufacturer. An internal audit revealed that one component of the company's engines stayed in its system for 1,500 hours, traveled four miles, was touched by 106 people and passed through 122 steps - of which only 27 changed or added value to the component.

Identify your company's critical processes. Focus on critical processes to quickly identify which parts of your overall system most significantly affect inventory, then begin to make necessary changes. Critical processes include bottlenecks or constraints in your system, processes visible to your customers (your customers will judge you on the basis of these processes), unique core competencies or skills, the process with the highest amount of variance, or the process that requires the most resources.

One proven strategy for improving inventory management is a "kaizen event." In a kaizen event, a small team gathers for perhaps several days to focus exclusively on revamping a process. By changing the process, you can reduce the need for inventory. Once you improve one inventory-management process, you can move on to another.

Staying focused
Like most management efforts, controlling inventory requires ongoing commitment.

"Really successful companies understand the basics and master them," Melnyk says. "They keep doing the simple things right, all the time. They understand that you need inventory - you can't ever get away from it. But you have to manage it intelligently."

Experts recommend the following tips to distributors and manufacturers zeroing in on inventory management:

Develop a lean infrastructure. Establish a program, place someone in charge of it and be sure senior managers support it.

Recognize and reward successes. Publicize your successes and the people who helped achieve them to underscore their importance.

Share information. If some parts of your organization become more adept than others at minimizing inventories, they should spread best practices across the company.

Be patient. Give employees enough time to install, learn and become proficient with new inventory-management systems.

Zero inventory may be wishful thinking, but embracing new technology and processes to manage your inventory more efficiently could move you much closer to that ideal.
