Crack IELTS with Rob
Hydrogen Cars
A
Record gas prices are making road trips more expensive than ever. But what if, instead of gas, your car ran on the most abundant element in our universe? Many experts think hydrogen will replace petrol, diesel and natural gas as the main fuel for cars, buses and trucks over the next few decades. Already car manufacturers around the world have invested billions of dollars in research and development.
B
The advantages of hydrogen are enormous: no more smog-forming exhaust gases, no more carbon dioxide emissions that contribute to global warming, no more worries about diminishing oil supplies and rising prices. But some tricky questions need to be answered before mass-produced hydrogen cars start appearing on the streets. Where will the hydrogen come from? How will motorists fill up? How will cars store the fuel? And there's also the question of how best to tap the energy in the fuel for good, on-road performance.
C
Two kinds of engines can use hydrogen as a fuel: those with an internal combustion engine converted to use it, and those made up of a stack of fuel cells. Internal combustion engines have powered cars since they first began to replace horse-drawn carriages more than 100 years ago. These engines can be converted to run on a variety of fuels, including hydrogen. However, most car makers think that fuel cells powering an electric motor offer a better alternative. Unlike heavy batteries that need frequent recharging, fuel cells make electricity as they go. Recent developments in technology have greatly increased the amount of power that a stack of cells can provide. This has opened up the prospect of efficient, non-polluting electric cars.
D
Fuel cell technology sounds simple. The hydrogen fuel reacts with oxygen from the air to produce water and electricity, the reverse of the familiar electrolysis process that releases oxygen and hydrogen from water. In reality, of course, it's a bit more complicated. The big advantage of a fuel cell engine over an internal combustion engine running on hydrogen is its greater efficiency. The same amount of hydrogen will take a fuel cell car at least twice as far as one with a converted internal combustion engine.
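The chemistry sketched above can be written out explicitly. The passage itself gives no equations, but the standard textbook statement of the two overall reactions it describes is:

```latex
% Fuel cell: hydrogen combines with oxygen, releasing electrical energy
2\,\mathrm{H_2} + \mathrm{O_2} \;\longrightarrow\; 2\,\mathrm{H_2O} + \text{electrical energy}

% Electrolysis: the reverse process, driven by an electric current
2\,\mathrm{H_2O} \;\xrightarrow{\text{electric current}}\; 2\,\mathrm{H_2} + \mathrm{O_2}
```

The symmetry of the two equations is the point the passage is making: a fuel cell is, in effect, electrolysis run backwards.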
E
Hydrogen has many advantages as a fuel for vehicles, but a big disadvantage is that it is difficult to store, because at normal temperatures hydrogen is a gas. The obvious solutions are to compress the hydrogen strongly or to liquefy it. However, tanks designed to hold hydrogen at extremely high pressures, or at temperatures approaching absolute zero, are heavy and expensive. High cost and the large amount of energy needed to liquefy the fuel are therefore likely to be the main problems with refuelling with liquid hydrogen. Filling up with compressed hydrogen gas will probably prove more practical, even though it may reduce the distance between fills. Cars could store the hydrogen in high-pressure tanks similar to those used for compressed natural gas, or in specially treated carbon, which may also hold large amounts.
F
Although there's no risk that we'll ever run out of hydrogen, on Earth it exists naturally only in chemical compounds, not as hydrogen gas. A relatively simple technology, steam reforming, can produce hydrogen gas for cars at central plants or filling stations. Alternatively, fuel tanks could be filled with petrol or methanol, with the cars using onboard reformers to generate hydrogen for their fuel cells. This shows promise as a transitional measure while research proceeds on the problems of storing hydrogen. Water is the only potentially pollution-free source of hydrogen. Researchers are looking at new ways of producing hydrogen from water, such as using algae, bacteria or photovoltaic cells to absorb sunlight and split water into hydrogen and oxygen. But the technology most likely to be adopted on a large scale is electrolysis, which uses an electric current to split water into oxygen and hydrogen.
G
"Remember the Hindenburg" – that's a phrase often heard when hydrogen is discussed. This German passenger airship, kept aloft by hydrogen, crashed in flames as it came in to land at Lakehurst, New Jersey, USA, in May 1937. Thirty-five people died. Nowadays helium, which can't burn, is the gas of choice for lighter-than-air craft. Hydrogen is highly flammable, but recent research has indicated that the airship's fabric, not the hydrogen, was the culprit in the Hindenburg disaster. Properly handled, there's no reason to think hydrogen is any more dangerous as a fuel than petrol, the explosive liquid now carried safely in the tanks of untold millions of motor vehicles.
H
Recent technological advances, particularly in fuel cell design, have made hydrogen-powered cars a practical proposition, and carmakers expect to start mass-producing them within the next decade or so. Their power and acceleration should match those of today's conventionally-powered vehicles, but they may have to be refuelled more often. The best ways to produce, distribute and store the hydrogen still have to be sorted out. In the short term, fossil fuels may remain in demand as a hydrogen source. However, the idea that in the not too distant future most of us will be driving non-polluting cars fuelled by hydrogen from a clean, renewable source is no longer a flight of fantasy.
Cloning
Paragraph A
The ethics of human cloning has become a major issue over the past few years. Advocates on both sides of the debate have many reasons to clone or not to clone. A recent poll revealed a clear gender gap: twenty-six percent of men approved of the process, but only half as many women did, a difference many people find striking.
Paragraph B
So, what is cloning? It has been defined as "the production of genetically identical organisms via somatic cell nuclear transfer". You take an egg and remove its nucleus, which contains the DNA/genes. Then you take the DNA from an adult cell and insert it into the egg, either by fusing the adult cell with the enucleated egg or by a sophisticated nuclear transfer. You then stimulate the reconstructed egg electrically or chemically and try to make it start to divide and become an embryo. Finally, you implant the reconstructed egg into a surrogate mother, using the same process as with artificial insemination. What cloning does is copy the DNA/genes of a person and create a genetic duplicate. The person will not be a Xerox copy, however. The clone will grow up in a different environment from the donor, with different experiences and different opportunities. Genetics does not wholly define a person or their personality.
Paragraph C
In February 1997, when embryologist Ian Wilmut and his colleagues at Roslin Institute in Scotland were able to clone a lamb named Dolly, the world was introduced to a new possibility and will never be the same again. Before this, cloning was thought to be impossible, but now there is living proof that the technology and knowledge to clone animals exist. Questions began to arise within governments and scientific organizations and they began to respond. Are humans next? Is it possible to use this procedure to clone humans also? Would anyone actually try? What can we learn if we clone humans? How will this affect the world? These are only a few of the questions that have surfaced and need answering. A whole new concept in ethics was created when the birth of Dolly was announced.
Paragraph D
When the cells used for cloning are stem cells, we are talking about cells that are pluripotent. This means that they have the capacity to develop into any of the numerous differentiated cell types that make up the body. Early embryonic cells are pluripotent, and a limited number of stem cells are also found in adults, in bone marrow for instance. There is an important distinction to be made between therapeutic cloning and reproductive cloning. Reproductive cloning would be exactly like Dolly: it would involve the creation of a cloned embryo, which would then be implanted into a womb to develop to term, and the birth of a clone. On the other hand, therapeutic cloning involves the use of pluripotent cells to repair damaged tissue, such as that found after strokes, Parkinson's disease and spinal cord injuries.
Paragraph E
There is evidence for the effectiveness of therapeutic cloning from work in which stem cells were introduced into the brains of patients suffering from brain diseases; the added cells differentiate to form nerve cells, which can in turn lead to the recovery of lost function. In the US, fetal human cells have been used in a similar way, though recent reports indicate that the results so far are disappointing. Moreover, apart from the ethical problems associated with the use of fetal cells in this way, there are simply not enough cells available for this to be an effective treatment, since cells from three fetuses are needed to treat one patient.
Paragraph F
After Dolly, governments began to take control and make laws before anything drastic could happen. Several ethics committees were asked to decide whether scientists should be allowed to try to clone humans. In the United States, the Bioethics Advisory Commission recommended a five-year moratorium on cloning a child through somatic cell nuclear transfer. In the United Kingdom, the Human Fertilisation and Embryology Authority and the Human Genetics Advisory Commission have approved human cloning for therapeutic purposes, but not to clone children. Many organizations have also stated their opinions. Amid all this ethical deliberation, however, many people feel ignored by their governments, and they are speaking out about what they want done.
Paragraph G
Historically, we find that many a great medical breakthrough, now rightly seen as a blessing, was in its own time condemned by bio-conservative moralists. Such was the case with anesthesia during surgery and childbirth. People argued that it was unnatural and that it would weaken our moral fiber. Such was also the case with heart transplantations and with in vitro fertilization. It was said children created by IVF would be dehumanized and would suffer grave psychological harm. Today, of course, anesthesia is taken for granted; heart transplantation is seen as one of medicine's glories and the public approval rate of IVF is up from 15% in the early seventies to over 70% today.
What is Intelligence?
Intelligence can be defined in many different ways, since there is a great variety of individual differences. To many people, intelligence is the ability to reason and respond quickly yet accurately in all aspects of life: physical, emotional, and mental. Anyone can define intelligence, because it is an open-ended word with much room for interpretation, but some theories have gained more general acceptance than others.
Jean Piaget, a Swiss child psychologist, is well known for his theory of four stages of mental growth. The first stage, the sensorimotor stage, lasts from birth to age two; in it the child is concerned with gaining motor control and getting familiar with physical objects. Then, from ages two to seven, the child develops verbal skills; this is called the preoperational stage. In the concrete operational stage, from ages seven to twelve, the child begins to think logically about concrete objects and events. The final stage, the formal operational stage, ends at age fifteen, and it is in this stage that the child learns to reason abstractly and systematically. Piaget's theory provides a basis for understanding human intelligence by categorizing the major stages in child development and how they contribute to intelligence. Each of these invariant stages involves major cognitive skills that must be learned. Knowledge is not merely transmitted verbally but must be constructed and reconstructed by the learner. This development thus involves a few basic processes. The first fundamental process of intellectual growth is the ability to assimilate new events into pre-existing cognitive structures. The second is the capability to change those structures to accommodate the new information, and the last is to find equilibrium between the first two processes.
Howard Gardner, a psychologist at Harvard University, has formulated an even more intriguing theory. He arranged human intelligence into seven sections. First of all, Gardner characterizes people with logical-mathematical intelligence as those who think logically and are able to transfer abstract concepts to reality. These people enjoy solving puzzles and can be good inventors because they can visualize an invention even before making a prototype. They normally do better in school, for the most part because schools are designed for logical-mathematical thinkers. The linguistic type, as you might guess, is the natural-born writer and poet. They usually have excellent storytelling and spelling skills, and love to play with words. They tend to be bookworms and can easily learn more than one language. This type of intelligence seems to be located in Broca's area, since damage to that portion of the brain will cause a person to lose the ability to express themselves in clear grammatical sentences, though that person's understanding of vocabulary and syntax remains intact. Next, Gardner traced musical intelligence to certain areas of the brain. Impaired or autistic children who are unable to talk or interact with others have often exemplified exceptional musical talent. People with this type of intelligence show great aptitude for music, have excellent pitch, and a good sense of rhythm. They concentrate better with music playing in the background. A particular concerto by Mozart has been shown to produce positive changes in the brains of listeners. Thus, musical intelligence can be a form or means of learning. Another form of intelligence is interpersonal intelligence. This category is for people who are very well aware of their environment. They tend to be sensitive to people around them, have an excellent idea of how people behave, and are especially sociable.
Politicians, leaders, counselors, mediators, and clergy are excellent examples of people with this type of intelligence. Damage to the frontal lobe has been shown to harm such a person's personality and his or her ability to interact with others. Intrapersonal intelligence is almost the opposite of interpersonal intelligence. This kind of intelligence deals with how well you know yourself. People who possess a higher degree of this type of intelligence have high self-esteem, self-enhancement, and a strong sense of character. They are usually deep thinkers, self-teachers, skilled in music or art, and have an inner discipline. This sort of intelligence is hard to measure, since it is often difficult to recognize externally. Spatial intelligence is the ability to perceive and interpret images or pictures in three-dimensional space. The right hemisphere of the brain has been shown to control this form of intelligence, and scientists regard spatial intelligence as an independent part of the intellect. A person with this intellect enjoys making maps and charts. Lastly, Gardner assigns people who are athletically inclined to body-kinesthetic intelligence. They perform best in atmospheres of action, touching, physical contact, and working with their hands. Dancers and athletes are good examples of this form of intellect. Critics are a little skeptical that Gardner considers this a form of intellect, since it is only a physical component of intelligence, but nonetheless the brain does use both hemispheres to control movement.
Gardner believes that everyone has a mixture of all the categories, varying at different levels. We can see a couple of intelligence types that stand out in people we know, including ourselves. For example, a math major's logical-mathematical intelligence would be more predominant than his linguistic intelligence.
Phrenology – Interpreting the Mind
Phrenology is the doctrine that proposes that psychological traits of personality, intellect, temperament, and character are ascertainable from analysis of the protrusions and depressions in the skull. It was an idea created by Franz Joseph Gall in 1796. Gall referred to his new idea as cranioscopy; it was only after Gall's death that Johann Spurzheim, one of his students, labeled the idea phrenology. Gall's idea was spurred when he noticed that university classmates who could memorize great amounts of information with relative ease seemed to have prominent eyes and large foreheads. He speculated that other internal qualities, besides memory, might also be indicated by external features. Gall theorized that traits were located in particular regions of the brain. An enlargement or depression of the brain in a particular area meant a greater or less than normal quantity of the given trait. It was assumed that the external contour of the skull accurately reflected the external contour of the brain where traits were localized.
Carl Cooter, another advocate of phrenology, asserted that there were five major parts to phrenology theory. The first was simply that the brain was the organ of the mind. The second was that the brain was not a homogeneous unity, but a compilation of mental organs with specific functions. The third was that the organs were topographically localized. The fourth was that the relative size of any one of the organs could be taken as a measure of that organ's power over the person's behavior. The fifth and final part of Cooter's theory was that external craniological features could be used to diagnose the internal state of the mental faculties. All of these parts were based on observations Cooter made.
Sebastian Leibl, a student of Cooter's, theorized that there could be anywhere from 27 to 38 regions on the skull indicative of the organs of the brain, each of which stood for a different personality characteristic. Leibl further theorized that the different regions of the brain would grow or shrink with usage, just as muscles will grow larger when exercised. If a certain part of the brain grew from increased use, the skull covering that part of the brain would bulge out to make room for the expanded brain tissue. With these assumptions, the bumps on one's skull could be felt and the abilities and personality traits of a person could be assessed.
Spurzheim put a more metaphysical and philosophical spin on Gall's concept when he named it phrenology, meaning "science of the mind". To Spurzheim, phrenology was the science that could tell people what they are and why exactly they are who they are. Spurzheim wrote that the premise of phrenology was to use its methods to identify individuals who stood out at both poles of society: those with a propensity for making important social contributions and those with a greater than normal tendency for evil. The former were to be encouraged, nurtured, and developed in order to maximize their potential for good. The latter needed to be curbed and segregated to protect society from their predisposition to harm others.
Phrenology has met with a good deal of criticism since it was proposed, but over time it has also been given credit for certain things. John Fancher, a critic of phrenology, states that it was a curious mixture, combining some keen observations and insights with an inappropriate scientific procedure. Most criticism is aimed at the poor methods used by phrenologists and their departure from standard scientific procedure in their investigations.
Pierre Flourens was also appalled by the shoddy methods of phrenologists and was determined to study the functions of the brain strictly by experiment. The specific technique that Flourens used was ablation, the surgical removal of certain small parts of the brain. Flourens was a very skilled surgeon and used ablation to cleanly excise certain slices from the brain. He ablated precisely determined portions of bird, rabbit, and dog brains. Flourens then observed the behavior of his subjects. Since, for obvious ethical reasons, he was only able to use animals, he could not test uniquely human faculties. He never tested or measured any behavior until he had nursed his subjects back to health after their operations. Flourens's subjects did show a lowering of all functions, not just one function as Gall's theory would have predicted. Gall responded that Flourens had wiped out many organs all at once when he ablated part of the brain, which explained the general lowering of all functions in many of the subjects. Despite attacks from Flourens and others, phrenology held its appeal to scientists in Europe, who would bring the idea across to America, where it would flourish.
Detecting Deception
A According to lay theory, there exist three basic signs for spotting liars: speaking quickly with excessive fluctuations in pitch of voice, becoming fidgety and hesitant when questioned on detail, and failing to make eye contact. There is nothing too perplexing about that. Yet a good liar will be just as aware of these signs as the person they're lying to, and thus will ensure that eye contact especially is evident. Shifty eyes can indicate that someone is feeling emotional, perhaps from a lie, or perhaps just from nerves. Of course, this does not apply to instances where eye contact is non-existent, like during a telephone conversation. Psychologist Paul Eckman states that extensive use of details can make lies more believable, but details can also trip up the liar: if they change or contradict each other, you should suspect you're being had.
B There exists an intrinsic link between emotional connections and effective lying. The notion is that it is harder to lie to those whom we know well and care for. There are two reasons for this: firstly, those close to us are more aware of our mannerisms and behavioural patterns and can more readily detect our default lying techniques. The second reason is that people we don't know lack the emotional response that people we are close to have regarding lying. Robert Galatzer-Levy, MD, a psychoanalyst in private practice, reasons that, "The good liar doesn't feel bad or have a guilty conscience, so it's much more difficult to pick up on cues that they are lying." This is why it is apparently so easy for salesmen and politicians alike to lie so effortlessly.
C Recently a lot of politicians have been making outrageous claims about their ability to tell when a person is lying. Many lay people apparently believe that people can make a pretty good assessment of when a person is lying or not. Research illustrates, however, that nothing could be further from the truth.
D University of Maryland professor Patricia Wallace, an expert on deception detection, states, "Psychological research on deception shows that most of us are poor judges of truthfulness and this applies even to professionals such as police and customs inspectors whose jobs are supposed to include some expertise at lie detection." She then goes on to describe two of the many experiments in the psychological research literature which support this contention.
E The first study was conducted in 1987 and looked at whether police officers could be trained to detect deceptive eye witness statements. They watched videotaped statements of witnesses, some of whom were truthful and others who were not. They were told to pay close attention to non-verbal cues, such as body movements and posture, gestures, and facial expressions. They were also instructed to pay attention to the tempo and pitch of voices. In the end, however, the officers did only slightly better than chance at determining whether the witnesses were being truthful. And the more confident the officer was of his or her judgment, the more likely he or she was to be wrong.
F Airline customs inspectors, whose very job is to try and determine suspiciousness and lying, and lay people were used in another experiment. The inspectors and lay people in this experiment weren't given any specific training or instructions on what to look for. They were simply told to judge the truthfulness of mock inspection interviews viewed on videotape and determine whether the passenger was carrying contraband and lying about it. The "passengers" being interviewed were actually paid volunteers whose job it was to try and fool the inspectors. Neither lay people nor inspectors did much better than chance. When questioned about what types of signs they looked for to determine lying behavior, the inspectors and lay people relied largely on preconceived notions about liars in general: liars will give short answers, volunteer extra information, show poor eye contact and nervous movements and evade questions.
G What nearly all deception experiments to date have in common is that they use videotape instead of live people in their design. Some might argue that it is this very difference which politicians and others are trying to emphasize: that people can't tell when someone is lying on videotape, but can when the person is there, live, in front of them. Without research teasing out these subtle differences, however, it would be a leap of logic to simply assume that something is missing in a videotaped interview. This is a seemingly baseless assumption. A person interviewed on videotape is very much live to the people doing the interviewing; the tape is simply a recording of a live event. While there may be differences, we simply don't know that any indeed exist. Without that knowledge, anyone who claims to know is simply speaking from ignorance or prejudice.
H The conclusions from this research are obvious. Trained professionals and untrained lay people, in general, cannot tell when a person is lying. If you've known someone for years, your chances for detecting truthfulness are likely higher, but strangers trying to guess truthfulness in other strangers will do no better than chance in their accuracy.
The History of Papermaking in the United Kingdom
The first reference to a paper mill in the United Kingdom was in a book printed by Wynken de Worde in about 1495. This mill belonged to a certain John Tate and was near Hertford. Other early mills included one at Dartford, owned by Sir John Speilman, who was granted special privileges for the collection of rags by Queen Elizabeth, and one built in Buckinghamshire before the end of the sixteenth century. During the first half of the seventeenth century, mills were established near Edinburgh, at Cannock Chase in Staffordshire, and several in Buckinghamshire, Oxfordshire and Surrey.
The Bank of England has been issuing bank notes since 1694, with simple watermarks in them since at least 1697. Henri de Portal was awarded the contract in December 1724 for producing the Bank of England watermarked bank-note paper at Bere Mill in Hampshire. Portals have retained this contract ever since but production is no longer at Bere Mill. There were two major developments at about the middle of the eighteenth century in the paper industry in the UK.
The first was the introduction of the rag engine or Hollander, invented in Holland sometime before 1670, which replaced the stamping mills previously used for the disintegration of rags and the beating of pulp. The second was in the design and construction of the mould used for forming the sheet. Early moulds had straight wires sewn down onto the wooden foundation; this produced an irregular surface showing the characteristic "laid" marks and, when printed on, the ink did not give clear, sharp lines. Baskerville, a Birmingham printer, wanted a smoother paper.
James Whatman the Elder developed a woven wire fabric, thus leading to his production of the first woven paper in 1757. Increasing demands for more paper during the late eighteenth and early nineteenth centuries led to shortages of the rags needed to produce the paper. Part of the problem was that no satisfactory method of bleaching pulp had yet been devised, and so only white rags could be used to produce white paper. Chlorine bleaching was being used by the end of the eighteenth century, but excessive use produced papers that were of poor quality and deteriorated quickly.
By 1800 up to 24 million pounds of rags were being used annually to produce 10,000 tons of paper in England and Wales, and 1,000 tons in Scotland, the home market being supplemented by imports, mainly from the continent. Experiments in using other materials, such as sawdust, rye straw, cabbage stumps and spruce wood, had been conducted as early as 1765. Similarly, Matthias Koops carried out many experiments on straw and other materials at the Neckinger Mill, Bermondsey, around 1800, but it was not until the middle of the nineteenth century that pulp produced using straw or wood was utilised in the production of paper.
By 1800 there were 430 (564 in 1821) paper mills in England and Wales (mostly single-vat mills), under 50 (74 in 1823) in Scotland and 60 in Ireland, but all the production was by hand and output was low. The first attempt at a paper machine to mechanise the process was patented in 1799 by the Frenchman Nicholas Louis Robert, but it was not a success. However, the drawings were brought to England by John Gamble in 1801 and passed on to the brothers Henry and Sealy Fourdrinier, who financed the engineer Bryan Donkin to build the machine.
The first successful machine was installed at Frogmore, Hertfordshire, in 1803. The paper was pressed onto an endless wire cloth, transferred to a continuous felt blanket and then pressed again. Finally it was cut off the reel into sheets and loft-dried in the same way as handmade paper. In 1809 John Dickinson patented a machine that used a wire-cloth-covered cylinder revolving in a pulp suspension, the water being removed through the centre of the cylinder and the layer of pulp removed from the surface by a felt-covered roller (later replaced by a continuous felt passing round a roller).
This machine was the forerunner of the present-day cylinder mould or vat machine, used mainly for the production of boards. Both these machines produced paper as a wet sheet, which required drying after removal from the machine, but in 1821 T B Crompton patented a method of drying the paper continuously, using a woven fabric to hold the sheet against steam-heated drying cylinders. After it had been pressed, the paper was cut into sheets by a cutter fixed at the end of the last cylinder. By the middle of the nineteenth century the pattern for the mechanised production of paper had been set.
Subsequent developments concentrated on increasing the size and production of the machines. Similarly, developments in alternative pulps to rags, mainly wood and esparto grass, enabled production increases. However, despite the increase in paper production, by 1884 the number of paper mills had decreased in England and Wales to 250 and in Ireland to 14 (Scotland increased to 60), production being concentrated into fewer, larger units. Geographical changes also took place, as many of the early mills had been small and situated in rural areas.
The change was to larger mills in, or near, urban areas closer to suppliers of the raw materials (esparto mills were generally situated near a port as the raw material was brought in by ship) and the paper markets.
Wildfires
A
Wildfires are usually the product of human negligence. Humans start about 90% of wildfires and lightning causes the other 10%. Common causes of wildfires include arson, camping fires, discarded cigarettes, burning rubbish, and playing with fireworks or matches. Once begun, a wildfire can spread at a rate of up to 23 kph and, as it spreads over a landscape, it can take on a life of its own, doing different things to keep itself going and even creating other blazes by throwing cinders miles away.
Three components are necessary to start a fire: oxygen, fuel and heat. These three make up "the fire triangle" and fire fighters frequently talk about this when they are attempting to put out
blazes. The theory is that if the fire fighters can remove one of the triangle pillars, they can take control of and eventually put out the fire.
B
The speed at which wildfires spread depends on the fuel around them. Fuel is any living or dead material that will burn. Types of fuel include anything from trees, underbrush and grassland
to houses. The quantity of inflammable material around a fire is known as "the fuel load" and is determined by the amount of available fuel per unit area, usually tons per acre. How dry the fuel
is can also influence how fires behave. When the fuel is very dry, it burns much more quickly and forms fires that are much harder to control. Basic fuel characteristics affecting a fire are size and shape, arrangement and moisture, but with wildfires, where the fuel usually consists of the same type of material, the main factor influencing ignition time is the ratio of the fuel's total surface area to its volume. A twig's surface area is large relative to its volume, so it ignites rapidly. A tree's surface area, however, is small relative to its volume, so it requires more time to heat up before ignition.
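The surface-area-to-volume point can be checked with a little arithmetic. As a rough sketch (not from the passage), model a piece of wood as a cylinder: the lateral-area-to-volume ratio simplifies to 2/r, so the thinner the wood, the larger the ratio and the faster it reaches ignition temperature.

```python
import math

def surface_to_volume_ratio(radius_m: float, length_m: float) -> float:
    """Lateral-surface-area-to-volume ratio of a cylinder, a crude model
    of a twig or trunk: 2*pi*r*L / (pi*r^2*L) simplifies to 2/r."""
    lateral_area = 2 * math.pi * radius_m * length_m
    volume = math.pi * radius_m ** 2 * length_m
    return lateral_area / volume

# Illustrative sizes (assumed, not from the passage):
twig = surface_to_volume_ratio(0.005, 0.3)    # 5 mm twig  -> 400 per metre
trunk = surface_to_volume_ratio(0.30, 10.0)   # 30 cm trunk -> ~6.7 per metre
```

On these assumed sizes the twig's ratio is sixty times the trunk's, which is why a twig flares almost instantly while a trunk needs prolonged heating.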
C
Three weather variables that affect wildfires are temperature, wind and moisture. Temperature directly influences the sparking of wildfires, as heat is one of the three pillars of the
fire triangle. Sticks, trees and underbrush on the ground receive heat from the sun, which heats and dries these potential fuels. Higher temperatures allow fuels to ignite and burn more quickly
and add to the speed of a wildfire's spread. Consequently, wildfires tend to rage in the afternoon, when temperatures are at their highest.
The biggest influence on a wildfire is probably wind, and this is also the most unpredictable variable. Winds provide fires with extra oxygen and more dry fuel, and make wildfires spread more quickly. Fires also create winds of their own that can be up to ten times faster than the ambient wind. Winds can even spread embers that can generate additional fires, an event known as spotting. Winds also change the course of fires, and gusts can take flames into trees, starting a "crown fire".
Humidity and precipitation provide moisture that can slow fires down and reduce their intensity, as it is hard for fuel to ignite if it has high moisture levels. Higher levels of humidity mean fewer wildfires.
D
Topography can also hugely influence wildfire behaviour. In contrast to fuel and weather, topography hardly changes over time and can help or hamper the spread of a wildfire. The
principal topographical factor relating to wildfires is slope. As a rule, fires move uphill much faster than downhill, and the steeper the slope, the quicker fires move. This is because fires move in the same direction as the ambient wind, which generally blows uphill. Moreover, a fire can preheat fuel further uphill, as smoke and heat rise in that direction. On the other hand, when a fire reaches the top of a hill, it has to struggle to come back down.
E
Each year thousands of fire fighters risk their lives in their jobs. Elite fire fighters come in two categories: Hotshots and Smokejumpers. Operating in 20-man units, the key task of hotshots is to construct firebreaks around fires. A firebreak is a strip of land with all potential fuel removed. As their name suggests, smokejumpers jump out of aircraft to reach smaller fires situated in inaccessible regions. They attempt to contain these smaller fires before they turn into bigger ones. As well as constructing firebreaks and putting water and fire retardant on fires, fire fighters also use "backfires". Backfires are created by fire fighters and burn towards the main fire, incinerating any potential fuel in their path.
Fire fighters on the ground also receive extensive support from the air, with tankers dropping thousands of gallons of water and retardant. Dropped from planes and helicopters, retardant is a red chemical containing phosphate fertilizer, which slows and cools fires.
PROBLEMS WITH WATER
Nearly half the world's population will experience critical water shortages by 2025, according to the United Nations (UN). Wars over access to water are a rising possibility in this century and the main conflicts in Africa during the next 25 years could be over this most
precious of commodities, as countries fight for access to scarce resources. "Potential water wars are likely in areas where rivers and lakes are shared by more than one country," says Mark Evans, a UN worker. Evans predicts that "population growth and economic development will lead to nearly one in two people in Africa living in countries facing water scarcity or what is known as 'water stress' within 25 years." Water scarcity is defined as less than 1,000 cubic metres of water available per person per year, while water stress means less than 1,500
cubic metres of water is available per person per year. The report says that by 2025, 12 more African countries will join the 13 that already suffer from water stress or water scarcity. What
makes the water issue even more urgent is that demand for water will grow increasingly fast as larger areas are placed under crops and as economies develop. Evans adds that "the strong possibility that the world is experiencing climate change also adds to this urgency."
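The scarcity and stress thresholds quoted above amount to a simple classification rule. A minimal sketch (the function name and labels are ours, not the UN's):

```python
def water_status(m3_per_person_per_year: float) -> str:
    """Classify annual per-capita water availability using the thresholds
    quoted in the passage: under 1,000 m3 is scarcity, under 1,500 is stress."""
    if m3_per_person_per_year < 1000:
        return "water scarcity"
    if m3_per_person_per_year < 1500:
        return "water stress"
    return "no stress"

print(water_status(900))   # water scarcity
print(water_status(1200))  # water stress
```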
How to deal with water shortages is at the forefront of the battle between environmental activists on the one hand and governments and construction firms on the other. At the recent World Summit on Sustainable Development in Johannesburg, activists continued their campaign to halt dam construction, while many governments were outraged about a vocal minority thwarting their plans.
One of the UN's eight millennium development goals is to halve the proportion of people without "sustainable" access to safe drinking water by 2015. How to ensure this happens was
one of the big issues of the summit. Much of the text on this was already agreed, but one of the unresolved issues in the implementation plan was whether the goal on water would be extended to cover sanitation. The risks posed by water-borne diseases in the absence of sanitation facilities mean the two goals are closely related. Only US negotiators have been resisting the extension of the goals to include sanitation, due to the financial commitment this would entail. However, Evans
says the US is about to agree to this extension. This agreement could give the UN a chance to show that in one key area the world development agenda was advanced in Johannesburg.
But the UN has said Johannesburg was not about words alone, but implementation.
A number of projects and funding initiatives were unveiled at the summit. But implementation is always harder, as South Africa has experienced in its water programme. Graham Bennetts, a water official in the South African government, explains: "Since the 1994 elections the government has provided easy access to water to 7 million people, but extending this to a further 7 million and ensuring this progress is sustainable is one of South Africa's foremost implementation challenges." In South Africa, access to water is defined as 25 litres a person daily, within a distance of 200m from where they live. "Although South Africa's feat far exceeds the UN millennium goal on water supply, severe constraints on local government capacity make a more rapid expansion difficult," says Bennetts.
For some of those who have only recently been given ready access to water, their gains are under threat as the number of cut-offs by municipalities for non-payment rises, says
Liane Greef of the Environmental Monitoring Group. Greef is programme manager for Water
Justice in southern Africa. Those who have their water supply cut off also automatically forfeit their right to 6,000 free litres of water per family per month under South Africa's "water for all"
policy. In the face of continued increases in unemployment, payment for water and other utilities has the potential to quickly undo the government's high-profile achievements in delivery since 1994.
How to ensure a sufficient water supply, and how to manage it, will also increasingly become a political battleground in South Africa. Water Affairs director-general
Mike Muller says South Africa is near the end of its dam-building programme. However, there are big projects proposed elsewhere in southern Africa that could possibly be halted
by activists who could bring pressure on funding agencies such as the World Bank. Greef says her group will campaign during the summit against the proposed Skuifraam
Dam, which would be built near Franschhoek to supply additional water to Cape Town.
Rather than rely on new dam construction, the city should ensure that water is used wisely at all times, not only in dry spells, Greef says. Another
battleground for her group is over the privatisation of water supply, she says. Water supply, she insists, is best handled in the public interest by accountable government.
There is increasing hope that advances in technology will help deal with water shortages. Agricultural production takes up about 90% of water consumed for human purposes,
says the UN. To lower agricultural demand for water the Sri Lanka-based International Water Management Institute is researching ways of obtaining "more crop per drop" through the
development of drought resistant crops, as well as through better water management techniques.
One of the institute's research sites is the Limpopo River basin. According to the institute's director-general, Frank Rijsberman, rice growers in China use a quarter of the water per ton of produce that growers in South Africa use. The institute hopes the "green revolution" in crop productivity will soon be matched by a "blue revolution" in improving water utilisation in agriculture.
Amber – Frozen Moments in Time
Amber has a deep fascination both for ordinary people as a gem and for the scientist, for whom it provides a glimpse into the past, a window into history. The majority of amber which has been discovered and studied originates in the Cenozoic Era. The earlier Mesozoic, which consists of the Cretaceous, Jurassic and Triassic periods, has also produced amber, but in smaller and scarcer quantities due to its much older age. One of the problems associated with Mesozoic amber is the level of degradation it undergoes. Fossil resin can be badly affected by oxidation, erosion, excessive heat and pressure.
Amber begins as resin exuded from trees millions of years ago, possibly to protect themselves against fungal or insect attack or as a by-product of some form of growth process. Most known deposits of amber come from various tree species which are now extinct. Baltic amber was produced by a giant tree called Pinites succinifer, a tree sharing many characteristics of the currently living genus Pseudolarix. The true reason for this resin discharge from various species of trees is not fully understood. Scientists have theorised that it could also be a form of desiccation control, an aid to attract insect pollinators or even a reaction to storm or weather damage.
The resin from the trees needs to go through a number of stages in order to become amber. The first stage involves the slow cross-chain linking of the molecular structure within the resin, a kind of polymerisation. This makes the resin hard but easily broken, compared to its original state of being soft and plastic. Once it is in this state, the resin can be called copal. Following the polymerisation, the next stage is the evaporation of volatile oils inside the copal. The oils, called turpenes, slowly permeate out of the amber. This second stage may take millions of years before the process turns the copal into something approaching the structure of amber. It is speculated that either one or both of these stages in the formation of amber must take place in an anaerobic environment, or that the resin may have to sustain a period of immersion in sea water. Amber which is exposed to air for several years undergoes oxidation, which causes a distinct darkening and crusting of the gem's surface, producing over many years tiny splinters and shards.
The chemical structure of amber is not consistent, not even within a single fragment, let alone a single deposit. Consequently numerous chemical formulas have been attributed to it. The reason for this wide variation is simply that amber is not a true mineral; it is an organic plastic with variable mixtures. Some aspects of amber are fairly consistent, though. On the Mohs scale of hardness it lies between 2 and 2.5. It has a refractive index of 1.54 and a melting point between 150 and 180°C. The colour range is extremely varied, ranging from near white (osseous) through all shades of yellow, brown and red. There are even examples of blue and green amber. Blue-green amber is thought to have two possible causes: either the permeation of the raw resin by mineral deposits present in the soil into which it fell, or the settling of volcanic dust and ash onto the resin when it was first secreted.
One of the most exciting and interesting aspects of amber is the inclusions, both flora and fauna, which are found within it. The most frequent inclusions to be found in amber, particularly Baltic, are examples of the order Diptera, or true flies. These tiny flies would have lived on the fungus growing on the rotting vegetation of the amber forest, of which no doubt there was enough to support an enormous population. Occasionally a small lizard will be found trapped and encased in amber, particularly from the Dominican Republic deposits. The American Natural History Museum has a famous example of a 25,000,000-year-old gecko. Another unusual find is the remains of a frog discovered in a piece mined in the Dominican Republic. At first it was thought to be just one animal with some tissue preserved. The distinct shape of the frog can be seen, but most of the flesh has deteriorated and several bones are exposed, some broken. Under closer scrutiny a count of the bones suggests that this particular frog must have had at least six legs. Palaeontologists speculate that a bird that ate the frogs may have had a feeding site, perhaps on a branch directly above an accumulating pool of resin; hence the numerous bones present. The complete frog was perhaps an unlucky drop by the bird when it alighted on the branch. Mammalian hair can also infrequently be found trapped, as tufts or single strands. When found in the Baltic area, hair in amber is often attributed to sloths that lived within the ancient forest. Resin in the process of hardening usually develops a skin whilst the interior is still soft. Occasionally amber of this nature has impressions stamped on its surface and thus becomes a trace fossil. For instance, the clear impression of a cat's paw has been found on a piece of amber from the Baltic area.
The faking of inclusions in amber has been a major cottage industry since the earliest times. Gum is melted gently and suitable inclusions placed into the matrix; this is frequently some kind of colourful insect. Artificial colour is always a dead giveaway of a bogus amber fossil.
The Death of the Wild Salmon
The last few decades have seen an enormous increase in the number of salmon farms in countries bordering the north Atlantic. This proliferation is most marked in two countries famous for their salmon: Norway and Scotland. Salmon farming there has expanded into a major industry, and as the number of farmed salmon has exploded, the population of their wild relatives has crashed. Rivers that once had such great summer runs of fish that they attracted thousands of anglers from all over the world are now in perilous decline. Recently Truls Halstensen, a Norwegian fishing writer, wrote that his local river, the Driva, where he used to be able to catch five or more fish of over 20 pounds in a morning, is now almost totally fishless.
The link between the increase in farmed salmon and the decline in the wild population is hotly disputed. Environmentalists claim that the increase in farming has affected wild salmon and the sea environment in various ways. Firstly, it is claimed that the mass escapes of farmed fish present a grave threat to the gene pool of wild salmon stocks. Escapees breed less successfully than wild salmon, but the young of the escapees, known as parr, breed aggressively and can reproduce four times more successfully than their wild counterparts. The parr bred by escapees also become sexually active far sooner than wild salmon and fertilise more eggs. The farmed salmon are therefore genetically changing the wild salmon stocks. Jeremy Read, director of the Atlantic Salmon Trust, points out that "the major problem of interbreeding is that it reduces a population's fitness and ability to survive. Native salmon have evolved to meet the circumstances and habitat of sea and river life. Farm fish are under very different selection pressures in an artificial habitat. This could leave the world with a north Atlantic salmon which could not survive in its native conditions."
The huge increase in sea lice in coastal waters is another growing problem. Sea lice thrive in salmon farm conditions, and their increase in numbers means that wild salmon and other fish entering waters where there are farms can fall prey to the lice.
Another difficulty, and one of the most worrying side effects of the salmon farming industry, is its dependence on vast quantities of tiny sea creatures that are turned into food pellets to feed the stock. Lars Tennson of the Norwegian Fishermen's Association complains that "the huge quantities of small fish caught by industrial trawlers is helping to strip fishing grounds of the small fish and of other species, including wild salmon, that depend on the feed fish."
Fish farms are also being blamed for increasing levels of nitrogen in the ocean. Over the last 2 years there have been 26 effluent leaks involving nitrogen-rich fish droppings. Naturally occurring algae feed on this and grow into large toxic blooms that kill most other marine life.
Even legal chemicals used in farms, such as those used to combat the sea lice, can unbalance micro-organism populations, affecting the other organisms that feed on them. Kevin Dunnon, director of FEO Scotland, has warned that "using inappropriate chemicals and medicines has the potential to do real environmental damage... We will prosecute if we find enough evidence."
In spite of the evidence that farming is harming fish populations, fish farmers are adamant that they are not responsible. Nick Jury insists that "algal blooms and the decline in fish stocks have occurred naturally for decades because of a wide range of unrelated and more complex factors." Jury feels that fish farms are being made a scapegoat for lack of government control of fishing.
Overfishing is a major problem that affects salmon stocks, and not just salmon. A combination of high trawler catches, net fishing at estuaries, sport fishing and poaching has led to diminishing stocks of wild salmon. The UK government likes to think that the problem has been recognised and its roots attacked by the laws it has passed.
Fishermen, at sea and in estuaries, have been set quotas and many salmon rivers have been closed to fishermen. Poachers are more difficult to control, but their effect is not as marked as that of the fishermen. Angus Kilrie of the NASF feels that the efforts have been wasted: "Legislation has merely scratched the surface. Not enough money has been forthcoming to compensate fishermen and the allowances have been set too high."
The fate of the wild Atlantic salmon is anybody's guess. Farmers and governments seem unworried; environmentalists fear the worst. Wild Scottish salmon stocks have actually gone up this year, which the UK's fisheries department heralds as a result of its policies. Paul Knight, director of the Salmon and Trout Fishing Association, has stated that he is "delighted with the upturn in numbers this year." He adds the warning, though, that "there are still significant threats to salmon stocks and it is important not to take our eye off the ball." Statistics, though, can always be interpreted in different ways. All issues concerning the health of the wild north Atlantic salmon need to continue to be addressed in order to protect the viability of future runs.
The Can – A Brief History Lesson
A
The story of the can begins in 1795 when Nicholas Appert, a Parisian, had an idea: why not pack food in bottles like wine? Fifteen years later, after researching and testing his idea, he published his theory: if food is sufficiently heated and sealed in an airtight container, it will not spoil. In 1810 Peter Durand, an Englishman, wanted to surpass Appert's invention, so he elected to try tin instead of glass. Like glass, tin could be sealed airtight but tin was not breakable and was much easier to handle. Durand himself did no canning, but two other Englishmen, Bryan Donkin and John Hall, used Durand's patent. After experimenting for more than a year, they set up a commercial canning factory and by 1813 they were sending tins of food to British army and navy authorities for trial.
B
Perhaps the greatest encouragement to the newborn canning industry was the explosion in the number of new colonial territories. As people and goods were being transported to all parts of the world, the can industry itself was growing in new territories. Englishmen who emigrated to America brought their newfound knowledge with them. One of these was Thomas Kensett, who might fairly be called the father of the can manufacturing industry in the United States. In 1812 he set up a small plant on the New York waterfront to can the first hermetically sealed products in the United States.
C
Just before the Civil War, a technical advance by canners enabled them to speed up production. Adding calcium chloride to the water in which cans were cooked raised the water temperature, speeding up the canning process. Also for almost 100 years, tin cans were made by artisans by hand. It was a laborious process, requiring considerable skill and muscle. As the industrial revolution took hold in the United States, the demand for cans increased and machines began to replace the artisans' handiwork. A good artisan could make only 10 cans a day. True production progress in can making began in 1922, when American engineers perfected the body making process. New methods soon increased production of cans to as many as 250 a minute.
D
As early as 1940, can manufacturers began to explore the possibility of adapting cans to package carbonated soft drinks. The can had to be strengthened to accommodate higher internal can pressures created by carbonation (especially during warm summer months), which meant increasing the thickness of the metal used in the can ends. Another concern for the new beverage can was its shelf life. Even small amounts of dissolved tin or iron from the can could impair the drinking quality of drinks. Also the food acids, including carbonic, citric and phosphoric, in soft drinks presented a risk for the rapid corrosion of exposed tin and iron in the can. At this point the can was upgraded by improving the organic coatings used to line the inside. The can manufacturers then embarked on a program of material and cost savings by reducing both the amount of steel and the amount of coating used in can making. These efforts were in part inspired by a new competitor – aluminium.
E
Beverage cans made from aluminum were first introduced in 1965. This was an exciting innovation for the packaging industry because the aluminum can was made with only two pieces – a body and an end. This made production easier. Some of the reasons for the aluminum can's acceptance were its ductility, its support of carbonation pressure, its lighter weight and the fact that aluminum does not rust. Both steel and aluminum cans used an easy-open end tab but the aluminum tab was much easier to make. Perhaps the most critical element in the aluminum can's market success was its recycling value. Aluminum can recycling excelled economically in the competition with steel because of the efficiencies aluminum cans realized in making new cans from recycled materials compared with 100 percent virgin aluminum. Steel did not realize similar economies in the recycling process.
F
Prior to 1970, can makers, customers and consumers alike were unaware of the impact that the mining and manufacturing of steel or aluminum had on the environment. The concept of natural resource preservation was not an issue of great importance and the low growth of population during these early years further de-emphasized concerns for resource depletion. Both industries, however, came to realize the importance of reducing their impact on the environment in the late 1960s and early 1970s as a new environmentally conscious generation emerged. Manufacturers began to recognize the economics of recycling, namely lower manufacturing costs from using less material and less energy. By the 1980s and 1990s, recycling had become a way of life. Aluminum can recycling has become a billion-dollar business and one of the world's most successful environmental enterprises. Over the years, the aluminum can has come to be known as America's most recyclable package, with over 60 percent of cans being recycled annually.
G
Advances in can manufacturing technology have also brought us lighter aluminum cans. In 1972, one pound of aluminum yielded only 21.75 cans. Today, by using less material to make each can, one pound of aluminum makes approximately 32 cans – a 47 percent improvement. Just the lightening of can ends makes a huge difference. When you multiply the savings by the 100 billion cans made each year, the weight savings are phenomenal – over 200 million pounds of aluminum!
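The 47 percent figure follows directly from the two yields quoted above:

```python
cans_1972 = 21.75  # cans per pound of aluminum in 1972 (from the passage)
cans_today = 32    # cans per pound today (from the passage)

improvement = (cans_today - cans_1972) / cans_1972
print(f"{improvement:.0%}")  # 47%
```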
COD IN TROUBLE
A
In 1992, the devastating collapse of the cod stocks off the east coast of Newfoundland forced the Canadian government to take drastic measures and close the fishery. Over 40,000 people lost their jobs, communities are still struggling to recover and the marine ecosystem is still in a state of collapse. The disintegration of this vital fishery sounded a warning bell to governments around the world, who were shocked that a relatively sophisticated, scientifically based fisheries management program, not unlike their own, could have gone so wrong. The Canadian government had ignored warnings that its fleets were employing destructive fishing practices and had refused to significantly reduce quotas, citing the loss of jobs as too great a concern.
B
In the 1950s Canadian and US east coast waters provided an annual 100,000 tons in cod catches, rising to 800,000 by 1970. This overfishing led to a catch of only 300,000 tons by 1975. Canada and the US reacted by passing legislation to extend their national jurisdictions over marine living resources out to 200 nautical miles, and catches naturally declined to 139,000 tons in 1980. However, the Canadian fishing industry took over and restarted the overfishing, and catches rose again until, from 1985, it was the Canadians who were landing more than 250,000 tons of northern cod annually. This exploitation ravaged the stocks, and by 1990 the catch was so low (29,000 tons) that in 1992 (12,500 tons) Canada had to ban all fishing in east coast waters. In a fishery that had for over a century yielded quarter-million-ton catches, there remained a biomass of fewer than 1,700 tons, and the fisheries department also predicted that, even with immediate recovery, stocks would need at least 15 years before they were healthy enough to withstand previous levels of fishing.
C
The devastating fishing came from massive investment poured into constructing huge "draggers". Draggers haul enormous nets held open by a combination of huge steel plates and heavy chains and rollers that plough the ocean bottom. They drag up anything in the way, inflicting immense damage, destroying critical habitat, and contributing to the destabilization of the northern cod ecosystem. The draggers targeted huge aggregations of cod while they were spawning, a time when the fish population is highly vulnerable to capture. Excessive trawling on spawning stocks became highly disruptive to the spawning process and ecosystem. In addition, the trawling activity resulted in a physical dispersion of eggs leading to a higher fertilization failure. Physical and chemical damage to larvae caused by the trawling action also reduced their chances of survival. These draggers are now banned forever from Canadian waters.
D
Canadian media often cite excessive fishing by overseas fleets, primarily driven by the capitalist ethic, as the primary cause of the fishing out of the North Atlantic cod stocks. Many nations took fish off the coast of Newfoundland and all used deep-sea trawlers, and many often blatantly exceeded established catch quotas and treaty agreements. There can be little doubt that non-North American fishing was a contributing factor in the cod stock collapse, and that the capitalist dynamics that were at work in Canada were all too similar for the foreign vessels and companies. But all of the blame cannot be put there, no matter how easy it is to do, as it does not account for the management of the resources.
E
Who was to blame? As the exploitation of the Newfoundland fishery was so predominantly guided by the government, we can argue that the fishery was not private or common property, as the fishers lacked the management rights normally associated with property. The state had appropriated the resource and made all of the management decisions. Fishermen were told who could fish, what they could fish and, essentially, what to do with the fish once it was caught. In this regard, then, when a resource such as the Newfoundland fishery collapses, it is more a tragedy of government negligence than a tragedy of the commons.
F
Following the 1992 ban on fishing northern cod and most other species, an estimated 30,000 people who had already lost their jobs when the moratorium took effect were joined by an additional 12,000 fishermen and plant workers. With more than forty thousand people out of jobs, Newfoundland became an economic disaster area as processing plants shut down and vessels, from the smallest dory to the monster draggers, were made idle or sold overseas at bargain prices. Several hundred Newfoundland communities were devastated.
G
Europeans need only look across the Atlantic to see what could be in store for their cod fishery. In Canada the authorities were too busy making plans, setting expansive goals and then allocating fish, and lots of it, instead of making sound business plans to match fishing with the limited availability of the resource. Cod populations in European waters are now so depleted that scientists have recently warned that "all fisheries in this area that target cod should be closed." The Canadian calamity demonstrates that we now have the technological capability to find and annihilate every commercial fish stock in any ocean, and to do irreparable damage to entire ecosystems in the process. In Canada's case, a two-billion-dollar recovery bill may be only a part of the total long-term costs. The costs to individuals and desperate communities now deprived of meaningful and sustainable employment are staggering.
The Rise of Antibiotic-Resistant Infections
A
When penicillin became widely available during the Second World War, it was a medical
miracle, rapidly vanquishing the biggest wartime killer – infected wounds. Discovered initially by a
French medical student, Ernest Duchesne, in 1896, and then rediscovered by Scottish physician
Alexander Fleming in 1928, penicillin crippled many types of disease-causing bacteria. But
just four years after drug companies began mass-producing penicillin in 1943, microbes began
appearing that could resist it.
B
"There was complacency in the 1980s. The perception was that we had licked the bacterial
infection problem. Drug companies weren't working on new agents. They were concentrating
on other areas, such as viral infections," says Michael Blum, M.D., medical officer in the Food
and Drug Administration's division of anti-infective drug products. "In the meantime, resistance
increased to a number of commonly used antibiotics, possibly related to overuse. In the 1990s,
we've come to a point for certain infections that we don't have agents available."
C
The increased prevalence of antibiotic resistance is an outcome of evolution. Any population
of organisms, bacteria included, naturally includes variants with unusual traits – in this case, the
ability to withstand an antibiotic's attack on a microbe. When a person takes an antibiotic, the
drug kills the defenceless bacteria, leaving behind – or "selecting," in biological terms – those that
can resist it. These renegade bacteria then multiply, increasing their numbers a million fold in a
day, becoming the predominant microorganism. "Whenever antibiotics are used, there is selective
pressure for resistance to occur. More and more organisms develop resistance to more and more
drugs," says Joe Cranston, Ph.D., director of the department of drug policy and standards at the
American Medical Association in Chicago.
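The "million fold" daily increase quoted above implies roughly twenty population doublings per day, since 2^20 is just over a million. A minimal sketch of this selection arithmetic (the starting population counts are illustrative assumptions, not figures from the passage):

```python
import math

# A million-fold increase in one day implies ~20 doublings, since 2**20 ≈ 1.05 million.
doublings_per_day = math.log2(1_000_000)            # ≈ 19.93
doubling_time_min = 24 * 60 / doublings_per_day     # ≈ 72 minutes per doubling

# Selection in miniature: the antibiotic kills the defenceless bacteria,
# and the rare resistant variants left behind multiply to dominate.
susceptible, resistant = 10**9, 10    # illustrative pre-treatment counts
susceptible = 0                       # the drug kills the susceptible cells
for _ in range(20):                   # ~one day of unchecked growth
    resistant *= 2                    # one doubling every ~72 minutes

print(round(doubling_time_min))       # 72
print(resistant)                      # 10 * 2**20 = 10485760
```

Twenty doublings take ten renegade cells to over ten million, which is why resistance, once selected for, becomes the predominant strain so quickly.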
D
Disease-causing microbes thwart antibiotics by interfering with their mechanism of action.
For example, penicillin kills bacteria by attaching to their cell walls, then destroying a key part of
the wall. The wall falls apart, and the bacterium dies. Resistant microbes, however, either alter
their cell walls so penicillin can't bind or produce enzymes that dismantle the antibiotic.
Antibiotic resistance results from gene action. Bacteria acquire genes conferring resistance
in different ways. Bacterial DNA may mutate spontaneously. Drug-resistant tuberculosis arises this
way. Another, called transformation, occurs when one bacterium takes up DNA from another
bacterium. Most frightening, however, is resistance acquired from a small circle of DNA called a
plasmid, which can flit from one type of bacterium to another. A single plasmid can provide a slew
of different resistances.
E
Many of us have come to take antibiotics for granted. A child develops a sore throat or
an ear infection, and soon a bottle of pink medicine makes everything better. Linda McCaig, a
scientist at the CDC, comments that "many consumers have an expectation that when they're ill,
antibiotics are the answer. Most of the time the illness is viral, and antibiotics are not the answer.
This large burden of antibiotics is certainly selecting resistant bacteria." McCaig and Peter Killeen,
a fellow scientist at the CDC, tracked antibiotic use in treating common illnesses. The report cites
nearly 6 million antibiotic prescriptions for sinusitis alone in 1985, and nearly 13 million in 1992.
Ironically, advances in modern medicine have made more people predisposed to infection. McCaig
notes that "there are a number of immunocompromised patients who wouldn't have survived in
earlier times. Radical procedures produce patients who are in difficult shape in the hospital, and
there is routine use of antibiotics to prevent infection in these patients."
F
There are measures we can take to slow the inevitable resistance. Barbara Murray, M.D.,
of the University of Texas Medical School at Houston writes that "simple improvements in public
health measures can go a long way towards preventing infection". Such approaches include more
frequent hand washing by health-care workers, quick identification and isolation of patients with
drug-resistant infections, and improving sewage systems and water purity.
Drug manufacturers are also once again becoming interested in developing new antibiotics.
The FDA is doing all it can to speed development and availability of new antibiotic drugs. "We can't
identify new agents – that's the job of the pharmaceutical industry. But once they have identified a
promising new drug, what we can do is to meet with the company very early and help design the
development plan and clinical trials," says Blum. "In addition, drugs in development can be used for
patients with multi-drug-resistant infections on an emergency compassionate-use basis, for people
with AIDS or cancer, for example," Blum adds.
Appropriate prescribing is important. This means that physicians use narrow-spectrum
antibiotics – those that target only a few bacterial types – whenever possible, so that resistance
can be restricted. "There has been a shift to using costlier, broader-spectrum agents. This
prescribing trend heightens the resistance problem because more diverse bacteria are being
exposed to antibiotics," writes Killeen. So, while awaiting the next wonder drug, we must
appreciate, and use correctly, the ones that we already have.
Another problem with antibiotic use is that patients often stop taking the drug too soon,
because symptoms improve. However, this merely encourages resistant microbes to proliferate.
The infection returns a few weeks later, and this time a different drug must be used to treat it. The
conclusion: resistance can be slowed if patients take medications correctly.
Hydroelectric Power
Hydroelectric power is America's leading renewable energy resource. Of all the
renewable power sources, it's the most reliable, efficient, and economical. Water is needed to
run a hydroelectric generating unit. It's held in a reservoir or lake behind a dam, and the force of
the water being released from the reservoir through the dam spins the blades of a turbine. The
turbine is connected to the generator that produces electricity. After passing through the turbine,
the water re-enters the river on the downstream side of the dam.
Hydroelectric plants convert the kinetic energy of falling water into electricity. The
energy in moving water originates in the sun, and consequently is continually being renewed.
The energy in sunlight evaporates water from the seas and deposits it on land as rain. Land
elevation differences result in rainfall runoff, and permit some of the original solar energy to be
harnessed as hydroelectric power. Hydroelectric power is at present the earth's chief renewable
electricity source, generating 6% of global energy and about 15% of worldwide electricity.
Hydroelectric power in Canada is plentiful and provides 60% of the country's electrical
requirements. Although it is usually regarded as an inexpensive and clean source of electricity,
most big hydroelectric projects being planned today are facing a great deal of hostility from
environmental groups and local people.
The earliest recorded use of water power was a clock, constructed around 250 BC. Since
then, people have used falling water to supply power for grain and saw mills, as well as a host
of other uses. The earliest use of flowing water to generate electricity was a waterwheel on the
Fox River in Wisconsin in 1882.
The first hydroelectric power plants were much more dependable and efficient than the
plants of the day that were fired by fossil fuels. This led to a rise in the number of small- to
medium-sized hydroelectric generating plants located wherever there was an adequate supply of falling
water and a need for electricity. As demand for electricity soared in the middle years of the 20th
century, and the effectiveness of coal and oil power plants improved, small hydro plants became
less popular. The majority of new hydroelectric developments were focused on giant mega-
projects.
Hydroelectric plants harness energy by passing flowing water through a turbine. The
rotation of the water turbine is transferred to a generator, which produces electricity. The quantity
of electricity that can be produced at a hydroelectric plant depends upon two variables:
(1) the vertical distance that the water falls, called the "head", and (2) the flow rate,
calculated as volume over time. The amount of electricity that is produced is thus proportional to
the product of the head and the flow rate.
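This proportionality is the standard hydro power relation, P = ρgQH, scaled by an efficiency factor. A short sketch of the calculation; the 90% efficiency figure is an illustrative assumption, not from the passage:

```python
# Power output is proportional to both head (H) and flow rate (Q):
#   P = rho * g * Q * H * efficiency
RHO = 1000.0   # density of water, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def hydro_power_mw(head_m: float, flow_m3_s: float, efficiency: float = 0.9) -> float:
    """Electrical output in megawatts for a given head and flow rate."""
    return RHO * G * flow_m3_s * head_m * efficiency / 1e6

# Doubling either variable doubles the output, as the proportionality implies:
print(hydro_power_mw(head_m=100, flow_m3_s=50))   # ≈ 44.1 MW
print(hydro_power_mw(head_m=200, flow_m3_s=50))   # ≈ 88.3 MW
```

The same output can therefore be reached with a tall dam and a modest flow, or a low weir and a very large flow, which is exactly the split between "high head" and "low head" plants described below.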
Hydroelectric power stations can therefore normally be separated into two kinds. The most
widespread are "high head" plants, which usually employ a dam to store water at an increased
height. They also hold water back at times of rain and discharge it during dry periods. This results in
reliable and consistent electricity generation, capable of meeting demand since flow can be
rapidly altered. At times of excess electrical system capacity, usually available at night, these
plants can also pump water from one reservoir to another at a greater height. When there is
peak electrical demand, the higher reservoir releases water through the turbines to the lower
reservoir.
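The energy banked by such pumping is simply the water's gravitational potential energy, E = ρgVh. A sketch with illustrative (assumed) figures for volume, height and round-trip efficiency:

```python
# Energy stored by pumping water into the higher reservoir: E = rho * g * V * h.
RHO = 1000.0   # density of water, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def stored_energy_mwh(volume_m3: float, height_m: float) -> float:
    """Gravitational potential energy of the pumped water, in megawatt-hours."""
    return RHO * G * volume_m3 * height_m / 3.6e9   # joules -> MWh

# Pumping a million cubic metres of water up 300 m at night banks ~817.5 MWh;
# with an assumed 75% round-trip efficiency, ~613 MWh comes back at peak demand.
banked = stored_energy_mwh(1_000_000, 300)   # ≈ 817.5 MWh
recovered = banked * 0.75                    # ≈ 613 MWh
```

The losses are the price of turning cheap night-time surplus into dispatchable peak-time power, which is why the scheme only pays when off-peak electricity is much cheaper than peak.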
"Low head" hydroelectric plants usually exploit heads of just a few meters or less. These
types of power station use a weir or low dam to channel water, or no dam at all and merely use
the river flow. Unfortunately their electricity production capacity fluctuates with seasonal water
flow in a river.
Until recently, it was almost universally believed that hydroelectric power was an
environmentally safe and clean means of generating electricity. Hydroelectric stations do not
release any of the usual atmospheric pollutants emitted by power plants fuelled by fossil fuels,
so they do not add to global warming or acid rain. Nevertheless, recent studies of the larger
reservoirs formed behind dams have suggested that decomposing, flooded vegetation could give
off greenhouse gases equal to those from other electricity sources.
The clearest result of hydroelectric dams is the flooding of huge areas of land. The
reservoirs built can be exceptionally big and they have often flooded the lands of indigenous
peoples and destroyed their way of life. Numerous rare ecosystems are also endangered by
hydroelectric power plant development.
Damming rivers may also change the quantity and quality of water in the rivers below
the dams, as well as stopping fish migrating upstream to spawn. In addition, silt, usually taken
downstream to the lower parts of a river, is caught by a dam and so the river downstream loses
the silt that should fertilize the river's flood plains during high water periods.
Theoretical global hydroelectric power is approximately four times the amount
exploited today. Most of the remaining hydro potential in the
world is found in developing countries in Africa and Asia. Exploiting this resource would
involve an investment of billions of dollars, since hydroelectric plants normally have very high
building costs. Small-scale, low head hydro facilities will probably increase in number in the
future as low head turbine research, and the standardization of turbine production, reduce the
costs of low head hydroelectric power production. New systems of control and improvements
in turbines could lead in the future to more electricity being generated from present facilities. In addition,
in the 1950s and '60s, when oil and coal prices were very low, many smaller hydroelectric
plants were closed down. Future increases in fuel prices could lead to these plants being
reopened.
The Canals of De Lesseps
Two of the most spectacular engineering feats of the last 200 years were of the same type, though thousands of miles apart: the construction of the Suez and Panama canals. The Panama Canal joins the Pacific and Atlantic oceans, while the Suez joins the Red Sea (Indian Ocean) and the Mediterranean (Atlantic Ocean). Both offer ships huge savings in time and mileage. For example, a nine-hour trip through the Panama Canal saves almost 8,000 miles on a voyage from New York to San Francisco. Amazingly enough, the same French engineer, Ferdinand de Lesseps, played a major part in the construction of both.
The history of the Panama Canal goes back to the 16th century, with a survey of the isthmus and a working plan for a canal ordered by the Spanish government in 1529. In the 18th century various companies tried and failed to construct the canal, but it wasn't until 1880 that a French company, organized by Ferdinand Marie de Lesseps, proposed a sea-level canal through Panama. He believed that if a sea-level canal had worked when constructing the Suez Canal, it must work for the Panama Canal. In the end, the Panama Canal was constructed in two stages.
The first, between 1881 and 1888, was carried out by the French company headed by de Lesseps; the second, by the Americans, eventually completed the canal's construction between 1904 and 1914. The French company ran out of money, and an attempt to raise funds by applying to the French government to issue lottery bonds was unsuccessful, even though the same device had rescued the Suez Canal project when it was at the point of failure through lack of money. The French problems stemmed from their inability to devise a viable solution to the difference in tidal ranges between the Pacific and Atlantic Oceans.
There is a tidal range of 20 feet at the Pacific whereas the Atlantic range is only about 1 foot. The Americans proposed that a tidal lock should be constructed at Panama which solved the problem and reduced excavation by an enormous amount. When construction was finally finished, the canal ran through various locks, four dams and ran the lengths of two naturally occurring lakes, the 32 mile Gatun Lake and the 5 mile Miraflores Lake.
When the US took on finishing the canal, they and the new state of Panama signed the Hay-Bunau-Varilla Treaty, by which the United States guaranteed the independence of Panama and secured a perpetual lease on a 10-mile strip for the canal. Panama was to be compensated by an initial payment of $10 million and an annuity of $250,000, beginning in 1913. On December 31st, 1999, the United States transferred the 51-mile Panama Canal, the surrounding Panama Canal Area and the income back to the Panamanian government.
The idea of a canal linking the Mediterranean to the Red Sea also dates back to ancient times. Unlike the modern canal, earlier ones linked the Red Sea to the Nile, thereby forcing ships to sail along the river on their journey from Europe to India. The ancient canal consisted of two parts: the first linking the Gulf of Suez to the Great Bitter Lake, and the second connecting the Lake to one of the branches of the Nile Delta that runs into the Mediterranean. The canal remained in good condition during the Ptolemaic era, but fell into disrepair afterwards and was completely abandoned upon the discovery of the trade route around Africa.
It was Napoleon's engineers who, around 1800 AD, revived the idea of a shorter trade route to India via a Suez canal. However, calculations carried out by the French engineers showed a difference in level of 10 metres between the two seas. If the canal were constructed under such circumstances, a large land area would be flooded. The digging of the canal was later undertaken by Ferdinand de Lesseps, who showed the previous French sea-height estimates to be incorrect and that locks or dams were not needed.
In 1859, Egyptian workers started working on the construction of the canal in conditions described by historians as slave labor, and the project was completed around 1867. The canal is 163 km long, and has a width of a minimum of 60 metres. The canal cuts through three lakes, Lake Manzala in the north, Lake Timsah in the middle and the Great Bitter Lake further south. The largest, the Great Bitter Lake makes up almost 30 km of the total length. The canal is extensively used by modern ships as it is the fastest crossing from the Atlantic Ocean to the Indian Ocean.
In July 1956 the Egyptian president Nasser announced the nationalization of the canal in response to the British, French and American refusal of a loan aimed at building the Aswan High Dam on the Nile. The revenue from the canal, he argued, would help finance the High Dam project. Since then the Egyptians have controlled the canal. Today, approximately 50 ships cross the canal daily, and the cities and beaches along the Great Bitter Lake and the canal serve as a summer resort for tourists.
The Ozone Hole
Paragraph A
Ozone is a bluish gas that is harmful to breathe. Nearly 90% of the Earth's ozone is in the stratosphere and is referred to as the ozone layer. Ozone absorbs a band of ultraviolet radiation called UVB that is particularly harmful to living organisms. Stratospheric ozone is constantly being created and destroyed through natural cycles. Various ozone depleting substances however, accelerate the destruction processes, resulting in lower than normal ozone levels.
Reductions in ozone levels will lead to higher levels of UVB reaching the Earth's surface. The sun's output of UVB does not change; rather, less ozone means less protection, and hence more UVB reaches the Earth. Studies have shown that in the Antarctic, the amount of UVB measured at the surface can double during the annual ozone hole. Laboratory and epidemiological studies demonstrate that UVB causes non-melanoma skin cancer and plays a major role in malignant melanoma development. In addition, UVB has been linked to cataracts.
Paragraph B
Dramatic loss of ozone in the lower stratosphere over Antarctica was first noticed in the 1970s by a research group from the British Antarctic Survey (BAS) who were monitoring the atmosphere above Antarctica from a research station. Folklore has it that when the first measurements were taken in 1975, the drop in ozone levels in the stratosphere was so dramatic that at first the scientists thought their instruments were faulty.
Replacement instruments were built and flown out, and it wasn't until they confirmed the earlier measurements, several months later, that the ozone depletion observed was accepted as genuine. Another story goes that the BAS satellite data didn't show the dramatic loss of ozone because the software processing the raw ozone data from the satellite was programmed to treat very low values of ozone as bad readings. Later analysis of the raw data, carried out when the results from the British Antarctic Survey team were published, confirmed their findings and showed that the loss was rapid and large-scale, extending over most of the Antarctic continent.
Paragraph C
Ozone occurs naturally in the atmosphere. The earth's atmosphere is composed of several layers. We live in the Troposphere, from ground level up to about 10 km high, where most of the weather occurs, such as rain, snow and clouds. Above that is the Stratosphere, an important region in which effects such as the Ozone Hole and Global Warming originate. The layer next to space is the Exosphere, and then going inwards there are the Thermosphere and the Mesosphere. Supersonic passenger jets fly just above the troposphere, whereas subsonic commercial airliners usually fly well within it. The narrow region between the troposphere and the stratosphere is called the Tropopause.
Ozone forms a layer in the stratosphere, thinnest in the tropics and denser towards the poles. The amount of ozone above a point on the earth's surface is measured in Dobson units (DU) – typically ~260 DU near the tropics and higher elsewhere, though there are large seasonal fluctuations. It is created when ultraviolet radiation in the form of sunlight strikes the stratosphere, splitting oxygen molecules to atomic oxygen. The atomic oxygen quickly combines with further oxygen molecules to form ozone.
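A Dobson unit corresponds to a hundredth of a millimetre of pure ozone compressed to standard temperature and pressure, so the column amounts quoted above translate into a remarkably thin layer. A small illustrative sketch of that conversion:

```python
# 1 Dobson unit (DU) = 0.01 mm of pure ozone at standard temperature and pressure.
def column_thickness_mm(dobson_units: float) -> float:
    """Thickness of the ozone column, in millimetres, if compressed to STP."""
    return dobson_units * 0.01

# The ~260 DU typical of the tropics is a layer only about 2.6 mm thick;
# a polar reading of, say, 400 DU would still be just 4 mm.
print(column_thickness_mm(260))   # → about 2.6 mm
print(column_thickness_mm(400))   # → about 4 mm
```

Seen this way, the entire UVB shield amounts to a few millimetres of gas, which is why percentage losses of ozone matter so much.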
Paragraph D
The Ozone Hole often gets confused in the popular press and by the general public with the problem of global warming. Whilst there is a connection, because ozone contributes to the greenhouse effect, the Ozone Hole is a separate issue. Over Antarctica (and recently over the Arctic), stratospheric ozone has been depleted over the last 15 years at certain times of the year. This is mainly due to the release of man-made chemicals containing chlorine, such as CFCs (ChloroFluoroCarbons), but also compounds containing bromine, other related halogen compounds and nitrogen oxides. CFCs are common industrial products, used in refrigeration systems, air conditioners, aerosols, solvents and in the production of some types of packaging. Nitrogen oxides are a by-product of combustion processes, for example aircraft emissions.
Paragraph E
The ozone depletion process begins when CFCs and other ozone depleting substances are emitted into the atmosphere, where winds efficiently mix and evenly distribute the gases. CFCs are extremely stable, and they do not dissolve in rain. After a period of several years they reach the stratosphere, where ultraviolet radiation breaks them apart, releasing chlorine atoms; halons and methyl bromide similarly release bromine atoms. It is these chlorine and bromine atoms that actually destroy ozone. It is estimated that one chlorine atom can destroy over 100,000 ozone molecules before it is removed from the stratosphere.
Paragraph F
The first global agreement to restrict CFCs came with the signing of the Montreal Protocol in 1987, which ultimately aimed to reduce them by half by the year 2000. Two revisions of this agreement have been made in the light of advances in scientific understanding, the latest being in 1992.
Agreement has been reached on the control of industrial production of many halocarbons until the year 2030. The main CFCs will not be produced by any of the signatories after the end of 1995, except for a limited amount for essential uses, such as for medical sprays.
The countries of the European Community have adopted even stricter measures. Recognizing their responsibility to the global environment they have agreed to halt production of the main CFCs from the beginning of 1995. It was anticipated that these limitations would lead to a recovery of the ozone layer within 50 years of 2000. The World Meteorological Organisation estimated 2045 but recent investigations suggest the problem is perhaps on a much larger scale than anticipated.
OLIVE OIL PRODUCTION
Olive oil has been one of the staples of the Mediterranean diet for thousands of years and its popularity is growing rapidly in other parts of the world. It is one of the most versatile oils for cooking and it enhances the taste of many foods. Olive oil is the only type of vegetable/fruit oil that can be obtained from just pressing. Most other types of popular oils (corn, canola, etc.) must be processed in other ways to obtain the oil. Another important bonus is that olive oil has proven health benefits. Three basic grades of olive oil are most often available to the consumer:
Extra Virgin, Virgin and Olive Oil. In addition to the basic grades, olive oil differs from one country or region to another because of the types of olives that are grown, the harvesting methods, the time of the harvest, and the pressing techniques. These factors all contribute to the individual characteristics of the olive oil.
Olive trees must be properly cared for in order to achieve good economic yields. Care includes regular irrigation, pruning, fertilising, and killing pests. Olives will survive on very poor sites with shallow soils but will grow very slowly and yield poorly. Deep soils tend to produce excessively vigorous trees, also with lower yields. The ideal site for olive oil production is a clay loam soil with good internal and surface drainage. Irrigation is necessary to produce heavy crops and avoid alternate bearing. The site must be free of hard winter frosts because wood damage will occur at temperatures below 15°F and a lengthy spell of freezing weather can ruin any chances for a decent crop. The growing season also must be warm enough so fruits mature before even light fall frosts (usually by early November) because of potential damage to the fruit and oil quality. Fortunately olive trees are very hardy in hot summer temperatures and they are drought tolerant.
The best olive oils hold a certificate by an independent organization that authenticates the stone ground and cold pressed extraction process. In this process, olives are first harvested by hand at the proper stage of ripeness and maturity. Experts feel that hand harvesting, as opposed to mechanical harvesting, eliminates bruising of the fruit which causes tartness and oil acidity. The olives harvested are transferred daily to the mill. This is very important because this daily transfer minimizes the time spent between picking and pressing. Some extra virgin olive oil producers are known to transfer the olives by multi-ton trucks over long distances that expose the fragile fruit to crushing weight and the hot sun, which causes the olives to begin oxidizing and thus becoming acidic. In addition to the time lapse between harvesting and pressing, olive oil must be obtained using mechanical processes only to be considered virgin or extra virgin. If heat and/or chemical processes are used to produce the olive oil or if the time lapse is too long, it cannot be called virgin or extra virgin.
Once at the mill, the leaves are sucked away with air fans and the olives are washed with circulating potable water to remove all impurities. The first step of extraction is mashing the olives to create a paste. The oil, comprising 20% to 30% of the olive, is nestled in pockets within the fruit's cells. The olives are crushed in a mill with two granite millstones rolling within a metal basin. Crushing and mixing the olives releases the oil from the cells of the olive without heating the paste. A side shutter on the mill's basin allows the mixed olive paste to be discharged and applied to round mats. The mats are stacked and placed under the head of a hydraulic press frame that applies downward pressure and extracts the oil. The first pressing yields the superior quality oil, and the second and third pressings produce inferior quality oil.
Some single estate producers collect the oil that results from just the initial crushing, while many other producers use an additional step to extract more oil. The olive pulp is placed on mats constructed with hemp or polypropylene that are stacked and then pressed to squeeze the pulp. Oil and water filter through the mats to a collection tank below. The water and oil are then separated in a centrifuge. Regardless of the method used for the first pressing, the temperature of the oil during production is extremely important in order to maintain the distinct characteristics of the oil. If the temperature of the oil climbs above 86ºF, it will be damaged and cannot be considered cold-pressed.
The first pressing oil contains the most "polyphenols", substances that have been found to be powerful antioxidants capable of protecting against certain types of disease. The polyphenols are not the only substances in the olive with health-promoting effects, but they are quite unique when compared to other commonly used culinary oils such as sunflower and soy. It is these polyphenols that really set extra virgin olive oils apart from any other oil and any other form of olive oil. The more refined the olive oil is, the smaller the quantity of polyphenols.
The result of the producers' efforts is a cold pressed extra virgin olive oil with high quality standards and organoleptic characteristics, which give the oil its health-protective and aromatic properties.
Cleaning up the Thames
The River Thames, which was biologically "dead" as recently as the 1960s, is now the cleanest metropolitan river in the world, according to the Thames Water Company. The company says that thanks to major investment in better sewage treatment in London and the Thames Valley, the river that flows through the United Kingdom capital and the Thames Estuary into the North Sea is cleaner now than it has been for 130 years. The Fisheries Department, which is responsible for monitoring fish levels in the River Thames, has reported that the river has again become home to 115 species of fish, including sea bass, flounder, salmon, smelt, and shad. Recently, a porpoise was spotted cavorting in the river near central London.
But things were not always so rosy. In the 1950s, sewer outflows and industrial effluent had killed the river. It was starved of oxygen and could no longer support aquatic life. Until the early 1970s, if you fell into the Thames you would have had to be rushed to hospital to have your stomach pumped. A clean-up operation began in the 1960s. Several Parliamentary Committees and Royal Commissions were set up, and, over time, legislation was introduced that put the onus on polluters, the effluent-producing premises and businesses, to dispose of waste responsibly. In 1964 the Greater London Council (GLC) began work on greatly enlarged sewage works, which were completed in 1974.
The Thames clean up is not over, though. It is still going on, and it involves many disparate arms of government and a wide range of non-government stakeholder groups, all representing a necessary aspect of the task. In London's case, the urban and non-urban London boroughs that flank the river's course each has its own reasons for keeping "their" river nice. And if their own reasons do not hold out a sufficiently attractive carrot, the government also wields a compelling stick. The 2000 Local Government Act requires each local borough to "prepare a community strategy for promoting or improving the economic, social and environmental well-being of their area." And if your area includes a stretch of river, that means a sustainable river development strategy.
Further legislation aimed at improving and sustaining the river's viability has been proposed. There is now legislation that protects the River Thames, either specifically or as part of a general environmental clause, in the Local Government Act, the London Acts, and the law that created the post of the mayor of London. And these are only the tip of an iceberg that includes industrial, public health and environmental protection regulations. The result is a wide range of bodies officially charged, in one way or another, with maintaining the Thames as a public amenity. For example, Transport for London – the agency responsible for transport in the capital – plays a role in regulating river use and river users. They now are responsible for controlling the effluents and rubbish coming from craft using the Thames. This is done by officers on official vessels regularly inspecting craft and doing spot checks. Another example is how Thames Water (TW) has now been charged to reduce the amount of litter that finds its way into the tidal river and its tributaries. TW 's environment and quality manager, Dr. Peter Spillett, said: "This project will build on our investment which has dramatically improved the water quality of the river.
"London should not be spoiled by litter which belongs in the bin not the river." Thousands of tons of rubbish end up in the river each year, from badly stored waste, people throwing litter off boats, and rubbish in the street being blown or washed into the river. Once litter hits the water it becomes too heavy to be blown away again, and the rivers therefore act as a sink in the system. While the Port of London already collects up to 3,000 tons of solid waste from the tideway every year, Thames Water now plans to introduce a new device to capture more rubbish floating down the river. It consists of a huge cage that sits in the flow of water and gathers the passing rubbish. Moored just offshore in front of the Royal Naval College at Greenwich, south-east London, the device is expected to capture up to 20 tons of floating litter each year. If washed out to sea, this rubbish can kill marine mammals, fish and birds. This machine, known as the Rubbish Muncher, is hoped to be the first of many, as TW is now looking for sponsors to pay for more cages elsewhere along the Thames.
Monitoring of the cleanliness of the River Thames in the past was the responsibility of a welter of agencies – British Waterways, Port of London Authority, the Environment Agency, the Health and Safety Commission, Thames Water – as well as academic departments and national and local environment groups. If something was not right, someone was bound to cry foul and hold somebody to account, whether it was the local authority, an individual polluter or any of the many public and private sector bodies that bore a share of the responsibility for maintaining the River Thames as a public amenity. Although they will all still have their part to play, there is now a central department in the Environment Agency, which has the remit of monitoring the Thames. This centralisation of accountability will, it is hoped, lead to more efficient control and enforcement.
nicotine
If it weren't for nicotine, people wouldn't smoke tobacco. Why? Because of the more than
4000 chemicals in tobacco smoke, nicotine is the primary one that acts on the brain, altering
people's moods, appetites and alertness in ways they find pleasant and beneficial. Unfortunately,
as is widely known, nicotine has a dark side: it is highly addictive. Once smokers become hooked
on it, they must get their fix of it regularly, sometimes several dozen times a day. Cigarette smoke
contains 43 known carcinogens, which means that long-term smoking can amount to a death
sentence. In the US alone, 420,000 Americans die every year from tobacco-related illnesses.
Breaking nicotine addiction is not easy. Each year, nearly 35 million people make a
concerted effort to quit smoking. Sadly, less than 7 percent succeed in abstaining for more than a
year; most start smoking again within days. So what is nicotine and how does it insinuate itself into
the smoker's brain and very being?
The nicotine found in tobacco is a potent drug and smokers, and even some scientists,
say it offers certain benefits. One is enhanced performance. One study found that non-smokers
given doses of nicotine typed about 5 percent faster than they did without it. To greater or lesser
degrees, users also say nicotine helps them to maintain concentration, reduce anxiety, relieve
pain, and even dampen their appetites (thus helping in weight control). Unfortunately, nicotine can
also produce deleterious effects beyond addiction. At high doses, as are achieved from tobacco
products, it can cause high blood pressure, distress in the respiratory and gastrointestinal systems
and an increase in susceptibility to seizures and hypothermia.
First isolated as a compound in 1828, in its pure form nicotine is a clear liquid that turns
brown when burned and smells like tobacco when exposed to air. It is found in several species of
plants, including tobacco and, perhaps surprisingly, in tomatoes, potatoes, and eggplant (though in
extremely low quantities that are pharmacologically insignificant for humans).
As simple as it looks, the cigarette is a highly engineered nicotine delivery device. For
instance, when tobacco researchers found that much of the nicotine in a cigarette wasn't released
when burned but rather remained chemically bound within the tobacco leaf, they began adding
substances such as ammonia to cigarette tobacco to release more nicotine. Ammonia helps
keep nicotine in its basic form, which is more readily vaporised by the intense heat of the burning
cigarette than the acidic form. Most cigarettes for sale in the US today contain 10 milligrams
or more of nicotine. By inhaling smoke from a lighted cigarette, the average smoker takes in 1 or
2 milligrams of vaporised nicotine per cigarette. Today we know that only a minuscule amount
of nicotine is needed to fuel addiction. Research shows that manufacturers would have to cut
nicotine levels in a typical cigarette by 95% to forestall its power to addict. When a smoker puffs
on a lighted cigarette, smoke, including vaporised nicotine, is drawn into the mouth. The skin and
lining of the mouth immediately absorb some nicotine, but the remainder flows straight down into
the lungs, where it easily diffuses into the blood vessels lining the lung walls. The blood vessels
carry the nicotine to the heart, which then pumps it directly to the brain. While most of the effects a
smoker seeks occur in the brain, the heart takes a hit as well. Studies have shown that a smoker's
first cigarette of the day can increase his or her heart rate by 10 to 20 beats a minute. Scientists
have found that a smoked substance reaches the brain more quickly than one swallowed, snorted
(such as cocaine powder) or even injected. Indeed, a nicotine molecule inhaled in smoke will
reach the brain within 10 seconds. The nicotine travels through blood vessels, which branch out
into capillaries within the brain. Capillaries normally carry nutrients but they readily accommodate
nicotine molecules as well. Once inside the brain, nicotine, like most addictive drugs, triggers the
release of chemicals associated with euphoria and pleasure.
Just as it moves rapidly from the lungs into the bloodstream, nicotine also easily diffuses
through capillary walls. It then migrates to the spaces surrounding neurones – ganglion cells that
transmit nerve impulses throughout the nervous system. These impulses are the basis for our
thoughts, feelings, and moods. To transmit nerve impulses to its neighbour, a neurone releases
chemical messengers known as neurotransmitters. Like nicotine molecules, the neurotransmitters
drift into the so-called synaptic space between neurones, ready to latch onto the receiving neurone
and thus deliver a chemical "message" that triggers an electrical impulse.
The neurotransmitters bind onto receptors on the surface of the recipient neurone. This
opens channels in the cell surface through which ions, or charged atoms, of sodium enter. This
generates a current across the membrane of the receiving cell, which completes delivery of the
"message". An accomplished mimic, nicotine competes with the neurotransmitters to bind to the
receptors. It wins and, like the vanquished chemical, opens ion channels that let sodium ions into
the cell. But there's a lot more nicotine around than the original transmitter, so a much larger current
spreads across the membrane. This bigger current causes increased electrical impulses to travel
along certain neurones. With repeated smoking, the neurones adapt to this increased electrical
activity, and the smoker becomes dependent on the nicotine.
DEER FARMING IN AUSTRALIA
Paragraph A
Deer are not indigenous to Australia. They were introduced into the country during the nineteenth century under the acclimatization programs governing the introduction of exotic species of animals and birds into Australia. Six species of deer were released at various locations. The animals dispersed and established wild populations at various locations across Australia, mostly depending upon their points of release into the wild. These animals formed the basis for the deer industry in Australia today.
Commercial deer farming in Australia commenced in Victoria in 1971 with the authorized capture of rusa deer from the Royal National Park, NSW. Until 1985, only four species of deer, two from temperate climates (red, fallow) and two tropical species (rusa, chital) were confined for commercial farming. Late in 1985, pressure from industry to increase herd numbers saw the development of import protocols. This resulted in the introduction of large numbers of red deer hybrids from New Zealand and North American elk directly from Canada. The national farmed deer herd is now distributed throughout all states although most are in New South Wales and Victoria.
Paragraph B
The number of animals processed annually has continued to increase, despite the downward trend in venison prices since 1997. Of concern is the apparent increase in the number of female animals processed and the number of whole herds committed for processing. With more than 40,000 animals processed in 1998/99 and 60,000 in 1999/2000, there is justified concern that future years may see a dramatic drop in production. At least 85% of all venison produced in Australia is exported, principally to Europe. At least 90% of all velvet antler produced is exported in an unprocessed state to Asia.
Schemes to promote Australian deer products continue to have a positive effect on sales that in turn have a positive effect on prices paid to growers. The industry appears to be showing limited signs that it is emerging from a state of depression caused by both internal and external factors that include: (i) the Asian currency downturn; (ii) the industry's lack of competitive advantage in influential markets (particularly in respect to New Zealand competition), and (iii) within industry processing and marketing competition for limited product volumes of venison.
Paragraph C
From the formation of the Australian Deer Breeders Federation in 1979, the industry representative body has evolved through the Deer Farmers Federation of Australia to the Deer Industry Association of Australia Ltd (DIAA), which was registered in 1995. The industry has established two product development and marketing companies, the Australian Deer Horn and Co-Products Pty Ltd (ADH) and the Deer Industry Projects and Development Pty Ltd, which trades as the Deer Industry Company (DIC). ADH collects and markets Australian deer horn and co-products on behalf of Australian deer farmers. It promotes the harvest of velvet antler according to the strict quality assurance program promoted by the industry. The company also plans and coordinates regular velvet accreditation courses for Australian deer farmers.
Paragraph D
Estimates suggest that until the early 1990s the rate of the annual increase in the number of farmed deer was up to 25%, but after 1993 this rate of increase fell to probably less than 10%. The main reasons for the decline in the deer herd growth rate at such a critical time for the market were: (i) severe drought conditions affecting eastern Australia during 1993-96 and (ii) the consequent slaughter of large numbers of breeding females, at very low prices. These factors combined to decrease confidence within the industry. Lack of confidence saw a drop in new investment within the industry and a lack of willingness of established farmers to expand their herds. With the development of strong overseas markets for venison and velvet and the prospect of better seasons ahead in 1996, the trends described were seen to have been significantly reversed. However, the relatively small size of the Australian herd was seen to impose undesirable restraints on the rate at which herd numbers could be expanded to meet the demands for products. Supply difficulties were exacerbated when the supply of products, particularly venison, was maintained by the slaughter of young breeding females. The net result was depletion of the industry's female breeding herds.
Paragraph E
Industry programs are funded by statutory levies on sales of animals for venison, velvet antler sales and the sale of live animals into export markets. The industry has a 1996-2000 five year plan including animal nutrition, pasture quality, carcass quality, antler harvesting, promotional material and technical bulletins. All projects have generated a significant volume of information, which complements similar work undertaken in New Zealand and other deer farming countries.
Major projects funded by levy funds include the Venison Market Project from 1992 to 1996. This initiative resulted in a dramatic increase in international demand for Australian venison and an increase in the domestic consumption of venison. In an effort to maintain existing venison markets in the short term and to increase them in the long term, in 1997 the industry's top priority became the increase in size and production capacity of the national herd.
diabetes
Here are some facts that you probably didn't know about diabetes. It is the world's fastest growing disease. It is Australia's 6th leading cause of death. Over 1 million Australians have it, though 50% of those are as yet unaware. Every 10 minutes someone is diagnosed with diabetes. So much for the facts, but what exactly is diabetes?
Diabetes is the name given to a group of different conditions in which there is too much glucose in the blood. Here's what happens: the body needs glucose as its main source of fuel or energy. The body makes glucose from foods containing carbohydrate, such as vegetables (like potatoes or corn) and cereal foods (like bread, pasta and rice), as well as fruit and milk. Glucose is carried around the body in the blood and the glucose level is called glycaemia. Glycaemia (blood sugar levels) in humans and animals must be neither too high nor too low, but just right. The glucose running around in the blood stream now has to get out of the blood and into the body tissues. This is where insulin enters the story. Insulin is a hormone made by the pancreas, a gland sitting just below the stomach. Insulin opens the doors that let glucose go from the blood to the body cells where energy is made. This process is called glucose metabolism. In diabetes, the pancreas either cannot make insulin or the insulin it does make is not enough and cannot work properly. Without insulin doing its job, the glucose channels are shut. Glucose builds up in the blood leading to high blood glucose levels, which causes the health problems linked to diabetes.
People refer to the disease as diabetes but there are actually two distinctive types of the disease. Type 1 diabetes is a condition characterized by high blood glucose levels caused by a total lack of insulin. It occurs when the body's immune system attacks the insulin-producing beta cells in the pancreas and destroys them. The pancreas then produces little or no insulin. Type 1 diabetes develops most often in young people but can appear in adults. Type 2 diabetes is the most common form of diabetes. In type 2 diabetes, either the body does not produce enough insulin or the cells ignore the insulin. Insulin is necessary for the body to be able to use sugar. Sugar is the basic fuel for the cells in the body, and insulin takes the sugar from the blood into the cells.
The diagnosis of diabetes often depends on what type the patient is suffering from. In Type 1 diabetes, symptoms are usually sudden and sometimes even life threatening - hyperglycaemia (high blood sugar levels) can lead to comas – and therefore it is mostly diagnosed quite quickly. In Type 2 diabetes, many people have no symptoms at all, while other signs can go unnoticed, being seen as part of 'getting older'. Therefore, by the time symptoms are noticed, the blood glucose level for many people can be very high. Common symptoms include: being more thirsty than usual, passing more urine, feeling lethargic, always feeling hungry, having cuts that heal slowly, itching, skin infections, bad breath, blurred vision, unexplained weight change, mood swings, headaches, feeling dizzy and leg cramps.
At present, there is no cure for diabetes, but there is a huge amount of research aimed at finding a cure and at providing superior management techniques and products until one is found. Whether it's Type 1 or Type 2 diabetes, the aim of any diabetes treatment is to get your blood glucose levels as close to the non-diabetic range as often as possible. For people with Type 1 diabetes, this will mean insulin injections every day plus leading a healthy lifestyle. For people with Type 2 diabetes, healthy eating and regular physical activity may be all that is required at first: sometimes tablets and/or insulin may be needed later on. Ideally, blood glucose levels are kept as close to the non-diabetic range as possible, so frequent self-testing is a good idea. This will help prevent the short-term effects of very low or very high blood glucose levels as well as the possible long-term problems. If someone is dependent on insulin, it has to be injected into the body. Insulin cannot be taken as a pill, as it would be broken down during digestion just like the protein in food. Insulin must be injected into the fat under your skin for it to get into your blood. Diabetes can cause serious complications for patients. When glucose builds up in the blood instead of going into cells, it can cause problems. Short-term problems are similar to the symptoms, but long-term high blood sugar levels can lead to heart attacks, strokes, kidney failure, amputations and blindness. Having your blood pressure and cholesterol outside recommended ranges can also lead to problems like heart attack and stroke, and in fact 2 out of 3 people with diabetes eventually die of these complications. Young adults aged 18-44 who get Type 2 diabetes are 14 times more likely to suffer a heart attack, and are up to 30 times more likely to have a stroke, than their peers without diabetes.
Young women account for almost all the increase in heart attack risk, while young men are twice as likely to suffer a stroke as young women. This means that huge numbers of people are going to get heart disease, heart attacks and strokes years, sometimes even decades, before they should.
Contaminating the Arctic
Our perception of the Arctic region is that its distance from industrial centers keeps it pristine and free from the impact of pollution. However, through a process known as transboundary pollution, the Arctic is the recipient of contaminants whose sources are thousands of miles away. Large quantities of pollutants pour into our atmosphere, as well as our lakes, rivers, and oceans on a daily basis. In the last 20 years, scientists have detected an increasing variety of toxic contaminants in the North, including pesticides from agriculture, chemicals and heavy metals from industry, and even radioactive fall-out from Chernobyl. These are substances that have invaded ecosystems virtually worldwide, but they are especially worrisome in the Arctic.
Originally, Arctic contamination was largely blamed on chemical leaks, and these leaks were thought to be "small and localized." The consensus now is that pollutants from around the world are being carried north by rivers, ocean currents, and atmospheric circulation. Due to extreme conditions in the Arctic, including reduced sunlight, extensive ice cover and cold temperatures, contaminants break down much more slowly than in warmer climates. Contaminants can also become highly concentrated due to their significantly lengthened life span in the Arctic.
Problems of spring run-off into coastal waters during the growth period of marine life are of critical concern. Spring algae blooms easily absorb the concentrated contaminants released by spring melting. These algae are in turn eaten by zooplankton and a wide variety of marine life. The accumulation of these contaminants increases with each step of the food chain or web and can potentially affect northerners who eat marine mammals near the top of the food chain. Pollutants respect no borders; transboundary pollution is the movement of contaminants across political borders, whether by air, rivers, or ocean currents. The eight circumpolar nations, led by the Finnish Initiative of 1989, established the Arctic Environmental Protection Strategy (AEPS) in which participants have agreed to develop an Arctic Monitoring and Assessment Program (AMAP). AMAP establishes an international scientific network to monitor the current condition of the Arctic with respect to specific contaminants. This monitoring program is extremely important because it will give a scientific basis for understanding the scope of the problem.
In the 1950s, pilots traveling on weather reconnaissance flights in the Canadian high Arctic reported seeing bands of haze in the springtime in the Arctic region. It was during this time that the term "Arctic haze" was first used, referring to this smog of unknown origin. But it was not until 1972 that Dr. Glenn Shaw of the Geophysical Institute at the University of Alaska first put forth ideas of the nature and long-range origin of Arctic haze. The idea that the source was long range was very difficult for many to support. Each winter, cold, dense air settles over the Arctic. In the darkness, the Arctic seems to become more and more polluted by a buildup of mid-latitude emissions from fossil fuel combustion, smelting and other industrial processes. By late winter, the Arctic is covered by a layer of this haze the size of the continent of Africa. When the spring light arrives in the Arctic, there is a smog-like haze, which makes the region, at times, look like the pollution over such cities as Los Angeles.
This polluted air is a well-known and well-characterized feature of the late winter Arctic environment. In the North American Arctic, episodes of brown or black snow have been traced to continental storm tracks that deliver gaseous and particulate-associated contaminants from Asian deserts and agricultural areas. It is now known that the contaminants originate largely from Europe and Asia.
Arctic haze has been studied most extensively in Point Barrow, Alaska, across the Canadian Arctic and in Svalbard (Norway). Evidence from ice cores drilled from the ice sheet of Greenland indicates that these haze particles were not always present in the Arctic, but began to appear only in the last century. The Arctic haze particles appear to be similar to smog particles observed in industrial areas farther south, consisting mostly of sulfates mixed with particles of carbon. It is believed the particles are formed when gaseous sulfur dioxide produced by burning sulfur-bearing coal is irradiated by sunlight and oxidized to sulfate, a process catalyzed by trace elements in the air. These sulfate particles or droplets of sulfuric acid quickly capture the carbon particles, which are also floating in the air. Pure sulfate particles or droplets are colourless, so it is believed the darkness of the haze is caused by the mixed-in carbon particles.
The impact of the haze on Arctic ecosystems, as well as the global environment, has not been adequately researched. The pollutants have only been studied in their aerosol form over the Arctic. However, little is known about what eventually happens to them. It is known that they are removed somehow. There is a good degree of likelihood that the contaminants end up in the ocean, most likely in the North Atlantic, the Norwegian Sea and possibly the Bering Sea — all three very important fisheries.
Currently, the major issue among researchers is to understand the impact of Arctic haze on global climate change. The contaminants absorb sunlight and, in turn, heat up the atmosphere. The global impact of this is currently unknown but the implications are quite powerful.
THE STORY OF COFFEE
A
Coffee was first discovered in Eastern Africa in an area we know today as Ethiopia. A popular legend refers to a goat herder by the name of Kaldi, who observed his goats acting unusually friskily after eating berries from a bush. Curious about this phenomenon, Kaldi tried eating the berries himself. He found that these berries gave him renewed energy.
B
The news of this energy laden fruit quickly moved throughout the region. Coffee berries were transported from Ethiopia to the Arabian Peninsula, and were first cultivated in what today is the country of Yemen. Coffee remained a secret in Arabia before spreading to Turkey and then to the European continent by means of Venetian trade merchants.
C
Coffee was first eaten as a food though later people in Arabia would make a drink out of boiling the beans for its narcotic effects and medicinal value. Coffee for a time was known as Arabian wine to Muslims who were banned from alcohol by Islam. It was not until after coffee had been eaten as a food product, a wine and a medicine that it was discovered, probably by complete accident in Turkey, that by roasting the beans a delicious drink could be made. The roasted beans were first crushed and then boiled in water, creating a crude version of the beverage we enjoy today. The first coffee houses were opened in Europe in the 17th Century and in 1675, the Viennese established the habit of refining the brew by filtering out the grounds, sweetening it, and adding a dash of milk.
D
If you were to explore the planet for coffee, you would find about 60 species of coffee plants growing wild in Africa, Malaysia, and other regions. But only about ten of them are actually cultivated. Of these ten, two species are responsible for almost all the coffee produced in the world: Coffea Arabica and Coffea Canephora (usually known as Robusta). Because of ecological differences existing among the various coffee producing countries, both types have undergone many mutations and now exist in many sub-species.
E
Although wild plants can reach 10 - 12 metres in height, the cultivated plant reaches a height of around four metres. This makes the harvest and flowering easier, and cultivation more economical. The flowers are white and sweet-scented like the Spanish jasmine. Flowers give way to a red, darkish berry. At first sight, the fruit is like a big cherry both in size and in colour. The berry is coated with a thin, red film (epicarp) containing a white, sugary mucilaginous flesh (mesocarp). Inside the pulp there are the seeds in the form of two beans coupled at their flat surface. Beans are in turn coated with a kind of resistant, golden yellow parchment (called endocarp). When peeled, the real bean appears with another very thin silvery film. The bean is bluish green verging on bronze, and is at the most 11 millimetres long and 8 millimetres wide.
F
Coffee plants need special conditions to give a satisfactory crop. The climate needs to be hot-wet or hot temperate, between the Tropic of Cancer and the Tropic of Capricorn, with frequent rains and temperatures varying from 15 to 25 Degrees C. The soil should be deep, hard, permeable, well irrigated, with well-drained subsoil. The best lands are hilly ones or newly tilled woodland. The perfect altitude is between 600 and 1200 metres, though some varieties thrive at 2000-2200 metres. Cultivation aimed at protecting the plants at every stage of growth is needed. Sowing should be in sheltered nurseries from which, after about six months, the seedlings should be moved to plantations in the rainy season where they are usually alternated with other plants to shield them from wind and excessive sunlight. Only when the plant is five years old can it be counted upon to give a regular yield. This is between 400 grams and two kilos of arabica beans for each plant, and 600 grams and two kilos for robusta beans.
G
Harvesting time depends on the geographic situation and it can vary greatly therefore according to the various producing countries. First, the ripe beans are picked from the branches. Pickers can selectively pick approximately 250 to 300 pounds of coffee cherry a day. At the end of the day, the pickers bring their heavy burlap bags to pulping mills where the cherry coffee can be pulped (or wet milled). The pulped beans then rest, covered in pure rainwater to ferment overnight. The next day the wet beans are hand-distributed upon the drying floor to be sun dried. This drying process takes from one to two weeks depending on the number of sunny days available. To make sure they dry evenly, the beans need to be raked many times during this drying time. Two weeks later the sun dried beans, now called parchment, are scooped up, bagged and taken to be milled. Huge milling machines then remove the parchment and silver skin, which renders a green bean suitable for roasting. The green beans are roasted according to the customers' specifications and, after cooling, the beans are then packaged and mailed to customers.