This short work by the well-known dermatologist from Barcelona, Spain, who is now professor of dermatology at the University of Nuevo León, Monterrey, Mexico, is one of a series of medical monographs. The author's statement in the foreword that his aim was to set down current knowledge of the most frequently encountered dermatoses for the purpose of teaching medical students explains why the volume is neither a treatise nor a complete manual of dermatology and why, with certain exceptions, only the most essential topics are discussed. In the brief chapter on general cutaneous physiology, the concept of the skin as an organ is emphasized and the importance of its bacterial flora, particularly with regard to "facultative" behavior and relation to pH, is stressed. The appraisal of chemical reactions, secretions and chemical and metabolic processes is intelligent and up-to-date, but the allergic functions are merely mentioned.
Dermatología. Arch Derm Syphilol. 1945;51(3):227–228. doi:10.1001/archderm.1945.01510210069020
[source: HuggingFaceFW/fineweb-edu, config: default, weight: 0.333]
denial, anger, depression...and a seriously wicked groove What happens after a loss, when the phone calls, the flowers, the gifts of food, the offers of comfort and help...stop. What do you do when the people who gathered around you move on? How do you fill those empty hours? The Beat Goes On, Randy Kovitz’s third short film, starts with those questions, then follows a heartbroken man on his journey through seemingly bottomless grief. The story is propelled by the power of music and primal rhythm, and a friend who just won’t give up. The Beat Goes On is about the healing powers of music, friendship and time.
[source: mlfoundations/dclm-baseline-1.0, config: default, weight: 0.37]
The primary drivers behind basking shark migrations are still unclear. Image courtesy of Philip Doherty.

Basking sharks seek out winter sun

The winter habits of Britain's basking sharks have been revealed for the first time. Scientists from the University of Exeter have discovered some spend their winters off Portugal and North Africa, some head to the Bay of Biscay and others choose a staycation around the UK and Ireland. Little was known about basking sharks' winter behaviour, as they spend little time at the surface and are often far from land, so the researchers used cutting-edge satellite tracking to carry out the most detailed study yet of their migrations in the north-east Atlantic. It was once thought that the giant, plankton-eating fish hibernated in the waters off the UK and Ireland, but evidence in recent years has undermined this theory.

"Knowing where these animals are all year round allows us to understand the threats they face," said lead author Philip Doherty, of the Environment and Sustainability Institute on the University of Exeter's Penryn Campus in Cornwall. "This is essential information if we want to protect them, especially as they swim far outside UK waters, meaning any conservation efforts must be international. In terms of the man-made threats they may face, we tend to think of commercial fishing as the only danger to these animals, but other issues such as boat strike, marine litter, civil engineering and ocean noise might also have an effect."

The researchers tagged 70 sharks and, of the 28 tags which continued transmitting for more than five months, they found most sharks either stayed near the UK or swam to the waters off Spain, Portugal and North Africa. A smaller number spent the winter in the Bay of Biscay, west of France. Those which swam south left in late summer and autumn, and returned in spring and early summer.

"We don't yet know whether individuals make the same migration each year or alter their behaviour based on factors such as body condition, reproduction and food availability," said senior author Dr Matthew Witt, also of the University of Exeter. "The primary drivers behind basking shark migrations are still unclear, but they may include mating, searching for foraging grounds and finding water of preferred temperature."

Dr Suzanne Henderson, of Scottish Natural Heritage, co-funder of the research, added: "This huge and mysterious shark has intrigued us as a nation for many years, and evolving tagging technology is now allowing us to piece together vital parts of their life cycle. This is shedding new light on their seasonal residency and winter migration, which is key to their conservation. Our partnership with the University of Exeter has confirmed that the Sea of the Hebrides is an important destination in the migratory cycle for the sharks, and Scottish ministers are currently considering our proposal to designate it a Marine Protected Area to help the species."

Basking sharks, the world's second-largest fish species, are classified as "vulnerable" by the International Union for Conservation of Nature, but the north-east Atlantic population is officially "endangered". Basking sharks have been known to migrate across oceans, but scientists do not yet fully know why or how often they do this.

The paper, published in the journal Scientific Reports, is entitled: "Long-term satellite tracking reveals variable seasonal migration strategies of basking sharks in the north-east Atlantic."

Date: 21 February 2017
[source: HuggingFaceFW/fineweb-edu, config: default, weight: 0.333]
A debate on the use of virtual reality to study animal behaviour

Neuroscientists are increasingly using virtual reality to facilitate studies of animal behaviour, but whether behaviour in the virtual world mimics that in real life is a matter for debate. Here, scientists discuss the strengths and limitations of the approach.

The topic in brief
- Virtual-reality (VR) systems simulate real-world inputs to one or more of an organism's sensory neural circuits, then measure the subject's actions and apply updates to sensory stimuli in response.
- In most rodent set-ups, the animal receives visual information from an immersive screen that spans its field of vision. The animal's movements control the visual flow, thereby replicating the sensory–motor coupling of the real world.
- Typically, movement is restricted by fixing the rodent's head in position; this allows precise measurements of neural activity to be taken and correlated with motor actions in animals that are awake, rather than anaesthetized.
- Many researchers think that VR is a valuable tool for studying both navigation and sensory systems.
- However, a body of work indicates that the way in which mice navigate in real and virtual worlds is different.

Matthias Minderer & Christopher D. Harvey

Virtual reality is a valuable tool for understanding neural function because it combines precise experimental control with natural behaviours. It allows experiments that are not possible using real-world approaches. As such, it has increased our understanding of neural processes in subjects ranging from humans to insects.

What are the experimental benefits of VR? First, the technology allows researchers to define explicitly and exhaustively the sensory cues that carry information about the virtual world. In real-world experiments, it is not possible to control all sensory cues. For example, when studying the contribution of visual cues to navigation, confounding information could be provided by unmeasured smells, sounds, textures and vestibular stimuli (internal information about balance and spatial orientation). VR offers the means to add or remove sensory cues to test the contribution of each one to a neural code, and to build up a 'minimal' set of stimuli needed to produce a given behaviour or neural activity pattern.

A second benefit comes from the ability to redefine the laws that link the subject's actions to changes in its world. When an animal explores the real world, it is difficult to disentangle which neural responses are attributable to the animal's actions and which are caused by sensory stimuli, because the two are rigidly linked by the laws of physics. In VR, this link can be modified in informative ways — sensory and motor features can be dissociated by changing the gain or lag between an action and a subsequent update of the virtual environment, or be made independent of one another for brief periods. Sensory and motor variables can therefore be separated while allowing the subject to interact naturally and actively with the sensory world.

Third, VR increases the range of tools available to measure neural activity. Because the subject is usually constrained, techniques can be applied that are either not possible or of poorer quality in freely moving subjects. These include functional magnetic resonance imaging, high-resolution fluorescence imaging and intracellular single-neuron electrophysiology.
Many studies have shown that animals can solve navigational tasks in virtual worlds. But the aspects of navigation that can be studied in VR depend on the experimental set-up — for instance, the number of sensory cues simulated, the degree of sensory immersion and how naturally the subject interacts with the virtual world. In VR experiments that provide visual inputs and allow body rotations to trigger vestibular signals, neural activity patterns during navigation are consistent with those measured in real-world experiments. Furthermore, studies that remove key sensory inputs such as vestibular stimuli reveal which aspects of navigational neural activity depend on vestibular input and which can be supported by visual cues alone. Therefore, VR can recapitulate neural activity in real environments, and VR experiments can be designed to create informative differences between neural function in real and virtual worlds.

Overall, VR has yielded many insights into sensorimotor integration, decision-making and navigation. But it is important to remember that, like all reductionist approaches, VR requires a trade-off between improved experimental accessibility and consistency with natural processes — the optimum set-up depends on the research question being asked. For instance, in studies of sensorimotor integration, it is crucial to dissociate sensory and motor variables. In navigation studies, convincing simulations are needed to probe the subject's internal model of the physical world. VR must be used judiciously, so that its implementation matches the needs of the question. Of course, this requirement applies to all experimental tools and is not specific to VR.

In summary, we consider VR as bridging the gap between natural behaviour and conventional reductionist approaches; this is a major step forward in the study of complex behaviours in many species. As the community of VR users grows and commercial VR technologies expand, we expect the range of applications for VR to continue to grow, enhancing our understanding of neural function.

Flavio Donato & Edvard I. Moser

Technology that involves VR has obvious advantages for studies of simple sensorimotor computations, in which a defined set of inputs, such as those corresponding to an animal's movement, is associated linearly with neural output. However, some pressing concerns are raised when VR technology is used to study higher-order computations such as spatial navigation. Navigation reflects the integration of many sensory inputs. The resulting outputs are not linearly related to sensory perception, but rather express cognitive abstractions. Goal-driven navigation relies on several cell types in the brain, including place cells (which fire when an animal is in a particular location), grid cells (which fire at periodically spaced positions across the entire environment) and border cells (which fire selectively along local borders). By fixing an animal's head in place, investigators can monitor the activity of these neurons at high resolution while the animal runs between specific locations in virtual space.

But do animals navigate in the same way in VR as in real life? Navigating in the real world is a multisensory process that integrates visual, olfactory and tactile stimuli with vestibular information and information about the activity of moving body parts.
But in VR, these elements are often not coordinated, and the animal's sensory experience is largely reduced to a combination of visual inputs and locomotion, which are easy to control. The animal must overcome discrepancies between visual cues that follow movements and cues that are static in VR, such as smell or head direction. Conflicts between movement and sensory inputs might alter the activity of space-encoding neurons to reflect only information coordinated to motion, such as visually changing landmarks and accumulated distance, at the expense of other cues. This could lead researchers to overestimate the contribution of visual inputs to navigation and, in the most extreme cases, might lead to the loss of computation altogether.

A particular concern is whether the loss of vestibular input that accompanies movement restriction affects animals' computation of their position. A continuous mismatch between vestibular and visual inputs might not be detrimental in linear environments. When an animal runs in a straight line, visual inputs are repeatedly and stereotypically paired to the same locomotor information, which may, with continued training, allow the animal to compensate for mismatches. However, such a mismatch might have a greater effect in two-dimensional or 3D VR arenas. Indeed, if movement is unrestrained, the position-coding activity of place and grid cells is similar in 2D VR to that in the real world. In stark contrast to this, position coding is disrupted and a new coding emerges when body movement is restricted, or if the head is fixed. These data cast doubt on whether the way animals interpret 2D or 3D space can ever be understood using VR under conditions of head or body restriction. Strategies that compensate for the loss of synchrony between vestibular information and the animal's behaviour would be a welcome advance.

Finally, are all types of position-coding cell represented in VR-based navigation? It is unclear if and how border, speed and head-direction cells are activated when movement is restricted. Moreover, cells might not fire in the same way in the two worlds. In one analysis, 60% of the place cells activated in the real world were silent in VR. Whereas studies typically check that VR-activated cells are represented in real-world sessions, the opposite direction of investigation lags behind – although there are exceptions to this.

More than 40 years ago, the neuroscientist John O'Keefe changed our understanding of the physiology of navigation by studying rats freely foraging for food. By allowing the natural sensory–motor interactions required for the formation of an internal representation of space, O'Keefe discovered the first element of the 'cognitive map' — the place cell. VR can extend that ecological approach to higher cognitive functions. But to do so successfully, the technology needs further development and validation.

Nature (2016) doi:10.1038/nature17899 - Published online 11 May 2016
[source: HuggingFaceFW/fineweb-edu, config: default, weight: 0.333]
Collected Essays in the Sociology of Religion

Weber is the first scholar to conceptualize that sociology is not a prescriptive discipline; rather, it is a descriptive and interpretative discipline. A sociologist necessarily pursues a vocation: he should neither be guiding social rebellion nor operating as the high priest of society. Rather, the concern of the sociologist is to conduct and guide research in order to study the essence of reality in a value-neutral and rational manner.

On a more analytical plateau, all these disparate processes of rationalization can be surmised as increasing knowledge, growing impersonality, and enhanced control [Brubaker 1991, 32–35]. First, knowledge. Rational action in one very general sense presupposes knowledge. It requires some knowledge of the ideational and material circumstances in which our action is embedded, since to act rationally is to act on the basis of conscious reflection about the probable consequences of action. As such, the knowledge that underpins a rational action is of a causal nature conceived in terms of means-ends relationships, aspiring towards a systematic, logically interconnected whole. Modern scientific and technological knowledge is a culmination of this process that Weber called intellectualization, in the course of which the germinating grounds of human knowledge in the past, such as religion, theology, and metaphysics, were slowly pushed back to the realm of the superstitious, mystical, or simply irrational. It is only in modern Western civilization, according to Weber, that this gradual process of disenchantment (Entzauberung) has reached its radical conclusion.

Weberian modes of thinking emerged gradually in the twentieth century. Max Weber did not found a school of thought. The posthumous collection of his main work provided a starting point for renewed interest in his ideas. Beginning in the 1920s, American scholars discovered Weber's work for reasons of their own, supplemented later by a generation of German-speaking émigrés. The institutionalization of Weberian analysis was a postwar phenomenon, encouraged by professional needs and controversies in the sciences. Today a Weberian paradigm has emerged as one of the principal ways of thinking about society and human affairs.

But he also contributed fundamental works to the sociology of law (which he virtually invented), the sociology of music (also a first), the sociology of the economy, the philosophy of social science method, the comparative sociology of religion (also his creation), social stratification, the sociology of bureaucracy and of power and "charisma" (his term), and so on. The following is a chronology of Weber's major works: 1889, age 25 (120 pages); 1891, age 27 (280 pages); 1892, age 28 (900 pages); 1894–1895, age 30 (329 pages); 1897, age 33 (400 pages); 1905, age 41 (250 pages); 1903–1906 (300 pages); 1906, age 42 (250 pages); 1907, age 43 (200 pages); 1908, age 44 (120 pages); 1909, age 45 (400 pages); 1913, age 49 (200 pages); 1916, age 52 (450 pages); (400 pages); 1917, age 53 (500 pages); 1918, age 54 (130 pages).
Weber believes that collectivity doesn't have any life to think, feel or perceive; the basic unit of a social structure is social action. The concern of sociology is to understand the meanings associated with the action of the actor, rather than mechanically studying action and its consequences using the methods of natural science. Sociology being concerned with the problem of understanding, he introduces the Verstehen method into the fold of sociology. He divides the Verstehen method into two types: direct observational Verstehen and indirect explanatory Verstehen.

Direction of causation. Which direction does the causal connection go? Weber continually asserts that the religious doctrines were separated from the economic aspects, but does not really disprove the Marxist view that the changes in religion occurred because of economic necessities. The new religions probably did develop on the basis of spiritual considerations only, but they did not remain spiritual only for very long. Luther, Calvin, the Puritans, and many others were heavily involved in political activities and pronouncements. The interests of the bourgeois class may have acted to help encourage the development of the Calvinist religious views and encouraged their widespread influence.
[source: mlfoundations/dclm-baseline-1.0, config: default, weight: 0.37]
In every level of politics, transparency is a word that tends to be thrown around a lot. It's a word that we often listen for and, for good reason, want to hear from our candidates. Transparency doesn't mean anything political until we attach its definition to politics. Simply put, to be transparent is to be clear: to let others see through something of substance. When we align this definition with politics, being "transparent" means being clear and open about our actions and what we are doing; hiding nothing and being open about everything.

In relation to politics at all levels, transparency is difficult to ensure. Political transparency is something that we value because it implies integrity, honesty, fairness and openness. This is a worthy aim, and one every politician at any level should strive for. However, transparency has become more of a buzzword in Nova Scotia's politics. It is a word taken hostage by all sides, and each of them refuses to let it go. We see the word transparency and we might immediately begin to think the opposite: that someone is simply trying to gain support by offering a promise they may never keep or are unable to keep.

What then, in Nova Scotia, has turned the meaning of transparency inward, so that it means either nothing or the opposite to those who hear it? The answer is not a simple one, but it shouldn't necessarily be complex. The short answer is that transparency is being said and not done. The long answer is that being transparent means taking a number of measures and steps that can be tedious or inconvenient to politicians at times. Such measures can be as simple as posting public minutes of board meetings, or larger in nature, such as organizing public assemblies for discussion. Another aspect of transparency is being honest with your membership or constituency when they ask questions or seek information, so that your constituents are no longer left in the dark but left with answers. This is the true aim of transparency, where political representatives consult in open and honest means of communication with those they are representing. How else will they be able to make a decision in the best interest of their constituents if they are not honestly consulting and providing their constituency with accurate information?

So then, we can tell the transparency of a political figure by their actions as opposed to their words. If one makes a promise to be transparent, then one must be actively transparent. There are many ways to do this for elected MLAs, local politicians, and even representatives like student leaders! These can include public minutes of meetings, audited financial statements, public assemblies and town halls, and open consultation with people. These are all great ways to be transparent; however, they are only methods by which one may be transparent. They don't guarantee the spirit of transparency. For example, there are many instances of public minutes not containing all the information that was actually discussed (beyond understandable in camera discussions that may deal with sensitive information), which defeats the point of public minutes. This raises the question: if the methods of transparency are just that, methods, then what ensures transparency? The answer is simply nothing. Nothing will completely ensure the transparency of political representatives to perfection.
The only fallback that comes close to policing the transparency of politicians is the will of the constituents: the will of those being represented. Ultimately, transparency is a responsibility. Not just a responsibility of political representatives, but a responsibility of the people. Transparency must be a mutual understanding between politician and constituency, where the politician must remain transparent or be held accountable by the people if they fall anything short of that. Every party must take the responsibility given to them seriously and with the utmost care.

This responsibility can be difficult to maintain, particularly when you are bound by confidentiality in regards to certain information. In my role as Executive Vice-President at CBU's Students' Union, this has been a constant obstacle. There is no easy way around confidentiality or non-disclosure agreements. Particularly difficult are some of the sensitive matters discussed at CBU's Board of Governors, where we are expected to maintain confidentiality to the fullest extent. Yet, as a student leader, I am elected and expected to provide information to the membership of the Students' Union. It is unfortunate how often my fellow executives and I have found ourselves in this situation. Whether it was our legal strife with the Canadian Federation of Students or the collective bargaining between CBU and the faculty association, we have been bound to keep certain information private and confidential. The responsibility of transparency to us, then, is not to break that confidentiality but to explain it: what it means and the circumstances surrounding it.

Confidentiality, I have learned, does not mean a complete lack of transparency. Yes, it does make transparency more difficult to maintain, but it does not make it impossible. Transparency, in the case of the collective bargaining, was maintained when we were honest with our membership about what we could or could not say. We held a student town hall solely for our membership so that they had the opportunity to come ask us any questions and gather as much information as possible. We explained the situation as best as we could and how we ended up where we did. In addition, we had the town hall videotaped and posted online for those who couldn't attend, and we published a general FAQ on our website with information students should know. The greatest takeaway from the situation, in the context of transparency, was that our honesty about what we could or could not disclose was appreciated and understood. Perhaps it did not give everyone satisfaction, but we provided the maximum amount of information we could, as soon as we could, without breaching confidentiality. It took work and it took time, but we did achieve, I believe, that delicate balance between upholding our responsibilities to the Board of Governors and upholding our responsibility of transparency to our students.

The responsibility of transparency to the politician means being open and honest about their actions and how they're representing their constituency. It means that they are answering questions honestly and making information and facts public and available to anyone. It means that they are actively working in the best interest of the people they represent and that they are not hiding anything that might benefit them at the cost of those they represent.
The responsibility of transparency to the people means actively seeking information and asking questions of their representatives. It means identifying the methods by which transparency can be conveyed. But above all, it means identifying the lack of transparency when it arises, ensuring that we hold our representatives accountable for it, and making our voices heard so that we may move forward with that mutual responsibility in a civil manner.
[source: HuggingFaceFW/fineweb-edu, config: default, weight: 0.333]
The Law

A person commits disorderly conduct if he or she intentionally or knowingly:

1. uses abusive, indecent, profane, or vulgar language in a public place, and the language by its utterance tends to incite an immediate breach of the peace;
2. makes an offensive gesture or display in a public place, and the gesture or display tends to incite an immediate breach of the peace;
3. creates, by chemical means, a noxious and unreasonable odor in a public place;
4. abuses or threatens a person in a public place in an obviously offensive manner;
5. makes unreasonable noise in a public place other than a sport shooting range or in or near a private residence that he has no right to occupy;
6. fights with another in a public place;
7. discharges a firearm in a public place other than a public road or a sport shooting range;
8. displays a firearm or other deadly weapon in a public place in a manner calculated to alarm;
9. discharges a firearm on or across a public road;
10. exposes his anus or genitals in a public place and is reckless about whether another may be present who will be offended or alarmed by his act; or
11. for a lewd or unlawful purpose:
• enters on the property of another and looks into a dwelling on the property through any window or other opening in the dwelling;
• while on the premises of a hotel or comparable establishment, looks into a guest room not the person's own through a window or other opening in the room; or
• while on the premises of a public place, looks into an area such as a restroom or shower stall or changing or dressing room that is designed to provide privacy to a person using the area.

If an individual commits the crime of disorderly conduct, the punishment is a Class C misdemeanor. However, if disorderly conduct is committed under (7) or (8), it is a Class B misdemeanor. It is a possible defense if an individual had significant provocation for his or her abusive or threatening conduct. Under (7) or (9), it is a defense if the person who discharged the firearm had a reasonable fear of bodily injury to the person or to another by a dangerous wild animal.

Additionally, words that may be considered abusive, profane, indecent, vulgar, or threatening may not be prohibited by law. Under the Constitution of the United States of America, and specifically the First Amendment, only fighting words or conduct can be prohibited; fighting words or conduct are words or conduct likely to cause an ordinary person to react in a violent manner. There is no laundry list stating what words and conduct rise to the level of "fighting"; it can only be determined from the facts and circumstances of the situation. Many First Amendment issues are raised in the offense of disorderly conduct.

The exact language of this law, further details, and additional punishment concerns can be found in section 42.01 of the Texas Penal Code (see Links). None of this information can take the place of the information, knowledge, and expertise provided by a licensed attorney.
[source: mlfoundations/dclm-baseline-1.0, config: default, weight: 0.37]
Recruitment & Selection - Overview - GCSE, AS, A-Level - AQA, Edexcel, OCR, IB
Last updated 22 Mar 2021

Recruitment and selection is the process of identifying the need for a job, defining the requirements of the position and the job holder, advertising the position and choosing the most appropriate person for the job. Undertaking this process is one of the main objectives of management. Indeed, the success of any business depends to a large extent on the quality of its staff. Recruiting employees with the correct skills can add value to a business, and recruiting workers at a wage or salary that the business can afford will reduce costs. Employees should therefore be carefully selected, managed and retained, just like any other resource.

Managing job applications

For many jobs, a business will ask applicants to provide a Curriculum Vitae (CV). This is a document that the applicant designs, providing details about themselves and their background. In some circumstances, however, an applicant may be asked to fill in a firm's own application form. This is different from a CV in that the employer designs it and sends it to applicants, but it will still ask for much of the same information. It has the benefit over a CV in that a business is able to tailor it to its exact needs and ask specific questions.

Once a business has received all the applications, they need to be analysed and the most appropriate form of selection decided upon. When analysing applications, a business will normally sieve the applications into three categories:

(1) Those to reject. Candidates may be rejected because they do not meet the standards set out in the job specification, such as wrong qualifications or insufficient experience, or because they have not completed the application form to a satisfactory standard.

(2) Those to place on a short list. This often comprises 3-10 of the best candidates, who are asked to interview.

(3) Those to place on a long list. A business will not normally reject all other candidates immediately, but will keep some on a long list in case those on the short list drop out or do not appear suitable during interview. The business would not want to incur costs putting them through the selection process, such as interviews, unless it has to.
[source: HuggingFaceFW/fineweb-edu, config: default, weight: 0.333]
This is the first in a series of articles designed to explore some of the issues and concerns that arise around what is currently called Asperger’s syndrome, which will soon be incorporated into the broader spectrum of autism disorder when the new Diagnostic and Statistical Manual of Mental Disorders (DSM-5) is published in 2013. As a therapist, I see clients with a variety of traits clustered at the high-functioning end of autism, now commonly referred to as Asperger’s syndrome, a term I will use until the DSM-5 makes it no longer accurate. No two clients with Asperger’s syndrome exhibit the same cluster of traits, nor does any one client exhibit them all. However, there is one element that I recognize as pervasively diminished in all Asperger’s clients. This element is called “theory of mind.” What is theory of mind? It is a person’s ability to imagine the interior life of another person. This includes understanding why someone else does something, how someone might feel in a certain circumstance, what might be important to that person: in short, it is the ability to put oneself in the mind of another person and see the world from that person’s point of view. Theory of mind means being able to create a theory about the way another person’s mind works. Theory of mind provides the basis for empathy because if you can walk in someone else’s shoes, you also become capable, by extension, of feeling any pain or delight that person experiences. You understand motivation. You catch a glimpse of fears and dislikes. You get to know the other person from the inside out. According to autism specialist Simon Baron-Cohen, individuals with Asperger’s syndrome typically have delayed access or no access to this phenomenon of human communication and share a problem that is called mind-blindness. Since interpersonal communication is approximately 65% nonverbal, you can quickly see that not being able to formulate a theory of mind leaves these individuals at a distinct disadvantage in relationship with others because the behavior of other people does not make sense to them. For parents, this gap can create difficulties when they treat their son or daughter with Asperger’s with the same set of interpersonal expectations with which they treat their other children and assume intact theory of mind capabilities. This can lead to incorrect understanding of the child’s behavior as being intentionally hurtful, for example, when in fact it was based in lack of awareness. A common test used with children suspected of being autistic is called the Sally and Anne Test: Sally has a basket. Anne has a box. Sally has a marble. She puts the marble into her basket. Sally goes out for a walk. Anne takes the marble out of the basket and puts it into the box. Now Sally comes back. She wants to play with her marble. Where will Sally look for the marble? Most children will answer that Sally will look in her basket, because that’s where she put it and that’s where she expects it to be when she returns from her walk. Baron-Cohen discovered that only 20% of children with autism were able to answer correctly. A full 80% answered that Sally would look in the box, because that is where the marble is. This test is often used to demonstrate the theory of mind deficits in children with Asperger’s syndrome. They believe Sally will look in the box for her marble because they know that’s where it is. 
They are unable to put themselves into Sally's mind in order to understand that from her perspective the marble should be right where she left it: in her basket. Can you imagine how unpredictable and irrational the world must appear to a child whose logic is denied in such a manner? This is the world of a child with Asperger's syndrome. I work with children to help them build bridges toward understanding the behavior of others, so that they can come to anticipate that their own logical view of the world may not apply in all circumstances. This is one of the primary goals of therapy with these children. It is an attempt to help them experience the world as a safer place than it appears when their logical perspective is consistently shattered by experiences that do not align with it.
[source: HuggingFaceFW/fineweb-edu, config: default, weight: 0.333]
So, the sister of a girl who attended President Obama's speech on Friday was killed that very same day. And the Gun Nuts argue that such things prove gun control laws don't work. Chicago has some of the toughest in the nation, so tough the Supreme Court struck them down. So there's no point in having gun laws, since criminals disobey the law, and blahblahblah. By the same logic, of course, the fact that hit-and-runs occur when some criminal drivers don't stop at red lights means that we should have no traffic laws requiring people to stop at red lights.

The real issue at hand is that without a required, universal background check for gun purchases, straw buyers like this asshole will go to places where the gun laws are lax and buy duffel bags full of guns at gun shows, no background checks required. And then cross those pesky state lines and county lines and city boundaries and sell the guns to the bad guys. Money quote:

As he sold four handguns in a South Side parking lot last year, Levaine Tanksley boasted to his customer that there were plenty more illicit weapons available, investigators say. "Twenty-five more in four hours," Tanksley told his customer, who was secretly working for law enforcement and recording the conversation. "Give me $5,000 and you can put your order in then. I'll get you whatever, give me a list."

Buying a gun should be at least as difficult as getting a driver's license (if not an abortion). And a national database of who bought what gun when, then matched up against crimes, would make it oh so much nicer for our legendary law-abiding, Second-Amendment-respecting gun owners to live in peace, since it means fewer bad guys with legally-bought guns.
[source: mlfoundations/dclm-baseline-1.0, config: default, weight: 0.37]
Arbia Hkimi
Monday 22nd August 2016

To provide practice of vocabulary in the context of city and countryside
To provide practice of new lexical items in the context of city and countryside
To provide fluency speaking practice (conversation) in the context of life in city and countryside

Procedure (47-60 minutes)

Tell students that the lesson is about different places. Ask students where they live and how they feel about it. Tell students that Emna and Aymen are my friends and they need to guess where they live while looking at the pictures to complete the following sentences:
Aymen lives in the........................................
Emna lives in the .........................................

Go through the pictures with students to name the places. Ask students if they know other places one can find in the city and in the countryside. Tell students to match words with their suitable pictures. Distribute the handout for students to do the task in pairs. Encourage students to check their answers with their classmates. Whole-class feedback. Go through the answers provided by students. Teach new vocabulary while paying attention to meaning, form and pronunciation.

hill: Can you find it in the city? / Is it higher than a mountain?
cottage: Is it a very big house in the city?
bridge: Is it built over a river or road? / Is it used to cross from one side to another?
field: Is it a land used for growing crops? / Do you find it in cities?
car park: Is it a place where you can put your car?
factory: Is it a place where you can find machines?
church: Is it a place where you watch movies? / Is it a small building?
cathedral: Is it a large building? / Is it a place to pray?
pub: Can you drink alcoholic drinks there?

Ask students to work individually to complete sentences with words from the box. Pair work check. Encourage students to check their answers against the answer key.

Role play
Student A plays the manager; Student B plays the applicant.
Give Ss pieces of paper with different colors (red and green).
Split Ss into two groups (Ss with the red color play the role of manager; the rest play the role of applicant).
Give students role cards with information and prompts.
Ss spend a few minutes preparing what they are going to say.
Start the activity with a demonstration (one pair performs in front of the whole class).
Change the teams (those who played the manager role will play the applicant and vice versa).
[source: HuggingFaceFW/fineweb-edu, config: default, weight: 0.333]
The Dungarvon Whooper (pronounced "hooper") is a horror story, immortalized in a song by Michael Whelan, about a late-nineteenth-century killing along the Dungarvon River in central New Brunswick, Canada. The plot centers on a young Irish cook who goes by the name Ryan. Ryan relocates to a sawmill camp along or near the Dungarvon River, having brought all of his belongings with him, along with a money belt. While the lumberjacks are away, Ryan is left alone with the camp's boss, who plans to kill the young cook and rob him. When the team comes back, the boss explains that the cook became ill and died unexpectedly. The men then bury the corpse in the woods not far from the camp. A truly horrible "whooping" sound, however, prevents the group from falling asleep that night, possibly the ghost of Ryan protesting the murder of which he was the victim. The men evacuated the camp the next morning, terrified. A cutting chisel of Ryan can be found at the Town Park in Blackville, New Brunswick, Canada. The story, which was passed down among New Brunswick lumberjacks across the twentieth century, is well known and popular.

What Took Place On That Particular Day?

Ryan was the first person up each morning to begin preparing breakfast and to fill the food pails with bread and salt pork. And he'd let out a huge ear-splitting whoop to wake everyone up. After eating, the men would leave Ryan alone to go to work. Ryan had a bad day because the camp boss had stayed behind with the young cook on this particular morning. The boss was a stranger, but he was treated with respect, and his instructions were carried out. When the men returned late in the evening, they discovered Ryan collapsed on the ground. He was no longer alive, and his money belt had vanished. When asked what happened, the boss stated that the young cook had mysteriously become ill and died. No one dared to investigate further, but the lumberjacks were extremely suspicious. Where had the money belt gone?

A storm swept through the camp that night, making it difficult to leave, so the men were forced to bury the poor cook in an unmarked grave in the woods. They stopped for a moment on their way back to camp because, above the crying and grumbling of the wind, came the most terrifying whoops and yells anyone had ever heard. It went on all night and the following day, driving the men mad with fear. They abandoned camp, never to come back.

Bernard Colepaugh of Renous wrote a play called "Dungarvon Whooper." The Legacy Players, a team devoted to staging plays that reflect New Brunswick's rich heritage, presented it as their first production. Mr. Colepaugh is a relative of Michael Whalen. The play begins in a 1920s schoolhouse, with teacher Michael Whalen (Bernard Colepaugh) summoning his pupils to class and afterward enticing them with the prospect of learning outside under "God's Beautiful Blue Sky." After a little clever feedback from Billy Phader (Thomas Saulnier), the oldest boy in the class, the four students persuade their teacher to take a moment away from British History and share a ghost story. Susan (Katie McCabe) asks Mr. Whalen to tell them about the Dungarvon Whooper. Michael Whalen begins his tale in Ireland.
The scene then shifts back in time to Peter Ryan (played by student actor Tom Daley) in Ireland, just before he prepares to leave for the New World to take a job, as his mother, family, and friends perish as a result of the Great Famine. Just before kissing his mother goodbye, he is handed his father's money belt and some prayer rugs. Another scene shift places us in the camp, where Peter Ryan has been hired as the cook. Jack Hogan (also played by Bernard Colepaugh) enters with the team, and they sit down to eat. Mr. Henry Kelly knocks on the door just as they are about to sit down. He is welcomed in, and they dine together.

The ghostly noises of the Dungarvon Whoop lasted for years until Father Murdock, a Renous priest, was asked to put the poor soul to rest. Father Murdock read passages from the Bible and made the sign of the cross over the forest grave. Some claim Father Murdock was successful in calming the ghost, while others claim Ryan's terrifying cries can still be heard today. The sound of the train that traveled by the Dungarvon echoed through the hilly terrain, resembling the whoops of the dead; hence the train's name, THE DUNGARVON WHOOPER. Citizens in Miramichi country still occasionally hear the haunting shouts of the Dungarvon Whooper when they wander outside at sunset.
[source: HuggingFaceFW/fineweb-edu, config: default, weight: 0.333]
The Axiom of Choice is Wrong

When discussing the validity of the Axiom of Choice, the most common argument for not taking it as gospel is the Banach-Tarski paradox. Yet, this never particularly bothered me. The argument against the Axiom of Choice which really struck a chord with me I first heard at the Olivetti Club, our graduate colloquium. It's an extension of a basic logic puzzle, so let's review that one first.

100 prisoners are placed in a line, facing forward so they can see everyone in front of them in line. The warden will place either a black or white hat on each prisoner's head, and then, starting from the back of the line, he will ask each prisoner what the color of his own hat is (i.e., he first asks the person who can see all other prisoners). Any prisoner who is correct may go free. Every prisoner can hear everyone else's guesses and whether or not they were right. If all the prisoners can agree on a strategy beforehand, what is the best strategy?

The answer to this in a moment; but first, the relevant generalization. A countably infinite number of prisoners are placed on the natural numbers, facing in the positive direction (i.e., everyone can see an infinite number of prisoners). Hats will be placed and each prisoner will be asked what his hat color is. However, to complicate things, prisoners cannot hear previous guesses or whether they were correct. In this new situation, what is the best strategy?

Intuitively, strategy is impossible, since no information can be conveyed from anyone who knows your hat color to you; it would seem that everyone might as well guess blindly. However, all but a finite number of prisoners can go free! I should give credit where it is due. I heard of both of these puzzles in Mike O'Connor's talk, and I believe that he came up with the solution to the second puzzle which is so troubling. Also, I cannot find it anywhere, but I have heard that Chris Hardin wrote a paper on problems of this type.

First, let's review the solution to the basic problem. The first prisoner who has to guess his hat color is out of luck; he can't possibly have any information about his hat, so he has a 50% chance of being right no matter what. However, that means he is free to use his guess to try to convey some information to the rest of the prisoners. Hmm, he could say the color of the hat on the guy in front of him. That guy would then guess correctly, but then the next guy would be in the same situation as the first guy. Repeating this idea gets only 50 prisoners out guaranteed, with an average of 75 getting out. We can do better.

Instead of just telling the guy in front of him his hat color, the first guy counts the total number of white hats. If it is odd, he says "white", and if it is even, he says "black". Then the guy in front of him can count the number of white hats he can see, and if it differs from the parity the first guy counted, he knows his hat is white. But now the next guy knows the parity of white hats the first guy saw, and whether or not the second guy had a white hat, so he can compare it to the white hats he sees and find out if his own hat is white. This argument repeats, and so everyone except the first guy guesses correctly; a short simulation of this strategy appears below.
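A quick way to convince yourself of the parity argument is to simulate it. Here is a minimal Python sketch of the two-color, 100-prisoner version; the function name and variable names are mine, not from the post:

```python
import random

def parity_strategy(hats):
    """Return the number of prisoners who guess their own hat correctly.

    hats[i] is prisoner i's hat: 1 for white, 0 for black.
    Prisoner i sees hats[0..i-1]; guessing starts at the back (i = n-1).
    """
    n = len(hats)
    guesses = [0] * n
    # The back prisoner announces the parity of the white hats he sees.
    announced = sum(hats[:n - 1]) % 2
    guesses[n - 1] = announced
    heard = 0  # parity of white hats among prisoners behind who already guessed
    for i in range(n - 2, -1, -1):
        seen = sum(hats[:i]) % 2           # parity prisoner i sees in front
        guesses[i] = (announced - heard - seen) % 2
        heard = (heard + guesses[i]) % 2   # his guess is correct, so count it
    return sum(g == h for g, h in zip(guesses, hats))

random.seed(1)
trials = [parity_strategy([random.randint(0, 1) for _ in range(100)])
          for _ in range(1000)]
print(min(trials))  # 99: everyone but the back prisoner is always right
```

Each run frees at least 99 prisoners; only the first guesser's fate is left to chance, exactly as the argument predicts.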
It's interesting to notice that a larger number of hat colors poses no problem here. For any set of hat colors H, the prisoners can pick an abelian group structure on H. Then, the first prisoner guesses the 'sum' of all the hat colors he can see. The next guy can then subtract the sum of the hat colors he sees from the hat color the first guy said to find his own hat color. Again, this argument repeats, and so everyone except the first guy gets out. For the case of black and white, the previous argument used black = 0 (mod 2) and white = 1 (mod 2).

This is all well and good, but it doesn't seem to help the countably infinite prisoners in the second puzzle. Since they can't hear anyone else's guess, they can't set up a similar system for passing on information. So what can they do?

First, instead of thinking of hat colors, they just turn white into 1 and black into 0 (like above). Then, a possible scenario of hats on their heads is an infinite sequence of 1's and 0's. Call two such sequences 'equivalent' if they are equal after a finite number of entries. This is an equivalence relation, and so we can talk about equivalence classes of sequences. Next, the prisoners invoke the Axiom of Choice to pick an element in each equivalence class, which they all agree on and memorize. Now, when they are put in line and get a hat, they will be able to see all but a finite part of the sequence, and so they can all tell what equivalence class they are in. Their strategy is then to guess as if they were in the pre-chosen element of that equivalence class.

How well does this work? Well, the sequence they are actually in and the representative element they picked with the axiom of choice must be equivalent, so they are the same after a finite number of entries. Therefore, after a finite number of incorrect guesses, each prisoner will miraculously guess his hat color correctly!

This solution is also pretty stable, in that most attempts to make the puzzle harder don't break it. The warden can know their plan and even know their precise choice of representative sequences. If so, he can make sure any arbitrarily large finite number of them are wrong, but he can't get an infinite number of them. Also, the number of hat colors can be arbitrarily big; the same solution works identically.

This last point is pretty trippy. In the two-color case, it's very reasonable for any prisoner to guess his hat color correctly, and also for arbitrarily large numbers of them to get it right in a row. Effectively, at no finite point in the guessing do the results of the optimal strategy appear to differ from random guessing. However, if there are uncountably many hat colors, then the probability of any prisoner randomly guessing his hat color is 0. One can reasonably expect no prisoners to be correct under random guessing, so when eventually that first prisoner guesses correctly, the warden should be rightly shocked (though not as shocked as he will be when all but a finite number of prisoners guess correctly).

I find this solution deeply troubling to the intuitive correctness of the axiom of choice. Sure, this is based primarily on my intuition for finite things and a naive hope that they should extend to infinities. What I find particularly troubling is the uncountably-many-colors case, where any given prisoner has no chance of guessing his hat color correctly, and yet almost all prisoners are correct.
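For definiteness, the strategy can be stated in symbols; the notation below ($\sim$, $r$, $g_n$) is introduced here and is not from the post. Write a hat assignment as a sequence $x = (x_0, x_1, x_2, \dots)$ in $H^{\mathbb{N}}$ and declare

$$x \sim y \quad\Longleftrightarrow\quad \exists N \text{ such that } x_n = y_n \text{ for all } n \ge N.$$

By the Axiom of Choice, fix a representative $r(E) \in E$ for every equivalence class $E$. Prisoner $n$ sees the tail $(x_{n+1}, x_{n+2}, \dots)$, which already determines the class $[x]$, so he can guess

$$g_n = r([x])_n.$$

Since $x \sim r([x])$, there is some $N$ with $x_n = r([x])_n$ for all $n \ge N$, so at most finitely many prisoners guess wrong.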
174 Responses to "The Axiom of Choice is Wrong"

1. John Armstrong Says:
So you mean that you consider the existence of non-Lebesgue-measurable sets the death-knell of the Axiom of Choice? Because the use of the Axiom in this strategy is essentially the same as how it's used to pick out a non-measurable subset of the interval.

2. Charles Says:
The problem I see with the intuition is that you're bringing in several infinities. For instance, first you have to have a countably infinite set of people, which itself poses a certain problem. The second is that each of them must have a countably infinite memory to be able to recall a countable collection of real numbers. Personally, I find the notion that these people are not merely perfect logicians, but also have such enormous memories, enough to push this beyond the scope of anything I am willing to believe I have intuition for...infinitely many people I have no trouble with...but infinite memory is a much trickier thing...

3. Greg Muller Says:
It's not that the existence of a non-Lebesgue-measurable set kills the axiom of choice. I have no direct opinions on the validity of a world with or without non-measurable sets. It's that when you restate the same construction in this language, it violates how I would like probability to work, if you could generalize these probabilistic ideas to the infinite cases here.

4. Kea Says:
I never took much notice of Banach-Tarski until recently either, but having decided against AC on very pragmatic grounds (non-distributivity of lattices in higher topos theory for physics) I found myself looking at it again. Now I think it always was an excellent argument against AC, especially in the context of measurement! Have you seen the cool book by Wagon?

5. Grant Says:
I agree with Charles — in the solution, people have countably infinite memories. Even if that was possible, they have to agree upon a representative element in each of a countably infinite set of equivalence classes. How would such an agreement happen in a finite amount of time?

6. Isabel Says:
Having not seen this post, I wrote a more humorous post about the Bananach-Tarski paradox. It must be something in the air today.

7. Aaron F. Says:
Pardon me, but isn't the axiom of choice logically independent of the ZF axioms? If the axiom of choice and its negation are both valid extensions of ZF, why bother arguing about which one to use? Relatedly, is there an analogue in set theory to the idea that Euclidean geometry describes flat space, while non-Euclidean geometries describe curved spaces? Are there many possible alternative versions of the axiom of choice, just as there are many possible alternative versions of the parallel postulate?

8. Terence Tao Says:
This paradox is actually very similar to Banach-Tarski, but involves a violation of additivity of probability rather than additivity of volume. Consider the case of a finite number $N$ of prisoners, with each hat being assigned independently at random. Your intuition in this case is correct: each prisoner has only a 50% chance of going free. If we sum this probability over all the prisoners and use Fubini's theorem, we conclude that the expected number of prisoners that go free is $N/2$. So we cannot pull off a trick of the sort described above.

If we have an infinite number of prisoners, with the hats assigned randomly (thus, we are working on the Bernoulli space $\mathbb{Z}_2^{\mathbb{N}}$), and one uses the strategy coming from the axiom of choice, then the event $E_j$ that the $j$th prisoner does not go free is not measurable, but formally has probability 1/2 in the sense that $E_j$ and its translate $E_j + e_j$ partition $\mathbb{Z}_2^{\mathbb{N}}$, where $e_j$ is the $j$th basis element, or in more prosaic language, if the $j$th prisoner's hat gets switched, this flips whether the prisoner gets to go free or not. The "paradox" is the fact that while the $E_j$ all seem to have probability 1/2, each element of the event space lies in only finitely many of the $E_j$. This can be seen to violate Fubini's theorem – if the $E_j$ are all measurable. Of course, the $E_j$ are not measurable, and so one's intuition on probability should not be trusted here.

There is a way to rephrase the paradox in which the axiom of choice is eliminated, and the difficulty is then shifted to the construction of product measure. Suppose the warden can only assign a finite number of black hats, but is otherwise unconstrained. The warden therefore picks a configuration "uniformly at random" among all the configurations with finitely many black hats (I'll come back to this later). Then, one can again argue that each prisoner has only a 50% chance of guessing his or her own hat correctly, even if the prisoner gets to see all other hats, since both remaining configurations are possible and thus "equally likely". But, of course, if everybody guesses white, then all but finitely many go free. Here, the difficulty is that the group $\lim_{n \to \infty} \mathbb{Z}_2^n$ is not compact and so does not support a normalised Haar measure. (The problem here is similar to the two envelopes problem, which is again caused by a lack of a normalised Haar measure.)
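To spell out the finite-$N$ computation in the first paragraph of the comment above (the event names $C_j$ are introduced here, not in the comment): if prisoners cannot hear one another, prisoner $j$'s guess is a function only of the hats he can see, which are independent of his own hat, so the event $C_j$ that he guesses correctly has $\Pr[C_j] = 1/2$ under any strategy. Linearity of expectation (the relevant special case of Fubini's theorem) then gives

$$\mathbb{E}\big[\#\{\text{prisoners freed}\}\big] = \sum_{j=1}^{N} \Pr[C_j] = \frac{N}{2}.$$

It is exactly this sum that has no meaning for infinitely many prisoners, where the events $E_j$ are non-measurable.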
The "paradox" is the fact that while the $E_j$ all seem to have probability 1/2, each element of the event space lies in only finitely many of the $E_j$. This can be seen to violate Fubini's theorem – if the $E_j$ are all measurable. Of course, the $E_j$ are not measurable, and so one's intuition on probability should not be trusted here. There is a way to rephrase the paradox in which the axiom of choice is eliminated, and the difficulty is then shifted to the construction of product measure. Suppose the warden can only assign a finite number of black hats, but is otherwise unconstrained. The warden therefore picks a configuration "uniformly at random" among all the configurations with finitely many black hats (I'll come back to this later). Then, one can again argue that each prisoner has only a 50% chance of guessing his or her own hat correctly, even if the prisoner gets to see all other hats, since both remaining configurations are possible and thus "equally likely". But, of course, if everybody guesses white, then all but finitely many go free. Here, the difficulty is that the group $\lim_{n \to \infty} {\Bbb Z}_2^n$ is not compact and so does not support a normalised Haar measure. (The problem here is similar to the two envelopes problem, which is again caused by a lack of a normalised Haar measure.)

9. Charles Says: Aaron, the reason that people argue about which one to use is that not everyone is a formalist. To a strict formalist, neither one is truer than the other, and so neither has an advantage (except that you can prove more theorems with AC). Some people are Platonists, in which case they believe that even independent statements are true or false, and it's really a matter of finding the right axioms that imply the correct theorems. Personally, I fall into the latter camp and believe that the Axiom of Choice is true. As for your question about set theory: one way to look at the different geometries is as different models of absolute geometry (just the first four postulates), which hold in hyperbolic, Euclidean and elliptic geometry. In this context, it's just that ZF(not C) and ZFC define distinct models of ZF, and we can think of ZF as playing the same role as absolute geometry.

10. Todd Trimble Says: As someone generally out of sympathy with Platonist philosophies of mathematics, I myself am completely agnostic on the issue of AC. As Kea points out, it holds in some universes of mathematics (read: toposes), and not in others, insofar as any of these things "exist". In my view, an appropriate response to "is it right, is it wrong?" might be: Mu! Wrong question! Although I have to agree: the paradox Greg describes does show very strongly and vividly how much is really at stake with AC, if you really "believe" in these things. Excellent post. I read somewhere (maybe in Wagon's book? and Kea's right, it is a nice book) that the Banach-Tarski paradox was a kind of fishes-and-loaves story invented for the express purpose of making AC seem ridiculous. But the plan sort of backfired — the overall reaction in the mathematical community seems to have been a kind of delight at the crazy things you can do with mathematics, and not that many took it to be all that devastating, seeing as how these funky non-measurable sets are so obviously mathematical and so obviously devoid of physical sense. I guess my reaction to this entry's paradox is also somewhat in that vein.
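(A quick concrete check of the choice-free variant Terry describes in comment 8: if the warden can hand out only finitely many black hats and every prisoner simply guesses white, then all but finitely many go free, even though each individual prisoner is "50/50" about his own hat. The sketch below is illustrative only: a finite Python list stands in for an infinite sequence with an all-white tail.)

```python
def wrong_guesses(hats):
    # every prisoner guesses "white" (0); count how many are wrong
    return sum(1 for h in hats if h != 0)

# any configuration with finitely many black hats (1s); the 10,000-entry
# list is a stand-in assumption for an infinite all-white tail
hats = [1] * 37 + [0] * 9_963
print(wrong_guesses(hats))  # 37 -- all but finitely many go free
```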
11. John Armstrong Says: Todd, I agree with your comments in re topoi wholeheartedly, but with one slight modification: I believe that along with a Real World Out There there exists a Real Topos Out There. That is, there exists some topos whose internal logic is the logic underlying physical systems, be that classical, intuitionistic, quantum, or what-have-you. I'm ambivalent as to which topos "obtains", but only one does. And that topos either includes AC or it doesn't.

12. John Armstrong Says: Oh, and that response you mention is spelled "無". 😀

13. Todd Trimble Says: John, I don't believe in a Real Topos (but we can agree to disagree). I'm not a philosopher by temperament or training, but if pressed, I would not ascribe a definite internal logic to physical reality. I see logic more in terms of valid rules of inference applied to certain classes of propositions, i.e., in terms of language structure, and not as something residing in things-in-themselves. Of course I can see that one type of logic may be more appropriate than another for theories in given domains of validity, but I don't envision a monolithic "master logic" which would cover all domains, and so I don't envision a master topos either. My idea would be that one day we will need a whole network of toposes with differing internal logics to adequately express physical theory (if toposes are at all the right language to use!).

14. Walt Says: So you are agnostic on the question of whether the infinite direct product of non-empty sets is non-empty?

15. Ars Mathematica » Blog Archive » Axiom of Constructibility Says: [...] is a discussion at the Everything Seminar about everyone's favorite topic, the axiom of choice. The axiom of choice has various [...]

16. Aaron Bergman Says: I always thought that one was a bit of a cheat. It's really the product of sets over an uncountable indexing set that's the fishy thing.

17. Pierre Says: Surely you mean uncountably infinite memories, given that they have to remember a representative for each equivalence class, and the set of equivalence classes is already uncountable (given that each class is countable and 'countable set' x 'countable set' is countable).

18. Z Says: I think this is yet another example of the stark difference between our intuition and the way uncountable sets behave. I am inclined not to use the axiom of (uncountable) choice. Horst Herrlich also has a nice book on the topic.

19. Todd Trimble Says: Walt, certainly I'm agnostic about that. I don't hold a fixed conception of "set". That said, I am perfectly willing to contemplate models of ZFC (so I'm not an "atheist" 🙂 ). (Of course, some infinite products like $({\Bbb Z}_2)^X$ are obviously nonempty, since you have e.g. constant sequences or constant maps.)

20. Tom Leinster Says: I'd go further than what Todd says (though I might not go further than what he *thinks*). I'm not an "agnostic" or a "believer" or a "non-believer" in the Axiom of Choice. That's because it makes no sense to me to ask whether it's "true". That would imply some kind of external reality in which sets – including the uncountable ones – are supposed to live. If a physicist or astronomer could locate such a reality, then we could have a meaningful debate about whether AC is true. Otherwise, sets are a figment of our imagination – just a formal system. The analogy with the parallel postulate is a good one: it makes no sense to ask whether that's "true". There are models where it holds, and models where it fails. What more is there to say?
To be honest, I'm baffled by talk of a "real world" of sets. I have no idea what it means.

21. Walt Says: I think it's fine to have a non-fixed notion of "set". But do toposes exist? If I said, "The effective topos clearly doesn't exist. What are you, drunk?" you have no mathematical counterargument?

22. Oren Cheyette Says: This problem appeared a couple of years ago on an undergrad "problem of the week" list: http://mathforum.org/wagon/fall05/p1035.html. The issue I have with the infinite solution (which I also objected to in the Macalester problem) is in this sentence: "Next, the prisoners invoke the Axiom of Choice to pick an element in each equivalence class, which they all agree on and memorize." The difficulty is that AC doesn't provide a means to choose the elements of the equivalence classes (EC). It just says "it can be done". It's not a magic incantation that presents the actual EC elements. If there were an algorithm for choosing the EC elements, you wouldn't need AC. But the prisoners need an actual algorithm. I got a (probably warranted) dismissive reply to this objection from the list maintainer, but I still don't see how AC provides a solution. "Assume AC" doesn't solve the problem: the prisoners have to be able to identify the EC element, and without an algorithm they can't do it. Of course you can't have an "actual" infinite set of prisoners either, so maybe this is a pointless nitpick. But the semantics of the word problem imply that one should find a method. That's not captured by invoking AC, which amounts to saying "assume a method exists" (for choosing the EC element).

• Jean-Louis Dornstetter Says: My take on AC is different from yours: AC *is* the "means to choose the elements ...". Of course, it is not specific on *how* to do this, else there would be no need to accept it as an additional axiom; it would be a ZF-theorem. Accepting AC, we allow ourselves to 'actually pick' one such beast, and share it among prisoners etc... My favourite look at AC is that it 'provides' you a procedure to pick integers with uniform density: real numbers x and y are 'equivalent' if they differ by a rational. Use AC to pick (and memorize!) a set E of real numbers in [0,1] that contains one representative of each equivalence class. Now pick a random number u in [0,1]: u differs from exactly one member of E by the n-th rational number. Your random integer pick is n, and by every reasonable argument, no outcome is more or less likely than another: AC 'provides' a *uniform* random choice over N, which I find simpler and no less puzzling than this paradox or Banach-Tarski.

23. Todd Trimble Says: Actually, to be clear, I'm in full agreement with Tom. The word "agnostic" is not at all what I mean, if it is interpreted as my claiming there's a mathematical "truth" out there, but I make no claims on what it is. Rather, I think the whole question of "truth" of the axioms of ZFC, or of any piece of mathematics, doesn't even make sense to ask; it is inadmissible from the get-go. (My earlier Mu! was a too-cute way of expressing that, and I'm glad Tom expressed it more clearly.) From this POV, it is inadmissible to ask whether a piece of mathematics is "true", but it is however possible to be reasonably clear and certain about its deductions and calculations. I strongly recommend Saunders Mac Lane's book Mathematics: Form and Function, for some very clear thinking on this and other philosophical topics.
In particular, the philosophical position taken here is not to be confused with a straw-man Formalism, which holds that mathematics is nothing but manipulation of symbols which have no meaning. I rather tend to an opposite conclusion: that mathematical forms ("point", "line", "set") can carry a multiplicity of meanings, not fixed in advance. The example of Euclidean vs. non-Euclidean geometry provides good evidence for this.

24. Tom Leinster Says: Todd – yup, I thought we'd probably be in agreement. (And your "mu" was perfectly clear, not too cute at all.) I guess I just wanted to ram home the point that there are many possible notions of "set", and to make sure no one interpreted "agnostic" as meaning "I don't know whether it's true or false". Walt asks how I'd reply to "do toposes exist?", or "does the effective topos exist?" I'd reply: it's a meaningless question unless you say what you mean by existence. You could change "toposes" to "groups" or "circles" or "whole numbers", and my reply would be the same.

25. Todd Trimble Says: I'd reply roughly the same way as Tom did to whether toposes exist, and add that I would find the question "are the axioms of topos theory, or of ZFC, etc. consistent?" much more palatable. At least the consistency hypothesis is in principle falsifiable, and therefore not meaningless. And naturally, I am perfectly prepared to act as if ZFC (or topos theory or whatnot) *is* consistent. But that doesn't commit me to believing toposes "exist" — that sounds almost religious to me!

26. Slawekk Says: One criterion may be how much useful mathematics can be done with vs. without AC. How much measure theory can be done without some form of AC? If measured by cash flow generated by direct application, stochastic calculus is probably the most applied area of mathematics. Can we do stochastic calculus without AC?

27. Kea Says: Although mathematically I would tend to side with the category theorists, this neoplatonism is interesting, with the caveat that an ill-defined 'out there' is a nonsensical physical notion in the physical domain to which topos theory might possibly be useful, namely quantum gravity. Nonetheless, if we try to extend this philosophy into higher-category constructive-number-theory land, where one isn't allowed to dump Natural Number objects into Set, it makes some sense: there isn't a Real 1-Topos, but there should be a canonical hierarchy of weak n-toposes which characterises a Set (probably without the axiom of choice).

28. John Armstrong Says: Kea, thank you for pointing out the benefits of philosophical sloganeering, namely that they're concise at the expense of hideous accuracy.

29. Walt Says: I think "interpretable in a standard model that we all agree is consistent" is the minimal shared meaning of mathematical existence. So I'm happy to say that a topos exists if you can show me a model in ZFC or something similar.

30. Todd Trimble Says: Well, a model of ZFC would be a topos (you don't need the C, and you don't need the F). And there is no problem in defining other toposes relative to that model (e.g., Grothendieck toposes, realizability toposes...). I'm honestly not sure what you're driving at here, but surely this is getting off-topic. Would you like to continue off-line?

31. Cale Gibbard Says: Your prisoners have quite good memories. I think I might have trouble remembering the selected sequence for each of the uncountably many equivalence classes that the warden could choose.
If you only allow the prisoners a finite amount of memory, then it no longer works. I think that might have something to do with what is so intuitively troubling to you. You're not used to running into people who can keep track of an infinite amount of information, like your prisoners can.

32. Greg Muller Says: Wow, people had lots of strong opinions on this. I'll take that as a good sign. Several people mentioned the inherent difficulties in the prisoners' nearly-divine capabilities: explicitly constructing the representative sequences, communicating them, memorizing them, seeing and processing countably many hats, etc. Certainly this is both impossible and a fairly legitimate reason for intuition to be inapplicable. I mean, my brain dismisses the Banach-Tarski paradox as non-concerning for similar reasons ("infinity can do crazy things!"). I think some arguments just hit certain people the right way. Maybe more analytically minded people can't countenance the existence of a non-measurable set, and find that particular fact most distasteful about AC. I'm a bit scared to wade into the discussion of whether or not there is a One True Topos, since it seems roughly as dangerous as discussing religious beliefs with a group of strangers. Still, I think it's hard to be a mathematician and not form opinions on the 'right' choices for various undecidable propositions: I believe in Countable Choice and I keep Uncountable Choice at arm's length. I believe that there are no infinities between the number of integers and the number of reals. I believe that |2^A|=|2^B| implies that |A|=|B|. Why do I believe these things? For no better reason than the crude generalized-intuition arguments like the above. However, I also think it's important to remember how baseless these beliefs are. It's like how I can root for a football team, but still remember that my allegiance is due primarily to proximity and nothing more meaningful.

33. Walt Says: My argument, which I'm clearly butchering into incomprehensibility, is that when a mathematical object is said to "exist", this means that it has a model in some system that everyone agrees is consistent, nothing more. So the gap between the Platonist position and your own is small.

34. Michael Smith Says: Interesting problem. The difficulties involving a probability measure were the first to spring to mind for me, but I don't think we have to bring that into it. What I noticed is this: the probability of *any particular* prisoner guessing correctly is 1/2, just like you would expect. The finite part that the two sequences disagree on can be arbitrarily long, so it doesn't help out any particular prisoner. Thus the prisoners who are guaranteed to guess correctly are part of some sort of fictitious "infinite tail" of the line of prisoners; it seems like this is more a symptom of poor intuitive notions of infinity than some actual problem with the Axiom of Choice. I notice that the prisoners don't care about the hats on *any* finite set of prisoners, only the infinitely distant ones, which is somewhat nonsensical. Another interesting point is that in addition to having infinite memories, the prisoners have infinite computational power: comparing any finite number of hats against their memories doesn't get them anywhere. Consequently, there are no halting algorithms that implement the algorithm you suggest. If we limit the prisoners to computable functions of the hats they can see, they're screwed. Just my two cents.

35.
Plac Ebo Says: QUOTE FROM NEAR TOP OF ARTICLE: "That guy would then guess correctly, but then the next guy would be in the same situation as the first guy. Repeating this idea gets only 25 prisoners out guaranteed, with an average of 37.5 getting out." This is way over my head, but fascinating none the less. Just a simple question/observation: the initial problem stated that there were 100 prisoners. Using the above-quoted technique, shouldn't 50, rather than 25, be guaranteed to be set free? And, assuming the warden assigned hats randomly, wouldn't the average be 75 set free?

36. tdbwd Says: If all you smart folks will suffer the comments of a decided non-scientist... I think that most of the prisoners will think it's a great joke to lie to one another about their hat colors, and all but three will stay locked up. Given the recidivism rate, one of the three will be back in prison within a month and another within a year, where the story of the colored hats will circulate and grow into a legend and provide much welcome entertainment. Don't worry, I won't invade your discussions again, except to read them — very interesting stuff.

37. Top English WP Blogs « Hành trang 8X Says: [...] The Axiom of Choice is Wrong When discussing the validity of the Axiom of Choice, the most common argument for not taking it as gospel is the [...]

38. Todd Trimble Says: Walt said: "My argument ... is that when a mathematical object is said to 'exist', this means that it has a model in some system that everyone agrees is consistent, nothing more. So the gap between the Platonist position and your own is small." (Walt, I'll assume that your "your" generally refers to me, but it's sometimes hard to tell, since these comments are not nested. Could be you meant Tom Leinster, for example.) Let me take your second sentence first. I don't agree — the traditional Platonist insists there is an *absolute truth value* for the continuum hypothesis, say. That is nowhere close to my position (it's hard for me even to make sense of it). I interpret mathematical truth in a relative sense — relative to whatever framework we happen to be talking about. And your first sentence suggests to me, at least in one reading, that it's this sort of relative existence that you mean (whether syntactic: a provable consequence of a theory, or semantic: holding in a specific model of a theory). If so, then yeah, sure, I use the word "exists" that way all the time without batting an eye — that's just ordinary language use. That's not Platonism — not even close! The way you put it leaves me wondering though what you really mean. "Some system that everyone agrees is consistent." Do you mean some foundational theory, say ZFC, specified at the outset? Or do you mean just any old system of axioms that we (whoever "we" happen to be) agree to work with? [Sorry — I don't like the formulation "everyone agrees is consistent". "Everyone": I believe there are excellent mathematicians, e.g., Edward Nelson, who wouldn't agree to say ZFC is consistent. Speaking personally, if someone collared me and asked, "do you agree that ZFC is consistent?", I'd probably splutter a bit and say "How the heck should I know? But we can take these axioms as a starting point," etc. — maybe that's all you meant.] Now, if you do have in mind a specific foundation like ZFC as the standard to appeal to, I'm still not 100% happy.
Here's why: imagine for example two nineteenth-century geometers, and one says to the other, "The sense in which your hyperbolic plane exists is that we can translate it in terms of a disk model construction which lives in an ambient Euclidean space, and of course we all agree that Euclidean geometry is consistent. (But those aren't real lines, you know; they're circular arcs.)" This would be a possible system of "government", of course, but this privileging of one framework to which all others must appeal means that the others must inevitably undergo contortions first before they are admitted into the guild. Something like this seems to happen all the time: if you take something like non-standard analysis, the models may look horribly complicated from the standpoint of encoding them in terms of "ordinary" set theory, but from within the theory may look simple and beautiful and compelling. Don't get me wrong: relative consistency checks are important. But it's a two-way (or many-way) street, and in general I would favor a more decentralized, "federated" approach — let people work with whatever axiomatic framework they want! The more pragmatic and socially adapted will build strong networks between various frameworks (transfer principles and so on), and thereby derive a lot of insight. Of course, this sort of thing goes on anyway — all I'm suggesting is let's not be too provincial and religious in our beliefs about what is "really true", or proper foundations, in mathematics. That wouldn't make sense to me.

39. Richard Says: Hasty thought while traveling. Attempting to mix the Axiom of Choice with hypothetical undefined human consciousness in this way, and in fact an infinite number of such, does not seem to me to be rigorous mathematics. The AOC merely asserts the existence of a choice function, and this existence has no dependency on the concept of consciousness. Moreover, as noted above, a specific choice function is unreachable by consciousness in the case of infinite collections of infinite sets. Attempting to marry the AOC with consciousness like this does not feel valid to me in any way, and leaves a taste in my mouth like a mix of milk and vinegar.

40. Walt Says: I was arguing with you, Todd, don't worry. 🙂 Your argument actually illuminates something to me that I never understood. I think it's perfectly fine to translate hyperbolic geometry into a model within Euclidean geometry. I wouldn't think that demotes hyperbolic geometry into a second-class citizen. But clearly some people would feel that way, and that explains the eagerness in some quarters to dethrone set theory as the conventional testing ground for "existence".

41. Barak Pearlmutter Says: Terence Tao's comments about measurability are of course technically correct, but to my mind they miss the deep intuition here, which is the connection between the apparent paradoxes of measure theory and Gödel's incompleteness theorem and model theory. Given this "axiom of choice" escape strategy, let us ask the following question. We know that, by definition, at "some point" the sequence we are on will agree with the "chosen" sequence forever. In other words, if $s=(s_1, s_2, \ldots)$ is the sequence we are on, and $q=(q_1, q_2, \ldots)$ is the "chosen" sequence, i.e., the member of the equivalence class that includes $s$ which the axiom of choice has chosen for us, then by definition $\exists N \; \forall i \geq N: \; s_i = q_i$. Let us call that $N$ the "point of convergence". How big is $N$?
One way of getting at this is to consider some particular value $K$ and ask whether $K < N$. For any fixed $K$, the event $K > N$ has probability zero. So for any $K$ you pick, you can be sure (with probability one) that $K < N$.

• EMF Says: And my second-favorite set theory is a variation on NBG. Take the language of ZF plus a unary function $u$; let the axioms be those of NBG relativized to $u(x)$, where we replace "$y$ is a set" by "$y\in u(x)$" and "$y$ is a class" by "$y\in u(u(x))\wedge \forall z\in y.\, z\in u(x)$", except for the axiom of infinity (which we won't need, because $u(\emptyset)$ contains all the hereditarily finite sets anyway).

154. A game with AC, plus the story of a poster. – General abstract nonsense. Says: [...] I stumbled upon an interesting article about a 'paradoxical' appearance of the axiom of choice in a generalization of the [...]
mlfoundations/dclm-baseline-1.0
default
0.37
digitalzero Member, BASIC Posts: 639
Is anyone else having trouble with scene transparency? I'm trying to make it totally opaque but it's not seeming to work... It's a pause menu, but I would like for it to be transparent.

• AlkaPP Member, PRO Posts: 194
Sorry, but I'm confused. Do you want it transparent or not transparent? By default, the pause scene will be transparent; if you want it opaque, then add a blank actor as a background.
My GameSalad Games On App Store: Greedy Chubby

• NKBDL Member, PRO Posts: 100
If you wish to have "transparency" for the pause menu, do the following:
1. Make a "scene2" as your pause scene.
2. On scene 1 (your base scene with your backgrounds etc.), add an actor as your pause button with a rule: when touch is released, pause game and show this scene; pick "scene2".
3. On "scene2", add an actor covering your scene size, make it whatever color you wish, say black, and set its alpha to 0.5.
That should answer your question. You will need to make an unpause button for scene2 as well.
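(For anyone curious how the overlay idea looks outside GameSalad, here is a rough Python/pygame sketch of step 3: a scene-sized black rectangle at 50% alpha blitted over the paused game. The window size and names are illustrative assumptions, not GameSalad behavior.)

```python
import pygame

pygame.init()
screen = pygame.display.set_mode((480, 320))

# ... draw the normal game scene here ...

# Semi-transparent pause overlay: a scene-sized black surface at 50%
# alpha, the equivalent of the "alpha 0.5" actor suggested above.
overlay = pygame.Surface(screen.get_size())
overlay.fill((0, 0, 0))     # whatever color you wish, say black
overlay.set_alpha(128)      # 0 = fully transparent, 255 = fully opaque
screen.blit(overlay, (0, 0))
pygame.display.flip()
```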
mlfoundations/dclm-baseline-1.0
default
0.37
The recent and ongoing global financial crisis has resulted in many terms from the financial world making it into the everyday household through media and politicians. In the early years of this millennium most people would never have heard of a CDO, CDS, MBS or Subprime Mortgages. Another term that has often been used in the many reports is the threat of a run on the banks. This raises the question "What is a Bank Run?" In order to answer that question we first need to get a basic understanding of how banks and the monetary system work on a very basic level. So don't worry, I'm not going to overwhelm you with highly technical details that would bore you to insanity.

How Banks Handle Money

Everybody understands that banks are in the money business, accepting deposits from one group of people and making loans out to another group of people. As a saver, you go to a bank and hand over your money in the trust that they will look after it, and in return you receive interest (depending on where you are in the world, interest in 2012 is pretty close to zero). Unless you deposit your money with a bank for a term that cannot be broken, through something that is known as a CD, or Certificate of Deposit, you essentially make a demand deposit. What this means is that you deposit money with the bank and can at any time go to your bank and ask for your money to be returned to you or moved to another account or another bank. While you have little control over what the bank does with your money once you leave, you still have, in theory, full control over whether your money stays in that bank.

Some readers may have heard of Fractional Reserve Banking (FRB), which is the way our monetary system works. Now I don't want to go into a complicated discussion of how FRB increases the money supply or whether it is a good or bad system; that is a debate that probably goes beyond a short article. For the purpose of understanding what a bank run is you only need to comprehend the following. Let's say you deposit $1000 with your bank in a demand deposit account, which allows you to withdraw the money whenever you so choose. The bank uses a portion of that money (about 90%) to make loans to other people and charge them interest on those loans. Thousands of other people do exactly the same thing, and the bank essentially only holds a fraction of the deposits quickly available, hence the term Fractional Reserve Banking. All is fine so long as those people that took out the loans continue to make repayments, and those that deposit money do not want to withdraw large amounts in a short space of time.

What Triggers A Bank Run?

In normal economic conditions, where there are no increased levels of loan defaults, this system of fractional reserves functions without too many people worrying about it. The problem arises when there is some sort of mass worry, panic or hysteria. The most recent crisis can definitely be described as a panic situation, and as we all lived through it we should have a good idea what the general sentiment was like. For a run on a bank to be triggered, some bad (sorry, very bad) news has to make it out to the public through the media. Bad news can include, but is not limited to, a major fraud scandal, unforeseen losses on the loans made, a crippling information systems fault or a sudden deterioration in the overall economy. These types of situations can then cause a lot of people who have deposits with a bank to arrive at the bank to withdraw or transfer their money.
Remember how I just pointed out that in the Fractional Reserve Banking system, banks only hold a portion of customers' deposits readily available, generally about 10%? All it effectively takes for a bank to be forced to close its doors is for more than 10% of its deposit customers to demand their money. It is because of this low level of withdrawals needed that bank runs happen so suddenly and can cripple a bank in a matter of days or even hours. Especially in today's era of Electronic Funds Transfer, bank runs can happen with a lot shorter queues outside physical bank branches.

Some Historic Examples

As already mentioned, bank runs are a phenomenon directly linked to Fractional Reserve Banking, so one has to look at the early days of FRB to see the first such incidents occurring. 17th century England saw some of the first experiments with FRB, when goldsmiths took in people's gold and silver, which was money at the time, for safekeeping and gave customers a receipt. When these receipts started being used for the exchange of goods, it became very tempting for goldsmiths to create more receipts than they had gold in their safes. This quickly backfired when their customers found out about it and demanded back their gold.

During the Great Depression in the US, hundreds of banks were forced out of business because the economy in general deteriorated so much that people simply didn't trust banks any more. Pictures of queues of people outside banks whose doors were closed are only too common, and the events were also dramatised in the 1946 movie "It's a Wonderful Life". Some people may remember that during the late 80s and early 90s there was a Savings and Loan crisis in the US, where over 700 savings and loan associations failed. But the most notable events in modern times were during the early days of the current global financial crisis. Pictures of queues of people outside Britain's Northern Rock bank circulated around the world. A very new term that has been doing the rounds in the media is "bank jog", which has been used to describe the situation in Greece, where depositors are withdrawing money from banks at a relatively slow pace compared to bank runs that happen within days or hours.

I hope this article has helped explain and clarify the phenomenon of a bank run.
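To make the arithmetic concrete, here is a minimal toy simulation in Python. It is my own illustration of the mechanism described above; the 10% reserve ratio and the deposit figures are assumptions taken from the discussion, not data about any real bank.

```python
RESERVE_RATIO = 0.10  # fraction of deposits the bank keeps as cash

def simulate_run(total_deposits, withdrawal_requests):
    """Pay out withdrawal requests until the cash on hand runs dry."""
    reserves = total_deposits * RESERVE_RATIO
    paid = 0.0
    for amount in withdrawal_requests:
        if amount > reserves:
            return paid, False  # the bank cannot honor this request
        reserves -= amount
        paid += amount
    return paid, True

# 1,000 depositors with $1,000 each; only 11% of them ask for their
# money back, yet the bank fails because it holds just 10% in reserve.
deposits = 1000 * 1000.0
requests = [1000.0] * 110
paid, survived = simulate_run(deposits, requests)
print(paid, survived)  # 100000.0 False
```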
HuggingFaceFW/fineweb-edu
default
0.333
Lately, many studies indicate that children with an autism spectrum disorder (ASD) diagnosis have brain pathology suggestive of ongoing neuroinflammation or encephalitis in different regions of their brains. This is unfortunate because, if a child with ASD has neuroinflammation, then treating the underlying brain inflammation may lead to improved outcomes. The goal of this review of the literature is to examine the evidence of neuroinflammation/encephalitis in those with an ASD diagnosis and to address how a medical diagnosis of encephalitis, when appropriate, could benefit these children by driving more immediate and targeted treatments.

In one study (n = 13), the authors stated that the microglia were activated in 9 of 13 cases with autism (69%). Tetreault et al. (2012) observed that all except one individual diagnosed with an ASD (of the 11 studied) had higher levels of microglial activation than controls. Thus, 91% showed microglial activation or neuroinflammation. However, Tetreault et al. (2012) also noted that the one individual with no microglial activation or neuroinflammation was an outlier, behaviorally, with respect to the other individuals diagnosed with autism who were examined. Thus, based on the available research, a conservative estimate suggests that at least 69% of individuals with an ASD diagnosis have microglial activation or neuroinflammation. However, given the low number of subjects examined in each of the presented studies, this estimate should be considered with care. The actual percentage could be more or less. For a more accurate estimate, a larger study is needed: one that quantitatively examines multiple regions of the brain for glial activation in concert with an assessment of additional markers of activation (e.g., cytokines); this would permit researchers to determine more precisely the frequency/percentage of individuals with an ASD diagnosis who also display microglial activation.

How Neuroinflammation Might Contribute to the Development of ASD: Regression, Encephalitis, and Clinical Symptoms

Knowledge of the effects of sustained and exaggerated neuroinflammation and microglial activation on brain connectivity is critical to understanding how neuroinflammation could contribute to the development of an ASD. Sustained and exaggerated microglial activation can lead to cell loss and loss of connectivity. As mentioned earlier, in a sustained neuroinflammatory state, microglia can adopt an amoebic phenotype and start engulfing synapses and other healthy brain tissue, with deleterious consequences for neurons and synaptic architecture (Lu et al., 2011; Rodriguez and Kern, 2011). Furthermore, when microglia are triggered to switch to an inflammatory phenotype, not only can this lead to microgliosis and neuroinflammation, resulting in a disruption of normal neuroimmune homeostasis, but this detrimental process can also continue long after the initial insult or cause of the activation has been resolved (Lu et al., 2011). As mentioned, the consequence of sustained microglial activation is usually cell loss and reduced connectivity, both of which are found in the brains of those with an ASD diagnosis (Rodriguez and Kern, 2011). An examination of the scientific literature in ASD clearly shows that connectivity is disrupted (Wass, 2011). Numerous studies show loss of connectivity in ASD (Kern et al., 2015).
In addition, the issues of connectivity in ASD have been shown to correlate with ASD symptom severity: the greater the cell loss and connectivity issues, the worse the ASD symptom severity (Kikuchi et al., 2014; Kern et al., 2015). Neuronal cell loss and reduced connectivity could understandably lead to a neurological loss of skills and abilities, or regression. Once a threshold of sufficient neuronal cell loss and neuronal disconnection has been reached, a child would become clinically symptomatic, i.e., present signs of regression or loss of skills and abilities. In addition, astroglial activation, usually associated with chronic neuroinflammation and found in ASD, has beneficial as well as detrimental effects (Kern et al., 2012; Skripuletz et al., 2013). Astrogliosis is sometimes accompanied by microgliosis and demyelination (Skripuletz et al., 2013). Neuronal demyelination could also lead to a neurological loss of skills and abilities and possibly characterize the regression scenario in ASD.

The concept of regression (loss of previously acquired skills and abilities) in some children with ASD has been validated by many studies (Tuchman, 1996; Davidovitch et al., 2000; Goldberg et al., 2003; Ozonoff et al., 2005, 2010; Werner and Dawson, 2005; Hansen et al., 2008; Stefanatos, 2008; Singhi and Malhi, 2012; Kern et al., 2014a,b). For instance, Werner and Dawson (2005) examined home videotapes of children with autism between their first and second birthdays, with and without a reported history of regression, as well as videotapes of typically developing children. Analyses revealed that infants diagnosed with an ASD with regression showed similar use of joint attention and more frequent use of words and babble compared with typical infants at 12 months of age. In contrast, infants diagnosed with an ASD characterized.
HuggingFaceFW/fineweb-edu
default
0.333
With greater emphasis being placed on patient engagement, it is becoming increasingly evident that healthcare organizations stand to gain a lot from extending care to their patients beyond just hospital visits. In order to develop continuous, long-term relationships with their patients, health organizations need to be more accessible to them. On examining what makes established health organizations successful, it isn't hard to see that they are thriving largely because they prioritize and anticipate the needs and wants of their patients. Understanding pain points and being able to provide a satisfactory healthcare experience (even before assisting with their health condition) creates a strong impression with your patients.

Unfortunately, patients aren't necessarily getting the healthcare experience they want. An Accenture study showed that patients are switching healthcare providers in search of better customer service, which is a huge annual loss (on average, $100 million) for health organizations. While technology in healthcare may be more commonplace now, there is a pressing need to align the multitude of channels to streamline communication and patient care in order to make it an effective addition to your health organization.

Fragmented solutions aren't going to help your patients or organization

Appointment booking, a patient portal or an online health records system are examples of the popular online tools that health organizations are introducing to their patients. However, owing to little or no communication between these channels, patients are still left wanting when it comes to effective engagement. The lack of integration between these different platforms and the disconnect from existing treatment workflows are reasons why patients switch care providers. Added to this, introducing new solutions like telemedicine, e-visits, online billing and prescriptions can potentially fragment communication between providers and patients further if they are not integrated into a common platform that allows comprehensive health data to be available to both providers and patients.

Mobile apps as an integrated solution for continuous, collaborative care

A custom mobile application can be an integrative solution for patient care and communication for a health organization. Your healthcare organization can benefit hugely from leveraging the increased dependency your patients have on their mobile phones for information and communication. Having a custom mobile application would, therefore, ensure you are accessible to your patients at the click of a button. Here's why this might be the simplest (yet most powerful) solution you could be providing your patients:

1. Improving patient engagement

Mobility gives health organizations the opportunity to deliver the best quality patient care. Mobile apps that link all patient services (appointments, consultations, health records, etc.) make it user-friendly and easier for patients to stay in touch with your organization.

2. Secure channel of communication

Given the sensitive nature of health communication, a personal mobile app specifically designed for provider-patient communication is safer than email, instant messaging or social media.

3.
Empowering patients by understanding their needs

According to a 2016 study conducted by Accenture, what patients want most from their providers' mobile apps is access to their health records, the ability to book and change appointments, and the ability to request prescription refills. In short, patients are looking for ways to reduce the need for repeated hospital visits. A mobile app offers a solution that is mutually beneficial to you and your patients.

4. Personalization with technology

Being able to send notifications to your diabetic patients about a health camp or sharing information with heart patients about the latest medication can be done without worrying about SMS subscriptions and emails. With mobile apps, you are also assured this communication is seen by your patients and not lost within email inboxes.

5. Leveraging the latest technology

Besides automating administrative tasks like appointment scheduling and notifications, the mobile app is a powerful channel for telehealth technology. The ability to consult with their providers through a visual medium, while being able to share important health information, means that your patients can get second opinions, follow-up care and prescription refills from their providers.

6. Making collaborative care a reality

An integrative platform for patient engagement and communication, connected to patient health records, means that your providers have all their patient information in one place, easily accessible to them. Being able to do so on a mobile app makes collaboration between providers even more feasible and also offers better opportunities for coordination with affiliate health providers that your organization may be working with.
mlfoundations/dclm-baseline-1.0
default
0.37
Congo Basin Forests

Stopping illegal logging and forest exploitation in Central Africa's threatened tropical rainforest

The vast forest of the Congo Basin is the second-largest tropical rainforest on Earth and serves as the lungs of Africa. Its incredibly rich and diverse ecosystem provides food, fresh water, shelter and medicine for tens of millions of people and a home to critically endangered wildlife species.

© Christian Kaiser / Greenpeace

Of the hundreds of mammal species discovered in the Congo Basin so far — including forest elephants, gorillas, chimpanzees, and okapis — 39 are found nowhere else on Earth. Of its estimated 10,000 plant species, 3,300 are also unique to the region. The Congo Basin rainforest supports an astonishing range of life within its teeming rivers, swamps and savannahs, but it also helps to sustain life across the whole planet. The soils and plants of the Congo Basin rainforest store incredible amounts of carbon, preventing it from being emitted into our atmosphere and fueling climate change. Forests in the Democratic Republic of the Congo (DRC) alone are estimated to be the fourth-largest terrestrial carbon reservoir in the world. Despite all this, the Congo Basin's forests are under threat.

Land Grabbing and Industrial Agriculture

In recent years, investors from around the world have been focusing heavily on Africa in efforts to exploit the continent's rich natural resources, often at the expense of local communities and the environment. The trend of buying or leasing large areas of land in Africa to extract resources for export has been termed "land-grabbing" due to the speed and scale at which it's taking place and the opaque nature of some of the land deals that have been negotiated. The UN has warned that these deals could severely undermine food security, hamper long-term economic development, and lead to the loss of important ecosystems. The Congo Basin is the target of several international industrial-scale agriculture developers, including palm oil and rubber, who are looking to cash in on new operations in Africa. These plantations, however, often fuel wide-scale deforestation and spark social conflict.

Herakles Farms and SGSOC

Greenpeace first introduced you to one of these destructive palm oil projects in 2012. Herakles Farms was a deceitful U.S.-owned corporation aiming to develop a huge new palm oil plantation, known as SG Sustainable Oil Cameroon (SGSOC), in the southwest region of Cameroon. This project became widely known as a stark example of the kind of threats that industrial plantations can present to human rights, wildlife, and the global climate. It was poised to destroy a large area of dense, carbon-rich natural rainforest, including critical habitat for endangered wildlife like the chimpanzee. And it was doing so without the consent of local communities, many of which began actively opposing this international corporation in an effort to stop it from stealing their traditional land. Many voices joined Greenpeace in the call for this destructive development to be stopped before it was too late for the people and forest of Cameroon. Unfortunately, despite the project size being dramatically reduced and the original investors leaving the project, SGSOC is still moving forward. Research shows this project is continuing to clearcut dense natural rainforest that has been identified as vital for endangered wildlife and that often serves as a corridor between five nearby protected areas.
And despite claims that the project will boost the economy and create jobs, the company's plans continue to be met with widespread opposition from communities and have attracted fierce criticism from local NGOs. In 2015, some of the few who did get jobs from this project not only lost them but claimed not to have been paid for past work. The SGSOC palm oil plantation is the wrong project in the wrong place, and Greenpeace is dedicated to fighting for the people and environment in Cameroon and against destructive projects like this one.

Unsustainable and illegal logging in the Congo Basin forest — by both big and small companies — is leading to deforestation, destruction of wildlife habitat, diminished resilience to climate change, and damage to local communities. For too long, valuable trees have been illegally cut for timber and exported for products like furniture and flooring. Currently, illegal timber cut in the Congo Basin is being sent around the world, including to the United States, the European Union, and increasingly to China. Both the United States and the EU have banned importing illegal timber. The Lacey Act and the EU Timber Regulation, respectively, are beginning to be enforced and are changing how companies assess the timber they are buying. However, as long as illegal timber can flow into China, be turned into finished consumer goods, and then be resold on the global market, the incentive to illegally log the Congo Basin forest will remain. Greenpeace is investigating how this triangle of timber interests (between Congo Basin countries, China, and developed countries) fuels uncontrolled logging operations today, but it may one day be the path to stopping illegal loggers in their tracks. By following where illegal timber from countries like Cameroon and the Democratic Republic of the Congo ends up, we're shedding light on the global timber trade and exploring ways to put an end to the driving forces behind illegal logging.
HuggingFaceFW/fineweb-edu
default
0.333
US stocks fall, closing out a lackluster February

Stock futures edged lower at the end of a month of weak performance. Dow Jones Industrial Average futures fell 70 points, or 0.2%. S&P 500 futures slipped 0.3%, while Nasdaq 100 futures dropped 0.4%.

The moves came as Wall Street wrapped up a disappointing February. The Dow Jones ended the month down 4.19%. The S&P 500 and the Nasdaq Composite lost 2.61% and 1.11%, respectively. February's slide dragged the Dow Jones into negative territory for the year, while the other two indexes are still holding on to gains.

According to Keith Buchanan, senior portfolio manager at Globalt Investments, the decline marks a turning point from the January rally and is partly due to the shocking jobs data in the first week of the month. According to the report, nonfarm payrolls added 517,000 new jobs in January, far exceeding the 187,000 estimated by economists surveyed by Dow Jones.

"From the strong labor market data, investors can see that the Fed will continue to tighten aggressively in the future," Buchanan added.

Investors will be watching economic data on construction and manufacturing after the market opens on Wednesday. Consumer companies including Lowe's and Kohl's, and technology groups Salesforce, Okta and Snowflake, will report fourth-quarter earnings today.

CNBC
HuggingFaceFW/fineweb-2
vie_Latn
0.0775
Beyond VoIP TMC

$80 to Replace a Headlight? Yeah, Right...

September 8, 2006

A while ago, I blogged about how frustratingly technologically complex today's automobiles have become, and how difficult it's become to make even relatively minor repairs. Recently, a headlight lamp on my Audi A4 blew, and with much trepidation, I called my friendly thief of a dealer to inquire how much it would cost to replace. Since the car is off warranty (it has very low miles, so I figured "what could go wrong..."), I was told that the charge would be around $80. $80 for a headlamp bulb replacement? Are they nuts?? The explanation is that some parts have to be removed in order to change the bulb, and that it can be a tricky procedure to "do it yourself." Well, screw that! I found some handy, simple instructions on the Web, from similarly fed-up Audi owners who decided to tackle the repair themselves. In a nutshell, the repair took me all of 10 minutes, and cost just $12.95 for a new bulb. My wish is that today's auto companies will someday start to realize that not all of their customers are complete idiots when it comes to making minor repairs on their cars, and that a good many of them actually like to work on them. Supporting the do-it-yourselfer may not help drive the growth of service revenues, but it'll sure go a long way towards creating a fiercely loyal customer base that just might come back for a bevy of performance enhancements and a new car when the time is right!

I've received a number of emails from people wanting to tackle this minor repair on their own but having trouble finding instructions. So here's the skinny on changing the headlight bulbs on a 2002 Audi A4 (other year models may be similar, but I'm not sure):

Here's what you'll need:
-- Extra long, flat-edged screwdriver
-- Replacement bulb (about $10-15 at an auto parts store) -- the exact model number can be found in the manual, I believe, or any self-respecting auto store will be able to look it up for you
-- A hex (star-head shaped) screwdriver
-- Bright flashlight

1. If you are changing the bulb on the passenger side, you first need to remove the air duct housing, since it gets in the way. Take out the screws on top and squeeze/pull the housing gently out of the duct (it's easy to see how it fits together)
2. If changing the driver's side, go to step 3
3. Remove the 2 or 3 hex screws on top of the under-hood lip that secure the top of the headlight assembly
4. Loosen but don't remove (2 turns at most) the two screws at the base of the headlight assembly (way down inside the engine compartment) with the extra long screwdriver
5. Gently slide the entire headlight assembly out until you can reach the back of it
6. Release the metal clip that holds the headlight socket in place
7. Pull the headlight socket free and remove the bulb
8. Put in the new bulb and slide the socket back into the assembly
9. Clip in place
10. Slide the headlight assembly back into the front end, making sure the metal brackets are under the base screws
11. Tighten the base screws
12. Replace the hex screws on top
13. Replace the air duct housing (if you removed it)

And you're done! This took me all of about 10 minutes, working slowly and carefully. FYI, you can buy extra bright bulbs (they come in a two-pack for about $35), so if you want more light you may want to take this opportunity to upgrade them both.

Comments to $80 to Replace a Headlight? Yeah, Right...

1.
RE: $80 to Replace a Headlight? Yeah, Right...
B. Smithey: Instead of buying replacements you should try this website - Headlight restoration and cleaning kit -
Melissa: I have the same problem! This is so frustrating! Do you remember where you found those instructions online? Thanks!
mlfoundations/dclm-baseline-1.0
default
0.37
Homemade doughnut recipes

Yummy, Warm Doughnuts

One of the simplest Sunday morning pleasures is biting into a warm, freshly fried doughnut while sipping a hot cup of coffee and reading the morning paper. Good news is, you don't even need to leave the house to sink your teeth into a delicious doughnut - you can easily make your own. This weekend, instead of running out to get a baker's dozen, stay in and prepare your own.

Doughnut history

Doughnuts have been around forever - just not in the shape we know them now. Believe it or not, archeologists have made discoveries of prehistoric fried cakes with holes in the middle. However, it wasn't until the mid-19th century that the first doughnut recipes, then called olykoeks or oily cakes, appeared in print in the Dutch language. Pilgrims from Holland were responsible for introducing doughnuts to the American people - but with no holes in the center.

Then how did the hole get in the center? There are a few legends to explain how the hole was poked into the center of a doughnut, but we may never know the real truth. According to one story, Elizabeth Gregory, a New England housewife, made the best olykoeks around, often filled with nuts or jams (she called them dough-nuts). One day, she sent her son off to sea with several dough-nuts, but while eating one, he lost control of the steering wheel and poked the dough-nuts onto the spokes of the wheel. Thus the doughnut as we know it was born.

Doughnuts weren't always a breakfast food

Doughnuts continued to gain popularity as snack foods, but were not associated with breakfast until the 1940s, when Krispy Kreme Doughnuts and Dunkin' Donuts were opened. These bakeries often sold coffee in the morning accompanied by a freshly prepared doughnut. And, as you know, the rest is history.

Ready to make your own doughnuts at home? Here are three sticky sweet recipes to try!

Doughnut Recipes

Sugar-glazed Doughnuts

Makes 48 doughnuts

1/2 cup butter, softened at room temperature
2 cups scalded milk
2/3 cup sugar, divided
1 teaspoon salt
2 tablespoons yeast
4 eggs, beaten
1/4 teaspoon nutmeg
7 cups sifted flour
Oil for deep frying
3 cups powdered sugar
1/2 teaspoon salt
1/2 teaspoon vanilla extract
1/2 cup cold water

1. In a large bowl, melt butter in hot milk. Let cool slightly, then stir in yeast, 1 teaspoon sugar and salt. Beat in eggs and set aside.
2. In a medium-sized bowl, whisk together nutmeg, remaining sugar, and 3 cups flour. Add flour mixture to milk mixture and beat to combine. Add the rest of the flour to form a sticky dough.
3. Knead dough on a lightly floured surface for 5 minutes, then put dough back in bowl, cover with plastic wrap, and let rise for 1 to 1-1/2 hours.
4. Roll dough out and cut into circles with a hole in the middle using one large and one small biscuit or cookie cutter, or use a doughnut cutter. Save the doughnut holes. Place doughnuts and doughnut holes on a large baking sheet, cover with a damp (but not wet) dish towel and let rise for 30 to 45 minutes.
5. Heat a pot or deep skillet of oil to 365 degrees F. Fry doughnuts on each side for 1 to 2 minutes or until golden. Fry holes in the same manner. Let doughnuts cool on a paper towel.
6. In a small bowl, combine ingredients for the glaze. When doughnuts are cooled, dip them in glaze. Serve warm.
Chocolate Doughnuts
Makes 30 to 36 doughnuts

2 eggs
1 cup sugar
2 ounces unsweetened chocolate
2 tablespoons vegetable shortening
1 cup cooked mashed potatoes
2/3 cup milk
3-1/2 cups sifted all-purpose flour
6 teaspoons baking powder
1 teaspoon salt
Oil for deep frying
Powdered sugar for dusting

1. In a large bowl, beat eggs and sugar until light and fluffy.
2. Melt chocolate and shortening together over a double boiler, stirring until smooth. Stir into sugar mixture. Add in potatoes and milk.
3. In a second large bowl, whisk together flour, baking powder and salt. Gradually add flour mixture into chocolate mixture, adding just enough to make a dough.
4. Chill dough, then roll out on a floured surface. Cut doughnuts with one large and one small biscuit or cookie cutter, or use a doughnut cutter. Save the doughnut holes. Place doughnuts and doughnut holes on large baking sheets.
5. Heat oil in a large pot or deep skillet to 370 degrees F. Fry doughnuts and holes 1 to 2 minutes per side or until golden. Let cool on paper towels. Dust with powdered sugar and serve warm.

Jelly-filled Doughnuts
Makes 12 to 15 doughnuts

1/2 cup scalded milk
1/3 cup granulated sugar
1 teaspoon salt
5 tablespoons butter
2 packages active dry yeast
1/2 cup warm water
3 egg yolks
3-3/4 cups sifted flour (sift before measuring)
Raspberry or strawberry jam or jelly
1 egg white, slightly beaten
Oil for deep frying
Powdered sugar for dusting

1. In a small bowl, combine scalded milk, 1/3 cup sugar, salt and butter, stirring to melt butter. Set aside and let cool to room temperature.
2. In a large bowl, sprinkle yeast over warm water and stir to dissolve. Add the milk mixture, egg yolks and 2 cups flour and beat until smooth. Add in the rest of the flour and mix until a dough forms.
3. Cover dough with plastic wrap or a towel and let rise 1-1/2 hours or until dough is doubled in size. Use your fist to punch dough down. Transfer to a lightly floured surface and knead 10 times. Divide dough in half and roll out each half to 1/4-inch thickness.
4. Cut dough into 3-inch circles using a biscuit or cookie cutter. Spoon 1 teaspoon of jam onto the centers of half of the rounds. Brush egg white on outer edges of dough and top with the other half of the dough circles. Press edges to seal. Place on a large baking sheet, cover with a damp (but not wet) towel and let rise for 1 hour.
5. Heat oil in a large pot or deep skillet to 350 degrees F. Fry doughnuts 2 minutes per side or until golden. Remove doughnuts to paper towels with a slotted spoon. Let cool slightly and top with powdered sugar.

For more DIY breakfasts and other dishes, check out the SheKnows.com Food and Recipes Channel. And if you are looking for ways to offset your Sunday morning doughnut indulgence, visit the SheKnows.com Diet and Fitness Channel for exercise tips, workouts and the latest health and diet news.
Schools – Bullying & Discrimination

Schools and colleges/universities have legal responsibilities to protect children and young adults. Unfortunately, schools and colleges often fail in this responsibility. There are a variety of reasons why. One of the biggest seems to be denial that problems exist. Another barrier is that schools and colleges often do not seem to know how to deal with problems when they occur. This is despite the fact that many of the relevant laws and policies have been on the books since the early 1970s.

The primary goal at Justice & Equality Legal Services is to assist parents and students in advocating with the school, school board, OSPI, etc. to ensure you or your child has a safe learning environment. Our focus is to resolve the issue administratively (i.e., without filing a civil lawsuit). The hope is that by pointing out the policies and procedures that schools should be following, and by helping parents identify solutions with the schools, adequate resolutions can be reached without the need for litigation. Litigation can take years and years, and by the time resolution is reached through litigation, students have graduated, transferred, or simply dropped out.

This isn't to say we discourage parents and students from proceeding with litigation (a lawsuit). Unfortunately, all too often it seems that schools try to play a waiting game: they delay, claim they will take certain actions, and then the actions never seem to occur. Sometimes what happens in school is so egregious that the only appropriate response seems to be moving forward with a lawsuit or towards litigation. Sometimes, due to the school's delays, a lawsuit is the only way to achieve results.

A core value of JELS is empowerment. JELS seeks to empower students and families by raising awareness of the laws and how to advocate for themselves. Whether or not you can hire an attorney, throughout this website you'll find a variety of resources. The laws addressing discrimination and bullying in schools can be found here, and some explanation of what to do when you or your child is being discriminated against in school or college is available here. A FAQ by the Department of Education Office for Civil Rights regarding Title IX and sexual violence can be found here.

It is critical to document efforts to address bullying, discrimination, harassment, or any other issue in writing. While typically you start with the people in the building (principal, vice principal), you will often need to elevate to the district (HIB Compliance Officer, Title IX Officer, Superintendent); if you aren't getting any response, the next step would be to reach out to the school board. Some decisions of the School Board may be appealed, either through a specific court process or to the Office of Superintendent of Public Instruction (OSPI). However, you do not have to wait until things get far along to contact OSPI (Safety Center – Bullying & Harassment or Equity & Civil Rights) or the Governor's Office of the Education Ombuds. They both have staff designated to help resolve concerns around issues of discrimination and bullying. In addition, you can reach out to the Wing Luke Civil Rights Division of the Washington State Attorney General's Office. The number of places to reach out to is a bit daunting, and for the most part, their ability to help will depend on the School District's willingness to come to the table.
Investment and Trading Fundamental Strategies

Fundamental Analysis As A Trading Strategy

Traders who follow this strategy believe the market is influenced by economic and political events and keep a close eye on the calendar, as key dates and speeches are believed to have a significant impact. This contrasts with technical analysts, who base their strategy on patterns of movement seen in the past, placing little or no importance on current world events.

Although forex trading covers a whole host of currencies from all over the world, the US dollar is one of the major forces in the market, and even if you are trading something unrelated, movements in the US have a ripple effect. For this reason, regardless of what currency you opt for, it is essential to keep a close watch on key speeches, data and political events in the US if you are a fundamental trader.

Other than the US, there are several key pieces of data that fundamental analysts believe influence the market significantly:

Interest rates. A rise in rates usually prompts a currency to strengthen, as it becomes more attractive to investors due to a greater rate of return. Likewise, a country that drops its interest rates can expect its currency to weaken as investors move their assets to alternative destinations where returns are higher.

Gross Domestic Product (GDP). GDP is the way in which many countries measure the performance of their economy and is reported every three months. An increasing GDP is closely linked to a rise in interest rates, which in turn leads to a strengthening currency.

Trade balance and Treasury budget. Any country that has a constant trade deficit will see its currency weaken due to increased commercial sales of the monetary unit.

Employment figures. Payroll data is seen as another indication of the economic strength and viability of a nation. Decreases in the payroll figures are seen as a sign that the economy is weakening, which could lead to lower interest rates and ultimately a drop in the value of the currency.

Fundamental traders keep an economic calendar that marks the release of data relating to all of the above, as well as regular speeches and forecasts produced by leading bodies and politicians. Because of the potential impact of the economic calendar, when a major event is due, activity in the forex market can be expected to increase. For this reason, it can be a good idea to try to open a position before the floodgates open. It is also essential to use a guaranteed stop loss to prevent any slippage caused by the volume of traders opening and closing positions.

While forex is all about earning money, saving money on losing trades is equally important. Using the above method of trading diligently has the potential to bring great returns, but like any financial market, there is significant risk attached. It is therefore imperative to use all the tools at your disposal to make sure any position that doesn't perform as well as you had hoped is closed before any losses wipe out your account.
Despite its monumental appearance, this standing elephant with its raised trunk is in fact a receptacle, the hollow interior of which was filled from a square opening on the animal's back (the lid is now missing). On each flank, the long ribboned bas-relief taotie ornamentation is submerged in a background of leiwen. Scales that extend as far as the ears, trunk, belly and feet have replaced these emphatically set-back motifs. In places, the surface has been left smooth in order to highlight the mass. The pronounced zoomorphic volume and diminished ornamentation are characteristic of Southern Chinese production in Hunan. There, a determined spirit of regional autonomy developed at the expense of the conventions in force in the capital Anyang, in Henan. Such visual features may well ultimately have influenced official northern productions, as illustrated by a jade elephant discovered in Lady Fuhao's tomb. This type of piece derived its form from the art of the potters and its ornamental decoration from that of the jade carvers. It was cast in a mould containing fire-clay sections. The intersection between the different compartments left distinct residual ridges running along the feet and abdomen of the animal. The generic term zun designates wine vessels that vary in shape from the simple chalice to more zoomorphic forms. As the master of official religious rites, the monarch paid tribute to heaven for the harmonious functioning of the universe. The Shang understood the magic potential of such sacrificial vessels cast in a rare material, and bronze sacrificial vases soon came to symbolize royal legitimacy. For this reason, an extremely wide variety of forms emerged, in all some fifty models reflecting the sumptuous nature of such rituals.
As Frank Lloyd Wright matured in his practice, he coined the term "organic architecture" to describe his increasing desire to integrate the manmade and natural environments. In 1937, his Fallingwater home epitomized the concept: a house literally integrated with a waterfall. But even that design found its form in a series of rectangular shapes. Later, he would achieve a less rectilinear vision with his 1959 Guggenheim Museum in New York. Thanks to computer-aided design and construction technologies, today's architects are able to create structures with more biomorphic (life-shaped) forms, carrying Wright's vision of organic architecture forward to a curvaceous new plane. From sails to waves to nests, these forms bring the geometries of the natural world into our architectural landscape.
The word photography has its origins in ancient Greek: "photo," meaning light, and "graphe," meaning picture. Together, they translate roughly as "to draw a picture with light." The Photography Club studies the art of photography from its early days until the present. The club aims to encourage members to learn and practice the classic analog method. Interactive presentations are held in the Social Sciences Amphitheatre, and basic photography is taught with the help of instructors. Photo shoot trips to nearby neighborhoods are organized to places such as Galata, Cihangir, Beyoğlu, and Sultan Ahmet, enabling students to practice their skills in groups. The club places value on teaching students about the spirit of old school photography using analog cameras and a dark room at the school. Post-production tools in digital photography, such as Photoshop, are also taught in the school's computer lab, with the support of the Computer Club. The pictures taken on trips are studied with a dual purpose: to make improvements to technique and to develop critical thinking. The work of the club is showcased twice a year at the Traditional Galatasaray Pilaf Day.
The rise of mobile applications has benefited people in many occupations. From business executives to health enthusiasts and engineers to doctors, mobile apps play a vital role in learning new skills, boosting productivity, staying healthy and just about any other application you can think of. When it comes to doctors and medical students, the case is no different. According to one survey, more than 85% of physicians and practitioners were using mobile devices in the performance of their jobs. Doctors need to manage busy clinical settings, while for medical students diagnosing disease and choosing treatment is quite challenging. When used properly, this kind of technology can play an important role in managing your work and improving your performance. Check out some of the mobile apps that can help doctors and medical students do both.

Medscape is a useful app for doctors to identify drugs, supplements and OTCs. The Pill Identifier tool integrated into the app helps doctors pinpoint pills by color, shape, imprint or scoring. Moreover, Medscape also provides a drug reference with the most current prescribing and safety information. With the evidence-based Disease and Condition reference tool, doctors can find useful information for patient care through an updated reference database. Some of the other useful features include medical calculators, formulary information, a drug interaction checker and a procedure reference.

Meducation is a free resource for medical students to discover thousands of learning resources helpful in boosting their careers as health specialists. Students can find the latest news, interesting stories and helpful resources to become the best medic. Meducation contains sources of information that are not available in textbooks. You can find solutions to the most difficult problems through podcasts, videos, slideshows, mind maps and mnemonics. The intelligent features integrated into Meducation help you learn from what students love and make the experience far better.

The Prognosis app is one of the most useful resources for doctors as well as students. With over 600 case scenarios across 30 specialties, you can examine your diagnostic skill through simulated clinical cases. The cases are short but in-depth analyses of the diagnostic process, along with updated discussion of the specific condition. The cases contained in the app are thoroughly reviewed by specialist physicians to ensure that they are authentic and clinically relevant. Apart from using this app on your mobile device, you can also view it on your desktop.

Diseases Dictionary is another useful app for students as well as doctors. The app contains a huge library of medical disorders and diseases, with detailed information on causes, symptoms, treatment and definitions. Doctors can use the app as a clinical advisor to check for medical advice. It is free of cost and can be accessed offline. Students can benefit from detailed information on the treatment of diseases, medical conditions and symptoms. Moreover, the medical reference book and thesaurus can help students learn medical terminologies and abbreviations.

MedCalX is a unique mobile app for health professionals, helping them to use complicated formulas, classifications and scores. The app comprises more than 300 formulas, scores and classifications, with the ability to create your own series. You can also share the data via email, print it, save it to a patient's database and much more.
MediBabble Translator is a free, professional-grade medical interpretation app for healthcare specialists. The app is designed to improve the efficiency, safety and quality of care for non-native-English-speaking patients. The app contains thousands of translated questions and instructions, all recorded in high-quality audio. The languages include Spanish, French, English, Haitian Creole, Mandarin, Cantonese and Russian. Moreover, the app works while you are offline. A panel of professional and experienced physicians reviews the content, and two medically trained native speakers check its accuracy and authenticity.

Whether you are an experienced doctor or a medical student, make sure to try these apps. While they are in no way meant to replace proper diagnosis and treatment by a licensed medical professional, these apps not only enhance your skills and improve your knowledge, but also support the safety and accuracy required in the treatment of patients.

About the author: Ray Parker is an entrepreneur and internet marketer with over 15 years of experience in Search Engine Optimization, Creative Writing and Digital Marketing with IQVIS. He has worked with several clients from all over the globe to offer his services in various domains with a proven track record of success.
5 Nanograms Of THC In Your Blood? You're Legally Stoned

This is the third year lawmakers have tried to pass the bill, and they watered it down this time to make sure it gets through. When it comes to alcohol, the law is clear: at .08 a person is too drunk to drive. But when it comes to marijuana, proving a person is too high to drive may be tougher. "So we're saying you're presumed to be under influence of marijuana at five nanograms," said Rep. Mark Waller, R-Colorado Springs. Under a bill by Waller the DUI limit will be five nanograms of THC, the psychoactive ingredient in pot. But even if a driver reaches that limit, he or she could get off... [continues at CBS4 Denver]

• LucidDreamR: Being a medical marijuana patient, growing and smoking incredibly high quality bud each day, I'm willing to bet that even first thing in the morning after a full night's rest, and before smoking, I already have 5 nanograms of THC in my blood. But I can assure you I am far from "high". Trying to enforce DUI in the same way as alcohol doesn't make any sense whatsoever.... That said: even if they don't find a better way to enforce this, I would gladly give up driving to be as close as I am now to this truly amazing plant. I'm reminded of the old hitchhiker's adage: ass, gas or grass - yeah, I don't think I'll be stuck without transport anytime soon. ;)

• Kevin Leonard: double thumbs up, good sir

• howiebledsoe: The key word here is "blood test". Do you think it will be on the state's dime? Think again. This is where they will ultimately make their money off of all of this.

• Zenc (http://www.ContraControl.com/): Yeah, this'll be exactly the justification they need to do forced roadside blood-sample extractions. Unfortunately, that may be all the justification I need to do forced roadside blood-sample extractions.

• Haystack: As the article describes, it's still unclear how this could be effectively implemented, but overall it seems like a pretty common sense step to me. You wouldn't want someone driving who is baked out of his mind, but you also don't want to be arbitrarily punishing pot users who are clearly safe to drive. You have to find some way to draw the line, no?

• DeepCough: I understand that legalization naturally entails regulation of this substance, but studies have shown that stoned smokers are safer drivers, so this bill should be reworked as "any who have 5 nanograms or more of THC in a given blood sample is off the hook."

• Itzmysoul: How the heck is any normal person going to know when they get to 5 nanograms of THC? I feel like it's going to end up being more of a judgment call from the officer to see if one is "too baked".

• dumbsaint: Well, it's a blood test, so presumably to get to that point you're probably too baked. Unless you're being really obvious, or reek of pot, things may not get that far.

• lazy_friend: I am all for moderation when it comes to smoking pot. I've smoked tons of the stuff but have kicked the habit, leaving it just for special occasions or medicine, instead of pot being my crutch. But if old ladies can drive, an intoxicated stoner should be able to drive without penalty.

• rtb61: Driving is pretty dangerous shit. Hell, I got wiped out by a sober person paying too much attention to a mobile phone who just simply misinterpreted the lights, a simple split second error that I will suffer from for the rest of my life. Anything that affects it needs to be legislated against.
Better designed cities, more accessible and flexible public transport, and better social services distribution should also be implemented to reduce the need to drive. The reality is that way, way too many people suffer and die as a result of driving, and lots of stuff needs to be done to reduce the harm.

• lazy_friend: I agree on improved public transportation, but more legislation does not accomplish anything unless money is involved, or violence, and money is short these days, so that means more violence. No thanks. Stoners drive pretty slow compared to everyone else (not that I care if they get to drive or not). If you want safer driving you need cars that drive themselves; that's the way of the future. Or have cops drive you home instead of driving you to the big house. Alcohol manufacturers should be held accountable for drunk drivers then; they make the stuff. It's not like people are crashing after getting drunk on homemade moonshine, they are using mass-produced beer and spirits. Corporations are people too, and people go to jail or die when they fuck up, even by proxy. I am responsible; I didn't get enough sleep tonight but I have a bunch of errands to run. I need some shut-eye before I can drive, and that's what I will be doing after this. The way the government sees it, all these accidents are just population control.

• BuzzCoastin: another good example of why they must call the place The Land of the Free

• Louis Arnold (http://www.facebook.com/people/Louis-Arnold/1189489146): 5 Nanograms is the US military threshold for using pot, which is detectable up to 18 days after inhaling marijuana at a party. These idiots are attempting to delete Medical Marijuana users from their cars and their driver's licenses over 10 years after MMJ legalization in Colorado. It is a FARCE.
Many alternative medicines make tall claims about treating cancer. They may show some promise but, on the whole, they remain unproven. One such alternative is cannabis oil, derived from the cannabis plant. It is claimed to kill cancer cells while leaving healthy cells unscathed.

What is cannabis oil?

In its physical form, cannabis oil appears as a sticky, thin material. It is extracted from cannabis flowers using a solvent extraction process. It contains hundreds of cannabinoids, with THC (tetrahydrocannabinol) and CBD (cannabidiol) being the main ones. What makes cannabis oil so compelling is its claimed efficacy in treating a host of diseases, such as Alzheimer's disease, anorexia, asthma, Crohn's disease, dementia, diabetes, etc. However, where it has shown the greatest promise is in treating cancer. To understand the claims, it is essential to understand how cancer originates.

How does cancer originate?

Cells multiply and divide to make fresh tissues. This is triggered by oncogenes that regulate their division and growth. Another set of genes, called tumor suppressor genes, tell the cells to stop growing. If the oncogenes continue to signal the cells to grow and the tumor suppressor genes fail to tell them to turn off, the cells will continue to multiply unabated and become cancerous, ultimately taking the form of a tumor. Since the mechanism to stop cell growth has failed, the tumor keeps growing in size until it starts interfering with the surrounding cells. Ultimately, if its growth remains unchecked, the cancerous cells expand into the blood vessels and then to other body parts. Once the cancerous cells reach other locations, their dividing cycle begins again. Why the cells continue to grow unchecked, and why the genes responsible for stopping the growth fail to do so, is not clearly understood. However, the causes have been traced to tobacco smoking, certain chemicals, ionizing radiation, and increased exposure to sunlight, among others.

How does cannabis oil combat cancer?

Once THC connects to the CB1 and CB2 receptor sites, it is claimed to step up ceramide synthesis, thereby killing the cancerous cells. What's most surprising is that the creation of ceramide is said to occur only in cancerous cells in the presence of THC; it does not affect normal cells, even if THC is near them. So, the key to the elimination of cancer cells is the accumulation of ceramide at the cancerous sites in the body. To bolster this claim, a research team from the Virginia Medical College conducted a study in 1974. It showed that cannabis oil inhibited the growth of malignant tumor cells in mice and cell cultures. As reported by The Huffington Post, despite the positive and heartening findings on the efficacy of cannabis oil in combating cancer that were published in the Journal of the National Cancer Institute, the US government did not authorize any follow-up research. It did, however, authorize the US National Toxicology Program to conduct a secret preclinical trial in the mid-1990s. What it found corroborated the earlier research: mice and rats given high THC doses over a longer duration enjoyed superior protection against malignant tumors, as compared to untreated controls. Strangely, the US government shelved these results without publicizing them.

Taking a leap of faith

Opting for cannabis oil is akin to taking a leap of faith. You must have faith in this alternative method of cancer treatment.
Cannabis oil is said to improve your well-being and the quality of your life, and at the same time to help in managing cancer-induced pain. Let the success stories of other patients finding relief motivate you to submit to the curative powers of this oil. One thing that needs to be noted is that it requires an extremely high dose of cannabis oil to treat cancer effectively.

Is cannabis oil legal?

The legality or illegality of cannabis oil depends upon the amount of THC, the constituent of marijuana that gives a "high," contained in it. To the question, "Is CBD legal?", the answer is yes, since it is derived from hemp, which contains only 0.3% THC. Marijuana, on the other hand, contains higher amounts of THC and is still illegal in most states.
SQL Server 2008 R2 SP1

My company uses the Great Plains (GP) financial system with several customizations. Our Value-Added Reseller (VAR) for GP has set up most of these customizations in a separate database from the GP data, called DYNCUSTOM. In the DYNCUSTOM database is a view that is merely a SELECT * FROM a table in the GP company database (called PARTS). I was approached by a user having problems trying to select on this view. The error he was getting:

Msg 229, Level 14, State 5, Line 2
SELECT permission denied on object 'BM010415', database 'PARTS', schema 'dbo'

I looked at how his login was mapped: he had a database user in the DYNCUSTOM database belonging to a database role that was granted SELECT on the view. However, he did not have a database user in the PARTS database. Usually the intent of a view is that one can grant SELECT permissions on the view without exposing access to the underlying table. With SQL Server, however, if the view crosses databases, does that change the security context being used? Would the user therefore also need a database user in the PARTS database, as well as SELECT access on the underlying table?

2 Answers

Answer 1: Sounds to me like you have a case of ownership chaining. The link should provide you with the details on how to make sure the chain stays intact.

Answer 2: If you trust the database owners (which you should be able to, if it's all just you), you should be able to turn on cross-database ownership chaining, which will allow security to chain in the way you want. Essentially, if the same login owns the dbo schemas in both databases, CDOC will allow someone to query the view without explicit permissions on the table.
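A sketch of the two standard fixes in T-SQL, for reference. The user and view names here - [DOMAIN\SomeUser] and dbo.PartsView - are illustrative placeholders, since the question doesn't give them; run one option or the other, not both, and note that DB_CHAINING changes require elevated (sysadmin-level) permissions:

    -- Option 1: map the login into the source database and grant
    -- SELECT on the base table directly.
    USE PARTS;
    CREATE USER [DOMAIN\SomeUser] FOR LOGIN [DOMAIN\SomeUser];
    GRANT SELECT ON dbo.BM010415 TO [DOMAIN\SomeUser];

    -- Option 2: enable cross-database ownership chaining on both
    -- databases. Both dbo schemas must be owned by the same login,
    -- and this is a trust decision, not just a switch.
    ALTER DATABASE DYNCUSTOM SET DB_CHAINING ON;
    ALTER DATABASE PARTS SET DB_CHAINING ON;

    -- The login still needs a user in PARTS (no table permissions
    -- required) so the chained access can be evaluated.
    USE PARTS;
    CREATE USER [DOMAIN\SomeUser] FOR LOGIN [DOMAIN\SomeUser];

    -- With the chain intact, SELECT on the view alone suffices:
    USE DYNCUSTOM;
    GRANT SELECT ON dbo.PartsView TO [DOMAIN\SomeUser];

Option 1 is simpler but exposes the base table; Option 2 preserves the view-only intent at the cost of widening trust between the two databases.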
Studies are now finding that a drug defect means people who use proton pump inhibitors (PPIs) as a medicine for acid issues are more likely to catch infectious gastroenteritis, also known as the stomach flu. Popular PPIs include drugs like Prilosec and Nexium. These are some of the most common medications in the world: roughly 20 million Americans take a PPI every year.

Why PPIs Might Be Making You Sick

These medications work by reducing the amount of stomach acid in your body, which can help fight some of the symptoms of ailments like heartburn. The issue is that cutting down the amount of stomach acid can also cause medical problems. Stomach acidity is one of the ways that your body protects you against bacteria that you swallow with your food. When a drug reduces acidity, it also makes it more likely that bacteria can colonize your body and cause an illness. Researchers confirmed this principle in a study published in the European Journal of Epidemiology. The study concludes that if you take these drugs, you are more likely to get the stomach flu.

Have You Suffered from These Side Effects?

Since this danger wasn't properly communicated by the drugs' manufacturers, many might soon be taking legal action. In the past, victims of these kinds of drugs have taken their grievances to court. After all, if the drug has caused you to get ill, it might also have meant medical expenses and time away from work. As with any drug defect, the manufacturer has a responsibility to let you know about any risks and dangers associated with their product.
Recent News & Events

Membraneless Organelles Could Help Scientists Better Understand Incurable Diseases

Princeton University and Washington University engineers have collaborated to create a new way to study the material structure of membraneless organelles and observe how they work. Their research could have myriad scientific applications and help scientists better understand incurable diseases such as amyotrophic lateral sclerosis (ALS), Huntington's disease and cancers.

Membraneless Organelles and Ribonucleic Acid

These minuscule structures are like droplet compartments, and they're present in all living cells. They use chemistry to regulate the inner workings of cells, including cell division, movement and self-destruction. The organelle droplets are key to how cells process gene expression and stress response. What makes them different from other organelles is that they don't have a membrane to keep them from merging with the other molecules, nucleic acids and proteins in cells. Instead, they remain as self-contained structures. Washington University School of Engineering & Applied Science's Edwin H. Murty Professor of Engineering Rohit Pappu compares them to water droplets. However, he says that they're composed of proteins that assemble with ribonucleic acid. RNA is a molecule formed from DNA that's responsible for synthesizing proteins and transferring genetic code.

How the Organelle Droplets Form and Dissolve

Scientists at The Scripps Research Institute have studied the formation and dissolution of organelle droplets and found that these processes occur as needed. This could be the key to cellular survival because it allows cells to adjust to cellular stress quickly. Research Associate Priya Banerjee notes that the negative charge of RNA molecules determines these processes. Overall, RNA has a negative charge. When RNA molecules encounter positively charged proteins, they're attracted to each other. This creates a molecular cluster and forms the organelle droplets. When the presence of RNA increases, it causes an imbalance between the negative and positive charges that quickly dissolves the droplets.

Peering Into Organelle Droplets for the First Time

Observing the inner workings of membraneless organelles has been difficult in the past because they're so small. However, Princeton University School of Engineering and Applied Science's Associate Professor of Chemical and Biological Engineering Clifford Brangwynne pioneered a technique with his team to probe the droplets. Called ultrafast scanning fluorescence correlation spectroscopy, the technique uses sound waves to control the ability of a microscope to obtain protein concentration measurements from inside the organelles. Using usFCS on cells from a roundworm, the researchers could measure the protein concentrations formed by the LAF-1 protein, which produces P granules that polarize a cell before it divides. They were surprised that the organelle droplets weren't densely packed as they had imagined. Instead, they're permeable, low-density structures. After reviewing the findings at Washington University, Pappu says that his lab team could basically swim inside the droplets to find out how much room was inside. Rather than finding a crowded swimming pool, so to speak, they found plenty of water and room. This made them realize that not all organelle droplets are the same.
While studying the LAF-1 protein organelles, they also found that certain protein sequences are very floppy molecules, like spaghetti, that can't fold into defined structures. Other protein organelles, however, are more like ketchup or toothpaste and can fold into defined structures.

Implications for Understanding How Diseases Develop

This research can directly help scientists understand the biological functions of organelle droplets and how material changes cause diseases such as cancers and neurodegeneration. One day, Pappu says, scientists will be able to mimic the organelles, which can help them diagnose and understand a host of diseases. These advancements could be transformative in the health care industry.
Synthesising the multiple impacts of climatic variability on community responses to climate change

Recent developments in understanding and predicting species responses to climate change have emphasised the importance of both environmental variability and consideration of the wider biotic community. To date, the interaction between the two has received less attention. However, considerable bodies of theory and empirical results suggest that multi-species consequences of variability can have strong impacts on range limits and the speed of range shifts. Here we demonstrate how biotic interactions and temporal variability can act together to influence range-shift dynamics and highlight the need to understand these interactions in order to predict how species will respond to global change. We emphasise the value and utility of partitioning approaches applied to parameterised models to determine the direction and relative importance of these forces in empirical systems.

Authorship

JCDT wrote the manuscript and built the models. All authors contributed significantly to the editing and manuscript development.

Funding

The work was supported by NERC grant NE/T003510/1.

Data Sharing and Data Accessibility

Code to generate all results is publicly available at https://github.com/jcdterry/ClimateVar_BioticInts and, should the manuscript be accepted, will be permanently archived. The paper contains no new datasets.

Introduction

Climate change is forcing species across the world either to adapt to different environments in situ or to shift their range to track moving climates. A signal of climate-change-induced spatial displacement is clearly visible in shifts in the observed distribution of species across the globe (Parmesan & Yohe 2003; Lenoir et al. 2020). Improving our understanding of how range shifts will progress is critical to future conservation efforts and ecosystem management (Pecl et al. 2017). Here we argue that multiple strands of ecological theory regarding the direct and indirect impacts of climate variability in determining community-level responses can be informative to this wider effort.

Long-term climatic trends are accompanied by higher-frequency variation. This is partly cyclical (seasonal and diurnal), but there is also a considerable stochastic element. Differences in mean temperature between years are often comparable to decades of mean climate change (Huntingford et al. 2013). It is well established that environmental variability can have far-reaching impacts on populations (Coulson et al. 2004; Lawson et al. 2015; Boettiger 2018; Shoemaker et al. 2020b). On top of this, interactions with other species strongly influence a species' range (Sexton et al. 2009; Kraft et al. 2015; Sirén & Morelli 2020). As much as climate-driven range shifts are fundamentally driven by the dependence of demographic rates on climatic variables, it is well recognised that the response of an individual species to climatic change cannot be understood in isolation from the rest of the community (Svenning et al. 2014; Davis et al. 1998; Araújo & Luoto 2007; Gilman et al. 2010; Urban et al. 2012, 2016; Ettinger & HilleRisLambers 2017; O'Brien et al. 2017; Legault et al. 2020). Rather than environment and competition acting as independent determinants of range limits, their combined effect is critical (Germain et al. 2018).
To date, the direct, systematic analysis of the effect of variability on extinction risk has been dominated by single-species studies (Bennie et al. 2013; Renton et al. 2014; Vasseur et al. 2014; Lawson et al. 2015; Bernhardt et al. 2018). Likewise, the majority of existing approaches to modelling the impact of climate change on multi-species distributions assume a smooth increase in mean climatic conditions (… 2011; Bailey & van de Pol 2016). Drawing a sharp line between long-term and discrete impacts of a variable climate is challenging, as individual extreme events are ultimately part of the 'background' variability observed over sufficient time. However, at the intermediate time scales of climate change concern, valuable insights can be gained from considering both aspects.

Variability acts on each species in a community through numerous direct and indirect routes (Figure 1a). We structure our discussion by dividing the diversity of processes into those impacting the long-term viability of a population (Figure 1b) and processes affecting colonisation of new areas, drawing from ideas from invasion ecology. Using simple models, we then show how these processes can interact at all levels and how recent developments in techniques to partition the impact of variability can inform on the importance of different processes. We argue that the interrogation of parameterised models can help overcome the challenge of synthesising insights from across ecological subfields.

Figure 1. a) Schematic of principal routes by which variability influences a focal species at a site (circled). Climatic variability influences the focal population in three ways: directly influencing its reproductive rate, varying propagule pressure, and through impacts of competitors (here represented as conifers). Overall competitive pressure can vary through fluctuating competitor numbers (which can be environmentally driven) and by varying modulation of the impact of competition exerted by the competitor. Variability-generating processes internal to the focal population, such as demographic stochasticity, can interact with the externally driven variation. b) Categorisation of impacts of variability in determining species ranges and responses to climate change discussed here. The mechanisms are categorised by organisational scale and whether they influence the ability of a species to persist at a site (its population viability) or the capacity of the species to establish new populations to shift its range.

Variability and single-population growth rate

The direct impacts of climate variability on the viability of individual populations are widely appreciated (Lande 1993; Lawson et al. 2015), and so we only briefly review them here. Extensive analytical and experimental work demonstrates a long-term impact of a fluctuating climate on population growth rates (Ruel & Ayres 1999; Drake 2005; Melbourne & Hastings 2008; Thompson et al. 2013; Vasseur et al. 2014; Lawson et al. 2015). Average growth rates over the long term may be considerably different to population growth rates at average environmental conditions. Through non-linear averaging, the net impact of a variable climate on an individual species' growth rate can be either positive or negative (Figure 2). The principal determinant of the direction of the effect is the curvature of the growth rate's response to the relevant climate variable.
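To make the curvature argument concrete, the standard second-order (Jensen's inequality) sketch used in this literature (e.g. Ruel & Ayres 1999) - our illustration, not an equation reproduced from this manuscript - writes the growth rate as a function r(E) of an environmental driver E with mean \bar{E} and variance \sigma_E^2, so that averaging over fluctuations gives

    \overline{r(E)} \approx r(\bar{E}) + \tfrac{1}{2}\, r''(\bar{E})\, \sigma_E^2 .

Variability therefore inflates the long-term growth rate where the performance curve is locally convex (r'' > 0) and depresses it where the curve is locally concave (r'' < 0), matching the contrast between the two sites in Figure 2.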
However, higher-order properties such as temporal autocorrelation can also play a role (Petchey et al. 1997; Heino et al. 2000). Many populations near the edge of their range show greater population variability and climate sensitivity (Myers-Smith et al. 2015; Mills et al. 2017) and so may be expected to be particularly responsive to these effects.

Figure 2. How the curvature of environmental performance curves (EPCs) affects average performance. a) The black line shows a classically shaped environmental performance curve where the key environmental variable is temperature, while the dashed lines show the relative frequency of environmental conditions at two sites (yellow and blue). b) Histograms of observed performances across 1000 random environmental draws at each site. At the yellow site, environmental variability is fairly large, but the curvature of the performance curve is relatively shallow. The long-term average value (dashed line) is therefore very similar to the value under mean conditions (solid line). At the blue site, although the variability is smaller, the local EPC curvature is much larger and downwards. The long-term average value is considerably lower than the value under mean conditions. It also includes several instances that could be labelled extreme events, where the performance is very markedly below average.

Individual extreme events have most commonly been associated with population declines and heightened extinction risk, in particular when harsh climatic conditions push populations down to a level where they are vulnerable to extinction (Lande 1993; Boyce et al. 2006; Jongejans et al. 2010; Nadeau et al. 2017; Maxwell et al. 2019). Extreme events have been associated with extinction risk (Román-Palacios & Wiens 2020) and abrupt changes in community composition. Range contractions and community shifts have also been seen in many communities, including butterflies (de Palma et al. 2017), tropical fish (Lenanton et al. 2017), kelp (Smale & Wernberg 2013) and bumblebees (Soroye et al. 2020). Extreme events have led to extirpations from newly colonised areas, and it has been suggested that this may slow species responses to climate change (Nadeau et al. 2017). For example, extreme cold events have been associated with range retractions of invasive marine invertebrates (Canning-Clode et al. 2011) and fish (Rehage et al. 2016).

Taken together, both long- and short-term impacts of variability are more commonly viewed as having negative consequences for populations of conservation interest. However, as we shall show, when considering the wider community context in which species exist, this baseline assumption may need to be adjusted.

Impacts of variability on longer-term coexistence

Where there is biotic control of species distributions, range limits become fundamentally a problem of coexistence (Shea & Chesson 2002; Usinowicz & Levine 2018). This framing unlocks for climate change research a long and rich history of work examining the influence of temporal environmental variability on coexistence (Levins 1979). The potential for temporal variability to enhance coexistence is well attested empirically (Adler et al. 2006; Tucker & Cadotte 2013; Tucker & Fukami 2014; Usinowicz et al. 2017; Hallett et al. 2019). Contrary to conclusions drawn in early and still influential literature (e.g.
Hutchinson 1961), environmental fluctuations themselves are not sufficient to support coexistence of competing species (Chesson & Huntly 1997; Fox 2013) and can hinder as much as facilitate coexistence. The framework of modern coexistence theory (MCT; Chesson 2000) has been developed to robustly understand the impact of variability on coexistence. However, this body of theoretical work examining the problem of species coexistence is only recently being applied to climate change in the context of spatially heterogeneous environments (Usinowicz & Levine 2018).

At its core, MCT defines and investigates coexistence in terms of the capacity for populations to grow from rare in the presence of competing species - the invasion criterion (Grainger et al. 2019b). Where all species are able to meet this criterion, they can each resist exclusion by the other species. In order to simplify the following discussion, we assume that only the ability of a particular focal species to persist at a site alongside one or more competitor species is in question. Through MCT, precise principles have been developed to identify how temporal variability influences coexistence by quantifying the effects of temporal variability on the long-term average growth rate when the focal species is at low densities (denoted r̄ hereafter; Chesson & Warner 1981; Chesson & Huntly 1997; Amarasekare et al. 2004; Snyder 2008).

Since MCT is grounded in analytic results obtained for highly general models, it can be applied to most models of population dynamics. This requires conceptually separating direct impacts of the environment on the focal population's growth rate from the impacts exerted by competitors. This separation is delicate because competitors can affect the focal species directly, but also indirectly through shared resources - defined broadly to include physical resources such as nutrients, water or space, as well as through apparent competition mediated by natural-enemy populations (Chesson & Kuang 2008). These routes of impact are the 'limiting factors' of MCT and can realise considerable analytical insight but also interpretational challenges (for a recent comprehensive review, see Barabás et al. 2018). However, as we shall show below, essential insights can be gained directly from a model that can describe how the growth rate of the population depends on the environment (E) and the competitive impact (C), r = g(E, C), abstracting over the underlying mechanisms (Ellner et al. 2019). Where direct effects of environmental drivers and impacts of competitors contribute linearly and additively to the population growth of the focal species, any variability averages out in the long term. MCT can be used to describe how deviations from the linear, additive base case lead to long-term effects of variability on population growth. These deviations can be understood in terms of two classes of impacts - temporal 'storage effects' and non-linearity of competitive effects.

Temporal storage effects arise when the combined effects of the environment and competition allow benefits accrued in certain years to compensate for losses in other years (Chesson & Warner 1981). For this to affect long-term persistence, two conditions must be met.
Firstly, there must be an interaction between the direct impacts of the environmental conditions and the impacts of the competitor on the growth rate (mathematically, non-additivity: ∂²g/∂E∂C ≠ 0). The classic case is subadditive growth, where the population is buffered in some way such that the combined effect of a harsh environment and competitive pressure is capped (Fig 3ai). Secondly, the environmental variability must affect the competitive impacts on the focal species (i.e. E and C must co-vary, Figure 3aii). In the classic case with buffered (subadditive) population growth, the more negative this covariance, the greater the beneficial effect to the focal population. However, it is worth noting that temporal storage effects can be reversed if the biotic and abiotic impacts on the focal species' growth rate are superadditive, i.e. the adverse effects of competition are proportionally greater in a harsh year (e.g. Holt & Chesson 2014). In the context of climate change, there is a risk that the current patterns of covariation in species' responses to the environment could change. For example, if climate change results in more frequent universally 'bad' (or equally, universally 'good') periods, instead of a back-and-forth of alternate species being favoured, overall covariance in species' responses could become more positive and further undermine coexistence.

The second mechanism arises directly from fluctuations in the impact of other species on the growth rate of the focal species. Non-linear averaging of varying biotic impacts on the focal species' growth can affect r̄, analogous to the non-linear averaging of abiotic environmental fluctuations on population growth rates described in the section above (Fig 2), and with matching consequences for shifts in climatic variability patterns (Fig 3b). These fluctuations in the biotic pressure can be driven by changes in the abundance of competitor species, or via varying per-capita intensity of the competition exerted by other species through fluctuations in shared resources. When examining coexistence, this is the mechanism of 'relative nonlinearity', which describes how species can differentiate themselves through their capacity to take advantage of variable environments - 'slow-and-steady' versus 'boom-or-bust' dynamics (Armstrong & McGehee 1980). Notably, in contrast to temporal storage effects, this does not directly rely on correlations in species' responses to the environment and can also derive from other fluctuation-generating mechanisms.

The foundation for identifying the relative strengths of these processes in a real system is the construction of a simple parameterised model. With that in hand, approaches such as that proposed by Ellner et al. (2019) can be used to partition r̄ into contributions of different single-species and multi-species aspects of variability, without the need for complex analytical work and overcoming limitations incurred by approximations made in the analytic theory. We describe this approach in Figure 4 and in SI 2.
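As a sketch of what this partition looks like (our gloss on the approach of Ellner et al. 2019; the presentation, though not the idea, is ours), the long-term low-density growth rate is decomposed into additive contributions,

    \bar{r} = \varepsilon^{0} + \varepsilon^{E} + \varepsilon^{C} + \varepsilon^{(E\#C)} ,

where \varepsilon^{0} is the growth rate with both the environment E and the competitive impact C held at reference (e.g. mean) values, \varepsilon^{E} and \varepsilon^{C} are the main effects of letting each fluctuate alone (\varepsilon^{C} capturing relative nonlinearity), and \varepsilon^{(E\#C)} is the interaction term capturing storage-effect-like contributions from the covariation of E and C. Each term is estimated by re-simulating the parameterised model with the corresponding source of variation switched on or off, which is why no small-noise analytical approximation is required.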
In a recent example, Armitage and Jones (2020) used a model of competition between two species of duckweed to show that the inferior competitor's poleward range limit is better predicted when taking into account the impact of temporal fluctuations. Using a partitioning approach, they found this was dominated by nonlinearity in direct temperature responses, with a smaller contribution of non-linearities in competition and minimal impact of temporal storage effects, attributable to the positively correlated species responses.

Earlier theoretical work placed greater emphasis on temporal storage effects, but in the small number of empirical cases where the relative impact of the two effects on coexistence has been directly compared, relative nonlinearity was found to have comparable, or greater, impact than the more widely appreciated storage effects (Hallett et al. 2019; Zepeda & Martorell 2019). Although to date the number of examples is small, it is clear that climate-driven shifts in variability patterns could play a role in determining coexistence between competitors and range limits in the future. While we have focussed here on competitive systems, consumer-resource systems can be analysed in parallel ways (Dee et al. 2020; Shoemaker et al. 2020a). However, effects identified in simple communities may not necessarily directly translate to more complex systems (Barabás et al. 2018; Song et al. 2019). Species respond to different parts of the environment - with a greater diversity of species, it is quite possible that species-level fluctuations in competition may average out at the community level. For example, Clark et al. (2010) found that different tree species responded to different aspects of the overall environmental fluctuations. This may suggest that these mechanisms may be strongest where a limited pool of species are involved in constraining the range of the focal species.

Community impacts of discrete events

A species' range expansion in response to climate change is effectively a series of invasions into new communities (Wallingford et al. 2020). From this perspective, the significant body of work investigating variability within invasion biology, going back to Elton (1958) and beyond, can offer useful insights. The ability of a species to persist at a site is only one half of the picture - a species must first arrive and establish itself. The spread of a species into new areas can be slowed or even prevented by disadvantages that potential invaders face. For instance, positive density dependence at low population densities (Allee effects; Courchamp et al. 1999; Kramer et al. 2018) can cause leading range edges to appear 'pinned' in place (Keitt et al. 2001) and slow the rate of invasion into newly suitable environments (Taylor & Hastings 2005).

Environmental variability can play a role in shifting a community from one state to another by allowing species to overcome the challenges of Allee effects through intermittent boosts in performance (Dennis 2002). Discrete extreme weather events can have marked influence on the trajectory of species responses to climate change, but pose considerable challenges to investigation and prediction (Bailey & van de Pol 2016). Although direct evidence is challenging to find (see later sections), extreme climatic events have been associated with the arrival into marine communities of species previously found in warmer areas. Dispersal is intrinsically episodic (e.g. Kennedy et al.
2020), and short-term spikes in the number of incoming colonist propagules may help increase establishment compared to constant dispersal rates by overcoming thresholds induced by Allee effects (Drake & Lodge 2006; Carr et al. 2019).

At the community level, biotic resistance from the resident species can slow or prevent a colonist tracking its climatic niche (Urban et al. 2012; Legault et al. 2020). Whether this resistance is considered a hindrance or beneficial will depend on the conservation status and impact of the colonist and resident species concerned. Over longer time scales, a history of disturbance can shape a community's biotic resistance through selective assembly (Miller et al. 2021).

Even without local adaptation, priority effects can give residents considerable advantages compared to potential invaders. Where priority effects are strong, an invading species can colonise only if either the density of the resident is brought down from equilibrium, or the invader is otherwise able to reach sufficiently high densities to exert significant competitive pressure on the resident. Individual disturbance events can temporarily break down blocking effects (Davis et al. 2000; Melbourne et al. 2007; Diez et al. 2012), for example in grasslands (Pinto & Ortega 2016), and over the longer term there is an expectation that in disturbed environments there are more unused available resources for invaders to take advantage of (Davis et al. 2000; Diez et al. 2012). Tucker and Fukami (2014) showed experimentally that temperature variability can allow priority effects to be overcome in a nectar-yeast system. Ecological theory can play a key role in identifying cases where individual key events could precipitate the establishment of a climate-refugee species. The core results of MCT can be applied to invasions (Shea & Chesson 2002; MacDougall et al. 2009) and are a useful guide to identifying where priority effects are impactful (Grainger et al. 2019a; Uricchio et al. 2019), particularly where the growth rate of a colonising species can also be affected by variability (Clark & Johnston 2011).

Interactions between influences of variability

There have been frequent calls to improve the representation of communities in ecosystem-change models (Gilman et al. 2010; Angert et al. 2013; Urban et al. 2013). Synthesising the aggregate impact of variability will require an expansion in the scope of models currently used (Felton & Smith 2017). When considering whole communities, the diversity of possible impacts on a focal species due to variability is considerably larger than in the single-species case. The previous three sections demonstrated the breadth of direct and indirect ecological impacts that variability can have on how species will respond to climate change. Faced with such a diverse set of processes, reconciling the assorted influences of variability and determining how they interact is central to determining their influence in practice. There are fundamentally different scales and mechanisms at work, but bottom-up mechanistic modelling can illustrate the key interactions at play. Identifying, and (equally importantly) ruling out for practical purposes, interactions between stressors is crucial to meaningful conservation interventions (Côté et al. 2016).
To this end, a number of modelling studies have explored the interface between local variability-mediated coexistence and extinction risk (Adler & Drake 2008; Gravel et al. 2011; Danino et al. 2018; Pande et al. 2019; Schreiber et al. 2019; Dean & Shnerb 2020). Populations at low densities may be expected to benefit the most from variability-mediated coexistence mechanisms, but a low population size is also risky if a single bad year could extirpate the population. In these models, the relative strengths of stochastic extinction risk and competitive stabilization change across a gradient of environmental variability. In the largest empirical analysis of this balance to date, Fung et al. (2020) used forest plot data to quantify how variability leads to temporal niche partitioning and extinction risk, and found that the balance was uneven but more frequently detrimental to coexistence.

A useful way of framing complex climate change responses into a single unified measure of impact is through establishment and extinction lags: differences between climate change and species range responses (Alexander et al. 2018). The core issues can be demonstrated in relatively simple simulation models constructed to capture multiple processes and forms of variability simultaneously. In Figure 6 we demonstrate the potential for complex interactions between mechanisms using a simple model of competition (detailed in SI 3) between a resident species and a climate migrant. As already shown in Figure 4, the response expected from a change in variability due to one mechanism could be countered or even reversed in conjunction with other processes. Given the multitude of theoretically and empirically identified effects of climate variability on colonisation success under climate change, there is a need to develop and investigate such models to understand when interactions between these effects are likely to be influential.

Building on small and focused models, larger, highly generalised and spatially-explicit metacommunity models (O'Sullivan et al. 2019; Thompson et al. 2020) can also provide insight into potential drivers of community change that emerge from combining processes at multiple scales (Usinowicz & Levine 2018; Chase et al. 2020). However, interpreting such models to assess the impact of variability poses distinct challenges, beyond parameterisation. It is rarely possible to directly control multiple aspects of variability simultaneously, even in an artificial model. Temporal variability is inherently multi-facetted, and additional qualities beyond direct variance can have significant impacts, e.g. autocorrelation (Levine & Rees 2004). To take one illustrative example, the historical level of variability a community (real or in silico) experienced during its assembly contributes to the capacity of the community to respond to future changes, whether that is through direct adaptation of the species in a community to local levels of variability or by the extant species having passed through a previous extinction filter during historical extreme events (Janzen 1967; Nadeau et al. 2017; Medeiros et al. 2020; Miller et al. 2021).

Identifying processes in the real world

The next frontier is directly assessing the magnitude of these effects in real systems.
Understanding which aspects of variability are most influential will be key to building models of minimal necessary complexity. Determination of the relative contributions of dispersal, interspecific interactions and environmental dependence has been identified as the key challenge to understanding the dynamics of whole communities (Leibold et al. 2020). There is evidence that biotic resistance to invasive species is widespread, but the global contribution of biotic resistance to climate refugee species is challenging to measure (Levine & Rees 2004; Alexander et al. 2015, 2016; Louthan et al. 2015; Godsoe et al. 2017, 2018; Beaury et al. 2020).

Direct observations demonstrate that species are on the move, but consistent patterns are difficult to determine and are influenced by concurrent land use changes (Lenoir et al. 2020). The observed rate of movement of species is highly variable, with many species shifting their ranges considerably faster or slower than the climate velocity, and is ultimately dependent on availability of habitat (Platts et al. 2019). Competitive exclusion at large spatial scales is often very slow (Yackulic 2017), while extirpation by extreme events can be rapid, but not necessarily permanent. Any coupling between species ranges and particular climatic events can be highly idiosyncratic, with multi-year effects of weather events (Harley & Paine 2009). Coupled with the challenge of accurately identifying the pace of range shifts (Bates et al. 2015), this makes directly discerning a signal of variability in movement rates an imposing task.

Direct observations of variability in natural populations can highlight how species respond differently to environmental variability (Palmer et al. 2017; Le Coeur et al. 2021). Evidence from global satellite data shows that sensitivity to climate variability is itself variable across the globe (Seddon et al. 2016). However, to determine the impact in terms of long-term coexistence, model parameterisation of some sort is required (e.g. Fung et al. 2020; Usinowicz et al. 2021). Species traits hold some promise to identify likely temporal coexistence mechanisms (Adler et al. 2013). Life history traits have been found to relate to sensitivity to climate anomalies in herbaceous perennials (Compagoni et al. 2021) and amphibians (Cayuela et al. 2017), but much work remains to be done in this area.

Mesocosm experiments with manipulation of variability can be illuminating; for example, Zander et al. (2017) showed that lower trophic levels of a microbial food web were more strongly affected by variability than top-level consumers. However, such an approach is fundamentally limited since variability can be manipulated in many alternative valid dimensions unless it is tied directly to expected climate regimes (Thompson et al. 2013). Behavioural adaptation and the role of microclimates pose further challenges to the interpretation of mesocosm work: the realised variability of environmental variables relevant to species may differ from that measured by weather stations (Bladon et al. 2020).

Alongside the highly generalised 'strategic' models demonstrated in the previous section, multiple impacts of variability need to be tested for in focussed 'tactical' case studies of marginal populations in order to build a picture of the real-world prevalence of these processes.
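One reason such case studies are demanding is that nonlinear averaging is baked into the arithmetic: for any curved performance response, mean performance under variable conditions differs from performance at the mean condition (Jensen's inequality), so mean climate alone cannot parameterise these models. A minimal sketch of the effect, again using an arbitrary Gaussian performance curve chosen purely for illustration:

import numpy as np

rng = np.random.default_rng(0)

def performance(T, Topt=20.0, width=4.0):
    # Illustrative Gaussian thermal performance curve
    return np.exp(-((T - Topt) / width) ** 2)

T_mean = 24.0  # population sits well onto the warm flank of its curve
for sigma in (0.0, 1.0, 3.0, 5.0):
    T = T_mean + sigma * rng.standard_normal(200_000)
    print(f"sigma = {sigma}: f(mean T) = {performance(T_mean):.3f}, "
          f"mean f(T) = {performance(T).mean():.3f}")

Because the curve is convex on its outer flank and concave near the optimum, increasing the variance here first inflates mean performance relative to the constant-environment value and then erodes the gain again at higher variance. The sign and size of the effect depend on where the population sits on its curve, which is exactly why curvature, and not just the thermal optimum, must be estimated.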
Progress will require not just more data, but a connected approach to synthesising the multiple impacts of variability, which in turn requires a reliable model of community dynamics that can incorporate variable conditions. At the core will be robust models of species performance and competitive impact under different environmental conditions. This is no easy task: even in two-species systems with a single environmental variable this requires fitting a multidimensional response surface. Given that species respond to multiple environmental variables (Clark et al. 2010; Tingley et al. 2012), the challenge is greater still.

In support of this, a key line of future theoretical enquiry will be determining the minimum data requirements to understand the impact of variability. It is not yet known how sensitive the partitioning of variability effects is to model misspecification. The higher-level properties of environmental performance curves, such as their curvature, are considerably harder to estimate than first-order properties such as thermal optima. Empirical estimates for key parameters can be confounded with each other, with consequences for reliable estimation of species coexistence (Terry et al. 2021). Influential interspecific interactions are necessary for the multi-species processes to be impactful. Particularly for the multi-species processes, more research is needed to determine their prevalence and influence in real systems.

In Table 1 we summarise the multitude of ecological routes by which underlying temporal variability could influence how a species will fare under climate change. It is not currently clear whether the difference in emphasis on the impact of variability in different ecological subfields represents a lack of communication, publication bias, or whether the relative neglect of the 'positive' aspects of variability within global change biology is because they do not leave a widespread strong imprint on real-world dynamics. It would be risky to assume that the impacts of variability are already 'baked in' to current observed species ranges, and so captured by existing distribution models. We reiterate that there will also be evolutionary processes to consider: interactions between variability and adaptation, with positive and negative consequences for range shifts, have been the subject of extensive recent reviews elsewhere (Vázquez et al. 2017; Nadeau & Urban 2019; Thompson & Fronhofer 2019; Coleman & Wernberg 2020; Lyberger et al. 2021; Miller et al. 2020).

At this point in time, we simply do not know whether current assumptions about the impact of variability based on single-population analyses are systematically over- or underestimating risks at the community level. What we do know is that climate change will present species with a bumpy and obstacle-filled uphill ride, not a smooth escalator. No single simple theory can predict the effects of climate variability, but, as we have shown, this does not prevent useful insights from being synthesised. As with most areas of ecology, both complex and simple verbal and mathematical models have their parts to play. By understanding the linkages between these models, detailed insights can be gained without losing sight of the whole. More examples of quantification of the impact of variability in real communities are needed; it is our belief that the simple modelling frameworks discussed here can meet this need. Building strong bridges between climate change ecology and coexistence theory has never been more possible, or more necessary.
allenai/peS2o
default
0.02
The Legendary Black Water River Rafting Co

Down in a little place called Waitomo, which is Maori for 'water' and 'hole', the Legendary Black Water River Rafting Co is found. The reason why Waitomo is such a touristy place is the caves located in this area. The caves are wet, which means they have some water running through them. This makes them the perfect home for quite a remarkable animal: the 'glow worm'. Glow worms are extraordinary! The eggs are laid by the mother in a group of about 20, and the first larva to hatch eats its brothers and sisters. Each larva then spins silky, spiderweb-like threads hanging from the ceiling, with which it catches flies and other insects. The insects are attracted by its glow. The larvae produce their light in an organ at the tail end, an adapted part of their excretory system, and together they make the ceiling of the caves glow a clear and magical blue. After nine months the larva turns into an adult fly, which flies out of the cave. The adult has one disadvantage though: it does not have a digestive system. No mouth, no anus, so they only live for 2-3 days. So what do you do when you have little time? Yes, mate! The males seek out females and basically exhaust themselves in order to bring new glow worms into the world; not the worst way to go, I would say. The female then lays eggs in the caves and the cycle starts anew. All this is going on in the area of Waitomo. What the Legendary Black Water Rafting Co does is hoist people into a wetsuit and, again, a harness, and run a nice tour through the caves. First a 30 m abseil, then a flying fox beneath a sky of glow worms. Then a tube ride on the water, followed by a walking tour through the caves, and after five hours, climbing to the exit while being showered by underground waterfalls! What you see in the picture is the training ground just before we entered the caves, which shows me and three random British people at the start of our adventure.
mlfoundations/dclm-baseline-1.0
default
0.37
by Reinhard Siegel

When you create a new design from scratch, you have complete freedom to shape the surfaces, and when all looks pretty and satisfies the design parameters you can quit. This is "freeform design". But if the objective is to reproduce a predefined shape, it comes to "surface fitting". The freedom to shape is replaced by the challenge to create a geometry which is close to the given data as well as being fair. In this article we will discuss the steps required to handle this exercise safely in MultiSurf. It is certainly an advantage if you have knowledge of the behavior of the various kinds of curves and surfaces which are available in MultiSurf. What is ok for a slender sailing yacht hull does not work for a powerboat or a container vessel with its flat side and bottom. You are the master of the fitting process; you must decide what surface and curve type is suitable, and where all those supporting master curves and their control points have to go.

cp: control point (support point)
mc: master curve = support curve

In the following, the terms used for point, curve and surface types are those of MultiSurf. This should aid understanding and traceability.

The fitting process requires constant comparison of the given geometry against the created computer model. For this purpose MultiSurf offers the entity Wireframe. A Wireframe presents the content of a .3DA file (3D ASCII Drawing File). A .3DA file contains XYZ-coordinates of points in ASCII format. The Wireframe entity displays the 3DA file by a polyline passing through its points. It cannot be used in the construction of other MultiSurf entities. It is included in a model primarily for display. A 3DA file can be created in several ways:

Step 1: Create a picture file

First we need to "get the paper picture into the computer". One possibility is to scan the drawing using a printer-scanner-copier device. Another way is to take a photo with a digital camera. In both cases the result will be a picture file, typically in the popular JPEG format. As an example let us recreate the hull of the sailing yacht Dorade (design Sparkman & Stephens; 1929). Its set of lines is presented in the book Yachts classiques (Gilles Martin-Raget (text, photos), François Chevalier (drawings); Paris: Editions du Chêne-Hachette Livre, 1998). When the scan or photo is done, load this initial picture into Windows Paint (or any other graphic editor) and save the body plan (stations) and the profile view (hull outline) into two separate files: body_plan.jpg and profile_view.jpg. These files must be digitized.

Step 2: Digitize the picture

Now the picture (the lines plan) is "in the computer", but it consists of pixels. What looks like a red curve on a white background is a band of red pixels embedded in a sea of white ones. The pixel graphic (or raster image) must be transformed into vector graphics, i.e. into lines, polylines, arcs, etc. There are two ways to achieve this: There are image manipulation programs available which can vectorize pixel graphics. An automatic process "redraws" the content of the picture. The result can be saved in various file formats, for example as DXF. This file type is of importance for our purpose, as MultiSurf can import it. An automatic process has no "feeling" for which content of the image is important. Either some editing is necessary prior to the vectorization, or after it is finished. Otherwise a good compromise between accuracy and simplicity cannot be obtained.
It is certainly worthwhile to evaluate the variety of such tools, since raster image vectorization can save time for an experienced user. Some also process PDF files. If the redraw is done manually in a Cad program, it is up to the user which curves of the lines plan are selected and which are ignored. He can also immediately decide which one of the crossing curves belongs to which station when retracing a crowded body plan. So redrawing lines by hand in a Cad program is not as bad as it might sound at first sight. Let us consider this path in the following.

Usually standard Cad programs can paste pictures into a drawing. To begin with, open the body plan picture in Paint, select it completely and copy it via Edit/Copy to the Clipboard. Next change to the Cad program, start a new drawing and paste the picture from the Clipboard into the drawing; the picture then appears in the drawing. Now redraw the stations and the sheer line with Polylines, in red for example. Also redraw the centerline, the design waterline and the lowest waterline with Lines. These objects will serve as a frame of reference and for scaling to the full-size dimensions. Then delete the pasted picture and tidy up the drawing. Now repeat the procedure described above for the profile view. Just the outline is of interest, so it is less work. Save the drawing as profile_view.dxf. After these two steps we have transferred the body plan and profile view from the paper medium into two Cad drawings (body_plan.dxf, profile_view.dxf).

Step 3: Import DXF file into MultiSurf and save 3DA wireframe files

Regardless of how the DXF files of the graphical data in the lines plan were created - with the use of some software tool or by hand - the next step is to open a new model in MultiSurf. Set the model units to those used for the Cad drawings and import the DXF files (main menu: File/ Import/ DXF). Let us continue with our file profile_view.dxf and import it into MultiSurf. All curves and points will lie in the XY-plane (which was the drawing plane in the Cad program). To attain the usual orientation, rotate all entities around the X-axis by 90° (Edit/ Transform/ Rotate/ X-Axis). Now is a good moment to scale the heights. The draft of Dorade is given as 2.43 m, and our MultiSurf model shows that the deepest point of the keel bottom is at z = -2.375 m. So Z-scale the whole model by a factor of 1.0232. Let us look at the waterline near the bottom of the keel. Select one of the control points of the line; its Z coordinate is -1.980 m. Keep this in mind.

Before importing the DXF file of the body plan into the current model, hide all entities in order to get them out of the way for the coming procedures. Then repeat the DXF import, now using the file body_plan.dxf. Again, all curves and points lie in the XY-plane. To obtain the usual orientation, rotate all visible entities first around the X-axis by 90°, then around the Z-axis, also by 90°. Again check the heights in the body plan. From the profile view we know that the waterline near the keel bottom is at Z = -1.980 m. However, the control points of the same waterline in the body plan are at Z = -1.937 m. Hence we apply a Z-scale of 1.0222 to all entities of the body plan. Some more operations are needed on the entities of the body plan. So far all the digitized stations lie in the YZ-plane. We must shift each station to its true X-position. Since the length of the design waterline is given as 11.35 m, equally divided into 10 intervals, the station spacing is 1.135 m.
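The alignment arithmetic above is simple enough to do by hand, but a few lines of script make the bookkeeping explicit and repeatable. The sketch below (our own helper, not part of MultiSurf) just reproduces the numbers used in the text:

# Scale factors for aligning the digitized views (values from the text).
target_draft = 2.43        # published draft of Dorade, meters
profile_keel_z = 2.375     # keel depth measured in the digitized profile view
print(round(target_draft / profile_keel_z, 4))   # 1.0232 -> profile Z-scale

profile_wl_z = 1.980       # reference waterline height in the scaled profile
body_wl_z = 1.937          # the same waterline in the digitized body plan
print(round(profile_wl_z / body_wl_z, 4))        # 1.0222 -> body plan Z-scale

# Station X-positions: a DWL of 11.35 m divided into 10 equal intervals.
spacing = 11.35 / 10
print([round(i * spacing, 3) for i in range(11)])  # 0.0, 1.135, ..., 11.35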
Save the model as Dorade_preparation.ms2. Then select stations 0 to 11 and save a 3DA wireframe file as stations.3da (main menu: File/ Export 3D/ 3DA wireframe). Next select the outline curves and save the 3DA file named profile.3da. Close the model. Now the digitizing work using method 1 is done.

A different procedure to cast the graphical data into the numerical form of a 3DA wireframe file needs neither a scanner nor a Cad program, but just a ruler. For each station, scale a series of point coordinates off the drawing and note them down on a sheet of paper. If the drawing is given at a certain scale, use the appropriate measure. If the drawing is out of scale, some proportional calculations are required to obtain the point coordinates in full size. When the measurements are finished, open Windows Notepad (or any other text editor) and type the coordinates into .3DA format.

File specification: .3DA - 3D ASCII Drawing File
A .3DA file contains XYZ coordinates of points in ASCII format. It consists of an unlimited number of record lines, each in the format:
pen x y z
pen = integer controlling pen color; 0 = pen up
x, y, z = world coordinates of point

Each line has 4 numerical entries: pen, x, y, z. 'pen' is the number of a color for drawing to the point (x,y,z). Each entry is separated by one or more spaces. If 'pen' is 0, the pen is up in moving to (x,y,z).

Example: Suppose we measured a set of offsets for a station. Then these lines must be typed into Notepad and saved with the file extension .3DA:

0 2.350 1.200 0.820
7 2.350 1.190 0.600
7 2.350 1.180 0.400
7 2.350 1.150 0.200
7 2.350 1.070 0
7 2.350 1.000 -0.120
7 2.350 0.940 -0.200
7 2.350 0.800 -0.330
7 2.350 0.675 -0.400
7 2.350 0.600 -0.430
7 2.350 0.400 -0.490
7 2.350 0.200 -0.500
7 2.350 0 -0.505

The MultiSurf entity Wireframe displays the content of a 3DA file by a polyline passing through its points. It does not create Point entities in MultiSurf. If for some reason points are needed, the command Import3DA should be used instead. This command imports a 3DA file into points and/or curves. See also the further explanations given below.

So far we discussed redrawing curves and measuring points from a lines plan on paper. Often there is also a table of offsets for the hull to be fitted. How can it be used? MultiSurf provides the command ImportTable. It inserts a table of XYZ coordinates of points into a model.

ImportTable filename[.ext] [kind [layer]]
Imports a table file into points and/or curves. If no extension is given, .TXT is used.
Kind = 0, points only
Kind = 1, Type-1 BCurves
Kind = 2, Type-3 CCurves

Let us consider an example. In the book How to build a wooden boat (David C. McIntosh, WoodenBoat Publications, Inc., Brooklin, 1987) the author presents offsets for a 39 ft sloop of his design. When this table is entered into a spreadsheet program, and feet, inches and eighths of an inch are converted to meters, we obtain a table of offsets in meters. A table of point coordinates for use with the command ImportTable must be arranged in a specific order: the string y/x in the first cell indicates that the remaining cells of the top row hold X-coordinates, while all remaining cells of the first column hold Y-coordinates. The interior cells define the corresponding Z-coordinates. For example, the point with x = 1.598 and y = 0.610 has a z-value of 0.743. Empty cells are not allowed.
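Since both the unit conversion and the table layout are mechanical, they can be scripted. The sketch below is our own hypothetical helper, not part of MultiSurf: it converts feet-inches-eighths offsets to meters and writes a grid in the y/x layout just described, filling unmeasured cells with zeros like the grey auxiliary cells described in the next step. Whitespace-separated columns are assumed here to be acceptable input.

def fie_to_meters(feet, inches, eighths):
    # Convert a feet-inches-eighths offset to meters.
    return (feet * 12.0 + inches + eighths / 8.0) * 0.0254

def write_import_table(path, xs, ys, z_lookup):
    # First row: 'y/x' plus the X-positions; first column: Y-positions;
    # interior cells: Z-values. Unmeasured cells are filled with 0.
    with open(path, "w") as f:
        f.write("y/x " + " ".join(f"{x:.3f}" for x in xs) + "\n")
        for y in ys:
            row = " ".join(f"{z_lookup.get((x, y), 0.0):.3f}" for x in xs)
            f.write(f"{y:.3f} {row}\n")

print(round(fie_to_meters(3, 7, 4), 3))   # 3 ft 7 in 4/8 -> 1.105 m

# Hypothetical fragment, reusing the one example value from the text:
xs = [0.0, 1.598, 3.196]
ys = [0.305, 0.610]
z = {(1.598, 0.610): 0.743}
write_import_table("butts.txt", xs, ys, z)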
Back to the example. As a first step let us import the buttock points. In the spreadsheet program, add X-positions for stem, stations and transom, and also Y-values for the buttock locations. There are no entries for stem and transom, so the corresponding cells of the table for the buttock points are filled with zeros. (In the original illustration, grey cells indicate where zero values were entered to fill cells for which the offset table shows no value.) Select the table in the spreadsheet program, then insert its content via copy/paste into Notepad and save it as a text file (for example: butts.txt in the folder D:\temp). Now we can import this text file into MultiSurf. The resulting screen image will look confusing due to the auxiliary points; correct this first by deleting those additional zero points, after which the screen will look much more orderly.

The data for the waterlines are entered in the same way. Again, the grey cells mark the auxiliary points needed to make the table complete. Save the table as wls.txt and use the command ImportTable wls 1 to insert the data. Again, those additional zero points should be deleted.

So far we considered points of buttocks and waterlines given in the offset table in question. These are curves with points of constant Y- or Z-coordinate values. Only this kind of point data can be inserted via the command ImportTable. In contrast to buttocks or waterlines, both the Y- and Z-coordinates for points of sheer and profile vary from station to station. It is simplest to insert their data via the command Import3DA.

Import3DA filename[.ext] [kind [layers]]
Imports a 3DA file into points and/or curves.
Kind = 0, points only
Kind = 1, Type-1 BCurves
Kind = 2, Type-3 CCurves

All that is required is to rearrange the offsets in a differently fashioned table in the spreadsheet program and add the column for 'pen'. Then save the content of each table (marked light blue in the original illustration) into a text file and insert those via the Import3DA command. Now show all the imported data. For each station there are points from the insertion of the waterline table and the buttock table, as well as points from the import of sheer and profile. So we can finally pass a B-spline Curve of degree 1 through all corresponding station points. Before closing the model, save 3DA wireframe files (File/ Export 3D/ 3DA wireframe) for the stations (mw_stations.3da) and the outline curves (mw_profile.3da).

A table of offsets saves a lot of measuring. But since all curve data is related to the grid of the stations, the table gives no information where a curve starts and ends, or how the profile runs between stem and station 1 or beyond station 8. Thus a table of offsets must be used in combination with the lines plan drawing to complete the data input. None of the presented methods is superior. The accuracy of redrawing curves depends on the quality of the scan. The imported DXF data needs some transformation editing. On the other hand, it is easy to digitize a curve by as many points as appropriate (not limited to the grid of the lines plan). Measuring offsets is time-consuming, but the possibility to measure to the nearest grid line reduces inaccuracy which might be caused by distortion of the drawing. Including additional points between the grid lines to cover details is laborious. Importing offset table data via special commands saves time, but requires further data that has to be picked off from the drawing (bow shape, stern shape). One should know both the strengths and the limits of the methods explained. Only then can the best combination for the given task be used.
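As a final practical note on the offsets route: the typing-up for Import3DA can be scripted as well. Again a hypothetical helper of our own, following the .3DA record layout from the file specification above (pen 0 to move, pen 7 to draw):

def write_3da(path, points):
    # Write one polyline of (x, y, z) tuples as .3DA records:
    # pen up (0) to the first point, pen 7 to draw to the rest.
    with open(path, "w") as f:
        for i, (x, y, z) in enumerate(points):
            pen = 0 if i == 0 else 7
            f.write(f"{pen} {x:.3f} {y:.3f} {z:.3f}\n")

# The station example from earlier, abbreviated:
station = [(2.350, 1.200, 0.820), (2.350, 1.190, 0.600), (2.350, 0.0, -0.505)]
write_3da("station.3da", station)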
The surface fitting of an existing cargo vessel for CFD studies certainly requires more accurate templates than the re-design of a hull that merely serves as a starting point for a related shape to be modified here and there. Let us continue with the Dorade example. So far everything has been about the various methods to translate the existing data on paper into the numerical form of 3DA files. For Dorade we obtained those files when discussing method 1 (stations.3da, profile.3da). There the model Dorade_preparation.ms2 was created. However, we will not continue with this model, but start a completely new one. Why? The only purpose of that model was to create 3DA files for comparison. Once the 3DA files exist, the model has served its purpose. If we were to continue with it and start the actual fitting process there, unnecessary ballast in the form of many points and curves would be carried along.

So open a new MultiSurf model and begin by creating two Wireframe entities from the two .3DA files stations.3da and profile.3da. The wireframe entities are the "templates" against which we will fit and compare the re-design. The hull in question shows no rapid change of curvature in the longitudinal direction; it lends itself to planking. Thus we decide to use a C-spline Lofted Surface, which interpolates its supporting master curves (mcs). Here the attempt is made to use a single surface for hull and keel. B-spline Curves are chosen for mcs; this type of curve is easy to bend, a complex shape needs only a few control points (cps), and it does not tend to oscillate. The further procedure is basically the same as when creating a model from scratch. MultiSurf offers a very useful feature to create B-spline Curves from 3DA Wireframe entities: B-spline Curve Fit (main menu/ Tools/ Special). So select the stations, then do a B-spline Curve Fit; set the number of cps to 8. In one sweep, each station in the Wireframe entity is fitted by a B-spline Curve. Certainly some adjustment is required to improve the fitting, but B-spline Curve Fit saves much time in creating the many cps; furthermore, the suggested positioning of the cps along the fitted B-spline Curve is a good starting point to achieve a harmonic curvature distribution.

It is not a good idea to use all of the generated B-spline Curves as mcs for the surface: the key to fairness is simplicity. On the other hand it is a re-design: the result should be close to the input. Thus it is a game between accuracy and fairness. Fitting can be more demanding than free-forming. In the Dorade example every second station is used as a mc; there is a total of 10 mcs. Mc2 to mc5 run transversely; mc6 to mc8 in the aftship run sloped. Mc7 is along the slanted end of the keel. The aft mcs run partly on the negative side of the center plane to form the stern overhang. Where mcs do not lie along digitized stations, make Contour entities with cuts at the same stations as represented in the wireframe data. This gives a direct comparison, and shows where we agree and where we do not. Adjustment usually just means going to the nearest control point on the nearest master curve and moving it a little, usually normal to the surface. In general, we will have to accept some deviations from the digitized lines, because digitizing is not exact and neither was the lines plan. The one for Dorade presented in the book Yachts classiques is 15 by 7 centimeters in size. The Ship Lines view of the model (not reproduced here) shows the result of our efforts. There is no rudder, no transom or deck, so there is still work left.
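As an aside for readers working outside MultiSurf: the idea behind B-spline Curve Fit can be prototyped with a least-squares spline fit, for example with SciPy. The sketch below is only an illustration of the general technique (we make no claim about MultiSurf's actual algorithm); it fits cubic B-splines with a fixed budget of 8 control points to a digitized station, using a chord-length parameterisation.

import numpy as np
from scipy.interpolate import make_lsq_spline

def fit_station(points, n_cp=8, k=3):
    # Least-squares cubic B-spline fit of a digitized station outline.
    # points: (m, 2) array of (y, z) offsets in order along the station;
    # needs comfortably more data points than control points.
    pts = np.asarray(points, dtype=float)
    d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]
    u = d / d[-1]                 # chord-length parameter in [0, 1]
    n_interior = n_cp - k - 1     # clamped knot vector -> n_cp control points
    t = np.r_[[0.0] * (k + 1),
              np.linspace(0.0, 1.0, n_interior + 2)[1:-1],
              [1.0] * (k + 1)]
    fy = make_lsq_spline(u, pts[:, 0], t, k)
    fz = make_lsq_spline(u, pts[:, 1], t, k)
    return fy, fz, u

# Demo on a fake quarter-ellipse 'station' sampled at 25 points:
theta = np.linspace(0.0, np.pi / 2, 25)
station = np.c_[1.2 * np.cos(theta), -0.5 * np.sin(theta)]
fy, fz, u = fit_station(station)
resid = np.hypot(fy(u) - station[:, 0], fz(u) - station[:, 1])
print(f"max deviation of 8-cp fit: {resid.max():.4f}")

As in MultiSurf, the fitted control points are only a starting point; a fairing pass afterwards is still the modeller's job.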
But hopefully, with the help of this article, it will be no problem to fit surfaces to those given shapes as well. Further strategies for modeling traditional hulls like that of Dorade are discussed in the separate article On the Modeling of Classic Sailing Yacht Hulls.
HuggingFaceFW/fineweb-edu
default
0.333
Many developers become fascinated with programming in their youth. You may consider learning Python if you like to try out new technologies and prefer taking the simple path. If you choose a path that is not the easiest and will make you "sweat", start with Java or C. For the most eager, who pick the most challenging route with the goal of acquiring an excellent foundation for moving (further) into other languages, C++ is recommended.

Python is one of the most popular and easiest languages. It helps new developers understand the principles of programming, and experienced developers often use it in large and complex projects. Using Python with the popular Django framework, you can write a web application. It is used by Instagram, Facebook, and Spotify.

PHP is also among the most popular and easiest languages for writing web applications. Although it is inconsistent and contradictory, it is supported by virtually all web hosting, regardless of price. PHP is good for creating small web applications. It is used by Wikipedia and Flickr.

Ruby was designed for particular purposes: simple and productive programming. It is also well suited for start-ups, personal projects, and rapid development. It is known mainly for the very popular framework Ruby on Rails. It is used by Hulu and Groupon, among others.

Java is one of the most in-demand and highest-paid languages on the development market. It is very popular on all platforms and devices thanks to its cross-platform support. It is used in Minecraft, Gmail, and many other apps and applications.

C is the "lingua franca" among programming languages: one of the oldest and most widely used languages in the world. It is well suited for hardware and systems development, and it is used in hardware and operating systems.

C# was created on Microsoft's platform, but recently went open source. C# is a popular choice of companies using the .NET framework for developing many kinds of websites. Its syntax and functionality are comparable to Java. It is used in business applications for Windows.

Objective-C is the main language used in Apple's iOS and Mac OS X. It is worth learning if you plan to write only for iOS and OS X. You might also consider learning Swift, but keep the following in mind: Objective-C is still used in parts of Mac OS X and in many iOS applications.

C++ is a more complex version of the C programming language, with a significantly extended feature set. It is widely used for game development and high-performance industrial applications. To learn C++ is to learn how to write, compile, and build. This language is not recommended for self-study and requires the presence of a mentor. It is widely used in operating systems and browsers.

Wherever you start your journey in the IT field, it does not really matter. You need to learn at least a few main languages and technologies to understand all facets of programming. And most importantly: start!

Often people are afraid of change in their fields of activity and want to become developers at a more adult age. They worry that it is too late to start learning development, since there are younger and more agile competitors. In that case, consider the following data on the average age of employees at technology companies: at Myspace, LinkedIn and Salesforce it is 28-29 years; at Bing, Amazon, Apple, Tesla Motors, Yahoo!, eBay, Adobe, Microsoft, Intel and Cisco, 30-35 years; and at Dell, IBM, Oracle and Hewlett-Packard, 37-39 years.
HuggingFaceFW/fineweb-edu
default
0.333
Penthouse 2 is a drama that has stirred plenty of controversy thanks to the screenwriter's razor-sharp plot twists. Recently the "old flames meet again" trio of Oh Yoon Hee (Eugene), Cheon Seo Jin (Kim So Yeon) and Ha Yoon Cheol (Yoon Jong Hoon) appeared together on the program Never Stop Being A Fan, where all three shared things about the drama that had never been revealed before. Unlike on screen, off screen the trio are on quite affectionate terms. When the MC asked, "Are there cases where the actors themselves only learn the story after the episode airs?", Eugene shared: "You don't even have to wait for the broadcast. Sometimes things get inserted midway through, and the script is revised. But only the actors in that scene receive the revised script, so the others only find out after watching the broadcast." Actor Yoon Jong Hoon added: "To keep things secret, none of the actors are allowed to talk about it." Will the screenwriter have more memorable hard turns in store? Penthouse 2 airs on SBS every Friday and Saturday. Vietsub: Kim So Yeon Fanpage
HuggingFaceFW/fineweb-2
vie_Latn
0.0775
And that's very important, too, 'cause a lot of people just assume everyone's a Democrat, or everyone's a Republican or whatever, and they're not. And that's a really important thing to adhere to.
Kurt Loder
mlfoundations/dclm-baseline-1.0
default
0.37
Best Agency 4 Dancers?
Grover appeared in the film "Half A Sixpence" (1966).
GDHSgirl150: Hi Grover, I recently auditioned for two agencies in LA. And, much to my surprise, both want to talk to me further. I haven't acted quickly to sign with either. DDO also has an audition coming up soon, and I would like to audition for them as well. I don't want to come off as having a "Diva Attitude"... however, I just want to weigh my options, and do what's best for me. Are all of the agencies in LA about the same? I mean, do they all share the same opportunities, are certain ones better for commercial and video, etc.? I know that MSA is a great agency, but I keep hearing great things about DDO and Bloc, which I don't know much about. Can you help me out?
mlfoundations/dclm-baseline-1.0
default
0.37
The 1911 census was taken on Sunday, 2 April. Unlike earlier counts, when the enumerator books were shown, this census record displays the original householder's schedules, which were usually completed and signed by the occupant! In addition to the usual information, the census also included: the number of completed years marriages had lasted to date; the number of children born alive; those still alive and those who had died. One of the enumerators at Preston was Tom Ashton. Preston had 332 inhabitants - fourteen more than in 1901 - although the population was swelled by eleven boarders who were part of the work-force engaged on new building projects. There were 182 males and 150 females, which included 49 couples and 17 widows. Villagers occupied 70 homes, eight more than ten years earlier, testament to the rebuilding of Preston in the early twentieth century, especially in connection with the Temple Dinsley estate. A handful of 'core' families in Preston made up 27% of the population. Thirty-nine percent of Prestonians were aged twenty or younger. The oldest man was John Jeeves (79) and the oldest woman was Hannah Crewe (78). Census headings: Surname; Christian name; Position (Head, Wife, Son, Daughter, Visitor, Servant etc.); age; marital status (Married, Single, Widow/Widower); years married; children born to marriage; children living; children who had died; occupation; rooms. The occupations of the residents showed a greater diversity than in previous years. There were the ubiquitous farm workers: the labourers, horse-keepers, cowmen, groom, stockmen and woodmen, who totalled forty-eight. The population was temporarily augmented by a builder's foreman, bricklayers (5), carpenters (5), an electrician, hydraulic engineers and their fitters (4) and an agricultural engineer fitter, who were helping with the construction work around the village. The usual trades-people remained - the baker, wheelwright, tailor and grocer. Law and order was upheld by the local police constable. The complete absence of straw plaiters among the women and children in 1911 points to the demise of the craft in the early twentieth century. Among the women there were the maids, nurses and domestic servants who were employed at the grand houses of Temple Dinsley and Poynders End. Signs of the modern age in the village were the two chauffeurs, the electrician and the post office messenger who resided there. A high proportion of villagers (45%) were born in Preston - 149. A further 64 were born in the local parishes of Kings Walden, St Pauls Walden, Kimpton, Hitchin and Great Wymondley. Seventeen more were born elsewhere in Hertfordshire.
HuggingFaceFW/fineweb-edu
default
0.333
Christophorus: Christ and the Holocaust (Art and Theology)
Christopher J Wiles
writer | speaker | servant

Today was a day of remembrance for the liberation of holocaust survivors during the Second World War. A joyous day, but one marked by painful associations. According to an ABC News article, veterans recalled the absolute horror of the camps:

"The odor of death could be detected outside the camp. ... Once you smell that odor, you will never forget it." (Sgt. Robert Patton, 88)

"It was just a shock to see skeletons walking. ... The dead bodies and live bodies were together, and I saw one body moving. ... I asked him, 'How long has this man next to you been dead?' He said two days. I said, 'Why didn't you get out of bed?' He said, 'I don't have the strength to get up.'" (Capt. Bernard Metrick, 94)

The horrors of the war remain with us long after the cinders have fallen. To understand how such evil could exist is a profound challenge, one that has led many to abandon the possibility of God's existence in the shadow of Auschwitz. The painting below is a familiar story, albeit re-appropriated as a modern parable. I must admit that when I first saw the painting, I struggled to understand this depiction of Christ. But as I grew to understand the underlying story and theology, the painting took on new life and meaning. The story of "Christopher" (literally "Christ-bearer") has undergone numerous incarnations. The most familiar version tells the story of Christopher, a young man bent on serving the rich and powerful of his day. Jesus appears in the story as a small child. Not recognizing him, Christopher bore the boy on his shoulders to help him cross the stream, but the child's weight proved too much. Halfway across, Christopher was forced beneath the surface, inadvertently receiving the church's sacrament of baptism. The remainder of his life was lived in service to God until he was eventually martyred. Sakulowski's painting is a re-telling of this familiar story, in the form of a holocaust prisoner and the Savior. Gerd Lindner writes:

"A political prisoner from a concentration camp, with the marks of torture on him and the distinctive red triangle on his prison trousers, laboriously drags Christ, scourged and crowned with thorns, through the marshy morass of a desolate world. There is no river bank, no rescue in sight. The mystically-illumined Christ alone brings a promise of hope. In an existential situation of need, the person persecuted because of politics is helping the central figure of Christianity, the 'Man of Sorrows.' This is the message: In spite of fundamentally different philosophies of life, a bond does exist. And further: In an age where moral values are under threat, this connectedness is our only prospect of continuing to live in an upright way." (Gerd Lindner, quoted from Beyond Belief: Modern Art and the Religious Imagination)

I would respectfully suggest that hope is found not merely in "connectedness" as Lindner writes, but in the very fact of Christ's self-identification with humanity's sufferings, a fact that challenges our understanding of God's relationship to evil. Jürgen Moltmann, a German theologian who came to Christ as a prisoner of war during WWII, relates this through the writings of another holocaust survivor:

"A shattering expression of the [theology of the cross] ... is to be found in Night, a book written by E. Wiesel, a survivor of Auschwitz: 'The SS hanged two Jewish men and a youth in front of the whole camp. The men died quickly, but the death throes of the youth lasted for half an hour. "Where is God? Where is he?" someone asked behind me. As the youth still hung in torment in the noose after a long time, I heard the man call again, "Where is God now?" And I heard a voice in myself answer: "Where is he? He is here. He is hanging there on the gallows..."' Any other answer would be blasphemy. There cannot be any other Christian answer to the question of this torment. To speak here of a God who could not suffer would make God a demon. To speak here of an absolute God would make God an annihilating nothingness. To speak here of an indifferent God would condemn men to indifference." (Jürgen Moltmann, 273-74)

To be very specific, it is Christ who suffered on the cross – the righteous One crucified between sinners. Luke's account makes special note of this injustice – that One declared innocent by so many around Him would be forced to suffer such a shameful death (cf. Lk 23:4,14,15,22,41,47). While His disciples stood at a distance, Christ entered into the worst form of suffering the culture had to offer. The cross is therefore a paradox, one in which the righteous God is identified with what is naturally alien to Himself – sin and human suffering. Therefore we may rightly understand Jesus as one who shares in humanity's sufferings. The death camps of Auschwitz, the killing fields of Cambodia, and even the abuses found in Abu Ghraib take on new light, as Christ came to redeem both victims and victimizers of profound abuse. But in contrast to Moltmann and many who embrace a "liberation theology," the work of Christ is more than the mere identification with sinners; it is the full redemption and restoration of humanity. In the early twelfth century, Hugh of St. Victor connects Christ's humanity with the necessity of the cross:

"From our nature, he took a victim for our nature, so that the whole burnt offering which was offered up might come from that which is ours. He did this so that the redemption to be offered might have a connection with us, through its being taken from what is ours. We are truly made to be partakers in this redemption if we are united through faith to the redeemer who has entered into fellowship with us through his flesh." (Hugh of St. Victor, ca. 1100)

Therefore Christ identifies Himself with the unrighteous not merely to demonstrate solidarity, but to demonstrate God's love for a helpless world and to pay the necessary price for its iniquity. Sakulowski painted this piece in 1987. In contrast to the German political machinery of the 1980s, Sakulowski emphasized mutual understanding. I confess that I do not share the optimism that mutual respect can solve the moral problems of our age. But I remain confident that the message of the cross has a power even in weakness, and that those who would take up this cross would follow a Savior to the riverbank of His coming Kingdom.

Chris is a writer and speaker from the Charlottesville area. He regularly serves as a research writer for Docent Research Group in addition to doing some guest speaking.
HuggingFaceFW/fineweb-edu
default
0.333
THE small town of Montbrandon, Italy, in the Marca of Ancona, gave birth to this Saint, also known in English as James of the Marches. When young he was sent to the University of Perugia, where his progress in learning soon qualified him to be chosen preceptor to a young gentleman of Florence. Fearing that he might be engulfed in the whirlpool of worldly excesses, St. James applied himself to prayer and recollection. When travelling near Assisi he went into the great Church of the Portiuncula to pray, and being animated by the fervor of the holy men who there served God, and by the example of their blessed founder St. Francis, he determined to petition in that very place for the habit of the Order. He began his spiritual war against the devil, the world, and the flesh, with assiduous prayer and extraordinary fasts and watchings. For forty years he never passed a day without taking the discipline. Being chosen Archbishop of Milan, he fled, and could not be prevailed on to accept the office. He wrought several miracles at Venice and at other places, and raised from dangerous sicknesses the Duke of Calabria and the King of Naples. The Saint died in the convent of the Holy Trinity of his Order, near Naples, on the 28th of November, in the year 1476, being ninety years old, seventy of which he had spent in a religious state.
HuggingFaceFW/fineweb-edu
default
0.333
Bankruptcy is a legal process that allows consumers and business entities to eliminate some, or all, of their debts by order of a federal court. While bankruptcy gives individuals and businesses a fresh start, as the court forgives debts that cannot be paid, it also gives creditors an opportunity to get at least partial repayment, based on what assets the individual or business has available. To explore this concept, consider the following bankruptcy definition. Definition of Bankruptcy 1. A state of utter ruin, failure, or depletion 2. The state of being bankrupt 1690-1700 English bankrupt + -cy What is Bankruptcy Bankruptcy can be a powerful tool for an individual or business facing severe financial distress. There are several types of bankruptcy in the U.S., each referred to as a “chapter,” and described and governed by Title 11 of the United States Code. In certain circumstances, an individual’s debts may be completely wiped out in bankruptcy, while in others, an individual’s or business’ debts are restructured, to be paid on a very specific payment plan. In either case, once bankruptcy has been filed, the collection process stops, providing a great deal of stress relief. History of Bankruptcy The history of bankruptcy in the U.S. dates back to the 1700s, having been adopted from English common law. Originally, bankruptcy was viewed as a quasi-criminal act. In 1789, Congress was given power to legislate bankruptcy laws according to the Constitution. Congress eventually adapted bankruptcy law to helping individuals and businesses suffering severe economic losses to solve the problem and repay their debts. When the Bankruptcy Act of 1800 was enacted, it was limited to initiating involuntary bankruptcy proceedings against traders. It was repealed three years later, and U.S. bankruptcy law gradually adopted the concept of voluntary bankruptcy. The Bankruptcy law of 1898, also known as “the Nelson Act,” finally established a workable relationship between creditors and debtors. In 1938, the Chandler Act expanded the concept of voluntary bankruptcy, giving authority over bankruptcy to the Securities and Exchange Commission. In 1978, the Bankruptcy Reform Act of 1978, now referred to as the Bankruptcy Code, overhauled the system. This act established the structure of U.S. bankruptcy courts, and gave judges discretion when it comes to deciding bankruptcy cases. Types of Bankruptcy The U.S. Bankruptcy Code specifies five different bankruptcy types: chapter 7, chapter 13, chapter 11, chapter 9, and chapter 12. Each type is intended for specific circumstances, depending on whether the bankruptcy is filed by a person or a business, and the value of their assets, earning capacity, and the debt-to-income burden. Chapter 7 Bankruptcy Chapter 7 bankruptcy is available to individuals and married couples. Also referred to as a “straight bankruptcy,” Chapter 7 allows the debtor (the person or entity that files bankruptcy) to keep his household and personal items, such as furnishings and clothing, and usually his home and car, as long as they do not exceed a certain value. If the debtor owes money on the home or car, he can keep them only if he works out a deal with the loan and mortgage companies to continue making payments until they are paid off. Not all people are eligible for Chapter 7 bankruptcy, which has certain income and net worth requirements. Chapter 7 bankruptcy takes about 6 months to complete. 
At the end of the case, the bankruptcy judge issues a Discharge Order, which can be used to prove the debtor’s debts have been wiped out. Chapter 13 Bankruptcy Chapter 13 bankruptcy is a form of bankruptcy that may be filed by individuals and married couples, and involves the creation of a debt repayment plan. In Chapter 13, unsecured creditors, meaning those who have extended credit without requiring any property or assets as security, generally only receive a small percentage of the amount owed by the debtor. The debtor must, however, pay past due taxes in full, and keep them current, and he must pay the full amount past due on any secured debt, such as a car loan or home mortgage. At the end of a Chapter 13 bankruptcy, most of the debtor’s debts are wiped out by a discharge, some creditors having received partial payment, others receiving none. To be eligible for Chapter 13 bankruptcy, the debtor must have a reliable source of income, as he will be required to repay at least some of the debt he has accrued. Additionally, the federal government sets the amount of debt allowed under Chapter 13 bankruptcy. As of 2015, a person cannot have more than $1,149,525 in secured debt and $383,175 in unsecured debt. Chapter 11 Bankruptcy Chapter 11 bankruptcy is generally reserved for large corporations with heavy debt burdens, though it can be used by small businesses also. Chapter 11 bankruptcy, also referred to as “business reorganization,” provides a way for businesses to pay off, or pay down, their debts over time, without losing their possessions. Chapter 11 bankruptcy is more costly for the debtor, and takes quite a bit more time to complete than Chapters 7 and 13. Chapter 12 Bankruptcy Chapter 12 bankruptcy is similar to Chapter 13 in structure, but provides additional advantages for family farmers and family fishermen, meaning those whose families engage in farming or fishing as a business. These small family farmers and fishermen are the only ones eligible for Chapter 12. Chapter 12 bankruptcy has a very limited scope, and so anyone considering filing this type of bankruptcy must meet certain strict income and conduct requirements. Chapter 9 Bankruptcy Chapter 9 bankruptcy is available only to municipalities, meaning the governing bodies of cities, towns, and districts with a corporate existence. Chapter 9 bankruptcy is another form of reorganization bankruptcy, though because a municipality is an entity of a state government, the bankruptcy court is limited in what it can order. Municipalities that have filed Chapter 9 bankruptcy: The collapse in real estate values in 2008 put the city of Stockton, California in a serious financial crisis. On June 28, 2012, Stockton became the largest city in U.S. history to file for Chapter 9 bankruptcy protection. While the Stockton bankruptcy case gained nationwide attention and lasted more than two years, its scope was surpassed by the city of Detroit, Michigan, only one year later. What Bankruptcy Cannot Do Bankruptcy can be a lifesaver for someone struggling with an overwhelming amount of debt. There are, however, limitations as to what bankruptcy can do. A few of the things bankruptcy cannot do for a person include: • Prevent Repossession of Secured Property – Bankruptcy typically does not eliminate liens, and if a debtor has a secured debt, the creditor can legally repossess the property regardless of whether bankruptcy has been filed. 
• Eliminate Child Support Obligations – Child support obligations are not eliminated when a person files bankruptcy. The same is true of alimony. If a court has ordered either of these, the debtor remains responsible for paying them in full, regardless of his bankruptcy status. If filing Chapter 13 bankruptcy, the debtor must include child support and alimony payments in his repayment plan. • Eliminate Student Loans – It is very rare for bankruptcy to eliminate student loans. In fact, the only time this might occur is when the debtor can prove that repaying the loans would cause him an undue hardship. This requires very strict criteria to be met. • Eliminate Tax Debts – Delinquent taxes are rarely discharged in bankruptcy, though it is possible in some circumstances. If the debtor has older, unpaid income taxes, and meets certain specific requirements, such a debt might be discharged. Elimination of Non-dischargeable Debts There are a number of non-dischargeable debts not listed above that cannot be eliminated by bankruptcy. These include: • Debts not listed in the bankruptcy papers (with certain exceptions in Chapter 7) • Debts for damages awarded in personal injury or wrongful death cases caused by the debtor's driving while intoxicated • Fines for criminal convictions • Debts that a creditor proves should survive bankruptcy Involuntary Bankruptcy Involuntary bankruptcy is filed by a creditor against a business in an attempt to recover some of their money. This is more common when a creditor knows that the business can pay, but refuses to do so. While involuntary bankruptcy against an individual is permitted, it is rare, as creditors do not typically recoup their losses. Involuntary bankruptcy is initiated when one or more creditors files a petition with the bankruptcy court. The debtor then has 20 days to respond and, if they fail to do so, the court grants the bankruptcy. When this occurs, the debtor is forced to participate. In most jurisdictions, at least three creditors must join the petition, and the amount of combined unsecured debt must be at least $14,425. Filing Bankruptcy The first step in filing bankruptcy is finding an experienced attorney. An attorney is not required, but the bankruptcy process is complex, and using an attorney helps ensure it is done correctly. The attorney will help the debtor complete the necessary forms, including the petition, schedules, and creditor lists and notifications. All of the forms must be filled out completely and honestly. An individual considering filing bankruptcy on his own, or who would like to review the requisite forms, may find bankruptcy forms on the federal judiciary's bankruptcy website. Once the necessary forms have been filed, the debtor must gather together supporting documentation, and modern bankruptcy laws require each debtor to obtain official credit counseling and provide proof of completion. The court assigns a bankruptcy trustee to each case. This is the person who will review all of the documents, investigate the debtor's financial circumstances, and determine whether any creditors are objecting to the bankruptcy. In many cases, the bankruptcy is approved at this stage, by the trustee, with the debtor never appearing at court. In complex cases, however, such as those involving large corporations, large debt burdens, and other special circumstances, at least one hearing will be held. Bankruptcy Court
Every state has at least one district, with larger states having more. Judges presiding over bankruptcy hearings have the authority to make judicial decisions, though much of the proceedings take place outside of the courtroom, under the direction of a bankruptcy trustee. It is common for bankruptcy applicants never to set foot in court unless creditors object to their proposed plan.

Bankruptcy Attorney
Since bankruptcy is a complicated process, and certain specific forms need to be completed properly, most individuals and businesses seeking to file bankruptcy turn to experienced bankruptcy attorneys to help them with the process. Some bankruptcy attorneys even specialize in the more complex types of bankruptcy, such as Chapters 12 and 9. Many legal professionals recommend hiring a bankruptcy attorney who has experience with situations similar to the debtor’s.

Related Legal Terms and Issues
• Unsecured Debt – A debt for which no property serves as collateral of, or guarantee for, repayment.
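The dollar figures above lend themselves to a quick screening calculation. The sketch below is illustrative only: it assumes the 2015 Chapter 13 limits and the involuntary-petition thresholds quoted in this article (all of these figures are adjusted periodically), and the function names and simplified pass/fail logic are our own, not any official eligibility test.

```python
# Illustrative screens based only on the figures quoted in this article.
CH13_SECURED_LIMIT = 1_149_525    # max secured debt for Chapter 13 (2015)
CH13_UNSECURED_LIMIT = 383_175    # max unsecured debt for Chapter 13 (2015)
INVOLUNTARY_MIN_CREDITORS = 3     # creditors who must join the petition
INVOLUNTARY_MIN_CLAIMS = 14_425   # combined unsecured debt threshold

def chapter13_screen(secured: float, unsecured: float, regular_income: bool) -> bool:
    """Rough Chapter 13 eligibility screen using the limits above."""
    return (regular_income
            and secured <= CH13_SECURED_LIMIT
            and unsecured <= CH13_UNSECURED_LIMIT)

def involuntary_screen(creditors: int, combined_unsecured: float) -> bool:
    """Rough screen for an involuntary petition using the thresholds above."""
    return (creditors >= INVOLUNTARY_MIN_CREDITORS
            and combined_unsecured >= INVOLUNTARY_MIN_CLAIMS)

print(chapter13_screen(900_000, 50_000, regular_income=True))      # True
print(involuntary_screen(creditors=2, combined_unsecured=20_000))  # False
```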
mlfoundations/dclm-baseline-1.0
default
0.37
An “extreme heat belt” that stretches across the center of the US is expected to emerge over the next 30 years, subjecting millions more Americans to dangerously hot days. That’s according to new research published today by the nonprofit research group First Street Foundation. The belt is expected to extend from Texas and Louisiana all the way up to Wisconsin. Along the belt, extremely hot days could feel brutal, reaching temperatures that feel hotter than 125 degrees Fahrenheit. About 107.6 million Americans across 1,023 counties will experience that level of extreme heat at least one day a year by 2053. That’s compared to just 8.1 million residents in 50 counties who can expect to suffer through such high temperatures in 2023, according to First Street’s analysis. “We need to be prepared for the inevitable, that a quarter of the country will soon fall inside the Extreme Heat Belt with temperatures exceeding 125°F and the results will be dire,” Matthew Eby, founder and CEO of First Street Foundation, said in a press release. That figure, 125 degrees, is a measure of heat and humidity called a heat index. It’s often referred to as what the temperature “feels like.” Anything 125 degrees Fahrenheit or higher falls into the National Weather Service’s highest heat index category — signaling “extreme danger” when heat stroke is “highly likely.” Even if you don’t live within that extreme heat belt, you can expect temperatures to rise higher than what your community has experienced in the past, the research warns. “Virtually the entire country is subject to increasing perils associated with heat exposure,” the report says. That’s no surprise, of course — climate change is pushing the weather to extremes across the world. What’s cool about this new research is that you can zoom in to see the changes that your home might have to adapt to in the future. Just plug your address into First Street’s “Risk Factor” search tool online. That’ll pull up information on how many more hot days the location is expected to experience in 30 years. I searched for my childhood home in Southern California and found that it might see 11 days a year with a heat index above 99 degrees Fahrenheit, compared to just four days this year. (You’ll also see wildfire and flood risk when you search for an address on the Risk Factor tool.) To figure out how much each location will bake in the future, the researchers first looked at the heat index for the seven hottest days it experienced this year. Then, using federal government datasets and other publicly available resources, they built a model to estimate how often the location would experience days that hot three decades from now. Miami-Dade County in Florida is on track to experience the biggest increase in the frequency of its hottest days. Currently, the heat index here reaches 103 degrees Fahrenheit during the seven hottest days of the year. By 2053, more than 30 days a year would feel that hot, according to First Street’s research.
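First Street’s county-level model isn’t public, but the “feels like” number the article keeps referring to is the standard National Weather Service heat index. A common way to approximate it is the NWS Rothfusz regression, sketched below; the example day (a hypothetical 97°F at 60 percent humidity) is ours, and the NWS’s extra corrections for extreme humidity ranges are omitted for brevity.

```python
def heat_index_f(t: float, rh: float) -> float:
    """Approximate NWS heat index ("feels like") in degrees Fahrenheit.

    Rothfusz regression: reasonable for temperatures around 80F and
    above with moderate-to-high humidity; the NWS applies small
    adjustments at humidity extremes that are omitted here.
    t: air temperature (deg F); rh: relative humidity (percent).
    """
    return (-42.379 + 2.04901523 * t + 10.14333127 * rh
            - 0.22475541 * t * rh - 6.83783e-3 * t * t
            - 5.481717e-2 * rh * rh + 1.22874e-3 * t * t * rh
            + 8.5282e-4 * t * rh * rh - 1.99e-6 * t * t * rh * rh)

# A 97F day at 60% humidity already "feels like" roughly 119F:
print(round(heat_index_f(97, 60)))  # -> 119
```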
HuggingFaceFW/fineweb-edu
default
0.333
A Young Mark Hamill In "Corvette Summer": What Did We Just Watch? Star Wars, this is not. Frankly, we don’t know what this is, or why we watched it, or really anything else anymore. Our worldview has cracked, but honestly, it hasn’t necessarily cracked in a bad way. It’s just that our eyes now only see all the hair, glitter, and absurdity of the late 1970s that could have led to this film. Corvette Summer proves that you can take massive stars like Mark Hamill and Annie Potts, and drop them into the strange Smokey and the Bandit-inspired films. We wish there was a sequel. RELATED: Every Stormtrooper Needs a ‘Star Wars’ Lamborghini Huracan After Star Wars made Mark Hamill a mega star in the industry, every studio in Hollywood wanted him to be the lead of their next project. That just so happened to be Corvette Summer. The plot, as much as we can understand it, details the life and times of Mark Hamill's character after discovering his prized custom Corvette was stolen and his trip to find it and love begins. Annie Potts, which you may know as the voice of Bo Peep from Toy Story, is the freethinking and extremely amorous love interest that Hamill discovers he needs through the film. RELATED: Uber Dressed Up Dodge Chargers like Star Wars Stormtroopers When the movie debuted, it wasn’t exactly a hit, but it was eventually profitable, generating $15,500,000 after costing an astronomical $9,000,000 to produce. Part of that cost went into the design and building of the cars involved in the picture. Dick Korkes of Korky’s Kustom Studios built the hero car, Hamill’s Corvette, for MGM. Korkes built a total of two cars for the movie, a hero car, and a backup car in case things went all Dukes of Hazzard. Thankfully, both Hamill and Potts bounced back from this train wreck of awesome mediocrity, with Hamill set to reprise his role as Luke Skywalker when Star Wars: The Force Awakens opens this weekend. However, that doesn’t exactly answer any of the questions we have regarding this movie, which will likely never go answered, as the people that originally wrote it are probably still high on...something. RELATED: Star Wars Fans, Your Stormtrooper Fiat 500e Awaits Be part of something big
mlfoundations/dclm-baseline-1.0
default
0.37
Live-well is a leading healthcare company that is actively involved in dissemination of information to the community. Our articles are published in Malaysia's leading dailies.

Learn more about joint health on World Arthritis Day
Odds are we seldom think about our joints. Most of us are likely to be more concerned about our risk of heart disease, cancer or diabetes. Many of us don’t think about arthritis until we start to feel those first aches and pains. World Arthritis Day falls on October 12, 2015, and this year’s theme is “It’s in your hands. Take action today!” But in order to really take charge of our joint health, we first need to know what arthritis is. First observed in 1996, World Arthritis Day serves as a focus for organisations and individuals to work toward increasing awareness of arthritis and other rheumatic conditions worldwide. Thousands of arthritis sufferers around the world are able to live life to the fullest because of the positive actions they have taken. In December 2012, a study on the global burden of disease and the worldwide impact of all diseases and risk factors found that musculoskeletal conditions such as arthritis and back pain affect more than 1.7 billion people worldwide. That’s how common arthritis is! Read on to know more about arthritis and what we can do to help our joints cope with this potentially debilitating condition.

Arthritis literally means inflammation within the joint, but inflammation may also affect the tendons and ligaments surrounding the joint. Arthritis is very common, but is not well understood. Arthritis-related problems include pain, stiffness, inflammation and damage to joint cartilage (the tissue that covers the ends of bones, enabling them to move against one another) and surrounding structures. This can result in joint weakness, instability and deformities that can interfere with the most basic daily tasks such as walking, driving a car and preparing food. Actually, “arthritis” is not a single disease; it is an informal way of referring to joint pain or joint disease. There are more than 100 different types of arthritis and related conditions. Common forms of arthritis include osteoarthritis, rheumatoid arthritis, gout, ankylosing spondylitis and juvenile arthritis. The most common is osteoarthritis. Hence, this article will focus specifically on osteoarthritis.

What is osteoarthritis?
Osteoarthritis (OA) can be caused by ageing joints, injury and obesity. It is a condition that affects your joints: the surfaces within your joints become damaged, so the joint doesn’t move as smoothly as it should. This condition is sometimes called arthrosis or osteoarthrosis. The older term used is degenerative joint disease, or wear and tear. OA symptoms include joint pain and stiffness. When a joint develops osteoarthritis, some of the cartilage covering the ends of the bones gradually roughens and thins, and the bone underneath thickens. All the tissues within the joint become more active than normal – as if your body is trying to repair the damage. The synovium (the inner layer of the joint capsule, which produces synovial fluid) may thicken and make extra fluid. This causes the joint to swell. The capsule and ligaments (tough bands that hold the joint together) slowly thicken and contract. The bone at the edge of the joint grows outwards, forming bony spurs called osteophytes. Sometimes, the body’s attempts at repair are quite good, and the changes inside the joint won’t cause pain or problems.
But in severe osteoarthritis, the cartilage can become so thin that it doesn’t cover the ends of the bones. The bones then rub against each other and start to wear away. The loss of cartilage, the wearing of bone and the bony spurs can change the shape of the joint, forcing the bones out of their normal position.

Pain, pain go away
Cartilage is the part of the joint that cushions the ends of the bones and allows easy movement of joints. The cartilage in the joint plays an important role because it helps avoid joint friction, allowing the joints to move smoothly. Conventionally, painkillers are prescribed to dull the pain associated with osteoarthritis. This is only temporary relief; in the long run, it is far more vital to deliver ongoing essential nutritional support to the body to help rebuild the worn-out cartilage. Some good supporting joint health supplements include glucosamine, chondroitin and methylsulfonylmethane (MSM).

Glucosamine and chondroitin
Glucosamine exists naturally in cartilage, and functions to repair and stimulate new cartilage regrowth by stimulating the production of collagen and glycosaminoglycans (GAGs), the building blocks of cartilage. However, the amount of glucosamine produced in the body diminishes with age. Chondroitin is a natural compound that is found in joint cartilage. It provides shock absorption and promotes cartilage elasticity. This GAG, found in abundance in the articular cartilage, lubricates the joint by acting like a water magnet to attract fluid into the connective tissues.

MSM – an excellent companion
MSM is a natural form of sulphur found in living tissues. It delivers sulphur to the body in a useable way to help strengthen joint connective tissues. MSM reduces discomfort and pain arising from osteoarthritis. Many developed nations across the globe, such as the US, UK, Europe and Australia, regularly observe very positive results in the treatment of osteoarthritis when oral MSM supplement is added to glucosamine and chondroitin nutritional therapy. Supplemental MSM helps to support healthy joints and connective tissues such as tendons, cartilage, ligaments and muscle. Studies have also found that MSM helps to reduce stiffness and swelling, thus reducing pain and improving flexibility. So, if you are currently taking only glucosamine and chondroitin, you may be missing out on the synergistic benefits of MSM. The Arthritis Foundation of America recommends starting with a low dosage of 500mg twice a day, and increasing gradually to 1,000mg twice a day. After starting MSM, allow a reasonable amount of time to notice any benefits.

There is a sea of joint supplements out there in the market, so how does one choose a glucosamine plus chondroitin supplement for your aching joints? A recent consumer satisfaction survey was conducted with over 200 Malaysians who were taking a clinically-tested glucosamine plus chondroitin supplement, in which 98.2% of the users agreed that the supplement relieved their joint pains effectively, and also improved their flexibility and mobility. The users did not take any painkillers or drugs for treatment while they were taking this supplement, yet they felt the improvement in their joint health. These results are exciting because users could feel the benefits of taking the supplement in as early as four months.
HuggingFaceFW/fineweb-edu
default
0.333
Fast Variational Inference in the Conjugate Exponential Family

We present a general method for deriving collapsed variational inference algorithms for probabilistic models in the conjugate exponential family. Our method unifies many existing approaches to collapsed variational inference. Our collapsed variational inference leads to a new lower bound on the marginal likelihood. We exploit the information geometry of the bound to derive much faster optimization methods based on conjugate gradients for these models. Our approach is very general and is easily applied to any model where the mean field update equations have been derived. Empirically we show significant speed-ups for probabilistic models optimized using our bound.

Introduction
Variational bounds provide a convenient approach to approximate inference in a range of intractable models. Classical variational optimisation is achieved through coordinate ascent, which can be slow to converge. A popular solution [King and Lawrence, 2006, Teh et al., 2007, Kurihara et al., 2007, Sung et al., 2008, Lázaro-Gredilla and Titsias, 2011] is to marginalize analytically a portion of the variational approximating distribution, removing this from the optimization. In this paper we provide a unifying framework for collapsed inference in the general class of models composed of conjugate-exponential graphs (CEGs). First we review the body of earlier work with a succinct and unifying derivation of the collapsed bounds. We describe how the applicability of the collapsed bound to any particular CEG can be determined with a simple d-separation test. Standard variational inference via coordinate ascent turns out to be steepest ascent with a unit step length on our unifying bound. This motivates us to consider natural gradients and conjugate gradients for fast optimization of these models. We apply our unifying approach to a range of models from the literature obtaining, often, an order of magnitude or more increase in convergence speed. Our unifying view allows collapsed variational methods to be integrated into general inference tools like infer.net [Minka et al., 2010].

The Marginalised Variational Bound
The advantages of marginalising analytically a subset of variables in variational bounds seem to be well understood: several different approaches have been suggested in the context of specific models. In Dirichlet process mixture models, Kurihara et al. [2007] proposed a collapsed approach using both truncated stick-breaking and symmetric priors. Sung et al. [2008] proposed 'latent space variational Bayes', where both the cluster parameters and mixing weights were marginalised, again with some approximations. Teh et al. [2007] proposed a collapsed inference procedure for latent Dirichlet allocation (LDA). In this paper we unify all these results from the perspective of the 'KL corrected bound' [King and Lawrence, 2006]. This lower bound on the model evidence is also an upper bound on the original variational bound; the difference between the two bounds is given by a Kullback Leibler divergence. The approach has also been referred to as the marginalised variational bound by Lázaro-Gredilla and Titsias [2011]. The connection between the KL corrected bound and the collapsed bounds is not immediately obvious. The key difference between the frameworks is the order in which the marginalisation and variational approximation are applied. However, for CEGs this order turns out to be irrelevant.
Our framework leads to a more succinct derivation of the collapsed approximations. The resulting bound can then be optimised without recourse to approximations in either the bound's evaluation or its optimization.

Variational Inference
Assume we have a probabilistic model for data, $D$, given parameters (and/or latent variables), $X$, $Z$, of the form $p(D, X, Z) = p(D\,|\,Z, X)\,p(Z\,|\,X)\,p(X)$. In variational Bayes (see e.g. Bishop [2006]) we approximate the posterior $p(Z, X\,|\,D)$ by a distribution $q(Z, X)$. We use Jensen's inequality to derive a lower bound on the model evidence $\mathcal{L}$, which serves as an objective function in the variational optimisation. For tractability the mean field (MF) approach assumes $q$ factorises across its variables, $q(Z, X) = q(Z)q(X)$. It is then possible to implement an optimisation scheme which analytically optimises each factor alternately, with the optimal distribution given by
$$q^*(X) \propto \exp\big(\mathbb{E}_{q(Z)}[\log p(D, X, Z)]\big), \qquad (2)$$
and similarly for $Z$: these are often referred to as VBE and VBM steps. King and Lawrence [2006] substituted the expression for the optimal distribution (for example $q^*(X)$) back into the bound (1), eliminating one set of parameters from the optimisation, an approach that has been reused by Titsias [2011]. The resulting bound is not dependent on $q(X)$. King and Lawrence [2006] referred to this new bound as 'the KL corrected bound'. The difference between the bound, which we denote $\mathcal{L}_{\text{KL}}$, and a standard mean field approximation $\mathcal{L}_{\text{MF}}$, is the Kullback Leibler divergence between the optimal form $q^*(X)$ and the current $q(X)$. We rederive their bound by first using Jensen's inequality to construct the variational lower bound on the conditional distribution,
$$\mathcal{L}_1(X) = \mathbb{E}_{q(Z)}[\log p(D, Z\,|\,X)] - \mathbb{E}_{q(Z)}[\log q(Z)] \le \log p(D\,|\,X). \qquad (3)$$
This object turns out to be of central importance in computing the final KL-corrected bound and also in computing gradients, curvatures and the distribution of the collapsed variables $q^*(X)$. It is easy to see that it is a function of $X$ which lower-bounds the log likelihood $\log p(D\,|\,X)$, and indeed our derivation treats it as such. We now marginalize the conditioned variable from this expression, giving us the bound of King and Lawrence [2006],
$$\mathcal{L}_{\text{KL}} = \log \int p(X)\, e^{\mathcal{L}_1(X)}\, \mathrm{d}X. \qquad (4)$$
Note that one set of parameters was marginalised after the variational approximation was made. Using (2), this expression also provides the approximate posterior for the marginalised variables $X$:
$$q^*(X) = p(X)\, e^{\mathcal{L}_1(X)}\, e^{-\mathcal{L}_{\text{KL}}}, \qquad (5)$$
and $e^{\mathcal{L}_{\text{KL}}}$ appears as the constant of proportionality in the mean-field update equation (2).

Partial Equivalence of the Bounds
We can recover $\mathcal{L}_{\text{MF}}$ from $\mathcal{L}_{\text{KL}}$ by again applying Jensen's inequality, which can be re-arranged to give the mean-field bound, and it follows that $\mathcal{L}_{\text{KL}} = \mathcal{L}_{\text{MF}} + \mathrm{KL}\big(q(X)\,\|\,q^*(X)\big)$ and so $\mathcal{L}_{\text{KL}} \ge \mathcal{L}_{\text{MF}}$. For a given $q(Z)$, the bounds are equal after $q(X)$ is updated via the mean field method: the approximations are ultimately the same. The advantage of the new bound is to reduce the number of parameters in the optimisation. It is particularly useful when variational parameters are optimised by gradient methods. Since VBEM is equivalent to a steepest descent gradient method with a fixed step size, there appears to be a lot to gain by combining the KLC bound with more sophisticated optimization techniques.

Gradients
Consider the gradient of the KL corrected bound with respect to the parameters $\theta_Z$ of $q(Z)$, where we have used the relation (5). To find the gradient of the mean-field bound we note that it can be written in terms of our conditional bound (3) as $\mathcal{L}_{\text{MF}} = \mathbb{E}_{q(X)}[\mathcal{L}_1(X)] - \mathrm{KL}\big(q(X)\,\|\,p(X)\big)$; thus setting $q(X) = q^*(X)$ not only makes the bounds equal, $\mathcal{L}_{\text{MF}} = \mathcal{L}_{\text{KL}}$, but also their gradients with respect to $\theta_Z$.
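These identities are easy to verify numerically. The following self-contained sketch is our own toy construction, not from the paper: it enumerates a model with binary $X$ and $Z$, treating `t[x][z]` as $p(D, X{=}x, Z{=}z)$ with the data $D$ fixed, and checks that the collapsed bound equals the mean-field bound plus the KL term for a deliberately non-optimal $q(X)$.

```python
import math

# Toy check of L_KL = L_MF + KL(q(X) || q*(X)) by exact enumeration.
t = [[0.10, 0.25],   # t[x][z] plays the role of p(D, X=x, Z=z), D fixed
     [0.30, 0.05]]
qZ = [0.7, 0.3]      # current factor q(Z)
qX = [0.5, 0.5]      # current factor q(X), deliberately not optimal

H = lambda q: -sum(p * math.log(p) for p in q)                  # entropy
f = [sum(qZ[z] * math.log(t[x][z]) for z in range(2)) for x in range(2)]

L_KL = math.log(sum(math.exp(fx) for fx in f)) + H(qZ)          # collapsed bound
L_MF = sum(qX[x] * f[x] for x in range(2)) + H(qX) + H(qZ)      # mean-field bound

norm = sum(math.exp(fx) for fx in f)
qX_star = [math.exp(fx) / norm for fx in f]                     # optimal q*(X)
KL = sum(qX[x] * math.log(qX[x] / qX_star[x]) for x in range(2))

print(abs(L_KL - (L_MF + KL)) < 1e-9)   # True: the bounds differ by the KL term
```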
Sato [2001] has shown that the variational update equation can be interpreted as a gradient method, where each update is also a step in the steepest direction in the canonical parameters of q(Z). We can combine this important insight with the above result to realize that we have a simple method for computing the gradients of the KL corrected bound: we only need to look at the update expressions for the mean-field method. This result also reveals the weakness of standard variational Bayesian expectation maximization (VBEM): it is a steepest ascent algorithm. Honkela et al. [2010] looked to rectify this weakness by applying a conjugate gradient algorithm to the mean field bound. However, they didn't obtain a significant improvement in convergence speed. Our suggestion is to apply conjugate gradients to the KLC bound. Whilst the value and gradient of the MF bound match those of the KLC bound after an update of the collapsed variables, the curvature of the KLC bound is always greater. In practice this means that much larger steps (which we compute using conjugate gradient methods) can be taken when optimizing the KLC bound than for the MF bound, leading to more rapid convergence. King and Lawrence [2006] showed empirically that the KLC bound could lead to faster convergence because the bounds differ in their curvature: the curvature of the KLC bound enables larger steps to be taken by an optimizer. We now derive analytical expressions for the curvature of both bounds. For the mean field bound, the first term of the resulting expression is equal to (10), and the second two terms combine to be always positive semi-definite, proving King and Lawrence [2006]'s intuition about the curvature of the bound. When curvature is negative definite (e.g. near a maximum), the KLC bound's curvature is less negative definite, enabling larger steps to be taken in optimization. Figure 1(b) illustrates the effect of this as well as the bounds' similarities.

Relationship to Collapsed VB
In collapsed inference some parameters are marginalized before applying the variational bound. For example, Sung et al. [2008] proposed a latent variable model where the model parameters were marginalised, and Teh et al. [2007] proposed a nonparametric topic model where the document proportions were collapsed. These procedures lead to improved inference, or faster convergence. The KLC bound derivation we have provided also marginalises parameters, but after a variational approximation is made. The difference between the two approaches is distilled in a pair of expressions, where the left expression appears in the KLC bound, and the right expression appears in the bound for collapsed variational Bayes, with the remainder of the bounds being equal. Whilst an appropriately conjugate formulation of the model will always ensure that the KLC expression is analytically tractable, the expectation in the collapsed VB expression is not. Sung et al. [2008] propose a first order approximation to the expectation of the form $\mathbb{E}_{q(Z)}[f(Z)] \approx f\big(\mathbb{E}_{q(Z)}[Z]\big)$, which reduces the right expression to that on the left. Under this approximation the KL corrected approach is equivalent to the collapsed variational approach.

Applicability
To apply the KLC bound we need to specify a subset, X, of variables to marginalize. We select the variables that break the dependency structure of the graph to enable the analytic computation of the integral in (4).
Assuming the appropriate conjugate exponential structure for the model, we are left with the requirement to select a sub-set that induces the appropriate factorisation. These induced factorisations are discussed in some detail in Bishop [2006]. They are factorisations in the approximate posterior which arise from the form of the variational approximation and from the structure of the model. These factorisations allow application of the KLC bound, and can be identified using a simple d-separation test, as Bishop discusses. The d-separation test involves checking for independence amongst the marginalised variables (X in the above) conditioned on the observed data D and the approximated variables (Z in the above). The requirement is to select a sufficient set of variables, Z, such that the effective likelihood for X, given by (3), becomes conjugate to the prior.

Figure 1: (a) An example directed graphical model on which we could use the KLC bound. Given the observed node C, the nodes A, F d-separate given nodes B, D, E. Thus we could make an explicit variational approximation for A, F, whilst marginalising B, D, E. Alternatively, we could select B, D, E for a parameterised approximate distribution, whilst marginalising A, F. (b) A sketch of the KLC and MF bounds. At the point where the mean field method has q(X) = q*(X), the bounds are equal in value as well as in gradient. Away from this point, the difference between the bounds is the Kullback Leibler divergence between the current MF approximation for X and the implicit distribution q*(X) of the KLC bound.

Figure 1(a) illustrates the d-separation test with application to the KLC bound. For latent variable models, it is often sufficient to select the latent variables for X whilst collapsing the model variables. For example, in the specific case of mixture models and topic models, approximating the component labels allows for the marginalisation of the cluster parameters (topic allocations) and mixing proportions. This allowed Sung et al. [2008] to derive a general form for latent variable models, though our formulation is general to any conjugate exponential graph. Sato [2001] and Hoffman et al. [2012] showed that the VBEM procedure performs gradient ascent in the space of the natural parameters. Using the KLC bound to collapse the problem, gradient methods seem a natural choice for optimisation, since there are fewer parameters to deal with, and we have shown that computation of the gradients is straightforward (the variational update equations contain the model gradients). It turns out that the KLC bound is particularly amenable to Riemannian or natural gradient methods, because the information geometry of the exponential family distribution(s), over which we are optimising, leads to a simple expression for the natural gradient. Previous investigations of natural gradients for variational Bayes [Honkela et al., 2010, Kuusela et al., 2009] required the inversion of the Fisher information at every step (ours does not), and also used VBEM steps for some parameters and Riemannian optimisation for other variables. The collapsed nature of the KLC bound means that these VBEM steps are unnecessary: the bound can be computed by parameterizing the distribution of only one set of variables (q(Z)) whilst the implicit distribution of the other variables is given in terms of the first distribution and the data by equation (5).
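The "simple expression for the natural gradient" mentioned above can be previewed numerically in the simplest exponential-family case before it is derived in general below. In this sketch (our own, with an arbitrary objective), the Bernoulli's natural parameter is the logit and its expectation parameter is the mean; the Fisher information is mu(1-mu), and the natural gradient in the logit coincides with the ordinary gradient in the mean, with no matrix inverse required.

```python
import math

# Bernoulli: natural parameter theta (logit), expectation parameter
# mu = sigmoid(theta), Fisher information G = mu * (1 - mu). The claim:
# G^{-1} dL/dtheta == dL/dmu, so the natural gradient needs no inverse.
sigmoid = lambda t: 1.0 / (1.0 + math.exp(-t))

def L_of_mu(mu):                      # any smooth objective of the distribution
    return 1.3 * mu - mu ** 2

theta = 0.4
mu = sigmoid(theta)
eps = 1e-6

dL_dmu = (L_of_mu(mu + eps) - L_of_mu(mu - eps)) / (2 * eps)
dL_dtheta = (L_of_mu(sigmoid(theta + eps)) - L_of_mu(sigmoid(theta - eps))) / (2 * eps)
G = mu * (1 - mu)                     # Fisher information in theta

print(dL_dtheta / G, dL_dmu)          # both ~0.103: the gradients agree
```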
Riemannian Gradient Based Optimisation
We optimize the lower bound $\mathcal{L}_{\text{KL}}$ with respect to the parameters of the approximating distribution of the non-collapsed variables. We showed in section 2 that the gradient of the KLC bound is given by the gradient of the standard MF variational bound, after an update of the collapsed variables. It is clear from their definition that the same is true of the natural gradients.

Variable Transformations
We can compute the natural gradient of our collapsed bound by considering the update equations of the non-collapsed problem as described above. However, if we wish to make use of more powerful optimisation methods like conjugate gradient ascent, it is helpful to re-parameterize the natural parameters in an unconstrained fashion. The natural gradient is given by [Amari and Nagaoka, 2007]
$$\widetilde{\nabla}_\theta \mathcal{L} = G(\theta)^{-1}\, \nabla_\theta \mathcal{L},$$
where $G(\theta)$ is the Fisher information matrix, whose $i,j$th element is given by $G(\theta)_{ij} = \mathbb{E}_q\big[\tfrac{\partial \log q}{\partial \theta_i}\, \tfrac{\partial \log q}{\partial \theta_j}\big]$. For exponential family distributions, this reduces to $\nabla^2_\theta \psi(\theta)$, where $\psi$ is the log-normaliser. Further, for exponential family distributions, the Fisher information in the canonical parameters ($\theta$) and that in the expectation parameters ($\eta$) are reciprocal, and we also have $G(\theta) = \partial\eta/\partial\theta$. This means that the natural gradient in $\theta$ is given by
$$\widetilde{\nabla}_\theta \mathcal{L} = G(\theta)^{-1}\, \frac{\partial \eta}{\partial \theta}\, \frac{\partial \mathcal{L}}{\partial \eta} = \frac{\partial \mathcal{L}}{\partial \eta}.$$
The gradient in one set of parameters provides the natural gradient in the other. Thus when our approximating distribution q is exponential family, we can compute the natural gradient without the expensive matrix inverse. Sato [2001] showed that the VBEM algorithm is a gradient based algorithm. In fact, VBEM consists of taking unit steps in the direction of the natural gradient of the canonical parameters. From equation (9) and the work of Sato [2001], we see that the gradient of the KLC bound can be obtained by considering the standard mean-field update for the non-collapsed parameter Z. We confirm these relationships for the models studied in the next section in the supplementary material.

Steepest Ascent is Coordinate Ascent
Having confirmed that the VB-E step is equivalent to steepest-gradient ascent, we now explore whether the procedure could be improved by the use of conjugate gradients.

Conjugate Gradient Optimization
One idea for solving some of the problems associated with steepest ascent is to ensure each gradient step is conjugate (geometrically) to the previous. Honkela et al. [2010] applied conjugate gradients to the standard mean field bound; we expect much faster convergence for the KLC bound due to its differing curvature. Since VBEM uses a step length of 1 to optimize, we also used this step length in conjugate gradients. In the natural conjugate gradient method, the search direction at the $i$th iteration is given by $s_i = -\widetilde{g}_i + \beta s_{i-1}$. Empirically, the Fletcher-Reeves method for estimating $\beta$ worked well for us:
$$\beta_{FR} = \frac{\langle \widetilde{g}_i, \widetilde{g}_i \rangle_i}{\langle \widetilde{g}_{i-1}, \widetilde{g}_{i-1} \rangle_{i-1}},$$
where $\langle\cdot,\cdot\rangle_i$ denotes the inner product in Riemannian geometry, which is given by $\widetilde{g}^\top G(\rho)\, \widetilde{g}$. We note from Kuusela et al. [2009] that this can be simplified since $\widetilde{g}^\top G\, \widetilde{g} = \widetilde{g}^\top G G^{-1} g = \widetilde{g}^\top g$, and other conjugate methods, defined in the supplementary material, can be applied similarly.

Experiments
For empirical investigation of the potential speed ups we selected a range of probabilistic models. We provide derivations of the bound and fuller explanations of the models in the supplementary material. In each experiment, the algorithm was considered to have converged when the change in the bound or the Riemannian gradient reached below $10^{-6}$.
Comparisons between optimisation procedures always used the same initial conditions (or set of initial conditions) for each method. First we recreate the mixture of Gaussians example described by Honkela et al. [2010].

Mixtures of Gaussians
For a mixture of Gaussians, using the d-separation rule, we select for X the cluster allocation (latent) variables. These are parameterised through the softmax function for unconstrained optimisation. Our model includes a fully Bayesian treatment of the cluster parameters and the mixing proportions, whose approximate posterior distributions appear as (5). Full details of the algorithm derivation are given in the supplementary material. A neat feature is that we can make use of the discussion above to derive an expression for the natural gradient without a matrix inverse. In Honkela et al. [2010], data are drawn from a mixture of five two-dimensional Gaussians with equal weights, each with unit spherical covariance. The centers of the components are at $(0, 0)$ and $(\pm R, \pm R)$. $R$ is varied from 1 (almost completely overlapping) to 5 (completely separate). The model is initialised with eight components with an uninformative prior over the mixing proportions: the optimisation procedure is left to select an appropriate number of components. Sung et al. [2008] reported that their collapsed method led to improved convergence over VBEM. Since our objective is identical, though our optimisation procedure different, we devised a metric for measuring the efficacy of our algorithms which also accounts for their propensity to fall into local minima. Using many randomised restarts, we measured the average number of iterations taken to reach the best-known optimum. If the algorithm converged at a lesser optimum, those iterations were included in the denominator, but we didn't increment the numerator when computing the average. We compared three different conjugate gradient approaches and standard VBEM (which is also steepest ascent on the KLC bound) using 500 restarts. Table 1 shows the number of iterations required (on average) to come within 10 nats of the best known solution for three different conjugate-gradient methods and VBEM. VBEM sometimes failed to find the optimum in any of the 500 restarts. Even relaxing the stringency of our selection to 100 nats, the VBEM method was always at least twice as slow as the best conjugate method.

Topic Models
Latent Dirichlet allocation (LDA) [Blei et al., 2003] is a popular approach for extracting topics from documents. To demonstrate the KLC bound we applied it to 200 papers from the 2011 NIPS conference. The PDFs were preprocessed with pdftotext, removing non-alphabetical characters and coarsely filtering words by popularity to form a vocabulary size of 2000. We selected the latent topic-assignment variables for parameterisation, collapsing the topics and the document proportions. Conjugate gradient optimization was compared to the standard VBEM approach. We used twelve random initializations, starting each algorithm from each initial condition. Topic and document distributions were treated with fixed, uninformative priors. On average, the Hestenes-Stiefel algorithm was almost ten times as fast as standard VB, as shown in Table 2, whilst the final bound varied little between approaches.

RNA-seq alignment
An emerging problem in computational biology is inference of transcript structure and expression levels using next-generation sequencing technology (RNA-Seq). Several models have been proposed.
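Before turning to the RNA-seq model, it may help to see the shape of the optimisation loop used in these experiments. The following is a schematic sketch rather than the authors' code: `natural_gradient` is a placeholder for a model-specific routine returning the Riemannian gradient of the KLC bound, the unit step length mirrors VBEM, and beta follows Fletcher-Reeves.

```python
import numpy as np

def ncg_maximise(natural_gradient, params, iters=100, tol=1e-6):
    """Schematic natural conjugate-gradient ascent with unit steps.

    natural_gradient(params) -> ndarray is assumed to return the
    Riemannian gradient of the KLC bound. For simplicity the
    Fletcher-Reeves inner products below pair the natural gradient with
    itself; the simplification in the text, <g~, g~> = g~ . g, would use
    the ordinary gradient as the second factor instead.
    """
    g = natural_gradient(params)
    s = g.copy()                          # initial search direction
    for _ in range(iters):
        params = params + s               # unit step length, as in VBEM
        g_new = natural_gradient(params)
        if np.linalg.norm(g_new) < tol:   # convergence on gradient norm
            break
        beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves estimate
        s = g_new + beta * s              # conjugate search direction
        g = g_new
    return params
```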
The BitSeq method [Glaus et al., 2012] is based on a probabilistic model and uses Gibbs sampling for approximate inference. The sampler can suffer from particularly slow convergence due to the large size of the problem, which has six million latent variables for the data considered here. We implemented a variational version of their model and optimised it using VBEM and our collapsed Riemannian method. We applied the model to data described in Xu et al. [2010], a study of human microRNA. The model was initialised using four random initial conditions, and optimised using standard VBEM and the conjugate gradient versions of the algorithm. The Polack-Ribière conjugate method performed very poorly for this problem, often giving negative conjugation: we omit it here. The solutions found for the other algorithms were all fairly close, with bounds coming within 60 nats. The VBEM method was dramatically outperformed by the Fletcher-Reeves and Hestenes-Stiefel methods: it took 4600 ± 20 iterations to converge, whilst the conjugate methods took only 268 ± 4 and 265 ± 1 iterations to converge. At about 8 seconds per iteration, our collapsed Riemannian method requires around forty minutes, whilst VBEM takes almost eleven hours. All the variational approaches represent an improvement over a Gibbs sampler, which takes approximately one week to run for this data [Glaus et al., 2012].

Discussion
Under very general conditions (conjugate exponential family) we have shown the equivalence of collapsed variational bounds and marginalized variational bounds using the KL corrected perspective of King and Lawrence [2006]. We have provided a succinct derivation of these bounds, unifying several strands of work and laying the foundations for much wider application of this approach. When the collapsed variables are updated in the standard MF bound, the KLC bound is identical to the MF bound in value and gradient. Sato [2001] has shown that coordinate ascent of the MF bound (as prescribed by VBEM updates) is equivalent to steepest ascent of the MF bound using natural gradients. This implies that standard variational inference is also performing steepest ascent on the KLC bound. This equivalence between natural gradients and the VBEM update equations means our method is quickly implementable for any model where the mean field update equations have been computed. It is only necessary to determine which variables to collapse using a d-separation test. Importantly this implies our approach can readily be incorporated in automated inference engines such as that provided by infer.net [Minka et al., 2010]. We'd like to emphasise the ease with which the method can be applied: we have provided derivations of equivalencies of the bounds and gradients which should enable collapsed conjugate optimisation of any existing mean field algorithm, with minimal changes to the software. Indeed our own implementations (see supplementary material) use just a few lines of code to switch between the VBEM and conjugate methods. The improved performance arises from the curvature of the KLC bound. We have shown that it is always less negative than that of the original variational bound, allowing much larger steps in the variational parameters, as King and Lawrence [2006] suggested. This also provides a gateway to second-order optimisation, which could prove even faster. We provided empirical evidence of the performance increases that are possible using our method in three models.
In a thorough exploration of the convergence properties of a mixture of Gaussians model, we concluded that a conjugate Riemannian algorithm can find solutions that are not found with standard VBEM. In a large LDA model, we found that performance can be improved many times over that of the VBEM method. In the BitSeq model for differential expression of gene transcripts we showed that very large improvements in performance are possible for models with huge numbers of latent variables.

Supplementary Material for: Fast Variational Inference in the Conjugate Exponential Family
This supplementary material accompanies the NIPS paper on Fast Variational Inference in the Conjugate Exponential Family. Its purpose is to provide details of how our very general framework applies in the case of the specific models described in the paper. First we briefly mention the form of the three conjugate gradient algorithms we used in optimization.

Conjugate gradient algorithms
There are several different methods for approximating the parameter $\beta$ in the conjugate gradient algorithm. We used the Polack-Ribière, Fletcher-Reeves or Hestenes-Stiefel methods:
$$\beta_{PR} = \frac{\langle \widetilde{g}_i,\, \widetilde{g}_i - \widetilde{g}_{i-1} \rangle_i}{\langle \widetilde{g}_{i-1},\, \widetilde{g}_{i-1} \rangle_{i-1}}, \qquad \beta_{FR} = \frac{\langle \widetilde{g}_i,\, \widetilde{g}_i \rangle_i}{\langle \widetilde{g}_{i-1},\, \widetilde{g}_{i-1} \rangle_{i-1}}, \qquad \beta_{HS} = \frac{\langle \widetilde{g}_i,\, \widetilde{g}_i - \widetilde{g}_{i-1} \rangle_i}{\langle s_{i-1},\, \widetilde{g}_i - \widetilde{g}_{i-1} \rangle_i},$$
where $\langle\cdot,\cdot\rangle_i$ denotes the inner product in Riemannian geometry, which is given by $\widetilde{g}^\top G(\rho)\, \widetilde{g}$.

Mixture of Gaussians
A MoG model is defined as follows. We have a set of $N$ $D$-dimensional vectors $\mathbf{Y} = \{\mathbf{y}_n\}_{n=1}^N$. The likelihood is
$$p(\mathbf{Y}\,|\,\mathbf{L}, \boldsymbol\eta) = \prod_{n=1}^N \prod_{k=1}^K p(\mathbf{y}_n\,|\,\boldsymbol\eta_k)^{\ell_{nk}},$$
where $\mathbf{L}$ is a collection of binary latent variables indicating cluster membership, $\mathbf{L} = \{\{\ell_{nk}\}_{n=1}^N\}_{k=1}^K$, and $\boldsymbol\eta$ is a collection of cluster parameters. The prior over $\mathbf{L}$ is given by a multinomial distribution with components $\boldsymbol\pi$, which in turn have a Dirichlet prior with uniform concentrations for simplicity:
$$p(\mathbf{L}\,|\,\boldsymbol\pi) = \prod_{n=1}^N \prod_{k=1}^K \pi_k^{\ell_{nk}}, \qquad p(\boldsymbol\pi) = R_D(\alpha) \prod_{k=1}^K \pi_k^{\alpha - 1},$$
with $\boldsymbol\alpha$ representing a $K$ dimensional vector with elements $\alpha$, and $R_D$ being the normalising constant for the Dirichlet distribution, $R_D(\alpha) = \Gamma(K\alpha)\,\Gamma(\alpha)^{-K}$. Finally we choose a conjugate Gaussian-Wishart prior for the cluster parameters, where $R_{GW}$ is the normalising constant, involving the factor $\prod_{d=1}^{D}\Gamma\big((\nu + 1 - d)/2\big)^{-1}$.

Applying the KLC bound
The first task in applying the KLC bound is to select which variables to parameterise and which to marginalise. From the graphical model representation of the MoG problem in Figure 2, we can see that we can select the latent variables Z = {L} for parameterisation, whilst marginalising the mixing proportions and cluster parameters (X = {π, η}). We note that it is possible to select the variables the other way around, parameterising π and η and marginalising L, but parameterisation of the latent variables makes implementation a little simpler. We use a factorised multinomial distribution $q(\mathbf{L})$ to approximate the posterior for $p(\mathbf{L}\,|\,\mathbf{Y})$, parameterised using the softmax function, so
$$r_{nk} = q(\ell_{nk} = 1) = \frac{e^{\gamma_{nk}}}{\sum_{j=1}^K e^{\gamma_{nj}}}.$$
We are now ready to apply the procedure described above to derive the KLC bound, where $H_{\mathbf{L}}$ is the entropy of the distribution $q(\mathbf{L})$. Expanding, we define
$$\bar{r}_k = \sum_{n=1}^N r_{nk}, \qquad \mathbf{C}_k = \sum_{n=1}^N r_{nk}\, \mathbf{y}_n \mathbf{y}_n^\top, \qquad \bar{\mathbf{y}}_k = \sum_{n=1}^N r_{nk}\, \mathbf{y}_n.$$
The conjugacy between the intermediate bound $\mathcal{L}_1$ and the prior now emerges, making the second integral in the KLC bound tractable. After exponentiating this expression and multiplying by the prior, $p(\boldsymbol\eta)p(\boldsymbol\pi)$, we find that the integrals with respect to both $\boldsymbol\eta$ and $\boldsymbol\pi$ are tractable. This result means that the only variational parameters needed are those of $q(\mathbf{L})$. The integrals result in expressions (24)-(25), where $\boldsymbol\alpha$ represents a vector containing each $\alpha_k$. Some simplification of (24) leads to (26), where const. contains terms independent of $\mathbf{r}$.
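To make the notation concrete before discussing the updates, here is a small helper (our own sketch, with hypothetical names) that maps the unconstrained parameters gamma to responsibilities r via the softmax above and accumulates the statistics just defined.

```python
import numpy as np

def mog_statistics(gamma, Y):
    """Responsibilities and sufficient statistics for the MoG bound.

    gamma: (N, K) unconstrained variational parameters; Y: (N, D) data.
    Returns r (N, K), r_bar (K,), y_bar (K, D) and C (K, D, D) as
    defined in the text: r_bar_k = sum_n r_nk, y_bar_k = sum_n r_nk y_n,
    C_k = sum_n r_nk y_n y_n^T.
    """
    g = gamma - gamma.max(axis=1, keepdims=True)   # numerically stable softmax
    r = np.exp(g)
    r /= r.sum(axis=1, keepdims=True)
    r_bar = r.sum(axis=0)
    y_bar = r.T @ Y
    C = np.einsum('nk,nd,ne->kde', r, Y, Y)
    return r, r_bar, y_bar, C
```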
Equations (25) are similar to the update equations for the approximating distributions in the VBEM methodology [see e.g. Bishop, 2006]. However, for our model they are simply intermediate variables, representing combinations of the true variational parameters $\mathbf{r}$, the data, and the model prior parameters. When optimizing the model with respect to the variational parameters, the dependency of these intermediate variables on $\mathbf{r}$ is not ignored as it would be in the MF variational approach. Taking a step in the direction of the gradient of the MV bound (26) with respect to the parameters $\mathbf{r}$ (in the variables $\boldsymbol\gamma$) yields exactly the VB-E step associated with the mean-field bound: the gradient in $\mathbf{r}$ is the natural gradient in $\boldsymbol\gamma$ (see paper section 4.1).

Latent Dirichlet Allocation
Latent Dirichlet allocation is a popular topic model; see Blei et al. [2003] for a thorough introduction. Suppose we have $D$ documents, $K$ topics and a vocabulary of size $V$. The $d$th document contains $N_d$ words $\mathbf{W}_d = \{\mathbf{w}_{dn}\}_{n=1}^{N_d}$, and each word is represented as a binary vector $\mathbf{w}_{dn} \in \{0, 1\}^V$. Each word is associated with a latent variable $\boldsymbol\ell_{dn}$, which assigns the word to a topic, thus $\boldsymbol\ell_{dn} \in \{0, 1\}^K$. We'll use $\mathbf{W}$ to represent the collection of all words, $\mathbf{W} = \{\mathbf{W}_d\}_{d=1}^D$, and $\mathbf{L}$ to represent the collection of all latent variables, $\mathbf{L} = \{\{\boldsymbol\ell_{dn}\}_{n=1}^{N_d}\}_{d=1}^D$. Each document has an associated vector of topic proportions, $\boldsymbol\theta_d \in [0, 1]^K$, and each topic is represented by a vector of word proportions $\boldsymbol\phi_k \in [0, 1]^V$. We assume a symmetrical prior distribution over topics in each document, $p(\boldsymbol\theta_d) = \mathrm{Dir}(\boldsymbol\theta_d\,|\,\alpha)$, and similarly for words within topics, $p(\boldsymbol\phi_k) = \mathrm{Dir}(\boldsymbol\phi_k\,|\,\beta)$. The LDA generative model states that for each word, first the associated topic is drawn from the topic proportions for the document, and then the word is drawn from the selected topic.

The collapsed bound
To derive the collapsed bound, we use a similar d-separation test as for the mixture model to select the latent variables as the parameterised (non-collapsed) nodes; see Figure 3. To proceed we assume a factorising multinomial posterior for $\mathbf{L}$, subject to the constraint $\sum_{k=1}^K \ell_{dnk} = 1$, which we enforce through a softmax reparameterisation. We proceed by deriving the conditional bound (31). To marginalise the variables $\boldsymbol\theta$, $\boldsymbol\phi$, we exponentiate this bound and take the expectation under the priors. Careful inspection of the result reveals that the two integrals separate as expected, and result in the normalizers for each of the independent Dirichlet approximations; taking the logarithm results in the collapsed bound.

Topics found by LDA
For completeness we show here some topics found by LDA on the NIPS conference data.

BitSeq Model
The generative model for an RNA-seq assay is as follows. We assume that the experiment consists of a pile of RNA fragments, where the abundance of fragments from transcript $T_m$ in the assay is $\theta_m$. The sequencer then selects a fragment at random from the pile, such that the probability of picking a fragment corresponding to transcript $T_m$ is $\theta_m$. Introducing a convenient membership vector $\boldsymbol\ell_n$ for each read, we can write $p(\ell_{nm} = 1\,|\,\boldsymbol\theta) = \theta_m$, where $\ell_{nm} \in \{0, 1\}$ is a binary variable which indicates whether the $n$th fragment came from the $m$th transcript ($\ell_{nm} = 1$) and is subject to $\sum_{m=1}^M \ell_{nm} = 1$. We use $\mathbf{L}$ to represent the collection of all alignment variables. Both $\boldsymbol\theta$ and $\mathbf{L}$ are variables to be inferred, with $\boldsymbol\theta$ the main object of interest.
Writing the collection of all reads as $R = \{r_n\}_{n=1}^N$, the likelihood of a set of alignments $\mathbf{L}$ is
$$p(R\,|\,\mathbf{L}, \mathcal{T}) = \prod_{n=1}^N \prod_{m=1}^M p(r_n\,|\,T_m)^{\ell_{nm}},$$
where $T_m$ represents the $m$th transcript and $\mathcal{T}$ represents the transcriptome. The values of $p(r_n\,|\,T_m)$ can be computed before performing inference in $\boldsymbol\theta$, since we are assuming a known transcriptome. We compute these values based on the quality of alignment of the read $r_n$ to the transcript $T_m$, using a model which can correct for sequence-specific or fragmentation biases. The method is described in detail in Glaus et al. [2012]. We specify a conjugate Dirichlet prior over the vector $\boldsymbol\theta$. The collapsed bound then follows the same recipe as above, where $\hat{\ell}_m = \sum_{n=1}^N \mathbb{E}_{q(\mathbf{L})}[\ell_{nm}]$, and we also have that the approximate posterior distribution for $\boldsymbol\theta$ is a Dirichlet distribution with parameters $\alpha^o_m + \hat{\ell}_m$.
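Given expected assignments, the collapsed posterior over theta is immediate. The sketch below (our own, with hypothetical names) takes an N-by-M matrix r of expected read-to-transcript assignments under q(L) and returns the Dirichlet parameters alpha_m + lhat_m together with the posterior mean expression estimate.

```python
import numpy as np

def theta_posterior(alpha, r):
    """Collapsed Dirichlet posterior over transcript abundances theta.

    alpha: (M,) prior Dirichlet parameters; r: (N, M) expected
    assignments E[l_nm] under q(L), each row summing to one.
    """
    lhat = r.sum(axis=0)             # expected reads per transcript
    post = alpha + lhat              # Dirichlet parameters alpha_m + lhat_m
    mean_theta = post / post.sum()   # posterior mean of theta
    return post, mean_theta
```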
allenai/peS2o
default
0.02
Based on the author’s successful courses and workshops, Painting for the Absolute and Utter Beginner really does start at the beginning, helping new painters find "what works" while providing information on all the necessary tools, tips, and techniques they’ll need to create a representational painting. The chapters follow a progressive sequence that teaches basic skills through practical, accessible exercises–how to handle a brush, achieve the right paint consistency, mix color, and create dimension–building a solid foundation that readers can rely on as painting projects grow more challenging. A special feature is the artwork and commentary of real students, which helps beginners set realistic goals and shows them how other artists at the same level of experience have worked through inevitable setbacks to achieve success. The acrylics of today have grown into the most adaptable art material of the modern age. Focusing on a popular art medium that has been around for over 50 years, The New Acrylics illustrates how artists can create lush textures, color, and luster with the modern acrylics readily available in any art supply store. These are nontoxic, environmentally sound, and exist in the most dazzling array of chemical formats—from the most fluid to the highly viscous. Not only do artists paint with acrylics these days, they can create rich metallic effects, or even 3-dimensional sculptures. Traditional technique based books on acrylics cover traditional methods of painting. However, The New Acrylics is geared toward more nonconventional ways in which to manipulate modern-day acrylics, and demonstrates new applications such as glazing, textured effects, soft sculpture effects, or staining, thus reinventing the old way of handling acrylics, and revealing a fabulous new artistic medium. The underlying theme of this dazzling and sophisticated book is to encourage artists to interpret and handle acrylic paints in a vibrant, fresh, and above all, individualistic style.
HuggingFaceFW/fineweb-edu
default
0.333
The Objective of the ‘Connecting Aboriginal People’ component of the RiverConnect Project is: To provide programs, activities and facilities so that the whole community can understand and better appreciate the important historical and cultural significance this area holds for its traditional owners. There is a strong desire from many people in the community to better understand the rich and diverse legacy of Aboriginal history and culture unique to the region. The RiverConnect strategic plan sets out a range of activities intended to share this knowledge both within the Aboriginal community and with the wider community. Activities include oral history sessions, schools activities, corroboree events, development of an historic trail in the Flats area, and guided tours.

Traditional Song and Dance Workshop
As part of the Activities in the Park and RiverConnect programs, locals, visitors and onlookers were treated to a traditional song and dance event on the foreshore of Victoria Park Lake.

RiverConnect - An Aboriginal Oral History
An oral history of the Yorta Yorta people who could remember their families’ arrival in Mooroopna after the ‘walk-off’ from the Cummeragunja mission was conducted by Lois Peeler, a Yorta Yorta Elder. Lois interviewed 28 Elders who had strong memories of living in makeshift accommodation on the banks of the Goulburn River, and from these recollections Lois published RiverConnect - An Aboriginal Oral History.

In this Section
Aboriginal Action Working Group
The function of the Aboriginal Action Working Group is to advise on programs, activities and facilities that will enable the whole community to understand and better appreciate Aboriginal history and culture in the RiverConnect area.
The Flats
The Flats is a significant cultural area located on the floodplain between Shepparton and Mooroopna.
HuggingFaceFW/fineweb-edu
default
0.333
Manta rays – the diver's dream. Two species of manta ray, both belonging to the family Mobulidae, are often seen and admired by divers: Manta birostris, the giant of the seas, and Manta alfredi, resident to local reefs. These giant fish are entirely harmless to humans, consuming only plankton, in vast amounts every day, to sustain their enormous bodies. They belong to the same group as sharks and other rays: the elasmobranchs. They do not breathe air but use their gills to breathe underwater and, like other species in the group, they cannot stop moving or they will drown, so there is no rest for mantas, ever. Their wings are in fact triangular pectoral fins; they have broad heads with “horns” on either side of their always-open mouths, and their large eyes observe the world around them. They are horizontally flattened, with gill slits positioned on the underbody. They feature a shortened tail in relation to their body size which is lacking in skeletal support. They are covered in mucus to prevent infections. They move by beating their fins up and down, driving water backwards and supplying oxygen to their gills, which also act as a sieve to collect food. Their colouring makes them unique, and individuals can be recognised by comparing their distinctive markings. The two species differ slightly in appearance: the larger Manta birostris has more angular shoulder markings and larger dark spots on its belly, dark outlines on its wings and a dark-coloured mouth. The smaller reef species has more rounded shoulder markings, with pale and white spots underneath. Even though individuals vary in colour, with either dark or white colouring, genetic studies have shown that the different colours do not separate them into different species; their genetic differences, as well as size and behaviour, do. Manta birostris are oceanic mantas which grow up to 9m and can weigh up to 2 tonnes. Manta alfredi, reef mantas, are smaller, growing up to 5.5m in wingspan and weighing up to 1.5 tonnes. The wingspan of mantas is around 2.2 times the length of their body. They are long-living creatures, estimated to live between 50 and 100 years in the wild. Both species visit reefs to be cleaned and to feed, but they live in the pelagic zones of the oceans. The bigger oceanic mantas cover a broader geographic region and migrate between temperate, subtropical and tropical waters. They venture further out to sea, feeding on areas of upwellings and seamounts. The reef mantas prefer to stay closer to home, and are often seen among coral reefs, atolls, bays, tropical islands and upwellings associated with coastlines. It takes a lot of zooplankton to feed such a giant organism, and this seems to be the reason for the larger species to migrate, chasing after food while swimming with their mouths always open and continuously filter feeding, taking advantage of highly productive areas. They are known to eat 13% of their body weight each week. It takes mantas 10-20 years to reach sexual maturity. Female mantas play hard to get, and it can take up to a few weeks for the males to compete with each other to succeed in securing a mate. Their “mating train”, where the males follow the female around the reef, is exhilarating to watch, with the female picking and choosing the perfect mate. She does not make it easy, flying around the reef testing the fitness and agility of the prospective suitor. The fastest and fittest male wins the race and bites onto the female's left wing to secure himself into the mating position.
Successful mating leads to a fertilised egg growing inside the female's uterus and producing a mini-manta (sometimes two), which is born after around 1 year. The “little one” measures 1.5m at birth and is usually born at night in shallow waters. Mantas give birth every 2-5 years on average, are slow to reproduce, have a long gestation period, a small litter size and late sexual maturity - all features that can lead these majestic creatures to extinction when overexploited. In the wild, their main predators include sharks and killer whales, but the biggest threat to these amazing creatures is humans. They become entangled in fishing lines and nets; as they cannot stop moving, they entangle themselves even further, sustaining fatal injuries and drowning. They are also actively hunted for their meat and oil, and on a large scale for Chinese medicine that uses vast amounts of manta gills, leading to diminishing numbers of mantas in the oceans. Both species are classified as “vulnerable” by the IUCN Red List and protected by international regulations; research is ongoing to get to know mantas better and to protect these gentle giants more efficiently. As recently calculated, a live manta is worth much more in revenue from tourism over its lifetime (around $1 million), whereas a dead manta is worth only $40-$500. This should be even more of a reason to protect these amazing creatures, every diver's dream.
Text: Bogna Griffin, BSc Applied Freshwater and Marine Biology, GMIT, Ireland
Photo: Ivana Orlovic Kranjc, Stuart Ireland
HuggingFaceFW/fineweb-edu
default
0.333
Corns on the foot (known in Vietnamese as "mắt cá chân", or "fish-eye" lesions) are a common complaint in middle-aged and older people. Although they are not dangerous, they do affect quality of life.

A corn is a focal hyperkeratotic (thickened-skin) lesion on the sole of the foot. It usually appears where the bones of the foot press against the shoe: the plantar surface of the fifth toe, the side of the foot, the heel, or the ball of the foot. It presents as a round centre of keratin surrounded by a rim of thickened skin, translucent yellow in colour, and painful when pressed. A corn may be flat or raised above the skin surface, with a smooth or scaly surface. Corns are often very painful because they sit at sites of repeated friction and pressure. They are not contagious, but they can become infected. Often there are only one or two lesions.

Corns should be distinguished from two similar conditions. Plantar warts are usually deeper, less painful and drier; they often occur in clusters, show small spiny projections and characteristic black dots on close inspection, and do not necessarily sit at pressure points. Plantar warts can spread to other areas of the body and can be transmitted to other people. Calluses are areas of thickened skin caused by prolonged friction and pressure: a yellowish, slightly raised patch of hard skin, round or oval in shape, painless or only mildly painful, and without a central core.

How are corns treated? Treatment options include strong keratolytic agents (salicylic acid), cryotherapy, minor surgery, and laser ablation. You should see a doctor for an examination and advice on the treatment most suitable for you. Once a corn is detected, early treatment gives better results. You can contact the Dermatology Department of the Binh Duong Reproductive Health Care Centre for support.

How can corns be prevented? Keep the feet clean and dry. Avoid going barefoot. Avoid shoes that are too tight, and avoid high-heeled footwear; sandals allow better ventilation. If shoes must be worn and they rub against the feet, wear socks or use additional padding or insoles.
HuggingFaceFW/fineweb-2
vie_Latn
0.0775
How Politics and Travel Money are Related
26th July 2018

You're two weeks out from your dream holiday to the USA. After months of saving you have calculated that you can finally afford to see your dream show on Broadway (or buy 83 cheeseburgers from In-N-Out... no one is judging). However, your dreams of Broadway and burgers are cut short after a recent drop in exchange rates has left a rather unwelcome dent in your spending money.

Volatility in exchange rates can catch even the most well-researched traveller unawares, especially when the changes can be a result of political activity across the globe. For many people, the happenings of politics are often tuned out as background noise on the TV while we eat our breakfast or wait for our favourite prime time program to start. Whilst it isn't the sexiest of topics, it certainly pays to understand how political decisions can affect you and, in this instance, your travel money. At Travel Money Oz we understand that this can sometimes be pretty boring, and often quite complicated. With this in mind, we have simplified how political decisions lead to exchange rate volatility, and compiled some tips and tricks to help minimise the effect on your travel money the next time you head overseas.

So, how do exchange rates affect travel money? The amount of money you have for your holiday is ultimately determined by two things: 1. How much money you have saved 2. The exchange rate. You can find some tips for saving for your holiday here. That part, though, is largely determined by you. Your savings are then multiplied by the current exchange rate to determine the equivalent in your destination's currency. For example: If you are going to New York you need to convert your Aussie Dollars (AUD) to US Dollars (USD). So, if you have saved $2000 AUD and the exchange rate is 1 AUD = 0.75 USD, you need to multiply your $2000 by 0.75. If this was the case, you would be hitting the Big Apple with $1500 USD.

The foreign exchange market is traded 24/7. This means the exchange rate can change by the second. Each morning, rates in stores are set for the day (so don't stress about losing money by the second), and will determine how much 1 AUD can get you in each currency. As the graph referenced below showed, the daily changes can be quite dramatic and, depending on how much you are exchanging, can result in a significant difference to your spending money.

[Graph: AUD to USD fluctuations from 23 June 2018 to 23 July 2018]

Exchanging $2000 AUD on the 2nd of July 2018 (1 AUD = 0.7133 USD) would have given you $1426.60 USD. The same amount on the 9th of July 2018 (1 AUD = 0.7255 USD) increased your spending money to $1451 USD. A seemingly small difference in the exchange rate gave customers an extra $25 USD in their back pocket. Perfect for a classic I <3 NY t-shirt, or an upgrade to your seat at a Broadway show.

Why do exchange rates move up and down so much? A key reason currencies fluctuate so much is that they are traded using a 'floating exchange rate'. Long story short, it means currency is bought and sold on the international market using a supply and demand system. The more a currency is in demand, the more it will cost an investor to purchase. In return, if a currency is less sought after, the price will drop to try and entice investors. Crazy, hey? Wait though, there is a bit more to it. Investors are there to make money, so they forecast and buy currencies they predict will go up in value.
Signs that a currency will increase in value include a country's economic growth, its economic environment (like consumer behaviour and trading partners), trading data on stock exchanges, and government policy. The cumulative changes (good and bad) to a country's economy can trigger an increase or decrease in the value of its currency, and it is these triggers that investors look out for. Changes are often reflected in economic data, which then impacts exchange rate forecasts and, in turn, demand for the currency.

For example: If the Australian government were to release figures showing unemployment rates had increased in the past 5 years, it may signal to investors that there has been an economic slowdown. The resulting lack of confidence in the strength of the Australian economy may mean investors are less likely to buy AUD. This decreased demand can then lead to a decrease in value, meaning your online shopping cart for that US website will get a whole lot more expensive. Making sense so far? Either way, grab some half time oranges; we are getting to the pointy end.

Where do politics fit into all of this? So, we can see that economic data can affect the perceived value of a currency, and how this value is then reflected in the exchange rate between currencies (and your travel money). Politics (a.k.a. government policy and decisions) feeds into this economic data and can further affect movements in currency. The government can have both a direct and an indirect influence on exchange rates.

1. When a government policy has been established with the intention of achieving one goal, but has indirectly affected the value of a country's exchange rate.

Two of the biggest political events that have indirectly impacted foreign exchange markets in recent times are the presidential election in the USA, and the decision by the UK to exit the European Union (also known as Brexit). These events were considered so unlikely by the general public, and the voting was so close, that they produced very large swings in the foreign exchange markets.

1 AUD purchased around 0.7700 USD in the lead-up to the US election, but as the results started trickling in, that rate increased to 0.7775, then moved down to 0.7575, back up to 0.7700, down to 0.7640, back up to 0.7750 and then back down to 0.7560. In other words, hold on to your toupee because these rate swings will give you whiplash.

[Graph: USD fluctuations as a result of President Trump's election on 8 November 2016]

Brexit was considered by most to be extremely unlikely, but much to everyone's surprise the unthinkable happened. In the lead-up to the vote, 1 AUD bought around 0.5100 GBP. As voting results came through and it became apparent that the 'Leave' vote was going to win, the GBP lost value rapidly, trading as high as 0.5525 later on in the day. In the space of a day, $2000 AUD went from getting Aussies $1020 to $1105. That's an extra $85 to spend on scones and Royal Wedding memorabilia – party time.

[Graph: GBP fluctuations during Brexit]

2. When governments work to indirectly manage the exchange rate due to a vested interest in setting a 'target' value of the currency.

Australia relies heavily on exports for economic growth.
It's not beneficial to have a super strong Aussie dollar, as it will push up the price of exports; countries buying our coal, iron ore and wheat will consequently go elsewhere to find these goods at a cheaper price. The Reserve Bank of Australia (RBA) will work towards a target value for the AUD that supports economic growth through exports. Whilst this is great for the Australian economy, it's not so great for Aussies heading overseas wanting a good rate on their foreign exchange.

Governments can directly manage exchange rates by 'pegging' the rate of their currency to that of another. For example, Argentina has pegged the value of its Peso to the value of the USD. This means the value of the Peso is fixed to the USD and is affected by its rises and falls.

How can you protect your travel money from economic and political effects? As you can see, there are a number of factors at play that can impact the amount of spending money you have on your next holiday. Short of becoming the Prime Minister and making your own political decisions, there isn't a lot you can do besides planning ahead to minimise your exposure to market volatility. As travel money experts, we have a few tips and tricks to make this process easier for you:

1. If you are super budget conscious, consider picking a location where the exchange rate is strong and you can get more bang for your buck.
2. Plan out your holiday budget well in advance with our budget tool in store or online.
3. Once you have booked your holiday, set up currency alerts and let us do the monitoring for you. We will send you an email when the currency you need reaches a certain rate against the AUD.
4. By planning ahead and exchanging your travel money regularly rather than in one lump sum, you can minimise the amount of money that is exposed to potential market volatility at any one time.
5. Protect your money with Rate Guard, which is our way of protecting you against exchange rate movements. Simply purchase with us in store, and if the retail exchange rate improves within 14 days we will pay you the difference.
6. Load your money and lock in your exchange rate with a Travel Money Oz Currency Pass.
7. Save some money and ensure your currency exchange provider doesn't charge fees or commissions (hint hint, nudge nudge, Travel Money Oz doesn't charge any fees or commissions).

So there you have it, a crash course in exchange rates and how economic and political factors can affect the amount of spending money you have for your next holiday. For more information, or to get your holiday cash, speak to one of our experts in store today.
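To make the conversion arithmetic above concrete, here is a minimal Python sketch of the same calculation. The rates and amounts are the ones quoted in the article; the function name is illustrative only and is not part of any real travel-money API.

    def convert(aud_savings: float, rate: float) -> float:
        """Convert AUD savings to a destination currency at a given rate."""
        return aud_savings * rate

    # Example from the article: $2,000 AUD at 1 AUD = 0.75 USD
    print(convert(2000, 0.75))            # 1500.0 USD

    # The July 2018 comparison: a small rate move changes your spending money
    low = convert(2000, 0.7133)           # 1426.60 USD
    high = convert(2000, 0.7255)          # 1451.00 USD
    print(round(high - low, 2))           # 24.4, roughly the extra $25 USD quoted

The same two lines also reproduce the Brexit example: 2000 * 0.5100 versus 2000 * 0.5525 gives the $1020-to-$1105 jump described above.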
mlfoundations/dclm-baseline-1.0
default
0.37
HTML is a powerful markup language for individual Web pages, but it has some serious limitations for maintaining entire Web sites (i.e. a collection of Web pages which needs to be kept consistent). GTML is an HTML pre-processor which adds some extra features specially designed for maintaining multiple Web pages.

How does it work? You place GTML commands among the HTML in your source files. GTML reads and processes the GTML commands, but leaves the rest of the text unchanged (so it will work straight away on your existing Web pages). HTML files generated by GTML are just like any other HTML files. GTML doesn't attempt to interpret your HTML commands in any way, it's fully compatible with all versions of HTML, and doesn't require any specific browser or server.

Is GTML for you? If you write the HTML in your Web pages by hand using a simple text editor, then you'll find GTML useful. If, on the other hand, you use a sophisticated graphical tool to generate your HTML, you probably won't be able to use GTML. There are three reasons for this:
- Your sophisticated tool won't understand the commands, and might even complain violently about them.
- GTML operates in a command-line batch mode, and your sophisticated tool probably operates from a graphical environment.
- The source for GTML is in files ending in .gtm (or .gtml), and it generates the .html files. Your sophisticated tool probably generates the .html files itself.

Here are some of the things you can do with GTML:
- Create a project file with the names of all your Web pages, so you can update them all with one simple click or command.
- Process only files whose sources have changed, either directly or with the help of a makefile.
- Generate a makefile to control the processing of your Web pages, based on their dependencies.
- Give a specific alias to a filename, usable as a constant, so that it is easy to move files and have links preserved.
- Specify a tree-like hierarchy of Web pages, so you can add Next, Previous and Up links automatically to your site.
- Automatically generate a map of your site, with the possibility of customizing the way this table of contents will look.
- Use named constants for HTML fragments to save typing, ensure consistency and make changes easily.
- Use environment variables as named constants.
- Include header, footer and other common files into all your HTML files. This doesn't require Server-Side Includes.
- Include timestamps (in any format you like) to show the time of last processing, or of last modification.
- Use conditional commands to create different versions of the output under different circumstances.
- Generate output to different directories to generate different versions of your site (for example, a Frames version and a non-Frames version).
- Change extensions of output files from .html to whatever you want, so that you may, for instance, use the MultiViews option of the Apache server, or create non-HTML files.
- Guard the special characters '<' and '&' in normal text so that they don't get confused with HTML commands.
- Define your own character translations, so that you may easily input your non-ASCII characters.
- Include shell code in your source, so that you may easily generate pages with computed information.
- Generate pages with all superfluous HTML code removed, so that readers retrieve them faster and may save bandwidth.

GTML features and commands are described on the GTML Reference page.

GTML is written in Perl. If you don't have Perl, it's easy to obtain it on the Internet. There are two methods to download GTML. The first is the GTML Perl script itself: download it and save it to a file called gtml.pl.
If you're running this under UNIX, edit the first line to point to the location of your version of Perl, and give the file execute permission. The second method is the GTML archives, containing the Perl script as well as the documentation. Archives are available in zip or gzipped tar format. The home page of GTML is at

GTML source files end in .gtm (or .gtml), not .html. If you're using GTML on existing HTML files, simply rename them with the ending .gtm (or .gtml). GTML is run from the command line, like this: perl gtml.pl fred.gtm harry.gtm bill.gtm (The UNIX version won't need the Perl at the front, so long as the script is executable.) The output of this command will be in fred.html, harry.html and bill.html.

If you have a GTML project file, you include this on the command line. In this case, it's not necessary to list any of the files in the project as well. Remember that you can use -D on the command line to create named constants. You can have as many options as you like. Make sure they appear before the file names to which they apply. For example, if you say: perl gtml.pl -DNAME=Fred fred.gtm harry.gtm -DTYPE=car bill.gtm then NAME is defined for all three files and TYPE is defined for bill.gtm only.

GTML will also try to process a project file. It will look for configuration files in a fixed order; those files, if they exist, are parsed before the command line is processed. You may have a look at the source of the documentation pages in the source directory, including the project file used to build them.

Other HTML pre-processors: here is a list of other HTML pre-processors that I know of, in case GTML will not satisfy your needs:

GTML is distributed under the GNU General Public License. Copyright © 1996-1999, Copyright © 1999. This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

For a long time I was looking for a way of maintaining some of the sites that I set up for the research team of which I am a member. I wanted a tool which would enable me to easily change the look and feel of all pages of a site, and easily move pages of a site from one location to another. I then used all my favourite search engines on the web, and found some pages describing such tools. I tried some. They all missed something, from ease of use to important features. When I tried GTML, from Gihan, I found it was pretty cool, but lacking some important features that I needed. I wrote to Gihan asking him to add those features, since I had no Perl programming skill. He told me that he didn't have time to do that in the near future. So I decided to read his code, and to learn Perl from it.

The script was pretty well written, and I learnt Perl (or at least enough of it to understand how GTML worked) very fast. It was then easy to add the features that I needed. I then asked Gihan if he would mind if I distributed GTML under the GNU General Public License, since his license policy was not as open as the GPL, and he accepted. Then I just updated some of the docs, prepared an archive in the GNU spirit, and that was it.

My biggest question was to understand where the name of the tool came from, and after some reflection I got two possible answers: 'G' is the letter just before 'H', and GTML source production comes just before HTML file production.
'G' is the first letter of Gihan's first name. Well, this is not a question anymore, Gihan told me the truth. Guess what? I found it: the first of my two previous hypotheses is the right one. (Well, I hope that as time goes by it will be interpreted as standing for GNU.)

After I distributed it on my web pages, and announced it in just one place, I got some feedback from users coming from all around the world. I added some features which were asked of me, but realized that the source of the script needed some reorganization, and that there were some bugs in it. I have done this source reorganization, and so have been able to fix bugs and add a lot of fancy features. So now I'm waiting for users' feedback, in order to verify that I did not add bugs :-), and that GTML now does everything it should. In one month or so I hope to be able to say that it does. So I really need your help for that, please give me some feedback! I will not add any features before the next stable release.

I hope my version of GTML will help you as it helps me. --Bruno, 31 August 1999
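As a small illustration of the batch workflow described in the usage section above, here is a Python sketch that shells out to gtml.pl for every .gtm file in a directory. Only the documented command line is assumed (perl gtml.pl, with optional -DNAME=value constants placed before the file names they apply to); the directory layout and the constant names NAME and SITE are made up for the example, not anything GTML itself defines.

    import subprocess
    from pathlib import Path

    # Named constants passed with -D, as described above (illustrative values).
    defines = {"NAME": "Fred", "SITE": "example"}
    flags = [f"-D{key}={value}" for key, value in defines.items()]

    for source in sorted(Path(".").glob("*.gtm")):
        # -D options must appear before the file names to which they apply.
        subprocess.run(["perl", "gtml.pl", *flags, str(source)], check=True)
        print(f"processed {source} -> {source.with_suffix('.html')}")

In practice a GTML project file would replace the loop entirely, since listing the project on the command line processes every page it names.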
HuggingFaceFW/fineweb-edu
default
0.333
'Age Tax' backed by Rep. Walters attacks health care of older residents

[Photo: Congresswoman Mimi Walters]

Congresswoman Mimi Walters voted for a provision the AARP dubbed an "age tax," which allows insurers to charge older constituents more for health insurance.

Congresswoman Mimi Walters (R-Irvine) cheerfully supported a Republican bill to make health insurance significantly more expensive for older residents, an effort the AARP calls an "age tax." While the House bill failed in the Senate, Republicans are once again seeking to pass a major health care repeal bill, which means Walters may vote again to make insurance more expensive for her constituents.

In describing the "age tax," the AARP says the Walters-backed bill would allow "insurance companies to charge people between the ages of 50 and 64 (those too young for Medicare) five times what they can charge younger consumers." Under current law, insurance companies are limited to charging older customers three times as much. Along with other provisions, the AARP estimated premiums for people in this age bracket could increase by up to $8,400 per year. "It's an outrage that anyone in the U.S. Congress could expect people over age 50 to pay thousands more for health coverage," the AARP said before Congress voted on the bill, describing it as "even worse than we expected." In a letter to Congress, the AARP warned, "In addition to these skyrocketing premiums, out-of-pocket costs could significantly increase under the bill." But Walters ignored the AARP (and virtually every health care-related association) and voted exactly the way Trump asked Republicans to. After voting to make health care dramatically more expensive for her constituents, Walters beamed in a celebratory selfie next to Trump at a White House ceremony.

Republicans managed to tuck some health care provisions into the unpopular tax bill, and those changes are already having a negative impact for Californians. Covered California is seeking to increase premiums by 11 percent next year, and more than half of the increase is because of policy changes championed by Trump and Republicans. But the ultimate GOP goal of repealing the Affordable Care Act is still unfulfilled. So a group of conservatives and Republicans are plotting to reintroduce another repeal bill in the near future. The final text of the bill is not yet available, so there is no way to tell if this bill would also impose an "age tax" or weaken protections for people with pre-existing conditions. Walters voted for these provisions last time around, and may get a chance to vote again to make health care more expensive for older residents.

Shortly after winning her second term in Congress, Walters promised a "better way" on health care. But for her constituents aged 50-64, the bill Walters supported could be more aptly described as a "more expensive way."
mlfoundations/dclm-baseline-1.0
default
0.37
Stories of the Amautalik This unit is geared towards grades 4, 5, and 6 primary students. It consists of pre-reading, reading, and post-reading activities that focus on the student’s comprehension of the two stories. Students will learn the elements of a folktale, create character webs, learn the five W’s of story comprehension, and create an event timeline. In addition, students will explore three of the major themes including shamanism, survival, and bullying.
HuggingFaceFW/fineweb-edu
default
0.333
GERST 4375. Course information provided by the Courses of Study 2017-2018. In recent decades, "Holocaust Studies" has witnessed an extraordinary expansion, covering different fields of scholarship, from history to literature, from philosophy to aesthetics. This seminar will retrace the major steps of Holocaust history writing. It will analyze the classical debates between "intentionalism" and "functionalism," the discrepancies between the analytical approaches focused on the perpetrators and those focused on the victims, the inscription of the Holocaust into the broader context of war violence, and its comparison with the genocidal violence of colonialism. Finally, it will investigate some methodological problems concerning the place of testimony in history writing and the permanent connections, both fruitful and problematic, between history and memory. This means taking into account the entanglement of the most productive areas of Holocaust scholarship (Germany, France and the United States) as well as the relationship between the historiography of the Holocaust and other disciplines (memory studies, postcolonial studies, etc.). When Offered: Spring. Distribution Category: (HA-AS).
mlfoundations/dclm-baseline-1.0
default
0.37
When a consumer takes out a loan, the lender will often require some form of security to ensure repayment of the loan. Collateral is the term for the property the debtor offers as security. A lien is a legal claim placed on property that gives one party the right to take the property as payment for debt or services. There are many types of liens; those with specific lien questions should seek professional advice. Many lenders demand security in case a borrower fails to repay a loan. Practically any item can be used as collateral, depending on the lender's requirements, but usually collateral refers to large-value items, such as real estate or vehicles. A loan given based on collateral is known as a secured loan, while non-collateral loans are unsecured. In the event that the borrower fails to repay the loan, the lender has the right to take the property and liquidate it in order to pay off the debt. Because the lender must typically accomplish the liquidation himself, the law values the collateral based not on its actual market value but rather on its liquidation value. While liens and collateral often go together, they are not the same thing. A lien is a legal instrument that grants one party the right to property until that party is paid or otherwise reimbursed for something owed. A lien is a security interest, but unlike collateral, a lien may be imposed by law after the loan or other transaction is made. A lien doesn't grant actual ownership of the liened property, but it does give the lien holder the right to seize the property if the owner doesn't meet his obligations. Common Lien Types The law recognizes more than 30 types of liens. Some of the more common include judgment liens, in which a plaintiff who wins a court judgment may place a lien on the defendant's property in order to receive payment; mechanic's and accountant's liens, in which parties who render services to others can place liens on the serviced party's property until they receive payment; and landlord's liens, in which a landlord who has the right to back-owed rent can seize the renter's property until he receives payment. Mortgages as Liens In a mortgage, a party buying property borrows money and gives the lender a security interest in the property in the event that he fails to make payments on the loan. U.S. jurisdictions are split on whether a mortgage acts as a lien. Most jurisdictions consider a mortgage to act as a lien, a simple security interest in the event of non-repayment. However, a few jurisdictions do find that a mortgage actually gives the lender ownership of the property until repayment.
HuggingFaceFW/fineweb-edu
default
0.333
Space travel may be bad for your brain – here's why

[Photo: I really hope this is the right flag. NASA/flickr, CC BY]

There is bad news for those planning to go to Mars in the near future: a study in mice has suggested that radiation in space could cause cognitive decline in astronauts. However, we know from past research that mental, social and physical exercise can boost cognitive functions. With planned Mars missions moving ever closer, it might be worth exploring activity as a way to counter radiation damage.

There are many hurdles to overcome to get to Mars. The obvious one, of course, is the amount of time it takes – about eight months. But for those brave enough to attempt such a journey, this may well be acceptable. What could be harder to accept, however, are the harmful galactic cosmic rays you'd be subjected to, produced by supernovae far away from Earth. This is a form of radiation that we already know damages the body and increases the risk of cancer.

Mouse maze

Worse still, a new report suggests that this type of radiation also damages the brain. In this report, scientists exposed mice to charged particles at the NASA Space Radiation Laboratory. Six weeks later, they tested the memory abilities of these animals. Unfortunately for those eager to go to Mars, the news was not good.

The scientists used two tests of memory. The first is perhaps the simplest test available for mice: novel object recognition. Mice spontaneously explore new objects placed in their environment, but eventually get used to the objects and spend less time near them. The task exploited this tendency by first presenting the animals with two identical objects, such as small statues. After the mice had spent some time exploring these, the researchers replaced one of the statues with a different object, for example a salt shaker. If the mice remembered exploring the statue, they would show a spontaneous preference for the salt shaker. But after receiving a dose of radiation, the mice showed significantly less preference for a new object compared with mice that had not been irradiated. Indeed, depending on the type of charged particles that the mice were exposed to, in some instances they had next to no preference for the new object. This indicates that they did not remember the object they explored first.

[Photo: Can I get directions, please? Duncan Hull/flickr, CC BY]

The researchers also did a second test, placing one object in one location and a different object in another, within the same environment. The experimenter then moved one of the objects to a new location, while the other remained in the same place. Ordinarily, the mice would have spent more time exploring the displaced object compared to the object that had not moved, indicating that they had learnt the location of each of the two objects. Again, following radiation, the mice did poorly. For several of the doses of charged particles, the animals showed essentially no preference for the displaced object. In short, they either failed to learn the association between the object and the location in which it was initially placed, or they were unable to remember it.

Staying fit

So what was the underlying reason for these results? The scientists conducting this work next looked in detail at the neurons – the brain's functional units – of the mice. In specific regions of the brain, the dendrites – the parts of the neurons receiving inputs from other neurons – were less branched.
In addition, the specific portions of the neurons where communication takes place, the dendritic spines, were also reduced following exposure to radiation. Mice with the largest loss of spines had the worst memory.

[Photo: Neuron in the brain. Mike Seyfang/flickr, CC BY]

Of course we already knew that radiation is bad for you and that cell death is related to memory problems. While the research may have implications for astronauts, it is most likely not as simple as humans getting their neurons fried on the way to Mars and inevitably ending up demented. We now know that the brain is plastic and that the relation between brain damage and memory loss is complex. So while the loss of connections between neurons likely contributes to the cognitive deficits in dementia, the rate at which this occurs may depend on what you do with your brain, how active you are and what experiences you seek.

Current dementia research, for example, indicates that activity is protective. It seems that any activity – be it mental, physical, or social – to an extent protects the brain from succumbing to the worst ravages of dementia. In particular, learning and meeting new people are effective. Ultimately, memory is based on synaptic plasticity, which is the ability of synapses to strengthen or weaken over time. By introducing extra activity which encourages such plasticity, it may be possible to counter radiation damage to the synapses.

What would be really useful is research that looks at the consequences of radiation in mice that are staying active and training their memory. If activity does indeed counteract the effects of radiation, we could perhaps safely travel to Mars anyway – as long as we play chess and go on Skype dates on the way there.
mlfoundations/dclm-baseline-1.0
default
0.37
The centre-back has played just 10 minutes of football since November, but the Stamford Bridge manager has admitted his side has lacked character in recent weeks.

Chelsea manager Rafa Benitez has hinted he may risk club captain John Terry against Arsenal on Sunday. The centre-back was an unused substitute in the 2-2 draw against Southampton at Stamford Bridge in midweek, and made his first appearance since suffering injury against Liverpool in November in a 10-minute cameo against Stoke on January 12. Benitez indicated after Wednesday's disappointing draw that he would not take any risks with team selection, but the defender has been training and may have done enough to convince the former Liverpool boss he is worth gambling on.

"I said I wouldn't take a risk but you have to see your players in every training session and afterwards decide," Benitez told reporters. "On Friday we had a light session because of the snow, and he was training with the team, doing everything. He had his specific programme and was doing well. "If I ask him, or anyone, they would say they want to play. I prefer not to ask and decide at the end. "It's difficult to say what a leader is. You can say that someone might be but the players decide who the leaders are. It's something that happens. "We have some good players. Some of them show more character. Others show different qualities. "We have a group of players with quality and sometimes we miss these things. Experience doesn't mean you are a leader. It means you have experience."
mlfoundations/dclm-baseline-1.0
default
0.37
On September 29, 2020, the United States Department of Education released additional guidance, in the form of a question and answer sheet, regarding the responsibilities of school districts in maintaining student privacy when releasing information related to cases of COVID-19. Specifically, the Department responded to four of the most common questions posed by schools regarding their reporting obligations and their duties under the Family Educational Rights and Privacy Act (FERPA). Those questions and answers are summarized as follows.

1. May a school disclose the number of students who have COVID-19 to parents and students in the school community without prior written consent?

The disclosure of student information in a non-identifiable form does not require prior written consent from a student or the student's parent under FERPA. A school district may safely disclose the number of students diagnosed with positive cases of COVID-19, so long as the school does so in a way that does not allow for any individual student to be identified. For example, a school district which reports that a certain number of students tested positive must ensure that a sufficient number of students are absent for non-COVID-19-related reasons so that the students testing positive cannot be easily identified due to their (presumed) absences. School districts must weigh the benefits of releasing information related to positive tests and ensure that, in doing so, they do not provide enough information to allow anyone to identify the afflicted students.

2. May a school identify a particular student who has COVID-19 to parents and students in the school community without prior written consent?

A school district may only do this in the event that disclosure of the student's identity is absolutely necessary to protect the health of another student or students. The example used by the Department of Education is where a student-athlete testing positive for COVID-19 was in close contact with other students who had higher health risks and disclosing the identity of the infected student was necessary for the others to take protective measures. In these limited circumstances, it may be appropriate for the school district to disclose identifiable student information. School districts are cautioned that this step should only be taken in consultation with health officials and after discussion with legal counsel.

3. May a school disclose the number of students who have COVID-19 in order to provide general health data to the public (including the media) without prior written consent?

Yes. Similar to the advice offered in the response to Question Number 1 above, the school district may release this information to the public so long as doing so does not allow anyone to ascertain which students were infected. As stated in the new guidance, similar to sharing information with the school community, if a school discloses information about students in a non-identifiable form, then consent is not needed under FERPA.

4. May a school identify a particular teacher or other school official as having COVID-19?

Although FERPA would not apply in this situation, as this question deals with the release of non-student information, school districts are cautioned against disclosing health-related information which could result in the personal identification of a teacher or other school employee.
Privacy laws such as HIPAA would still apply in this case, and it is inadvisable to release such specific information prior to consulting with both public health officials and legal counsel. School districts are encouraged to work closely with outside entities such as a local or state health department when determining how and when to release information. In addition, our public sector team can provide further guidance to school districts to ensure that they are disseminating information appropriately and without violating any applicable privacy laws.
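The "sufficient number of absences" reasoning in the first answer is essentially a small-cell anonymity check. The Python sketch below is purely illustrative: the threshold, the function name, and the numbers are assumptions for the example, not anything prescribed by FERPA or the Department of Education, but it shows the shape of the test a district might apply before publishing a count.

    def safe_to_disclose(covid_absences: int, other_absences: int, min_crowd: int = 10) -> bool:
        """Report a COVID-19 case count only when enough students are absent
        for unrelated reasons that no individual can be singled out.
        The min_crowd threshold is an assumed policy choice, not a legal rule."""
        return covid_absences > 0 and other_absences >= min_crowd

    # Hypothetical example: 3 positive students, 25 absent for other reasons
    print(safe_to_disclose(3, 25))   # True  -> the count can be published
    print(safe_to_disclose(3, 1))    # False -> absences would identify the students

Any real policy would be set with health officials and legal counsel, as the guidance itself stresses.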
mlfoundations/dclm-baseline-1.0
default
0.37
Jerry Hastings sword-devel@crosswire.org Fri, 02 Feb 2001 14:23:44 -0700 I agree. There is no need to prove that you have perfect security when no one does. And because no other software is perfect there are probably already ways that those that want a copy of a text can get a copy for free. The Sword will not change either of those things. Security is not preventing copyrighted Sword modules, money is. At 06:17 AM 2/2/2001 +1000, Paul Gear wrote: >That's right. We can't protect against fraud at the application level. >None of the commercial software houses can, either. We shouldn't waste >time trying to do so. People should be expected to provide correct >information when they unlock, especially in a Christian program. Maybe >if we were Stephen King trying to hock our novel on the Internet things >might be different, but remember that we're talking about _Bible_ >software here. :-) >The difference between software and books is important, though. If you >break software, you have access to a perfect digital copy of the text. >If you photocopy a book, you have access to a much poorer copy. Even >with OCR software, this is still the case, although it does change the >game a little.
mlfoundations/dclm-baseline-1.0
default
0.37
Have dinner as a family and enjoy the benefits of your child's healthier adolescent development. Teens in families that eat together are less likely to smoke, drink, use drugs, become depressed, develop an eating disorder or consider suicide; they are more likely to delay having sex and to do well in school. The National Center on Addiction and Substance Abuse at Columbia University (CASA) reported that compared to teens who eat dinner frequently with their families (five or more family dinners per week), those who have infrequent family dinners (fewer than three per week) are:
- three and a half times likelier to have abused prescription drugs.
- three and a half times likelier to have used an illegal drug other than marijuana or prescription drugs.
- three times likelier to have used marijuana.
- more than two and a half times likelier to have used tobacco.
- one and a half times likelier to have used alcohol.
HuggingFaceFW/fineweb-edu
default
0.333
Lyson Archival Inks and other kinds of ink for fine art giclée prints

Horror stories of fading colors have made it difficult for reputable companies even when their inks offer longevity. One review indicated that Epson prints, if put outside in the sun, fade within a day. This is something Epson ads in popular magazines do not tell you about.

[Image: Lyson Lysonic inks for archival fine art prints]

Printer companies claim the warranty is voided if you use other inks, yet this claim is illegal in the United States: several Federal statutes make it illegal to require that a buyer purchase a particular product in order to maintain a warranty. A variety of experienced companies make inks that last for decades. Unfortunately, few independent ink-testing centers have developed means for testing longevity that people take seriously. For example, people who have actually produced ink for centuries (such as VanSon and other European companies) get a smile on their face when you mention the claims of ink longevity that are popular in trade magazines. If an ink manufacturer pays someone to test their inks, this "test" is not considered independent. Second, none of these companies nor testing places have, so far, actually guaranteed their inks with a written contract (3M being the notable exception; when 3M says their inks last x years outdoors, you get an actual guarantee).

But why worry? How do you know the paper itself will last a hundred years? As long as the original image is safe, the picture can easily be reprinted. What is important is that the image last the lifetime of a normal viewer, say 50 to 75 years. True archival inks will become available in the meantime; after all, there are billions of dollars available for whomever develops such an ink. In the meantime, Lyson, Ilford, VanSon, and American Ink Jet offer inks you can use for your exhibits, fine art, and giclée prints. Many of these inks will outlast a conventional color photograph.

FLAAR tested Epson inks on desktop printers about five years ago (they failed all the tests). We then tested Encad GA inks and were pleasantly surprised when the images still held their color after four years. A bit different than the day the prints rolled off the printer, but still acceptable for the museum where these prints still hang. This January we initiated tests of Encad GO inks (which are rated to outlast GA inks). Ilford is preparing to send us a variety of inks. Whenever we receive other inks, such as Lyson, etc., we will be glad to test them as well. We tested the new UV pigmented inks for the Hewlett-Packard DesignJet 5000 and were surprised at the wide color gamut, definitely better than any previous HP pigmented inks.

If you need to find a place to buy inks for fine art giclée printing, you should select a place where the people are specialists in fine art giclée printers, media, and inks.

Last Updated Jan. 15, 2003. Previously updated June 9, 2001 (EM), May 17, 2000.
mlfoundations/dclm-baseline-1.0
default
0.37
Artificial Neurons Can Now Be Used To Replace Human Brain Cells
Jun 30, 2015 23:33 EDT

Scientists have built the world's first artificial neuron capable of performing the same functions as an organic brain cell. The neuron is able to translate chemical signals into electrical impulses and also has the ability to communicate with other human cells. Right now they are the size of a fingertip and have no 'living' parts in them, but the team of researchers is working on shrinking them to a size which will allow them to be implanted in humans. This would be a very effective replacement for damaged nerve cells and will also help treat neurological disorders or injuries.

"Our artificial neuron is made of conductive polymers and it functions like a human neuron," lead researcher Agneta Richter-Dahlfors from the Karolinska Institute in Sweden said in a press release.

This will prove to be a breakthrough in the field of neuroscience. Up until now, scientists were only able to stimulate brain cells by electrical impulses, which is how information is transmitted within the cells, but in the human body cells are stimulated by chemical signals, and that is how they transmit information to other neurons. By joining electronic ion pumps to enzyme-based biosensors, the team was able to create an artificial neuron that copies the human cell's stimulation by chemical signals. The team showed that the newly created neurons can communicate with other organic brain cells over large distances using this function.

"The sensing component of the artificial neuron senses a change in chemical signals in one dish, and translates this into an electrical signal," said Richter-Dahlfors. "This electrical signal is next translated into the release of the neurotransmitter acetylcholine in a second dish, whose effect on living human cells can be monitored."

What this means is that these artificial neurons can, in theory, be integrated into complex systems such as the human body, which would allow scientists to replace damaged nerve cells, such as those in paralyzed patients, or to heal other injuries such as brain damage.

"Next, we would like to miniaturize this device to enable implantation into the human body," said Richter-Dahlfors. "We foresee that in the future, by adding the concept of wireless communication, the biosensor could be placed in one part of the body, and trigger release of neurotransmitters at distant locations."

"Using such auto-regulated sensing and delivery, or possibly a remote control, new and exciting opportunities for future research and treatment of neurological disorders can be envisaged," she added.

We look forward to this research being carried out at full scale. We need more of this stuff in the world, as it could help thousands of people who are lying helpless right now because of paralysis or a neurological disorder. I hope this project brings better results in the near future.
mlfoundations/dclm-baseline-1.0
default
0.37
Saturday, January 7, 2012
I like the show "Parenthood," but one thing about it creeps me out

One of the few network TV shows that I regularly watch is Parenthood on NBC. I'm not exactly sure why or how this happened, but it's in my DVR every week and I like it. There is one tiny detail from this season that is REALLY irritating me though: the Braverman brother and sister (Max and Haddie) have exactly the same haircut. It's weird and disturbing, and it bothers me that the show's producers aren't doing anything about it. These pictures don't quite do it justice because they both have longer hair on the show now, but trust me, it's identical. I'm pretty sure no high school girl would ever go out in public knowing she had the same haircut as her little brother.

Friday, January 6, 2012
The Bruins are officially out of hand

I think I need a team of statisticians working for me in order to figure out just how good the Bruins really are right now. For example, they have won their last two games by a combined score of 15-1; and their last two home games by a total of 17-0. When was the last time an NHL team did that? Boston has played 37 games this season, and has scored exactly twice as many goals as they have given up (138-69). Has any team previously ever been able to do that this late in the season? And of their 26 wins, 15 of them are by 3 goals or more. Is that a record? I have no idea how to put these numbers in any sort of historical context, but I'm guessing they all rank near the best of all time. The Bruins also continue to dominate the league in a number of categories that I mentioned last week, most notably both total goals scored AND fewest goals allowed. So if anybody out there reading this has access to some sort of all knowing NHL statistical database, please let me know how this Boston team currently stacks up against the all time greats.

Thursday, January 5, 2012
Boston smoked New Jersey last night

Boston put a bit of a beat down on New Jersey last night. The game was fairly close early on, but about 2/3 of the way through Boston pulled away and won easily. Despite the fact that New Jersey was missing a few key players due to injury, this game was about a proven playoff tested team grinding out a win vs an inferior opponent. The final score was 6-1; or was it 89-70? Wait, what sport am I talking about? I didn't see a second of the Bruins win over the Devils last night because I was at the Celtics/Nets game. The Celtics did what they often do, which is play to the level of their opponent for most of the game. In this case it was a Nets team that fielded a starting five indistinguishable from their bench. It's also crazy how much the Garden crowd loves Greg Stiemsma already. With Football season coming to a close, I'm thinking of creating some sort of "Stiemsma-meter" to replace the Tebometer.

Wednesday, January 4, 2012
I watched the CRAZIEST thing I have ever seen on "60 Minutes" the other night

First, I want to say that I have always disliked 60 Minutes, for one simple reason: Going back to when I was a kid in school, that stupid ticking clock was the last thing I wanted to hear on a Sunday night. It's the perfect metaphor for the weekend ending, and the dreaded Monday morning fast approaching. My entire life, nothing has ever made me change the channel faster than hearing that stopwatch. I can't believe the show's creators never thought of this a million years ago in 1968 when it first went on the air.
So the only reason I saw this feature in the first place is because a friend of mine texted me on Sunday night "Put on CBS now!" I had recently been talking to her about my Citibank rock climbing commercial blog, and how what the girl does in the ad looks totally unsafe to me. Now watch this: (I know it's long and goes completely against my theme of brevity, but I PROMISE it's worth it. As I watched it live my jaw continued to drop further and further.) After seeing this, my conclusion is that this kid Alex Honnold has got to be completely insane. I am more shocked than I am impressed by it. I can comprehend how a person can physically do what he does, but not mentally. Which is why "raving lunatic" is the only logical explanation.

UPDATE 1/9: So it has just come to my attention that Alex Honnold is the guy in the Citibank commercial. Weird.

Tuesday, January 3, 2012
Awesome Old Song of the Week: "She Drives Me Crazy" by Fine Young Cannibals

This song made it big in 1989, but to tell you the truth I honestly can't remember if I liked it then or not. It's one of those songs that has sort of lived on and lasted as a kind of cheesy retro pop radio hit. A little known fact about the Fine Young Cannibals: they got their name from a 1960 film entitled All the Fine Young Cannibals, starring Robert Wagner and Natalie Wood. Those two were married by the way; twice actually. It's pretty weird to think of Maria from West Side Story being married to Number 2 from Austin Powers.

Monday, January 2, 2012
Tebometer Monday: The end of the road.

When I first created it 9 weeks ago, the number stood at 48.1%. At this point I think there is a definite possibility that Tebow will never get to 50%. The improbable run of fluky victories has already allowed him to keep his job much longer than he should have. If other teams have 2nd string QB's capable of putting up numbers like the Packers' Matt Flynn did, it makes no sense for someone as inaccurate as Tebow to be given the opportunity to keep starting. Unless something astounding happens vs the Steelers next week, it's looking more and more like we've already seen the best of what Tebow has to offer as an NFL quarterback.

Sunday, January 1, 2012
Happy New Year

My favorite New Years Day ever was 5 years ago, 1/1/07, back in my NYC days. The entire staff of Buddakan hung out at closed Union Bar all day. I know this is meaningless to most people, but if you were there that day you know what I'm talking about. Ever since then I've thought that New Years Day is way better than New Years Eve.
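As a purely illustrative aside on the Bruins post above, which wishes for a statistician, a few lines of Python show how the quoted aggregates could be computed from a season's game log. The scores below are placeholders, not the Bruins' actual 37-game record.

    # Hypothetical (goals_for, goals_against) results standing in for a game log.
    games = [(6, 1), (9, 0), (3, 2), (1, 4), (5, 2)]

    goals_for = sum(gf for gf, ga in games)
    goals_against = sum(ga for gf, ga in games)
    blowout_wins = sum(1 for gf, ga in games if gf - ga >= 3)

    print(goals_for, goals_against)      # season totals, e.g. the 138-69 split
    print(goals_for / goals_against)     # the "twice as many goals" ratio
    print(blowout_wins)                  # wins by 3 goals or more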
mlfoundations/dclm-baseline-1.0
default
0.37
Our services
Audit and tax consulting
Preparation for tax inspection

Objectives of the preparation. The tax law provides for the possibility of releasing a taxpayer from liability for making changes in tax declarations, provided that such changes were made before the date when the taxpayer became aware of the on-site tax inspection assignment. Usually, large companies become aware of a forthcoming tax inspection before the date of the written order on such inspection. Therefore, there is enough time to prepare thoroughly for the tax inspector's visit.

Thorough preparation for tax inspection results in:
• Significant reduction of tax compliance risks.
• Making of necessary amendments to primary accounting documents and accounting/financial statements.
• Minimization of potential financial losses, such as penalties and financial sanctions (fines).
• Preparation of arguments against any potential claims from tax inspectors.

In the course of preparation for tax inspection, we can check the client's tax liabilities as a whole or any particular tax liability, for example, liabilities related to income tax or value added tax.

Scope of work:
• Analysis of existing tax payments.
• Check of the correctness of tax base determination, tax and duty calculation, and application of tax rates and tax privileges.
• Check of the correctness of the tax declarations prepared.
• Identification of additional measures (including partial reconstruction of financial and tax accounts) to be taken for the purpose of reducing potential financial sanctions.
• Identification of existing tax risks or any errors, including critical errors, associated with the calculation or payment of taxes.
• Elaboration and legal reasoning of arguments pertaining to existing tax risks.

In addition to the scope of work above, we can provide the following services:
• Support in the course of on-site tax inspections.
• Preparation and presentation of claims against tax inspection outcomes.
• Appeal of tax authority decisions in a higher tax authority.
• Appeal of non-regulatory acts of tax authorities in the court of arbitration.

Work product: Written report containing the following information:
Identified breaches and deviations, and recommendations on their elimination.
Identified risks, overpayments and other problems related to calculation of taxes and duties.
Comprehensive plan of actions proposed to the taxpayer before assignment of an on-site tax inspection.
mlfoundations/dclm-baseline-1.0
default
0.37
In 2009, President Obama pledged to reduce America’s greenhouse gas emissions by 17 percent from 2005 levels by 2020. Thanks to several factors, the country is halfway there. On Monday, Mr. Obama announced the appointment of two seasoned officials who could fulfill that pledge — but only if the president himself helps them navigate the formidable political obstacles ahead. Mr. Obama nominated Gina McCarthy, an experienced clean air regulator, to run the Environmental Protection Agency, and Ernest Moniz, an M.I.T. physicist and strong advocate of natural gas and nuclear power, to run the Energy Department. Both believe global warming is one of humanity’s most pressing challenges. Both have deep experience — Ms. McCarthy as an assistant administrator at the E.P.A. and an adviser to Republican governors in Connecticut and Massachusetts, Mr. Moniz as an under secretary of energy in the Clinton administration. Both will be required to use their regulatory authority creatively and aggressively. There is zero chance that Congress will enact the “bipartisan, market-based solution to climate change” that Mr. Obama called for in his State of the Union address. This means that his second-term agenda on climate change will run through Ms. McCarthy’s and Mr. Moniz’s agencies, and will depend almost entirely on executive actions that do not require Congressional approval. Here are three strategies that could make a big dent in carbon emissions.

Invoke the E.P.A.’s authority under the Clean Air Act to limit pollution from stationary sources, chiefly fossil-fuel power plants that account for almost 40 percent of the country’s carbon emissions. The agency has already proposed strict standards requiring new power plants to capture their emissions, an untested technology. The bigger problem is what to do with existing plants, which provide a big chunk of the nation’s electricity and which cannot be shut down quickly or by fiat. Devising a gradual phaseout will require ingenuity and persistence in the face of what are sure to be strong legal and political challenges from industry.

Make natural gas safer. Thanks to hydraulic fracturing, the country is now awash in natural gas. One major reason for the unexpected decline in national carbon emissions is that many power plants have switched from coal to natural gas, which emits only half as much carbon dioxide. But there is a downside: drilling for and transporting natural gas can produce methane leaks, and methane is a potent greenhouse gas that can cancel out whatever carbon advantage gas has over coal. Much tougher restrictions must be imposed throughout the system, including on thousands of miles of pipelines.

Improve energy efficiency across the board. One of the success stories of the last 30 years has been the increase in energy efficiency in appliances, new commercial buildings, and cars and light trucks. But there is plenty of room for improvement. The task of designing ever-stricter standards will fall largely to Mr. Moniz.

There is obviously more: finding new refrigerants to replace climate-warming hydrofluorocarbons, investing not only in familiar renewable energy sources like wind and solar power but also in basic research, next-generation nuclear plants and experimental technologies that could smooth the path to a low-carbon economy. Little of this will happen without a good deal of push-back from industry and its Congressional allies. From start to finish line, Ms. McCarthy and Mr. Moniz will need the president at their back.
HuggingFaceFW/fineweb-edu
default
0.333
An Interactive Storytelling Tool for Primary Students that Inspires Open-ended Creativity

This app has a lot of potential for creativity and creation. I would recommend it for preschool through 3rd grade, but it is really geared toward young children. Because it is a paid app, it may not be something to purchase for an entire school, but it is a great app for creation in the primary grades.

How I Use It
This is a great platform for young learners to use with digital storytelling. Students can record a screencast while they animate stickers, record their voice, and draw and color in this app. The result can also be saved to the camera roll, which allows for the possibility of app smashing (combining two or more programs into one project).

I used this app to create commercials for word chunks. For example, the letter 'y' can make different sounds in a word: it sounds like 'e' in the word "money," and it sounds like 'i' in the word "sky." I made a commercial called "The Sneaky Spy Detective Y." I animated the commercial with a sticker that was "Detective Y," giving examples of different sounds that 'y' makes in different words. I will have the students create their own commercials like this for word chunks such as digraphs, r-controlled vowels, and irregular plurals.

This app allows you to add your own picture as the background, so it would also be a good program for creating short directions at a math station or literacy station, or for giving instructions for classroom procedures. For example, if students take a picture of the drinking fountain, they can animate and record the procedure for getting a drink in the classroom. If the recording is uploaded to Google Drive or Dropbox, it can be turned into a QR code and posted in the room for reference.
HuggingFaceFW/fineweb-edu
default
0.333
Thursday, May 22, 2008
Christians Where You Wouldn't Expect Them

It's all good I guess. His music still sounds great.

Jason said...
A couple of books that might be interesting in light of your discovery of Cherone's faith: Hungry for Heaven by Steve Turner and The Rock & Roll Rebellion by Mark Joseph. Turner is a Christian, and I don't recall if Joseph is or not, but they both discuss rock music and religious faith from interesting perspectives while sharing many artists' stories. Turner argues that much (if not all) of rock music can be seen as a yearning for some connection to a spiritual something or other, while Joseph holds that Christians turned their backs on the rock market - to the detriment of their own careers as well as of rock music as a whole - and essentially formed the CCM ghetto because of a silly sacred/secular polarizing mentality. (That sentence was tedious - sorry)

You'd likely be surprised by other artists who are Christian... or were... depending on your perspective. My house wasn't too keen on the "secular" music either, but my parents were oftentimes lax, allowing me to amass a decent CD collection. But alas, I would inevitably be convicted during a youth group sermon and decide to destroy all of my ungodly music. Being a pretty big music fan in general though, I couldn't go too long without good tunes. I'll bet there are close to 75 CDs that I've purchased a minimum of three times due to the youthful oscillations of piety and worldliness. There's that waste of money you mentioned...

Jon said...
That's hilarious. As a cheapo I'd find friends with the good tapes and I would copy the songs to a blank tape. I think I destroyed one out of piety. I then re-made most of it as I suppressed or rationalized the guilt feelings.
mlfoundations/dclm-baseline-1.0
default
0.37
Tuesday, June 23, 2015
simplicity wins again.

I know it's part of who I am. I cycle. I imagine we all do to some degree, in similar yet different ways. The ebb and flow of life. It is just one of those parts of the journey that gets a little steeper, a little more uphill, if you will. Consciously I know that, ebb and flow baby, means we'll get to glide downhill again.... The deal is, consciously you can know one thing, but your emotions pretty much tell your conscious off.

In the shuffle of wake up, get everyone out the door, get to work, work, go home, make supper, run to baseball, run to softball, run in circles, do baths, put kids to bed, laundry, oh the laundry..... under the weight of what needs to be done every day, the person I think that I am, the person I want to be, is being swallowed up. I am kind of "lost" in a sinkhole, and the more I fight the daily grind the further I sink, it seems.

I've been pondering lately our need as a society to offer kids the biggest and best of all things. Is that really what is best? I just can't think that it is. Why do we need a grand water park with water slides, gadgets and gizmos galore when I know kids can play for hours in just a pool? One thing I've learned thus far is: Life is not made of big scale events. Sometimes our greatest skill (and greatest challenge) is finding the good to cling to when the daily grind is sucking you in. Are we setting our kids up to fail?

My kids brought me back to reality, at least for one night. For that, my inner soul, my "who I want to be," thanked them. I was sitting inside looking up a "good" "fun" "family" hotel in KC for vacation, because let's be honest, it's hard not to get drawn into that needing-to-find-the-"next best thing"-so-it's-the-greatest-thing-ever kind of thinking. When, after the fourth *cough* maybe fifth time of begging me to come out and look at what they had done, I drug myself outside and well.....

Cohen dug a fire pit in the corner of our backyard lot..... maybe not what I was thinking.
Kadence picked up all the sticks she could find in our yard, and maybe some neighbors' yards....
...and ta da: a fire pit on a cool summer night.

On our way in, after changing the supper plan to "ot dogs" (per Benson) and marshmallows, Cohen said "this was the best night all summer," and there it was.... the reminder I have so desperately needed. It doesn't have to be complicated to be great. I just need to remember to be present.... I could have spent most of the night looking up "fun family hotels." How's that for irony?! And sometimes that is the hardest part of all. Simplicity wins again. It always does.

"Be as simple as you can be. You'll be astonished to see how uncomplicated and happy your life will be." ~Yogananda

It's probably past time to put together my annual summer bucket list. Tell me: What's on your summer bucket list? How do you remember to keep it simple?! How do you remember to actually be present?
mlfoundations/dclm-baseline-1.0
default
0.37
According to the Journal of Urology, researchers have successfully developed a new prostate cancer screening test. Researchers at New York Presbyterian Hospital and Weill Cornell Medical Center tested a combination of a new drug therapy and PSA level changes over time. This identifies men with a high PSA count who face a higher risk of developing prostate cancer even though they might have had negative biopsies.

Prostate cancer screening can identify prostate cancer at an early stage, before it causes symptoms and while it is easier to treat. Abnormal tissue or cancer found early increases the chances of recovery, because by the time symptoms appear, the cancer may have advanced and begun to spread to other parts of the body. This is why men 40 years old and over should have annual prostate cancer screenings. If you need to schedule your next (or first) prostate cancer screening and you live in the Los Angeles area, contact the La Peer Department of Urology to schedule an appointment with our expert urologists.

WHAT IS PROSTATE CANCER?
Prostate cancer is the most common type of cancer among men. The prostate is a gland in the male reproductive system that secretes the liquid portion of semen. Prostate cancer usually does not present any symptoms in its early stages, which is why it is vital to schedule regular cancer screenings. When symptoms of prostate cancer present themselves, they can include:
- Blood in urine
- Blood in ejaculate
- Hip or lower back pain
- Frequent, difficult, or painful urination

The risk factors for developing prostate cancer include genetics, environment, diet, and infections. The treatment for prostate cancer will involve some combination of removal of the prostate gland, removal of the tumor, chemotherapy, or radiation.

Two other tests have been most commonly used to screen for prostate cancer. During the first, a digital rectal exam (DRE), the examiner inserts a gloved, lubricated finger into the rectum to examine the adjoining prostate, estimate its size, feel for any lumps, and check for other abnormalities. The other test, a prostate-specific antigen (PSA) blood test, measures the concentration of PSA in the blood. PSA is a chemical produced in the prostate. Higher levels of PSA in the blood can indicate prostate cancer. Since other factors (age, race, medications, prostate infection, and an enlarged prostate) can affect PSA levels, a urologist will need to interpret your PSA test results to determine whether they are related to prostate cancer.

STUDY: PSA LEVELS MORE EFFECTIVE WITH DRUG THERAPY
This study found that the PSA test can be much more effective when used in concert with drug therapy. By using PSA with some specific drugs, prostate cancer can be differentiated from benign prostate disease in patients who have previously proven difficult to diagnose. The drugs used were 5-alpha-reductase inhibitors (finasteride and dutasteride), which are designed to decrease the size of an enlarged prostate. In the presence of cancer, PSA levels would remain persistently high despite prostate shrinkage. This combination screening method helps doctors gain better insight into the risk of prostate cancer in men with abnormal PSA readings despite a negative biopsy. Even though biopsies are more and more effective at detecting prostate cancer, a significant number of patients with prostate cancer turn in negative biopsies.

Researchers successfully detected cancer in men who took part in the phase of the study that involved the combined drug therapy and evaluation of PSA trends. This study could show that these drugs may be very helpful in diagnosing currently undetectable forms of prostate cancer. Better forms of detection will greatly aid in the treatment of prostate cancer. For more information about prostate cancer screening, contact our Beverly Hills urologists at 855.360.9119 to schedule an appointment.
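Because the method described above turns on how PSA behaves over time during 5-alpha-reductase therapy, rather than on a single reading, a small numerical sketch may make the idea concrete. Everything below is illustrative only and is not from the study: the readings, the 4.0 ng/mL cutoff, and the decision text are invented, and real interpretation belongs to a urologist.

```python
# Hypothetical illustration of evaluating a PSA trend during
# 5-alpha-reductase inhibitor therapy. Values and thresholds are
# invented; this is not a clinical decision rule.

def psa_velocity(readings):
    """Average PSA change in ng/mL per year from (year, psa) pairs."""
    (t0, p0), (t1, p1) = readings[0], readings[-1]
    return (p1 - p0) / (t1 - t0)

# (years since therapy started, PSA in ng/mL)
patient = [(0.0, 6.1), (0.5, 5.9), (1.0, 6.2), (1.5, 6.4)]

velocity = psa_velocity(patient)
latest = patient[-1][1]

# On finasteride or dutasteride, PSA is expected to fall as the prostate
# shrinks; a persistently high or rising PSA is the suspicious pattern.
if latest > 4.0 and velocity >= 0.0:
    print(f"PSA {latest} ng/mL, velocity {velocity:+.2f}/yr: discuss biopsy")
else:
    print(f"PSA {latest} ng/mL, velocity {velocity:+.2f}/yr: continue monitoring")
```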
HuggingFaceFW/fineweb-edu
default
0.333
Fisher spare parts?
Discussion in 'Commercial Snow Removal' started by alfman, Nov 8, 2001.

1. alfman (Member, Messages: 32)
I bought a F-350 PSD with a Fisher minute mount plow and tailgate spreader (both new). There is not a dealer close by, and I like to keep spare parts for unseen breakdowns late at night. I have plenty of spares for my other setup, but no spares (yet) for my Fisher. My question is: what parts should I keep in the shop? Or should I just buy one of those emergency kit packages and play the rest by ear?

2. John DiMartino (PlowSite.com Veteran, Messages: 2,154)
I'd buy the emergency kit and maybe an extra hydro hose. I've had my oldest minute mount since '93; it has never, ever given me any trouble, ever, until last season. I was plowing, almost done, when I noticed a drip of ATF on the ground: the base lug came loose. If I had checked it, I would have caught it. Anyway, I had to replace a 30-cent O-ring, about 20 minutes of time, and 2 quarts of ATF. I was able to tighten the bolts while plowing and finish my run; it still leaked a tiny bit. The new minute mounts have a different ram setup and no base lug, so no more problems.

3. CT18fireman (Banned, Messages: 2,133)
I would buy spare hoses, couplers, and pivot pins, and keep various sizes of half-inch bolts handy. Along with this, make sure you have extra fluid. I also have a couple of extra controllers, harnesses, lights, an extra powerpack and parts, and more. This stuff I have accumulated, and I keep it because I run multiple trucks. There may be more that you will want, but this is a start of what I would recommend.

4. Jay ALC (Senior Member, Messages: 124)
I decided to bring this thread back up because I was thinking about this very same subject. I have a brand new Blizzard 800 (not the expanding one), and they do not offer an emergency kit from Blizzard. So I was curious exactly what these kits from different companies are made up of, what exact parts? Also, what types of other spare parts should I keep on hand? My dealer is very close and open 24/7 during storms, but I would rather not have to wait around if there were many people needing something, or in case he runs out of some basic part. Thanks in advance for any help.

5. wxmn6 (PlowSite.com Addict, Messages: 1,037)
You should stock up yourself on the spare parts that get worn out the most, usually the parts that are constantly working when plowing. I have never seen a Blizzard snowplow, so I cannot tell which parts you would want to stock up on. But I'm fairly sure that you would want to get spare hoses, fittings, pivot pins, bolts, and fluids. Pretty much like what CT18fireman posted earlier. Maybe you can ask your dealer what he thinks you should stock up with.

6. plowking35 (2000 Club Member, from SE CT, Messages: 2,923)
The Fisher and Western kits have a spare solenoid, a spare trip spring in the Western kit for full-trip plows, and many pins and clips to hold the various links together, plus a hose. They also give you a winter hat and 1 qt of plow fluid.
mlfoundations/dclm-baseline-1.0
default
0.37
Knowing how to think is not a given. Children need to be taught how to think critically, logically and with reason. This is because being intelligent and knowing how to think are not the same thing. The problem is called dysrationalia – the inability to think and behave rationally despite having adequate intelligence.

Why is it so important to teach children to think?
- Why children need to be taught thinking skills
- Teach your child how to think – Edward de Bono
- Critical thinking is vital in the age of the Internet

Help Your Child Build Thinking Skills
- Critical Thinking Skills Success in 20 Minutes a Day – learn the skills to recognise and define problems and sort out unnecessary information to make smart decisions.
- Reasoning Skills Success in 20 Minutes a Day – sharpen the skills for inductive reasoning, logic, and validity of evidence.
- 501 Challenging Logic & Reasoning Problems – offers a series of logic and reasoning problems.
- Thinking Games to Challenge the Mind
- miniLUK Complete Brain Challenger Set
- Brain Games Activity Books
- Thinking Games by Smart Games
- Apps for Developing Problem Solving Skills
- Apps for Developing Logical Deduction, Creative Thinking and Problem Solving Skills
- Apps for Teaching Physics, Creative Thinking and Problem Solving
HuggingFaceFW/fineweb-edu
default
0.333
DNA fingerprinting is the technique of finding the differences between the satellite DNA regions in the genome. These regions are stretches of repetitive DNA which do not code for any specific protein. These non-coding sequences form a major chunk of the human DNA profile. They show a high level of polymorphism, which is the basis of DNA fingerprinting, and because this polymorphism appears in every kind of tissue, these sequences prove very useful in forensic studies.

Any DNA sample found at a crime scene can be analysed for the level of polymorphism in the non-coding repetitive sequences. After the DNA profile is traced, it becomes easier to find the criminal by performing DNA fingerprinting on samples from the suspects. Apart from crime scenes, fingerprinting also proves useful in finding the parents of an unclaimed baby by conducting a paternity test on a DNA sample from the baby.

Alec Jeffreys developed this technique, in which he used satellite DNAs, also called VNTRs (Variable Number of Tandem Repeats), as a probe because they showed a high level of polymorphism. The process includes:
- Isolating the DNA.
- Digesting the DNA with the help of restriction endonuclease enzymes.
- Separating the digested fragments according to fragment size by electrophoresis.
- Blotting the separated fragments onto synthetic membranes such as nylon.
- Hybridising the fragments using labelled VNTR probes.
- Analysing the hybrid fragments using autoradiography.

As discussed earlier, the technique of fingerprinting is used for DNA analysis in forensic tests and paternity tests. Apart from these two fields, it is also used in determining the frequency of a particular gene in a population, which gives rise to diversity. In the case of a change in gene frequency, or genetic drift, fingerprinting can be used to trace the role of this change in evolution.
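To make the final matching step concrete, here is a minimal sketch of how a VNTR-based comparison could work in principle; it is an illustration added to this summary, not part of the original text. Each sample is reduced to a repeat count per locus, and two profiles are declared a match only when every shared locus agrees. The locus names and counts below are chosen purely for demonstration, and real forensic matching is statistical rather than a simple equality test.

```python
# Hypothetical VNTR profile matching: each profile maps a locus name to the
# number of tandem repeats observed at that locus after electrophoresis.

CRIME_SCENE = {"D1S80": 24, "D17S5": 18, "APOB": 37}

SUSPECTS = {
    "suspect_A": {"D1S80": 22, "D17S5": 18, "APOB": 37},
    "suspect_B": {"D1S80": 24, "D17S5": 18, "APOB": 37},
}

def profiles_match(a: dict, b: dict) -> bool:
    """Two profiles match when every shared locus has the same repeat count."""
    shared = a.keys() & b.keys()
    return bool(shared) and all(a[locus] == b[locus] for locus in shared)

for name, profile in SUSPECTS.items():
    verdict = "matches" if profiles_match(CRIME_SCENE, profile) else "does not match"
    print(f"{name} {verdict} the crime-scene profile")
```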
HuggingFaceFW/fineweb-edu
default
0.333
1937 - The hearing loop
The hearing loop was invented by Joseph Koliakoff in 1937. It is a special type of sound system for use by people with hearing aids, providing a magnetic, wireless signal that is picked up by the hearing aid when it is set to the 'T' (Telecoil) setting. Its invention transformed the accessibility of events for those with hearing impairments, and a loop is now a requirement for all venues as part of the Equality Act.

1937 - The walkie talkie
A walkie-talkie, more formally known as a hand-held transceiver, is a hand-held, portable, two-way radio transceiver. Before the mobile phone, it changed the dynamic of communication: you could talk to someone a long distance away while keeping the flexibility of mobility. The first device to be widely nicknamed a "walkie-talkie" was developed by the US military during World War II, and such radios are still used by on-site organisers of large-scale events.

1953 - The printer
The first mechanical printer was designed in the 1800s, but it was not until 1953 that Remington Rand developed the first high-speed printer, for use with the UNIVAC computer. Although typewriters were still primarily used at this time, it opened a new era of ease for businesses everywhere.

1956 - Conference calls
Conference calling was invented by Bell Labs in 1956 and introduced to the public by AT&T at the 1964 New York World's Fair. Called the Picturephone, this invention invited visitors to talk to persons thousands of miles away at Disneyland in Anaheim, California. The internet didn't exist yet and the computers used were as big as rooms, but it started the process we now rely on every day to communicate with larger groups.

1964 - Fibre optic cables
Fibre optics is the contained transmission of light through long fibre rods of either glass or plastic. Since the 1930s, thin filaments, or fibres, of glass have been used to see inside the body, but these long remained unusable for long-distance information transfer because too much light was lost along the way. In the 1960s, Charles Kao presented a solution: fibres of very pure glass transported sufficient light. Together with laser technology, his solution has made telecommunication using optical fibres possible. Fibre optic cables are used for transmitting voice, images, and other data at close to the speed of light.
HuggingFaceFW/fineweb-edu
default
0.333
- How to unclog a teenage girl's bathroom sink
- Drano Snake Plus
- What causes clogs?
- Method summary: How to unclog a bathroom sink
- Step 1: Remove the immediate clog
- Step 2: Remove the siphon (commonly called the P-trap)
- Step 3: Unclog the whole system
- Tips for avoiding clogs in your sink
- The origin of "plumber"
- Fun facts about plumbing
- Conclusion

How to Unclog a Teenage Girl's Bathroom Sink
I am the proud father of twin girls who recently survived their teenage years and crossed into their twenties. Throughout those twenty years I have dealt with all sorts of things plugging up the bathroom sink, from pencils and hair to some very unsanitary objects that will not be mentioned here! Suffice it to say that I have had to come up with some fairly unique ways to unclog a bathroom sink. Some of the tools I have used over the years are:
- A toothbrush: Very handy once you have removed the siphon (or P-trap).
- A broken wire hanger: Great for pulling hair and other clogs out of the pipes.
- A wrench (or spanner): No, not for banging on the pipes in frustration, but for removing the siphon.
- My hands: As lucky as it gets; sometimes sticking your fingers into the pipes near the siphon is the best way to remove most of the gunk coating them.
- A paperclip: Brilliant if you want to grab a bit of hair right at the top of the drain!

Luckily for you, I happen to have a clogged sink all ready to be unclogged, so I can actually show you how to clear it safely without damaging the bathroom, your ego, or, in extreme cases, your marriage!

Drano Snake Plus
Although the method I show here is useful and unclogs a sink very well, you may not actually have to remove the siphon to get rid of the dreaded hair clog. The Drano Snake Plus is a great tool that removes the need for advanced plumbing techniques! Not only do you get a clean drain, you also avoid the injuries that come from trying to squeeze into a cabinet designed to hold shampoo, not a human body!

What Causes Clogs?
A clog is simply a build-up of "debris" in the pipes; it can sit in the siphon or in the side sections of the pipework, and it is often very hard to clean out. A clog is usually made up of soap, grease, toothpaste, and hair, although if you have teenage girls you will probably find a few other items, such as nail clippings, makeup, and hairpins, forming part of it. I have found the bathroom sink clogged more times than I can remember, but luckily that has actually taught me a little about plumbing!

Method Summary: How to Unclog a Bathroom Sink
Step 1: Use a tool to remove the clog near the top of the drain; this may be all you need to do. (5-10 minutes)
Step 2: Remove the siphon (commonly called the P-trap); clogs usually build up here, and this also gives you access to the other pipes in the drain system. (30-60 minutes)
Step 3: Unclog the whole system; once the main clog is out, you can use Drano or boiling water to remove any remaining debris, such as soap residue. (10-60 minutes)

Step 1: Remove the Immediate Clog
My bathroom sink has a pop-up plug, so I cannot fit the simple devices that catch hair and other debris before they clog the sink; as a result, the drain clogs often, and my only option is to try to reach the blockage with a tool. With my handy straightened paperclip I can get deep into the upper drainpipe and sneak out the hair that caused the clog; in many cases the hair is tangled around and just below the drain opening, so probing with a paperclip may be all you need to do. As you can see from the photos, I was able to remove quite a lot of hair and clear the sink. There was a noticeable difference in the flow as the water drained, but it still was not moving as fast as I would like, so I had to move on to Step 2.

Step 2: Remove the Siphon (Commonly Called the P-Trap)
In my experience, if removing the clog from the top of the drain does not fix the blockage, there is probably a wad of hair blocking the siphon. So you must take your life in your hands, remove the siphon, and clean it thoroughly. A few tips I have learned:
- Always have a large bowl or basket ready to hold the items from under the sink. This is not strictly required, but if you simply pile them on a surface near the sink you will surely feel the wrath of your better half when they see the mess you have made; even though you know it will all be cleared up, it is much safer to keep things tidy as you work!
- Prepare a second large bowl to place under the siphon before you remove it; a lot of gunk and water is released when the siphon comes off, and having a bowl ready to catch that liquid will save a lot of time and heartache.
- Warm up! You may think I am joking, but it is essential to do a few stretches before attempting to remove the siphon. I must admit that the first time I tried this, I ended up in a very awkward position, stuck under the sink with cramp shooting through my leg; most unpleasant, I can tell you!

Once you have prepared properly (emptied the cabinet and placed a bowl under the siphon), you are ready to remove the siphon.
- Start by loosening the nut at the highest point of the siphon; sometimes it is only hand-tight and you will not need a wrench, but often it is very tight and you will have to work it loose gently with a wrench. Be careful if you have plastic pipes, because it is very easy to crack them.
- Once the upper nut is loose, start loosening the lower nut; try to support the siphon as you do this so you do not drop it and spill the water trapped inside. Slowly lower the siphon once this nut is off and empty it into a bowl. Note that you should not pour the water into the sink whose siphon you have just removed! In my younger years I did exactly that; as Homer Simpson would say, DOH!
- Use a toothbrush or your trusty old paperclip to clean the siphon until no debris remains. Rinse the pipe with boiling water to remove any grease or soap residue.
- Clean the remaining pipes as well as you can with the toothbrush and paperclip before refitting the siphon; if you have a Drano Snake Plus, you will be able to clear some of the debris that can lodge where the siphon joins the main plumbing.
- Screw the siphon back onto the pipes, tightening the nuts firmly; in some cases, depending on the siphon, hand-tightening is enough. You want it tight enough that it will not leak; once you think the siphon is secure, run some water to check that it is watertight.
- All you need to do now is clean up so your wife will not notice the difference. I should note that by this point my wife has fixed the clogged sink many times; the job can be done by anyone, and it does not always take a crafty man!

Step 3: Unclog the Whole System
Once you have removed the entire hair-ball concoction that could pass for a "bad hair day," you should flush the system with Drano liquid; this will deal with any other clogs that may lie further down the system, beyond the reach of your magic paperclip. I recommend using a full container of Drano liquid and leaving it for at least 30 minutes; this not only removes any remaining blockage but also ensures that any other hair build-up down the line is cleared. Once you have left the chemical for an hour or so (enough time to watch the latest episode of Dr. Who; I bet he never has to fight hair monsters!), you can flush the system with water. If you cannot get Drano, you can boil a few gallons of water and pour it down the drain; since it is hotter than your ordinary hot tap water, it will help dissolve much of the soap scum and other gunk that may have accumulated in your drain.

Tips for Avoiding Clogs in Your Sink
- Once a month, boil a few gallons of water and slowly pour it down the sink; because this water is much hotter than tap water, it can dissolve much of the soap residue and other sticky gunk on the walls of your pipes.
- Use a drain screen. Although this may not be possible in every sink, a drain screen sits below your plug and catches much of the debris, such as hair, that can cause clogs.
- If you have a pop-up stopper, clean it regularly; stoppers of this kind collect debris and cause clogs faster.

The Origin of "Plumber"
The Latin word for lead is plumbum, which is why lead is denoted "Pb" on the periodic table. When the Romans used lead in their pipes and ducts, anyone who worked with them was called a plumbarius, later shortened to "plumber."

Fun Facts About Plumbing
- Albert Einstein was an honorary member of the Plumbers and Steamfitters Union.
- The Egyptians used copper pipes for their plumbing more than 3,000 years ago.
- In one year, the average household wastes 9,000 gallons of water while waiting for the water to heat up.
- Only 2% of the water on Earth is fresh, and most of that is locked in icebergs and glaciers or held in underground sources.
- The Egyptians called the toilet the "House of Honor."
- The Romans called the toilet the "Necessarium."
- The Tudors called the toilet the "Privy, or House of Privacy."
- The French called the toilet "La Chambre Sent," the smelly chamber.
- The British call the toilet the "toilet, loo, or bog"!
- 70% of men leave the seat up.
- 89% of women leave the seat down.

Conclusion
I am not a plumber, and I never will be! Nevertheless, I saved myself about $300 simply by improvising and using no-frills solutions. My method may not be the best, and by spending a few dollars you can avoid having to remove the siphon at all; still, my solution will keep my bathroom sink unclogged for another year. I hope that you now have the skills to unclog a drain too!

This article is accurate and true to the best of the author's knowledge. Content is for informational or entertainment purposes only and does not substitute for personal counsel or professional advice in business, financial, legal, or technical matters.
HuggingFaceFW/fineweb-2
vie_Latn
0.0775
The paper ballot system, with standardized voting forms, was first adopted in the Australian state of Victoria in 1856, and in the remaining Australian states over the next several years. The paper ballot became known as the "Australian ballot," and New York was the first American state to use it, in 1889.

The first official use of a lever-type voting machine, known then as the "Myers Automatic Booth," was in Lockport, NY in 1892. Four years later, they were employed on a large scale in Rochester, NY, and soon were adopted statewide. By 1930, lever machines had been installed in virtually every major city in the US, and by the 1960s well over half of US votes were being cast on these machines.

On mechanical lever voting machines, the name of each candidate or ballot issue choice is assigned a particular lever in a rectangular array of levers on the front of the machine. A set of printed strips visible to the voters identifies the lever assignment for each candidate and issue choice. The levers are horizontal in their unvoted positions. The voter activates the machine with a lever that also closes a privacy curtain. The voter pulls down selected levers to indicate choices. When the voter exits the booth by opening the privacy curtain with the handle, the voted levers are automatically returned to their original horizontal position. As each lever returns, it causes a connected counter wheel within the machine to turn one-tenth of a full rotation. The counter wheel, serving as the "ones" position of the numerical count for the associated lever, drives a "tens" counter one-tenth of a rotation for each of its full rotations. The "tens" counter similarly drives a "hundreds" counter. If all mechanical connections are fully operational during the voting period, and the counters are initially set to zero, the position of each counter at the close of the polls indicates the number of votes cast on the lever that drives it.

Interlocks in the machine prevent the voter from voting for more choices than permitted. Because these machines are no longer made, the trend is to replace them with computer-based or direct recording electronic systems. (info from About.com, photo from the National Museum of American History of the Smithsonian Institution)
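The ones-tens-hundreds linkage described above is, in effect, an odometer-style decimal ripple counter, and it can be mimicked in a few lines of code. The sketch below is only an illustration added here, not something from the source text: each pull of a voted lever advances the "ones" wheel one position, and a wheel that completes a full rotation carries one position into the wheel behind it.

```python
# Minimal model of a lever machine's three-wheel decimal counter.
# Each voted lever advances the "ones" wheel one-tenth of a rotation;
# a full rotation of any wheel carries one-tenth of a rotation onward.

class CounterWheel:
    def __init__(self, next_wheel=None):
        self.position = 0          # 0-9, like a digit on an odometer
        self.next_wheel = next_wheel

    def advance(self):
        self.position = (self.position + 1) % 10
        if self.position == 0 and self.next_wheel:  # completed a full rotation
            self.next_wheel.advance()               # carry into the next wheel

hundreds = CounterWheel()
tens = CounterWheel(hundreds)
ones = CounterWheel(tens)

for _ in range(137):   # 137 voters pull the same lever
    ones.advance()

print(hundreds.position, tens.position, ones.position)  # -> 1 3 7
```

Running it with 137 simulated voters leaves the wheels reading 1, 3, 7, which is exactly how officials would read off the vote total at the close of the polls.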
HuggingFaceFW/fineweb-edu
default
0.333
Sunday, August 28, 2011
Fifteen Facts Special Edition: Fifteen Facts About Toluca

Fact #1: In Toluca they ALWAYS set up tents in the middle of the road for parties (and most of the time there is no warning).
Fact #2: Almost every house in Toluca has bars on the windows (which is why I freak out, because mine doesn't and someone could easily break in and kidnap me ... wait, why am I advertising that fact to the whole world?). :D
Fact #3: A lot of people in Toluca steal electricity. I am not sure how they do that, but they do. (We have yet to pay for electricity since we moved here! Hehe!)
Fact #4: There are enough dogs in Mexico to fill the entire state of Texas. (Okay, maybe that's a slight exaggeration, but you get the point.)
Fact #5: People here like to paint their houses the oddest colors (I think we've already established that though).
Fact #6: Never, and I mean NEVER, call anyone an Indian. For some strange reason that is a huge offense (I personally think that it's crazy. I mean, I have Indian blood in me and I'm proud of it ... Comanche!!).
Fact #7: In Mexico we have colonies. I guess I still don't understand that, but it really used to confuse me when we first moved here. I'd mail letters to my friends and it'd be like:
Calle Zapatos Viejos #25
San Miguel de los Monos
Toluca, Mexico 356499
I mean, how weird does that look? It's rather confusing if you ask me. Must be why their mailing system stinks ...
Fact #8: House numbers don't seem to follow any rules here. Okay, I'm not sure this is a fact, per se, but it is something that I have noticed. So, for example, the house down the road can be #67, and the house right next door is #836, and so on, so forth. It seems to me that they just randomly choose whatever number they want and stick that on the front of their house. It makes it very difficult to know how to find a place (once again, this is not really a fact, but around here in Toluca this is what I have seen).
Fact #9: There is absolutely NO wildlife in Toluca. The only birds you see are sparrows, blackbirds, and pigeons. You might see the occasional hummingbird or red-winged blackbird, but you never see animals (well, except the dogs and rats). So when people here see a rabbit or squirrel they go crazy! And guess what?! That is what I do now too. When I see any type of furry animal I get all excited and point! Gosh, in Kentucky deer and other animals used to be plastered on the roads because there were so many of them.
Fact #10: The state of Mexico (Estado de México) is the best state in Mexico. And by far the most beautiful!
Fact #11: If the skies are clear you can see the volcanoes in Puebla from the top of our volcano, Nevado de Toluca (Xinantecatl, which means either 'The Naked Lord' or 'Lord of the Cornstalks'). Haha!
Fact #12: Toluca is famous for its longaniza (it's a type of sausage; chorizo and longaniza do taste different, and longaniza is way better). Might I say, "Yum!"
Fact #13: The temperature is usually in the 50's or 60's around here. It never goes above the low 80's. And when it actually gets in the 80's we are having a "heatwave."
Fact #14: Considering the statement above ... boots and winter clothes are pretty much a year-round thing.
Fact #15: They do not have waterbugs here! I guess they stay away from colder weather (because I am told that they live in Veracruz). Praise God for that!

Anyways, it has taken me all night to come up with these "facts." I know I have better ones in my brain somewhere, but I have lived here so long that it's hard to remember things that are not normal to Americans. Hehe. Arg, I need to hit the sack. I just got up to finish this post because I could not sleep. It's too hot in my room (although it's only 55 degrees outside). And I am an insomniac. I think my problem is that I drink too much caffeine. It's all of the Dr. Pepper. It leaves me tossing and turning all night. Anyways, I gotta try to sleep. And it'll be just my luck that my dad will come banging on my door in awhile for breakfast ... which we only seem to have on the days when I couldn't sleep. Man, I feel a grouchy mood coming on. Get out of my way, people! :D Peace out!

Nicole Wakefield said...
Haahaha I totally relate to all of these!! (Well except for #10 of course ;P haha) Everything here is the same- Indian name calling, middle of the street parties, year-round winter clothes, mixed up house numbers... Haha I think they just pick their lucky number :)) The one I get the biggest kick out of is the weather one! It's like when it hits 85 here, and the people are walking around fanning themselves and talking about "este calorón!" LOL! Haha they have nooo idea... Well, keep up the good blogging! We're all enjoying it :)

Dacia Loa said...
Haha! Yep, you gotta admit ... Mexico is sorta unique. Yeah, I am biased (#10)!

Dacia Loa said...
Oh, btw, I vote Puebla as the second prettiest state! ;)
mlfoundations/dclm-baseline-1.0
default
0.37
No Migration From West Bank... Jordan Has Nothing to Do With Balfour Declaration
Thursday, 12 December, 2019 - 12:00
Saleh Al-Qallab
Former Jordanian information minister

Indeed, it is no coincidence that, of two former prominent Jordanian officials who had taken part in difficult and decisive decisions, one would warn against a transfer, a mass migration, from the West Bank to Jordan, and the other against a sequel to the Balfour Declaration targeting the east of the Jordan River, the Hashemite Kingdom of Jordan, to be implemented by Israel soon under the Zionist slogan, "From the Nile to the Euphrates." Both of these officials are from Nablus, a city that was and still is referred to as the Mountain of Fire for playing a vital role in the successive Palestinian revolutions.

In a previous article for Asharq Al-Awsat, I insisted that even if extremist Israelis and Zionist Jews, especially in the United States, considered expelling Palestinians from what was left of Palestine, what may have been possible before is not possible today. These remaining parts of Palestine are populated by more than 3 million Palestinians who will cling to their homeland, and no mass migrations have been recorded among them for more than half a century. The cities and villages of the West Bank are well built and are home to the best universities and schools; their people live by European standards. A mass migration of the people of the West Bank, who are as rooted in their land as the centuries-old, even millennia-old, olive trees in that part of Palestine, is impossible.

It is well known that some Israelis who take their distant future into consideration reject this, and that the Western world, mainly Europe, also rejects it. Therefore, if such a "transfer" was possible in 1967, it no longer is, even if extremist Zionists wanted it and had the support of the American President Donald Trump. Despite the fears expressed in all seriousness by those whom we respect and admire and have discussed this issue with, the scenario as presented is out of the question.

More important, and undisputed, is that the people of this part of Palestine have learned a lot from what happened in 1948 and in June 1967. This has made them very attached to the West Bank, and what is worth noting in this regard is that the number of people who returned to their homeland after the infamous Oslo Accords exceeded half a million. This is despite the fact that the Israelis kept transgressing these agreements, and the former Israeli Prime Minister Yitzhak Rabin was the only one who was serious about implementing them, famously giving his life for this conviction.

Importantly, and undisputedly, the motivations behind those warnings against mass migration are cautions stemming from bitter experiences. We must affirm, however, that such a migration is entirely out of the question, as these people are holding very tightly to their land. There will be no migration from this land, come what may. This is one issue.

Another is that caution compelled the former Jordanian Prime Minister, Taher al-Masri, to warn against any regional divisions in Jordanian society that may weaken Jordan's position vis-à-vis the Zionist project in the east of the Jordan River, considering it part of the mighty Balfour Declaration, something that Israel and Christian Zionists are pursuing so that they complete their occupation of historical Palestine and finish the Jordanian part of this project.

Of course, the former Jordanian Prime Minister deserves nothing but respect and admiration for his fears on this grave issue, but what is known in this regard is that the cursed, and not mighty, Balfour Declaration did not indicate anything to do with the east of the Jordan River as part of the Zionist project, and no text indicates a Jordanian part of what Israel and Christian Zionists are pursuing other than the notorious Zionist slogan, "From the Nile to the Euphrates."

No doubt the Oslo Accords failed miserably after the assassination of Yitzhak Rabin, which left matters in the hands of more extreme Zionists who undermined the foundations of this agreement in the region and closed the horizon for Palestinian liberation and the establishment of an independent country. This essentially means that if things keep moving in this direction, then all agreements in the area will be transgressed, from Camp David and the Wadi Araba Treaty to Oslo, and there is no disputing that the basis of this whole Middle Eastern struggle is the Palestinian cause.

Perhaps extreme Zionists dream of the East of Jordan after completing their project at the level of Palestine, which will never happen. It is necessary to affirm that the east of the Jordan River, the Hashemite Kingdom of Jordan, was never mentioned in the infamous Balfour Declaration. The truth is that those who championed the Arab project wanted the Levant and Iraq to be one Arab country. Of course, this did not work from the beginning, given the circumstances of that well-known historical period. It is, therefore, a big mistake to say that the Zionist project includes the east of the Jordan River as part of the Balfour Declaration, as no official or unofficial documents indicate this. Consequently, we ought not to treat the issue this way, and it is well known that whoever keeps talking about the wolf will find the wolf at his doorstep.

In all cases, what should be known to every Jordanian and Arab is that the only time the Israelis tried to cross the Jordan River, in an attempt to occupy the western As-Salt Heights, was on the 21st of March, 1968, and that the great Battle of Karameh was fought there by the Jordanian Arab Army alongside their Palestinian Fedayeen brethren, who defeated the Israeli Army. This still stands as proof that it is possible to defeat the Israelis and to expel them from every occupied Palestinian and Arab land.

This means that even if the Israelis think the way the former Prime Minister Taher al-Masri describes their ambitions, they will face not one Karameh but a thousand, and the Jordanians will not be an easy bite in any Zionist project east of the Jordan River, as all Jordanians will become Fedayeen. They should be sure that if such a battle were to take place, it would be a nationalist struggle in which Iraqis, Khaleejis, and others would participate, just as they did before.
mlfoundations/dclm-baseline-1.0
default
0.37
Oneida Nation Mini-Language Lesson

Objective: This lesson is to provide students with exposure to the Oneida nation's language by providing a written example of one of their legends, an oral example of their language, an activity, and a test of knowledge.

Included in this webpage: Legend of the No-Faced Doll, an oral example of simple animal words, a crossword using facts from the story and the language lesson, and a test of all knowledge learned from this lesson.

Legend Of The No-Faced Doll
The Oneida Audio Language Page
The Crossword Puzzle

The graphics used for this page are a courtesy from
The rest of this material is solely the creator's and may not be reproduced without the permission of the AUTHOR. If there is any material here that is not credited, please email me and I will give you credit or remove the offending materials at once. These lessons may be used in your classrooms if you choose.
HuggingFaceFW/fineweb-edu
default
0.333
As we remember the Dominican Month for Peace in Ukraine in Advent...

Let us remember:

Ukraine gained its independence in 1991, following the dissolution of the Soviet Union. Following its independence, Ukraine declared itself a neutral state; it formed a limited military partnership with Russia and other Commonwealth of Independent States members while also establishing a partnership with NATO in 1994.

In 2013, after the government of President Viktor Yanukovych decided to suspend the Ukraine-European Union Association Agreement and seek closer economic ties with Russia, a several-months-long wave of demonstrations and protests known as the Euromaidan began, which later escalated into the 2014 Ukrainian revolution that led to the overthrow of Yanukovych and the establishment of a new government.

Since April 2014, when, following Ukraine's Revolution of Dignity, Russia annexed Crimea and launched aggression in the east of Ukraine, fighting has been ongoing in parts of the Donetsk and Luhansk regions (collectively called the "Donbas") between the Ukrainian Army and Russian-backed militia and regular troops. According to the Office of the United Nations High Commissioner for Human Rights, between March 2014 and October 31, 2019, approximately 13,000–13,200 people (including at least 3,345 civilians) were killed in this fighting. The number of wounded is estimated at 29,000–31,000, including approximately 7,000–9,000 civilians.

In the Ukrainian lands occupied by Russian-led troops, there has been killing and repression on ethnic and religious grounds, and thousands of homes and even entire settlements have been destroyed, causing a massive wave of internal displacement. More than one and a half million civilians in the Donetsk and Luhansk regions have been forced to migrate to other regions. By 2020, hostilities in eastern Ukraine had entered their sixth year and continued to put civilians' lives and well-being at risk, even as absolute numbers of civilian casualties dropped.

Former comedian Volodymyr Zelensky won the presidential election on April 21, 2019. Parliamentary elections in July delivered his party, Servant of the People, a single-party parliamentary majority for the first time since Ukraine's independence. After taking office, Zelensky demonstrated commitment to carrying out anti-corruption reform and ending the armed conflict with Russia.

Let us pray:

Today we hear the words of Jesus in the gospel of Mark, "be watchful, be alert" (Mk 13:33). Let us awake during this Dominican Month of Peace to the plight of the people of Ukraine, that our voices across the world may unite us in solidarity for the dignity and sovereignty of the people of Ukraine.
HuggingFaceFW/fineweb-edu
default
0.333
), a lyre, the chief stringed instrument used in Greek music. Two main varieties are known to us from ancient art and literature, viz. the lyre ) properly so called, and the The distinctness of the lyre and the cithara may be shown from iii. p. 399 D, λύρα δή σοι, ἧν δ ̓ἐγώ, καὶ κιθάρα λείπεται κατὰ πόλιν ), and from Aristotle, who excludes the cithara from 8.6 = p. 1341, 18, οὔτε γὰρ αὐλοὺς εἰς παιδείαν ἀκτέον οὔτ ̓ ἄαλλο τεχνικὸν ὄργανον, οἷον κιθάραν κἂν εἴ τι τοιοῦτον ἕτερόν ) Mythologists generally taught that the cithara was invented by Apollo, the lyre by Hermes (Paus. ). The difference between the two instruments seems to be sufficiently ascertained from the representations of them [p. 2.105] found on ancient monuments, especially painted vases, on which two well-marked types can Cithara (Guhl and Koner.) be traced. One of these answers closely to the description which the author of the Homeric hymn to Hermes gives of the lyre invented by the youthful god (H. Merc. 41 ff.). The lower part or body of the instrument consists of a tortoise-shell, or of a wooden case in which the original tortoiseshell is more or less faithfully reflected. In this shell are fixed two curved arms (πήχεις horns, joined at the upper end by a crossbar (ζυγόν ). The strings pass from the shell, over a bridge or fret of reeds (δόνακες ), to the ζυγόν. The instruments of the other type are larger, and show a decided advance in point of construction. The shell is replaced by a wooden case, usually square or angular, and instead of “horns” we find the sides of the case prolonged upwards, so that the whole framework acts as a resonance box of considerable power. Now, it is clear from the evidence of the monuments that the first of these was the instrument of education and of every-day life; while the second was the “technical instrument,” seen in the hands of professional ), who wear the long robe proper to musical contests and other festivals. The first, therefore, must be the lyre, and the second the cithara. The early history of the lyre and cithara is obscure. In Homer we find a stringed instrument called the φόρμιγξ, used especially to accompany singing or epic recitation (ἀοιδή ). We also hear, somewhat less frequently, of the κίθαρις : but there is no trace of a difference between them. The verb (φορμίζω is used of the κίθαρις ); and conversely we find the phrase φόρμιγγι κιθαρίζειν (Il. 18.569). The word λύρα is post-Homeric: it occurs once in the Hymn to Hermes (50.423), but does not seem to have been in common use before the time of Pindar. It is worth noticing, as a consequence of the comparatively late date of the word, that the derivatives λυρίζω, are unknown in good Greek, κιθαρίζω being always used of the lyre and cithara alike; just as χαλκεύς, “bronze-smith,” was applied to workers in iron as well as in the older metal. It would be rash, however, to infer that the Homeric instrument resembled the cithara rather than the lyre. We may suppose that the later form of the cithara was developed gradually, retaining the original name, which therefore included all varieties, until the new word came into vogue for the commoner and more primitive kind. The author of the Hymn to Hermes recognises only one form, that of the lyre, to which he applies the terms κίθαρις as well. The identity of the κίθαρις and the lyre is also maintained by Aristoxenus, the pupil of Aristotle (Ammon. de diff. Voc. p. 82, κίθαρις καὶ κιθάρα διαφέρει, φησὶν Ἀριστόξενος ἐν τῷ περὶ ὀργάνου: κίθαρις γάρ ἐστιν ἡ λύρα κ. τ. λ. 
Regarding the original number and tuning of the strings, contradictory accounts were current. According to one statement in Diodorus (1.16 ), Hermes was the author of harmony of sound, and in that character invented a lyre with three strings, answering to the three seasons. The same author elsewhere (5.75) says that Hermes invented his lyre in place of the cithara, which Apollo had laid aside in remorse for his cruelty to Marsyas. According to the Hymn to Hermes (50.51) the primitive lyre was one of seven strings: ἑπτὰ δὲ συμφώνους ὀἱ̈ων ἐτανύσσατο χορδάς. On the other hand, the increase of the number of strings from four to seven appears to be claimed by Terpander, in two lines attributed to him: “ σοὶ δ ̓ἡμεῖς τετράγηρυν ἀποστέρξαντες ἀοιδὰν ἑπτατόνῳ φόρμιγγι νέους κελαδήσομεν ὕμνους. A different account, however, is given by Aristotle (Probl. 19.32), where he touches on the question why the interval of an Octave is not called δι ̓ὀκτώ (as a Fourth is a Fifth διὰ πέντε ). He suggests by way of answer that the scale was formerly one of seven notes only, saying that Terpander left out the note called τριτη, and added the at the upper end of the scale (the octave of the ὑπάτη, or lowest note). If this account is the true one, what Terpander did was to raise the scale to the compass of an Octave, but without increasing the traditional number of strings. However this may be, the comparative antiquity of a scale of at least seven notes is proved by their names. The following are the notes of the central octave in the later system, with the modern notes which show the intervals on the diatonic scale:-- , lit. “uppermost,” our “” next to ὑπάτη. third, viz. from the νήτη. “lowest,” our “highest.” Of these names there is only one that is admittedly later than the rest, viz. which probably dates from the time when the heptachord of Terpander acquired an eighth string, and consequently a complete diatonic scale of the compass of an Octave. If we may trust a passage quoted from Philolaus (Nicom. p. 17), the gap then filled up was not that between μέση the name τρίτα (he writes in Doric) to the the note which was a tone above the μέση. The change, therefore, consisted in inserting a note half a tone above the τρίτη of Philolaus, which new note then became the “third,” and made it necessary to find a new name--παραμέση --for the old τρίτη. But the language of Aristotle himself 19.7, 32, 47) shows that the exact steps of this progress were no longer known. According to Nicomachus, the eighth string of the scale was added by Pythagoras. Probably, however, this is a mere inference from the Pythagorean discovery of the numerical ratios on which the musical intervals--the Octave, Fifth, Fourth, and Tone--are based. Another notice (Boeth. de Mus. the improvement to a certain Lycaon of Samos. The lyre was originally played without the [p. 2.106] aid of a plectrum; and each string seems to have been sounded by a particular finger. Thus the lixano\s or “forefinger” was so called, according to Nicomachus (p. 22), because it was sounded by the forefinger of the left hand. It follows, as has been pointed out by Gevaert (ii. p. 254), that the left hand was used for the lower tetrachord, and that the little finger was not used to touch the strings. When the plectrum came into use, it was held in the right hand, and perhaps was specially employed for the air, while the softer tones produced by the fingers. of the left hand served for the accompaniment. 
This is suggested (though by no means proved) by the epigram of Agathias (Anth. Pat. by Gevaert: “ τὸν σοφὸν ἐν κιθάρῃ, τὸν μουσικὸν Ἀνδροτίωνα, εἴρετό τις τοίην κρουματκὴν σοφίην: δεξιτερὴν ὑπάτην ὁπότε πλήκτροισι δόνησας, ἡ λαιὴ νήτη πάλλεται αὐτομάτως. The phenomenon here referred to is the “sympathy” by which a sounding body excites the vibration of another whose note is in unison with it, or with one of its harmonics. Anacreon playing the Lyre. (Vase-painting in the British Museum.) The seven-stringed lyre was still in use in the time of Pindar, unless we suppose that the epithets ἑπτάκτυπος 2, 70) and ἐπτάγλωσσος 5, 24) are due to mere poetical tradition. On the other hand, we are told that Lasus of Hermione, who was an older contemporary of Pindar, introduced new notes, by which he broke up (διέρριψεν ) the existing scale (Plut. Mus. cc. 29, 30) A passage quoted by Plutarch (l.c. ) from the comic poet Pherecrates denounces a series of similar innovators--Melanippides, Phrynis, Cinesias, Citharista with Lyre. (Dennis's Etruria.) finally Timotheus of Miletus, who “outraged music with his twelve strings.” The object of the additional strings seems to have been not so much to obtain greater compass as to make it possible to combine different modes or keys, perhaps also different genera (see the art. ), on the same instrument, and to pass easily from one to another. It is the “multiplicity of keys or scales” (πολυαρμονία ) which is always ways associated with “multiplicity of strings” (πολυχορδία ) in the minds of those who, like Plato, regarded such changes as dangerous and corrupting. It is characteristic of the lyre and the cithara that the strings are all of the same length, so that the difference of pitch is entirely due to different thickness. In this respect they differed from instruments such as the harp, which have strings of different length, and again from those in which the length of the string is varied by the player, as in the case of the violin. The woodcuts above show the method of holding the lyre, in playing with the right hand only or with both. It was also played sitting, and supported on the knees. The cithara was held in the same manner. The harp type was represented in Greek music by the τρίγωνον or triangular harp, a Phrygian instrument, with which we find associated the Lydian πηκτίς. Both are condemned by Plato (Rep. iii. p. 399) for the excessive number of their strings. They are also mentioned together in a fragment of Sophocles, fr. πολὺς δὲ Φρὺξ τρίγωνος ἀντίσπαστά τε Λυδῆς ἐφυμνεῖ πηκτίδος συγχορδία. which was closely akin to the was so called from the bridge or fret (μαγάς ), by which a string could be divided by the player, so as to yield a higher note. It had twenty strings, and admitted of playing the same tones simultaneously in different octaves (hence called μαγαδίζειν ). This is also attributed by Aristotle (Probl. 19.14) to an instrument called the φοινίκιον or Phoenician lyre. The most perfect of all these instruments seems to have been the ἐπιγονεῖον, called after its inventor, Epigonus of Ambracia, which had forty strings. Besides these, we hear of the βάρβιτος, which is thought to have been nearly related to the lyre, also the νάβλα (Strab. x. p.471 ). Several of these names are confessedly barbarous, and all the instruments now in question lay under the imputation of being more or less alien to genuine Greek art. 
They evidently enjoyed much popularity, but were never regarded as of equal dignity with the lyre. (Compare Carl von Jan, De fidibus Graecorum, Berolini, 1859; Westphal, Geschichte der alten und mittelalterlichen Musik, Breslau, 1864; Gevaert, Histoire et Théorie de la Musique de l'Antiquité.)
HuggingFaceFW/fineweb-edu
default
0.333
Thursday, February 25, 2010 Chronicle of a Struggling Learner On Monday I begin my new life as a homeschooling mom. It's a midstream change, kind of sudden but really not. Our journey with my third child began years ago, probably around his second grade year (now seventh), when I noted that he tended to skip small words while reading aloud. Eventually this translated to poor performance on tests, although I didn't make the connection until much later. Teachers warned me not to compare his performance with my first two, both top-of-class students. I guess I kept hoping that he would catch up but each year was the same drill. He started strong then crashed by about week seven or eight, usually around the time of the first comprehensive test. On a hunch, in fifth grade, I read aloud some questions from his "F" test. To my surprise, he answered me with 100% accuracy. The kid was not dumb. Fortunately for us, we have an optometrist in our town who tests for a peculiar learning issue. As my son subjected himself to a battery of "games," I was shocked to discover how much he missed as I followed along. In a nutshell, he has a visual processing deficit, not correctable by lenses. It goes much deeper, into the heart of his brain. I discovered that my son had adapted to this, unconsciously, by suppressing the receiving center for one of his eyes. Think about reading a book with the type on the two facing pages overlapping, even slightly. This is how text looked to him. The brain is a marvelous organ and it does what it needs to do when taxed. Unfortunately, in this case, it staunched the dimension needed for information to make sense. Without a 3D understanding of text, my son struggled to get visual pictures, to organize information, to memorize--in short, to learn. Schools have tests for struggling learners but they do not test for this particular issue and, I found, do not really consider this a "valid" problem. It doesn't fit their box and they do not have specialists or programs ready to address this deficit. Coupled with the fact that my son could, with my help, make passing grades, he never failed enough to get the magical IEP (nor did we particularly want him attached to the little pink folder). I kept hoping that he would manage his learning using the skills he learned in an intensive vision therapy program (our dime) while I alerted teachers to particular hot spots. It hasn't worked despite my hopes and intentions. Denial is bliss, but truth is probably better though inconvenient. How can I expect a teacher to buffer my son amidst a class of 35 students, some with severe learning disabilities and specifically outlined objectives, as well as top learners who regularly linger on the edge of boredom in the classroom? I couldn't do it. Teachers must teach to the masses and pray that their teaching method reaches the majority of students. What tipped the scale for me was the dull undercurrent of concern about my son's social life. He has never been in "trouble" but he continues to disturb me with his friend choices, and I see behaviors that, while small and maybe even age-appropriate (by the world's standards), will grow in the wrong direction. For a kid who suffers in the classroom, there is a great desire to succeed at something. That something isn't always positive. Kids like this are grouped with underachievers, not the best friend pool. They lag behind schoolmates in understanding, so they embellish, pose as characters to garner approval, or simply lie. It creates a disturbing social cycle. 
I finally had to ask. Why am I sending my son to school for seven hours each day? He isn't gaining more than a fraction of an academic year each year and he isn't thriving socially. Why is he there? It's sobering to admit, but the honest truth is that he was there for me, to give me a break. I have subjected him to a daylong babysitter--and not really a good one. So here we are. Starting homeschool. After the initial resistance to the idea, my son is excited and relieved that he is about to be released from the grind of school. We anticipate spending the first several weeks in a detox process, if you will, learning what learning can be outside the confines of a traditional environment. My challenge will be to teach a right-brain child from my own left and linear brain. This will be a journey, thus the name of my blog. Regel, in Hebrew, means journey. Also tied to the word is enduring. I know I can do this but I also know my own limitations and tendencies. I figure that if I am going to suffer, I should share my suffering with others. After all, what is suffering for but that others can learn? I hope my friends will follow my journey, but more than that, I pray that other parents with struggling learners will find their way here and that I can encourage them through our experiences, successes, failures, blunders, and joys. Our kids are worth it. My son is worth it. This could possibly be the greatest investment I have ever made as a parent. As I read just this morning, "Rest assured that God never leaves a willing servant with nothing to do. The alternate opportunity He has in mind will yield bigger fruit, more satisfaction, and greater glory for Him." Jeremiah 10:23-24 NLT Blessings. CS 1. Great writing, Carrie. I pray God blesses you on your homeschool regal. (Love the word!) You ARE very clever. :-) 2. Carrie: This is an intriguing story. I hope that you will continue to share your adventures in homeschooling. I would like to read how this works out for you and your son. Homeschooling is an adjustment for both the student and the teacher. The learning curve is steep. I wish you both the best of luck. Keep writing! 3. You will certainly have an incredible journey! So glad you've decided to take it! p.s. I have a child with autism who also has Ocular Motor Dysfunction, and I use this resource for eye exercises since we've opted not to do the vision therapy: 4. Hi: We started this week in Talca, Chile. We are one of the 60-odd families in the WHOLE country! My daughter is the same age as your son. She asked me if she could write to your son. I told her yes BUT you must agree first, and through this blog first (mum filter). Is that ok for you? She is learning to speak and write English. Our blog is... if he wants to, your son could write a few Spanish words for her there. 5. Melissa- Thanks for the resource. Maulina- Wow! A pioneer, for sure. It would be great to have the kids write to each other. Not totally sure how they would write through the blog. You can email me and we can figure it out.
mlfoundations/dclm-baseline-1.0
default
0.37
Re: Compiling an ellipsis From: James Jones <> Newsgroups: comp.compilers Date: 23 Nov 1999 00:41:17 -0500 Organization: Microware Systems Corporation References: 99-11-129 Keywords: C, code The answer to your question is: yes. Seriously, some compilers force optional arguments onto the stack, and some go ahead and pass them as if they were expected arguments (but performing the "usual argument promotions"). One can argue that the latter is preferable for the following reasons: 1. It covers up for programmers who sleaze out and forget to include prototypes--if the compiler does something special at the call site, forgetting to have a prototype visible at the call breaks things. (That's why the standard requires it.) 2. The parameter passing code is simplified, though just a little bit, because you have to deal with passing parameters on the stack anyway. 3. The "pretend everything's normal" approach, in the implementations I've seen at least, involves leaving space in the activation record to dump all the parameters potentially passed in registers into memory--the dumpage appears on the called function side. Depending on the architecture, it may be easier/more efficient to do it all at once and in just one spot. All that said... what really does all this mess is the va_start(), va_arg(), and va_end() macros; the *printf() calls just use those. Are you targeting a compiler with a broken <stdarg.h>? James Jones Guillaume Comeau wrote: > Hence the question: are parameters in ellipsis forcefully on the > operand stack, or can they be in internal registers as space allows? > (in which case I have some assembly work to do for each processor port).
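A minimal sketch of the portable pattern under discussion (not from the original thread; the function and its name are hypothetical): a variadic function retrieves its trailing arguments only through the <stdarg.h> macros, which hide whether the compiler delivered them on the stack or in registers later dumped into the callee's activation record.

/* Illustrative only: a variadic sum. The caller's three ints may travel
   on the operand stack or in registers; va_start/va_arg/va_end hide that. */
#include <stdarg.h>
#include <stdio.h>

static int sum_ints(int count, ...)
{
    va_list ap;
    int i, total = 0;

    va_start(ap, count);          /* begin just past the last named parameter */
    for (i = 0; i < count; i++)
        total += va_arg(ap, int); /* narrow types arrive promoted: char and
                                     short become int, float becomes double
                                     (the "usual argument promotions") */
    va_end(ap);
    return total;
}

int main(void)
{
    /* The prototype with "..." must be visible at this call: a compiler
       that treats variadic calls specially at the call site would emit
       the wrong calling sequence without it (reason 1 above). */
    printf("%d\n", sum_ints(3, 1, 2, 3)); /* prints 6 */
    return 0;
}

Calling sum_ints(3, 1, 2, 3) yields 6 under any ABI; only the va_* macros, supplied by the compiler's own <stdarg.h>, need to know the machine-level details--which is why a broken <stdarg.h> is the first thing to suspect.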
mlfoundations/dclm-baseline-1.0
default
0.37
Choosing Your Capture Format The previous exercise explained how to capture in DV format, which is appropriate for the vast majority of users and projects. However, when you are capturing DV video from a DV camcorder, Studio gives you two other options: preview-quality capture and MPEG full-quality capture. Preview-Quality Capture. You select this option on the Diskometer or on the Capture Format tab of the Pinnacle Studio Setup Options dialog box. Preview-quality capture relates to Studio's SmartCapture feature, which stores the DV footage in a reduced-quality format that saves disk space but retains the original DV time code information. You edit using the preview-quality video, and then Studio captures the footage at full DV quality before rendering. SmartCapture was wonderful when it was introduced because disk drives were pricey and workspace critical. Today, however, an 80-GB hard drive costs under $100. And although SmartCapture works well, it adds both time and complexity to the production process. For this reason, I won't discuss SmartCapture further; see Studio's manual or Help files for assistance. MPEG Full-Quality Capture. Capturing in MPEG format is a slightly different story with a similar ending. Capturing using the MPEG option saves file space and production time if you're producing a DVD, VideoCD (VCD), or Super VideoCD (SVCD) project with MPEG video. However, the algorithm that Studio uses to encode MPEG during capture is optimized for speed, not quality, so Studio can store the video to disk in as near to real time as possible. In contrast, when Studio outputs to MPEG format during final project rendering, say for DVD production, the algorithm is optimized for quality, not speed. Note also that when you insert effects such as transitions, titles, or color correction into captured MPEG video, Studio implements the effects and then re-renders the affected portions of the video into MPEG format. So if your edits affect substantial portions of the video, your production-time savings will be minimal. In addition, the edited sections are encoded in MPEG format twice—once during capture and once during rendering—the digital equivalent of photocopying a photocopy. So unless you're producing a disk-based project and your edits will be minimal—and production time is absolutely critical—you should capture in DV format and then render in MPEG format after editing. This approach will maximize production quality, though production time may be extended. To choose your capture format 1. On the Diskometer, click the button for the desired capture format (Figure 3.15). The light to the left of the button lights up. If you choose DV Full-Quality Capture, you're all set; there are no other options to select. If you choose MPEG Full-Quality Capture, you need to set several options before capture. (See the following section for more information.) If you choose Preview-Quality Capture, check Pinnacle's Studio 9 manual for additional help.
mlfoundations/dclm-baseline-1.0
default
0.37
Astana Process: Foreign Minister meets with heads of government and Syrian opposition Minister of Foreign Affairs of Kazakhstan Kairat Abdrakhmanov met with the head of the Syrian government delegation, Syria's Permanent Representative to the UN Bashar al-Jaafari. The Kazakh Foreign Minister noted the importance of the upcoming meeting of the Syrian government and armed opposition as an important step in the preparations for the next round of Geneva peace talks under UN auspices. Astana's efforts in alleviating the humanitarian situation in Syria and providing appropriate assistance to the Syrian people were emphasized. Kazakhstan and its President N. Nazarbayev intend to continue contributing to the international efforts aimed at strengthening the ceasefire and finding a political solution to the inter-Syrian conflict. The same day, a meeting was held with the Syrian opposition delegation, where issues of the upcoming talks in Astana were discussed. Heads of the two delegations expressed gratitude to the leadership and people of Kazakhstan for their contribution to the peaceful settlement of the armed conflict in Syria and the excellent conditions provided for the international meeting, and noted their readiness for substantive talks. Source: The Government of the Republic of Kazakhstan
mlfoundations/dclm-baseline-1.0
default
0.37
Highpower rifles, the 10-ring and Xs--Can you hit that at 1,000 yards? Gun Quotes: Rifles Finer than Minute of Angle Think your rifle shoots at distance? It doesn’t unless you know minute of angle like a Palma shooter knows minute of angle. “What you have to have is a rifle/ammo combination that you could consistently shoot into 1/2 minute of angle or less with iron sights at 1,000 yards if there were no wind. An...
mlfoundations/dclm-baseline-1.0
default
0.37
Abstract. Volume 5, Issue 11, Year 2017. Original Research Articles. Online ISSN: 2347-3215. Issues: 12 per year. Publisher: Excellent Publishers. Email: firstname.lastname@example.org 2Environmental Researches Group (ERG), Department of Chemical Sciences, Faculty of Science and Technology, University Malaysia Terengganu, 21030 Kuala Terengganu, Terengganu, Malaysia Physico-chemical features are among the important factors that contribute to the variability of the estuarine environment. This study was conducted to assess diurnal fluctuations of some physico-chemical features in relation to tides in the surface water of the Terengganu River estuary. Three stations were set along the area which linked the river to the coastal water, where salinity, temperature, dissolved oxygen, pH, TSS, chlorophyll-a, phosphate and silicate nutrients were measured at the surface water every 2 hrs over a period of 12 hrs during the spring tide of September 2012. The salinity changes were found to follow the tidal rhythm, while the daily variation of the remaining features was found to be more pronounced than the tidally induced variation. With the exception of salinity, gradual increases in all features were observed during the day tides, generally decreasing from afternoon and early morning. The effect of tidal amplitude, which is said to be important in determining the extent of variation, was more pronounced at the lower estuary. The remaining stations were observed to be strongly influenced by the river flow. How to cite this article: Selwa Seif Salum Mchenga, Suhaimi Suratman and Norhayati Mohd Tahir. 2017. Between River and Sea: What Control a Daily Variation of Physicochemical Properties of Estuary? Int. J. Curr. Res. Aca. Rev. 5(11): 30-40.
HuggingFaceFW/fineweb-edu
default
0.333
His philosophy was outlined in 1898 in the book, To-morrow: A Peaceful Path to Real Reform (republished in 1902 as Garden Cities of Tomorrow), in which he suggested that out of a marriage of town and country would spring “a new hope, a new life, a new civilisation”. This was to use a model of community governance, and to capture the uplift in land value that arose from development for the community rather than individual land owners, investing it for the long-term benefit of all. Drawing philanthropists and idealists from far afield, Howard founded Letchworth Garden City in 1903 to breathe life into this dream. The town, and the global Garden City movement it inspired, revolutionised ideas about high quality planned towns. His experiment at Letchworth Garden City proved his model worked, and Howard’s ideas were soon being shared more widely around the world.
HuggingFaceFW/fineweb-edu
default
0.333
What are the main threats? The justifications for their inclusion on the IUCN Red List are numerous: first is their very small area of occupancy (i.e. their habitat range), which is less than 10 square kilometers (approx. 3.9 square miles). In fact the axolotl can only be found in just six isolated areas within one region of Mexico. This demonstrates the level of fragmentation of their habitat and is another key reason why they are listed as endangered. Human development (i.e. dams, farming land) cuts populations off from one another and prevents easy access to resources. Another key factor behind their endangered status is the deteriorating water quality of the canals they inhabit. Again this is caused by anthropogenic activities, with the growing urbanization of Mexican cities leading to increased pollution levels resulting from surface water run-off. A new threat in the form of invasive species is adding to the pressure currently impacting the axolotl population. In particular the introduction of carp and tilapia fish species is a concern for the future of the population. These have been released into the Puebla region due to their importance in controlling aquatic weeds and insects. However both species outcompete axolotls for resources, and tilapia will also consume axolotl eggs. The latter in particular has shown huge increases in abundance in recent years, with one survey finding 600 kilograms (approx. 1,322 pounds) of tilapia within a single 100-meter net. Furthermore axolotls are also targeted by local fishermen, both for use in traditional dishes and for medicinal purposes.
HuggingFaceFW/fineweb-edu
default
0.333
Usually, when speaking of geological phenomena, one speaks of time in “aeons” or references events as “prehistoric.” But on the Big Island of Hawai’i, the vocabulary is a little different. Lava—or Pele, as it is commonly called, after the volcano goddess of the Hawaiian religious tradition—is a fact of life that actively and regularly reshapes the land. It’s not uncommon to drive down a highway and find that lava has flowed over the road and rendered it impassable, or to hear someone mention that a house or property was “taken by Pele.” On a hazy day, locals will complain of the vog (volcanic smog), and seasoned fishermen will tell you that the best place to catch fresh ahi is in the waters warmed by the lava pouring into the ocean. It’s no wonder that Hawaiians and visitors alike continue to give offerings to the unpredictable volcano goddess. She exists as an unparalleled geological spectacle and an active reminder of the power of nature over man. Spiritually minded travelers who trek out to see the lava flows will often bring Pele wishes wrapped in ti-tree leaves (the same leaves that are used to make leis) or small gifts to leave at the site. It’s hard to describe the experience of being in the presence of an active lava flow. But to witness Pele in action, you have to do a little detective work, seeing as no one can ever quite predict where the flows of magma will crop up. The more scientifically minded could do some research into fault lines and plate tectonics, but for the average traveler, the simplest way to do some lava-hunting is to simply ask around. People in the area tend to know. Lava, like many things that glow, is best seen in the dark. Because of this, many people perform their treks in the evening, but this can be challenging—and dangerous—for visitors unfamiliar with the terrain. Go with someone who knows the land, as it’s easy to get lost. Depending on where the lava is flowing, there may be organized hikes (where everyone gets to wear neon orange construction vests), boat rides, or scenic flights. If none of these are an option or you decide to do some do-it-yourself adventuring, just remember: lava may be pretty, but it’s dangerous. Don’t get too close. Know Before You Go: Check out the visitor information for guided tours, the wonderful video presentation and fun places you can hike on your own.
HuggingFaceFW/fineweb-edu
default
0.333
Bacterial small RNAs perform many regulatory roles, including acting as antitoxic components in toxin-antitoxin systems. ... the trimeric complex. Inhibition and self-assembly are both mediated entirely by the ToxIPa RNA, with no requirement for cellular factors or exogenous energy. Finally, we explain the origins of ToxI antitoxin selectivity through our crystal structure of the ToxINBt complex. Our results show how a processed RNA pseudoknot can inhibit a deleterious protein with exquisite molecular specificity, and how these self-contained and addictive RNA-protein pairs can confer diverse adaptive benefits on their bacterial hosts. ... (hereafter ToxINPa), which was originally discovered through its ability to confer bacteriophage resistance as an abortive infection system (12, 13). ToxINPa consists of a protein toxin (ToxNPa) and a small RNA antitoxin (ToxIPa), which have a kill/rescue phenotype when overexpressed in ... (hereafter ToxINBt). The transcript ... is inhibited by ToxIPa in vivo. ... cells containing independently inducible ToxNPa-FLAG and ToxIPa plasmids were grown to log phase, and the effect of ToxNPa expression and subsequent coexpression of ToxIPa on transcript levels was examined by Northern blot (... transcription over the course of the experiment. ToxIPa is a rare example of a naturally occurring small RNA which functions to counteract the activity of an enzyme. The crystal structure of ToxNPa bound to ToxIPa provided major insights into the mechanism of this antitoxic activity: three ToxIPa RNAs, which are themselves cleaved from their repetitive precursor by ToxNPa, are bound head-to-tail by three ToxNPa monomers to form a heterohexameric, triangular assembly in which the ToxNPa active site is occluded (Fig. 1 ...) were performed following overexpression of ToxNPa and the subsequent co-overexpression of ToxIPa. As shown in Fig. 1 ..., ToxNPa expression degraded the transcript, and subsequent overexpression of ToxIPa restored transcript levels. The degradation was not observed when an inactive, frameshifted ToxNPa variant (ToxNPa-FS) (12) was expressed, and RNA levels were not restored in the ToxIPa vector-only control strain. The same pattern of ToxNPa-mediated RNA degradation and ToxIPa-mediated rescue was seen with the ... and ... RNAs (Fig. S1). Overexpression of ToxNPa also produced a broad size distribution of ToxIPa products, showing that ToxIPa is processed by ToxNPa in vivo. These results confirm the ribonuclease activity of ToxNPa in vivo, directed both at general cellular targets and at its antitoxin transcript, and the ability of ToxIPa to suppress this activity. 
Many TA systems can mediate plasmid stabilization by postsegregational killing, in which the rapid degradation of the antitoxin after plasmid loss leads to the passive activation of the toxin to kill plasmid-free segregants (10). To determine whether ToxINPa and ToxINBt also have this activity, we performed long-term plasmid-loss experiments. ToxINPa completely prevented loss of plasmid pRBJ200 in W3110 over the duration of the experiment, whereas ToxINBt had no effect (Fig. 3 ... YB886 (Fig. 3 ...). Since the test vector is based on the low-copy-number pBS72 replicon (19), this stabilization activity is likely to apply to ToxINBt in its native context on plasmid pAW63 (20). This plasmid-stabilization function may represent the natural function of ToxINBt, which, unlike ToxINPa, did not have a detectable phage-resistance phenotype. The explanation for the host dependence of this activity is probably that ToxNBt is not toxic enough in ... to mediate postsegregational killing when expressed from its native promoter on a single-copy vector; ToxNBt showed lower toxicity than ToxNPa in ... (Fig. S2 ...). W3110: the percentage of cells retaining the plasmid before and 24 h after growth without selection is shown for ToxINPa, ToxINBt, and the vector-only control. YB886: the percentage of cells retaining the plasmid is plotted as a function of the number of hours of growth without selection. Both show the mean and SD for three biological replicates. ToxNPa Is Inhibited by both Processed and Precursor ToxIPa. In principle, toxin inhibition by ToxI RNA could require cleavage of the repeated elements, for example by linking the energy of cleavage with stable assembly. To test this possibility, stop-point RNA degradation assays were performed in vitro using purified ToxNPa ribonuclease with ... RNA as a substrate, and ToxIPa RNA was added either as the long repetitive precursor, which was transcribed in vitro, or as precleaved, 36-nt ... Background: IgE antibodies play a paramount role in the pathogenesis of various intestinal disorders. None of the samples was positive for the β-chain in the epithelial layer. The functionality of FcεRI was verified by human IgE binding. In experiments with human intestinal tumor cell lines, subconfluent Caco-2/TC7 and HCT-8 cells were found to express the α- and γ-chains of FcεRI and to bind IgE, whereas confluent cells were negative for γ-chains. Conclusions/Significance: Our data provide the first evidence that the components of a functional FcεRI are expressed by human intestinal epithelial cells, depending on differentiation and, more importantly, in epithelia of patients with colon cancer or gastrointestinal inflammations. Therefore a contribution of FcεRI either to immunosurveillance or to the pathophysiology of the intestinal epithelium is suggested. Introduction: Although immunoglobulins are important constituents of host defense in mucosal compartments, they have been ascribed opposing functions in various intestinal diseases. Increased levels of immunoglobulin E (IgE) have been found during parasite infection, with a putative beneficial host-defense function. In contrast, IgE plays a documented detrimental role in allergy. Significantly increased levels of IgE and anti-IgE autoantibodies might also contribute to the pathophysiology of Crohn's disease (CD). 
Interestingly, it has been suggested that food allergic reactions might be induced as a consequence of gastrointestinal inflammation. Additionally, growing evidence points towards a participation of IgE in antibody-dependent tumoricidal activities. IgE function depends on its interaction with effector cells via specific surface receptors. The high-affinity IgE receptor (FcεRI) is a multimeric cell-surface receptor which binds the Fc domain of IgE with an affinity of 10^10 M^-1. The conformational change of the IgE constant region that occurs upon binding to FcεRI was proposed to contribute to the remarkably slow dissociation rate of receptor-bound IgE. FcεRI has so far been identified on human mast cells, basophils, neutrophils, monocytes, macrophages, dendritic cells, Langerhans cells, eosinophils and platelets. While the extracellular domain of the receptor α-chain bears the IgE binding site, the β- and γ-chains are involved in signal transduction. The αβγ2 tetramer is expressed in effector cells such as mast cells and basophils, and ligand engagement leads to cell activation by a defined signaling cascade. In contrast, the αγ2 trimer participates in antigen presentation. The low-affinity IgE receptor (FcεRII/CD23) is a single-chain glycoprotein with a molecular weight of 49 kDa. In contrast to FcεRI, CD23 binds IgE with a significantly lower affinity (10^7 M^-1). CD23 was initially identified on B-lymphocytes but subsequently also detected on several other cell types, such as monocytes, macrophages, eosinophils and Langerhans cells. Interestingly, CD23 is also expressed on intestinal epithelial cells, where it is elevated in inflammatory conditions such as CD and food allergies. An IgE/CD23-dependent transepithelial shuttle mechanism controlled by interleukin (IL)-4 has been described which mediates transport of intact food antigens. Besides FcεRI and FcεRII/CD23, the IgE-binding protein (εBP, Galectin-3) also specifically interacts with IgE. Due to its wide tissue distribution and expression on numerous cell types, a multifunctional role in cell growth regulation, cell adhesion and tumor metastasis, among others, was suggested. The intestinal distribution pattern of εBP is well established, and it has been shown that it is downregulated in inflammation, whereas an elevated expression in colon cancer influences neoplastic progression. The presence of CD23 and εBP on intestinal epithelia is well documented, and functional studies have supported their biological importance. However, since no data were available concerning expression of FcεRI on enterocytes to date, we screened the intestinal mucosa of patients with gastrointestinal pathologies and controls, as well as intestinal epithelial cell lines, for FcεRI expression. Herein we report that both FcεRI α- and γ-chains are expressed by intestinal epithelial cells, while FcεRI β-chain could only be detected in ... 
Objective: Genome-wide association studies have uncovered a large number of genetic variants associated with type 2 diabetes or related phenotypes. ... of Health (August 21, 2007) and the local human research committee (Sept 14, 2011) verified that our study did not need registration. Yet in the face of continuously changing guidelines and their possibly equivocal interpretation regarding SUGAR-MGH, the study team initiated registration in late 2012 and completed registration in January 2013. The authors confirm that all related and ongoing trials for this drug/intervention are registered. Ethics Statement: The protocol is approved by the Partners Human Research Committee (Partners HealthCare, Boston, MA). Written informed consent is obtained from all study participants; the original consent form (S1 File) and the original study protocol (S1 Protocol), along with the most recent and current versions of these documents (S2 File and S2 Protocol, respectively) approved by the human research committee, are provided as supplementary documents. Visit 1 (Day 1): After an overnight fast of at least 8 hours, participants receive a single open-label oral dose of 5 mg glipizide in the CRC and remain resident in the CRC through conclusion of the 240-minute glipizide challenge. Participants with a fasting blood glucose <4.44 mmol/L are not dosed with glipizide. Furthermore, the period of observation following glipizide administration may be terminated prior to 240 minutes if a participant develops neuroglycopenic symptoms (confusion, blurred vision, slurred speech), a blood glucose ≤2.77 mmol/L with symptoms of hypoglycemia, a blood glucose <2.50 mmol/L with or without symptoms of hypoglycemia, or at the discretion of study staff based on clinical assessment. All participants are provided with food at the end of the study visit and discharged only once blood glucose is documented to be greater than 4.44 mmol/L. Five days later, an adequate "wash-out" period for glipizide, participants commence a two-day open-label course of 500 mg metformin orally twice daily. Participants who are found to have contraindications to safe metformin use at Visit 1 screening laboratories are instructed not to take the drug. Participants are permitted to take fewer than the four prescribed doses of metformin should they develop side effects consistent with metformin intolerance. Visit 2 (Day 8): After another overnight fast of at least 8 hours, participants return to the CRC, receive the fourth dose of metformin, and one hour later undergo a standard two-hour 75-g oral glucose tolerance test (OGTT). Rationale for interventions: Glipizide and metformin are generic drugs commonly used to treat type 2 diabetes. Metformin and glipizide are considered first- and second-line therapy, respectively, for people with newly diagnosed diabetes by major professional organizations [9, 13]. Metformin has further been shown to be effective in preventing incident diabetes in at-risk individuals [14, 15]. However, clinical response to both therapies is heterogeneous, and many patients with type 2 diabetes treated with either metformin or glipizide eventually require additional therapy [16, 17]. As a result, understanding and characterizing the role of genetics in the response to both drugs has direct clinical relevance. Given their different mechanisms of action, glipizide through increased insulin secretion and metformin through decreased hepatic glucose output, the study of the response to these two drugs is hypothesized to reveal distinct influences on glucose homeostasis. A 75-g OGTT is used in clinical practice for the diagnosis of diabetes, and in SUGAR-MGH tests the physiological response to oral glucose ingestion in the presence of metformin. 
Participants: Male or nonpregnant female adults naïve to glipizide and metformin are eligible for the study. Individuals at high risk of developing type 2 diabetes are preferentially enrolled by targeting for recruitment people with the metabolic syndrome, obesity, a history of gestational diabetes, a history of polycystic ovary syndrome, or a family history of type 2 diabetes; people with lifestyle-controlled type 2 diabetes are also eligible for the study. The protocol excludes people who are currently taking medications used to treat diabetes or that are known to affect glycemic parameters ... onset of diabetes.
HuggingFaceFW/fineweb-edu
default
0.333
What's the Best Name? In this missing numbers worksheet, 1st graders look over 4 boxes, write in the missing numbers, find the best name for each box and color in the best circled answer. What A Pair! A Cross Grade Writing Activity What a pair! Older pupils interview younger ones and use what they learn to write a short, illustrated storybook that features the youngster as the main character. The youngster responds with a thank-you note in which they identify their... Pre-K - 12th English Language Arts Learning Names of Articles of Clothing What to wear today; such a vexing question. Spend some time introducing the names, fabrics, types, colors, and functions of various articles of clothing to your class. Each child will take turns asking each other what they are wearing.... K - 6th English Language Arts
HuggingFaceFW/fineweb-edu
default
0.333
Bullying is a school, home and community problem that affects all kids. Bullying is a learned behavior that shows a lack of respect. The bully intends to hurt and repeats this behavior. 20% of kids say they have been a bully 50% of kids say they have been a victim 80% of kids are regular bystanders
HuggingFaceFW/fineweb-edu
default
0.333
Over the past 40 years, the number of U.S. hospitals declined by 12 percent, from more than 7,100 in 1975 to 6,200 in 2017, according to the latest American Hospital Association survey. And, yet, despite shuttering nearly 1,000 facilities, hospitals remain the nation’s largest source of health care spending, accounting for $1.1 trillion annually (or 33% of all national health care expenditures). Much of that money goes to the 7.5 million people employed by hospitals who, together, make up nearly half the nation’s health care workforce. These statistics tell the story of an industry struggling to right-size its workforce, bring down costs and, ultimately, achieve more with less. If there’s a silver lining, it’s that we’ve seen this story play out before in another industry. To the benefit of the entire country, U.S. agriculture overcame many of the same struggles that hospitals face today. Modern agriculture: Why a peach doesn’t cost $5 In 1870, half of the population worked in agriculture. Today, farming makes up less than 2% of the American workforce. That’s because, starting several decades ago, most family farmers realized they’d need to either “get big or go bust.” Between 1950 and 1970, the number of farms declined by half, the population of farmers went from over 20 million to under 10 million, and the average farm ballooned from 205 acres to almost 400. Many rural communities called it a “crisis,” lamenting the loss of the American farming tradition. Modern society has a different word for it: “productivity.” At the turn of the 20th century, the average farmer produced five major commodities (e.g., chickens, corn, milk, pigs, etc.) but fed just 26 people (even in the 1960s). By the end of the 20th century, the average farmer produced fewer than two commodities but took on considerably more responsibility and today feeds 155 Americans, more than five times as many. As farmers specialized and focused on achieving economies of scale, consumers benefited with a plentiful supply of affordable foods. This is the crux of productivity: greater output with fewer people. We have modern agricultural practices to thank for the $0.99 peach and the $3 carton of cage-free eggs. And were it not for the farming-productivity boom of the previous century, our nation’s produce, dairy, and meat would be priced out of reach for many shoppers while leaving even more parents struggling to feed their families. Productivity is key to industry revitalization, and it’s the only way we’ll once again make health care affordable. Modernizing the U.S. hospital system Nowadays, even the slightest whiff of a hospital closure sends politicians, business leaders, and labor unions running to defend their local facility. Naturally, they fear the loss of jobs and access to nearby medical care—not to mention the loss of prestige and status many communities associate with their local hospital. And in rural communities, where more than 110 hospitals have closed since 2010, these fears are especially relevant today. Nevertheless, a difficult truth persists in towns of all sizes: Improving the quality of hospital care while reducing costs will require major changes, including the closure of many low-volume hospitals. Here are five improvements needed to modernize the American hospital landscape: Hospital megamergers have become one of the decade’s top health care trends. But whereas most businesses use M&A to streamline operations and lower prices, today’s hospitals consolidate to gain market control and raise prices. 
In the future, hospitals will need to merge and consolidate services, but they’ll need to do so for the right reasons—namely, to bolster efficiencies and improve clinical outcomes. To showcase the need for change, consider this fact: Along a 50-mile stretch of highway between San Jose and San Francisco, there are 12 hospitals that offer heart surgery. Three of these facilities perform fewer than 300 cases a year. This means that for at least 65 days out of the year, heart-surgery teams at these facilities are available and getting paid, but are not performing surgery. When it comes to its hospitals, the United States has a “volume problem.” For example, of the nation’s 6,200 community hospitals, 1,350 of them are classified as “Critical Access Hospitals.” These facilities house fewer than 25 beds, and nearly all are located in rural areas. Like many of the nation’s heart surgery programs, low-volume facilities are unable to deliver high-quality and efficient medical care. Although the solution to this problem wouldn’t be popular, the logical and necessary thing to do is this: Consolidate and close 20% to 30% of all U.S. hospitals. Doing so would boost economies of scale, improve the quality of clinical care, and eliminate hospital-bed and medical-device redundancy. Here’s an obvious fact: The more you do something, the better you become at it. Whether you’re playing the piano or providing medical care, specialization improves performance. Here’s another obvious fact: Sub-specialization leads to even better outcomes. Surgical teams that do the same operation throughout the day consistently outperform those who are responsible for a wide range of procedures. And the narrower the set of procedures, the greater the individual expertise, and the more likely physicians are to increase the overall efficiency of health care delivery. In 2015, The Permanente Medical Group (TPMG) looked at its total-joint surgery volume and came to a difficult conclusion. The group had too many orthopedic surgeons and not enough surgical cases. Nationwide, orthopedic associations frequently update their volume recommendations, detailing the bare minimum number of procedures an orthopedist should perform annually to be competent. Rather than applying these minimal standards, clinical leaders at TPMG required that its orthopedic surgeons perform a sufficient number of cases so as to achieve superior outcomes. The quality of surgical outcomes improved significantly in the years that followed. What’s more, 60% of orthopedic patients went home the same day (compared to the previous inpatient average of three days), thus dramatically lowering the risk of hospital-acquired infections (the No. 1 killer of inpatients today). And by applying optimal (not minimal) volume standards to other specialties, the medical group observed similar improvements in outcomes tied to complex cancer resections and minimally invasive laparoscopic procedures. In hospitals, productivity is typically measured by the length of a patient’s stay or how long it takes to complete a procedure using an expensive machine (a scanner, a robot, etc.). A better way to measure productivity (and to understand the opportunity for improvement) would be to calculate the size of the population served by a single surgeon or machine. This measure would tell us how many physicians and specialists are needed in any given geography. And when applied nationally, this measure would no doubt indicate that we have far more specialists than necessary. 
Eliminating jobs is always a touchy subject, but eliminating hospital jobs is especially sensitive. These are high-paying positions, staffed by respected members of the community. That’s why mayors, city councils, and hospital boards fight vigorously to keep their facilities open, even when there are better alternatives. Closing hospitals can have a particularly negative impact on small-town economies. However, imagine the implications if there were 10 million more farmers today producing the same volume of food. We’d be a nation of $5 peaches and $15 cartons of eggs. Naturally, one of the biggest safety concerns when closing inpatient facilities in smaller towns is getting patients the emergency care they need in a timely manner. One solution would be to shutter the inpatient areas of underperforming hospitals while maintaining 24-hour emergency services. Although this change would require amendments to current laws, it would be safer to stabilize rural patients at the local ER and then transport them to larger, high-volume facilities that are staffed by physicians with greater expertise and more experience. But getting patients from one location to another is expensive. Today’s ambulance and air-transport services are extremely overpriced, with helicopter rides to the hospital totaling as much as $25,000 for people without private insurance. As it is throughout the health care system, the pricing strategy for emergency transportation doesn’t reflect the actual cost of services rendered. It reflects a dire lack of competition and regulation. The next step toward making hospital care efficient requires a more efficient transportation system, something the federal government will need to establish and help fund. This would be the equivalent of the government’s role in road construction and utility provision. Completing the modernization of the American hospital system will require doctors to use modern technology to better coordinate patient care. Imagine this: A 40-year-old man has a potentially life-threatening condition like sepsis. Given the complexity of the infection, he needs to be transported from his small local hospital to a large multispecialty facility where doctors have the expertise and technology to treat him effectively. In the current hospital system, this patient will most likely arrive at the receiving hospital’s emergency room (ER), where a second set of doctors will evaluate him and order duplicate laboratory tests, thus driving up costs and delaying definitive care. It doesn’t have to be this way. By connecting doctors at both sites through a common electronic health record and video technology, the physicians can decide together on the best course of treatment—without all the usual ER delays and redundant tests. Physicians at the Mid-Atlantic Permanente Medical Group (MAPMG) in the states of Virginia, Maryland, and in Washington, D.C., have used this approach successfully for the past five years. As one example, imagine doctors working at one of Kaiser Permanente’s 24-hour urgent care centers diagnosing a patient with a heart attack. These physicians, trained in emergency medicine, can quickly connect via video with cardiac specialists in a nearby hospital and arrange for the patient to go directly to the cath lab. This results in timelier care than if the patient had gone to the hospital’s ER. To fix hospitals, follow the money Long ago, a news reporter asked the notorious bank robber Willie Sutton why he robbed banks. 
Sutton replied, “Because that’s where the money is.” If we want to make medical care more affordable for more people, we’ll need to fix our overpriced hospital system. That begins by closing inpatient facilities with low volumes, eliminating redundant high-end services, and finding ways to provide higher quality care with fewer physicians, nurses, and staff. Just like the farms of the 20th century, today’s hospitals have to change, whether communities like it or not. The only question is who’s going to drive the change? Already, signs of disruption are emerging from outside the hospital industry. Major employers like Walmart are directing patients to medical centers of excellence. One company in Wisconsin has begun to offer its employees $5,000 to receive total joint replacements in Mexico, just to avoid the higher cost (and lower quality) of getting the procedure done locally. In a “people-intensive” industry like medicine, the cost of care can’t be more affordable until the cost of labor comes down. To make the U.S. hospital system as efficient as U.S. agriculture, we need to close low-volume facilities, streamline clinical services, and retain fewer people. Although these actions will prove painful in the short-term, they’re essential for reducing costs and improving the quality of American health care. Robert Pearl is a physician and CEO, Permanente Medical Groups. He is the author of Mistreated: Why We Think We’re Getting Good Health Care–And Why We’re Usually Wrong and can be reached on Twitter @RobertPearlMD. This article originally appeared in Forbes. Image credit: Shutterstock.com
HuggingFaceFW/fineweb-edu
default
0.333
Below is part of the sample researched debate essay we looked at in its entirety in class on March 14th. Please note: Janae Svoboda Professor Westman December 10, 2001 12:30 ENGL200 Research Debate Essay Is Popular Culture Beneficial to the Development of Society? This question is at the heart of a lunch-hour debate between two middle school teachers. As they sit in the teachers' lounge over their lunch breaks, Kim, the more open-minded and optimistic teacher, and Karen, the more conservative teacher, find that their opinions about the benefit of popular culture are quite different. Kim sees the positive, beneficial side of popular culture, reasoning that popular culture encourages the arts, communicates the sign of the times, and allows many artists to go forth in a purposeful way to use their fame as an advantage to communicate a good and valuable message. Karen, on the other hand, does not view popular culture as beneficial to society because the entertainment industry is out of touch with the public, dangerous media images influence real-life behavior, and some artists do not set forth to be model members of society and don't care how they impact society. The two teachers are quite knowledgeable and well-informed about their points of view on this issue. Kim: Popular culture is beneficial to society because it encourages the arts. Popular culture is an art form itself. Barbra Streisand put it quite well when she suggested that art exists to enhance the "constant search for the truth," as well as "to entertain." Streisand asserts: "to deny artists, or any of us, free expression and free thought ... is to weaken the very foundation of our democracy" (493). Streisand herself is a vocal artist who believes that "arts programs" "bring culture, education, and joy into the lives of ordinary Americans" (494). Arts programs in schools and communities were not just thought up one day by a room full of suits. Arts programs are deeply rooted in the audio and visual forms of our culture; in other words, they are rooted in popular culture. So of course popular culture encourages the arts, which are clearly a valuable entity. If the arts were not valuable, schools would not fund art programs. The arts are characterized by creativity and open-mindedness, qualities that inadvertently entertain. Without the arts, there would be less unity because art brings people together. People can identify with the arts and find commonality. If not for the influence of popular culture, there would be no call for the arts. As a result, society as we know it would be gravely damaged. Karen: The arts would still exist if not for the popular culture of today. Perhaps then the focus could be placed upon true art forms, such as famous paintings by real artists. Most people appreciate the art of van Gogh and the early days of film when choreography enhanced the movie set, but what positive impact can possibly be derived from scantily clad vocalists who have computerized voices and use obscene dance moves? Popular culture is not beneficial to the development of society because the entertainment industry is out of touch with the public. Michael Medved discusses the "joke" that violence has become in entertainment today, despite the fact that "most of us deplore violence" (490), showing proof that the entertainment industry is not in touch with the public. The entertainment industry should dedicate itself to being more public friendly. No one wants to watch a program or movie that has a high percentage of violent content. 
If the general public does not condone violence, what warrants the overbearing presence of violence in popular entertainment? There is obviously a mismatched link between what the public receives and what the public desires for entertainment. Another example brought out by Medved is the music video, "Black or White," by pop artist Michael Jackson, whose sexual and violent nature led to controversy (490-493). Medved goes on to question how those in charge could have been "surprised" by the "public's outraged response" (492). A very relevant issue is brought to the surface. The video clearly contained sex and violence that the public had no desire to witness. The public's voice was heard in this case, and the video was edited, but that was then (490-493). Today, such content is so commonplace that we don't even blink an eye at the violent and sexual content and innuendos that popular culture offers as entertainment. The public does not necessarily like or accept what they see and hear, but they merely have no choice but to ignore the inappropriate scenes and references in order to suck some enjoyment out of today's sorry excuse for entertainment. My personal experience has illustrated this public disapproval of entertainment. My 19-year-old cousin saw the movie American Pie II and was very disgusted by the thick sexual focus. I remember her telling me once that she regretted paying money to see such a worthless film and the ideas behind it. My cousin's reaction to American Pie II is just one more example of how out-of-touch popular culture is with the ideals of the public, and even the young public that the movie was catered to in this case. Kim: It is obvious the "public" previously discussed all come from a very conservative sector of society. I know people who enjoyed the originality of the music video "Black or White" and the humor of the movie American Pie II, so the evaluation of overall public disapproval is an unwarranted generalization. The benefits of popular culture truly do outweigh the drawbacks. A second benefit of popular culture is the way it communicates the sign of the times... [Kim then completes her second speech, and Karen responds with her second speech. They then have one more exchange, arguing for their third reasons. After Karen's third speech, we reach the concluding paragraph below.] There are conclusively two very distinct sides of the debate over whether or not popular culture is beneficial to the development of society. Kim, an open-minded middle school teacher with enthusiasm about the benefits of popular culture, explains that it encourages the arts, it communicates the sign of the times, and many artists go forth in a purposeful way, using their fame as an advantage to communicate a good and open-minded message. Her fellow middle school teacher, Karen, who is quite conservative, refutes Kim's viewpoint, claiming that the absence of popular culture's benefits is too great because the entertainment industry is out of touch with the public, media images influence real-life behavior, and some artists do not set forth to be model members of society and don't care how they impact society. One o'clock approaches and the noon hour comes to an end, as does the discussion between the two middle school teachers who remain on opposite sides of the fence of popular culture's impact on society.
mlfoundations/dclm-baseline-1.0
default
0.37
For the past couple of weeks there has been a lot of talk about planking. Famous names from LeToya Luckett, Dwight Howard and Rosario Dawson to Evelyn Lozada, Amber Rose, and Flava Flav have been captured in the act, in which you lie face down on an unusual location and balance yourself in a completely flat, zero-degree angle with your palms to your thighs, feet pointed to the floor and face faced down. (Related: Planking photo gallery) Though it’s gained a lot of popularity, many are complaining that planking is very offensive. Why? Because some claim that it has ties to slavery. They say that it is too similar to the position in which African slaves had to lie on wooden boards for months during the 16th century “Middle Passage.” The trick was allegedly started by two Australian men 14 years ago. Gary Clarkson and Christian Langdon performed in public places. With the amusement of onlookers catching on, in 2007 a Facebook group was created and, years later, planking has become a phenomenon. Needless to say, the amateur stunt, popularized by social media sites and mainly on Twitter, has become an Internet craze, but should people continue to plank if it really does have ties to slavery? According to USHistory.org, planks were used as beds for slaves. Placed two-by-two, men and women were forced beneath the deck of slave ships. The more slaves a boat could hold, the greater a profit the captain would make. When the planks were lifted, it provided holding collars for five slaves, and then the plank was chained down. “The captives lay down on unfinished planking with virtually no room to move or breathe. Elbows and wrists will be scraped to the bone by the motion of the rough seas,” the history website reads. Even if some don’t want to go as far as comparing planking to slavery, there’s no denying that some pictures are atrocious. In one picture, a girl is planking with her head in a toilet bowl with her body outstretched. In another instance, rapper Diamond is planking with her body at a ninety-degree angle to the floor on a stripper-style pole. And if that isn’t bad enough, a 20-year-old’s death was the result of trying to take a picture while planking off a balcony. Fads are fads, and it is understood that they will pass over time, but for the sake of respecting history, our pride and, in some cases, our lives, the fad expiration date for planking may need to come sooner rather than later. To share story ideas with BET.com national reporter Danielle Wright, tweet her on Twitter at @DaniWrightTV (Photo: Chaiwat Subprasom/Landov)
Back to school in Canada

For most young parents, the approach of a new school year means getting the family prepared and equipped, and that comes at a cost: on average, Canadians spend $428 per child to get them ready for school. And while the annual cost of sending your children to school is high, some much larger costs are coming down the road if your child plans on attending college or university.

After meeting their teachers, greeting their friends and choosing their desks, at the end of the school day almost 150 students were told they were no longer welcome at their school. Many of the children had attended the school for years, along with their siblings.

New Power episodes are no longer playing on Starz, you can't turn on the TV or radio without some type of "bogo" sale going on, and at least one person is having their final all-white party. If this were Twitter, that sentence would have ended with #SummerIsOver #Back2SchoolSeasonIsHere #TeachersStopCrying.
Abbreviated as SYR by abbreviationfinder.org, Syria is among the less resource-rich Arab states, with limited deposits of oil and other minerals. Agriculture therefore plays a relatively greater role than in most countries in the Middle East and employs about one-third of the working population. Trade has a central place in Syrian culture and the economy, which in recent times has been strongly shaped by the country's position as an Arab frontline state in the conflict with Israel, and by its connection to the Gulf region through Iraq.

As a participant in several wars, and with part of its territory (the Golan) annexed by Israel, Syria has for decades spent significant portions of its budgets on military purposes. In the wars of 1967 and 1973 in particular, Syria suffered significant economic losses, including the destruction of infrastructure. In pursuit of its regional ambitions, Syria also spent considerable funds on its military involvement in Lebanon, until 2005. Indirectly, the political conflict with Iraq, during both the first Gulf War (1980-88) and the second (1990-91), also cost Syria dearly in the form of lost transit revenues from the oil pipeline to shipping ports on the Mediterranean. Under the UN program for Iraq, some trade with Iraq, and through Syria, was nevertheless carried out. The same applies to the situation following the 2003 US invasion of Iraq and the unrest that has prevented a full normalization of economic activity, which would naturally include oil exports from northern Iraq via Syria. Since 2003, Syria has also borne a financial strain by receiving over 1.5 million refugees from Iraq. The large number of refugees has, among other things, pushed up property and rental prices, increasing pressure on the economy and sharpening competition for jobs. Rising unemployment is a major political and social challenge that demands stronger economic growth.

Syria received considerable financial and military assistance from the Soviet Union and suffered from its dissolution, both through reduced aid and through reduced trade. During the second Gulf War, Syria joined the allied forces and, in return, received financial assistance from the United States. Syria has traditionally also received significant assistance from other Arab countries (as compensation for Syrian costs in the pan-Arab confrontation with Israel), though this support has varied with political as well as economic cycles. Partly to compensate for the loss of Soviet support, Syria, where the state has controlled significant parts of the business sector, launched economic reforms in the 1990s intended, among other things, to encourage foreign investment. Around 2000, reforms were implemented to promote local private business, including easier opportunities for private investment and ownership, the legalization of private banking (2001) and the establishment of a stock exchange (2007). In 1997, Syria began negotiations with the EU with a view to an association agreement, and in 2008 the country joined the newly established Mediterranean Union, initiated by France.

Syria's oil exports are an important factor in the country's economy, as its main export commodity and currency earner. With limited oil reserves and rising consumption, Syria is expected to become a net importer of oil by 2012, and the lost revenue will have to be compensated for.
Energy is also a critical factor in the development of the economy, and natural gas is gradually taking on a more important role in power generation. Syria also produces hydropower, which depends on regional cooperation over shared water resources. This is ensured through agreements with Turkey, which controls the upper reaches of the Euphrates, and with Jordan regarding the use of water from the Yarmouk River, both for irrigation and for power generation. In 1978, the large Buhayrat al Asad dam (also called the Tabaqah Dam) opened on the Euphrates; it increased the power supply and enabled a significant expansion of agricultural land in the area.

Next to oil, the leading export products are cotton and phosphate. All three are exposed to large price fluctuations in the international market, which makes export earnings uncertain. Syria also has a number of historical monuments, and tourism is a potential growth industry; the sector experienced an upswing in the 1990s but was subsequently weakened by political unrest in the region.

Syria has significant deposits of oil and gas and is the largest energy producer in the Levant. The country also has other mineral deposits, several of which are extracted, mainly for domestic consumption. Among other things, bitumen, basalt, gypsum, salt, phosphate and limestone are extracted; iron ore and other minerals have also been found.

Oil overtook cotton as the country's most important export item in 1974 and remained the main source of income into the 2000s, although declining production and increased domestic consumption have reduced the surplus available for export. The deposits were first found at the Karatchuk field in the northeast in 1959, then at the Suwayda and Rumelan fields, and extraction started at Suwayda in 1968; four years earlier, all oil operations had been nationalized. Recovery was nevertheless not sufficient to prevent Syria from becoming a net oil importer in the 1980s, until large new deposits were found at Deir ez-Zour and production rose to a peak of 590,000 barrels per day in 1996. Since then production has declined, falling below 400,000 barrels per day in 2007, not least as a result of lower recovery at the Jebisseh, al-Thayyamm and Omar fields, and it is expected to decline further; oil exports in 2007 were 184,000 barrels per day. Proven reserves stood at approximately 2,500 million barrels as of 2008.

Syria has gas reserves of about 8.5 trillion cubic feet, mainly in the northeast, with Palmyra, al-Furat and Suwayda as the largest fields. Gas production rose strongly in the 1990s, and the ambition is to use the gas domestically, both to meet growing energy needs and to free up as much oil as possible for export. At the same time there is a need for gas imports: supply agreements have been signed with Egypt, there are plans to expand extraction from Syria's own deposits, and an agreement has been signed on importing gas from Iran. Plans for gas exports to Lebanon were set back by political turmoil in 2005 and Israel's invasion in 2006.

Oil from the Karatchuk deposits is piped to the port of Tartus. Syria has also earned revenue from oil pipelines crossing the country, even though several have been completely or partially closed down: the Tapline from Saudi Arabia and the Kirkuk lines from Iraq. In the 2000s, several new pipelines involving Syria were planned, including for gas exports from Egypt to Turkey, from Iraq to Jordan and from Syria to Lebanon.
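The expectation that Syria will become a net oil importer around 2012 follows from simple arithmetic on the figures above: production of roughly 400,000 barrels per day in 2007, exports of 184,000 barrels per day (implying domestic consumption of about 216,000 barrels per day), declining output and rising demand. The sketch below is a hypothetical back-of-the-envelope projection, not sourced data: the starting figures come from the text, while the annual decline and growth rates are illustrative assumptions chosen only to show how a crossover near the cited year could arise.

```python
# Back-of-the-envelope projection of Syria's oil balance.
# Starting figures are taken from the text above; the yearly
# rates are illustrative assumptions, not sourced data.

PRODUCTION_2007 = 400_000              # barrels per day (bpd)
CONSUMPTION_2007 = 400_000 - 184_000   # bpd, implied by 2007 exports
PRODUCTION_DECLINE = 0.09              # assumed 9% decline per year
CONSUMPTION_GROWTH = 0.04              # assumed 4% growth per year

production, consumption = float(PRODUCTION_2007), float(CONSUMPTION_2007)
for year in range(2007, 2016):
    net = production - consumption     # positive = exportable surplus
    print(f"{year}: production {production:,.0f} bpd, "
          f"consumption {consumption:,.0f} bpd, net {net:+,.0f} bpd")
    if net < 0:
        print(f"Under these assumptions, Syria turns net importer in {year}.")
        break
    production *= 1 - PRODUCTION_DECLINE
    consumption *= 1 + CONSUMPTION_GROWTH
```

Under these rough assumptions the exportable surplus disappears around 2012; with a gentler decline rate the crossover simply shifts a year or two later.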
The conflict with Iraq has caused huge revenue losses, as Iraq twice shut down the oil pipeline that runs through Syria to the Mediterranean coast, on which Syria earned transit fees. During the UN sanctions against Iraq, oil was exported via Syria. Iran has supplied Syria with oil at a subsidized price since the early 1980s. The Baniyas oil pipeline (about 50 km north of Tartus) went out of service in 1982. There is also a domestic pipeline network for gas distribution.

In 2005, Syria had an installed electricity generation capacity of approximately 6.5 GW, of which about 1.5 GW was hydropower, mainly from the Euphrates and Yarmouk watercourses. The largest share of production comes from oil-fired power stations. Increased power generation is considered a prerequisite for further economic growth, but water is scarce in the region, leaving limited scope for more hydroelectric production. The focus is therefore mainly on gas, along with the development of renewable energy sources, especially wind.

The use of water resources is also a political issue: both the Euphrates and the Yarmouk must be managed in consultation with other countries in the area. In the 1970s there was conflict with both Turkey and Iraq over the use of Euphrates water; with Iraq when Syria built the Asad Dam, and with Turkey when the Turks built the Atatürk Dam. Agreements were later reached with Turkey and with Jordan, the latter concerning water from the Yarmouk. In 2003, a new water reservoir was constructed at Latakia.

Agriculture and fishing

Although a large part of Syria consists of desert and semi-desert, agriculture is an important industry, both for employment and for value creation. The sector employs almost 40% of the working population and contributes about a quarter of GNI. Of the total land area, 30% is used for fields and 45% is steppe or pasture. The most important agricultural areas lie along the Mediterranean coast, along the Nahr al-Asi (Orontes) and in the northeast, where the Euphrates and its tributaries are used for irrigation. Further east, in the extension of the Lebanese mountains, the valley along the Nahr al-Asi is cultivated as Syria's most fertile area.

Cotton is the most important agricultural product, grown not least in the Jezireh area with irrigation from the Euphrates; over 80% of production is exported. Cotton was Syria's most important export item until 1974, but production volume and value have declined since the mid-1970s, when oil took over as the main export commodity. Wheat, barley, sugar beets, maize, olives, tobacco, lentils, grapes, watermelons, citrus fruits and tomatoes are also widely grown, with cereals produced mainly in the central parts of the country.

The majority of agricultural crops are produced for domestic consumption; wheat is exported in good years, and a growing share of fruit and vegetable production is exported. Irrigation is not widespread, and crops vary greatly from year to year. Animal husbandry is important, and there is a large production of dairy products. Much of it is still run by nomadic peoples using traditional methods. Cattle and camels are kept, along with large numbers of sheep and chickens.
Fishing takes place in the Mediterranean and in the rivers, with the catch going to domestic consumption.

Industry

Syrian industry was built up quickly after independence and was dominated by textile production, not least with Lebanon as the market. In the 1970s, development continued with Soviet assistance, and the Soviet market remained significant until 1991. The sector includes textile factories and producers of consumer goods for the domestic market, including electrical goods, as well as an iron and steel industry at Hama, cement production and fertilizer production based on the country's phosphate deposits. Syria has two state-owned oil refineries, at Baniyas and Homs, with a total capacity of about 240,000 barrels per day; plans to build three new refineries to increase capacity in 2010-2011 have been announced, and the country has five gas processing plants. Industry overtook agriculture as the country's most important source of value creation in 1971, contributing close to one-third of GNI. The sector comprises both state-owned heavy industry and private small-scale enterprises.

Syria has a large number of historical monuments from several eras, including Roman ruins, crusader castles and mosques from the Ottoman period. Tourism has accordingly become an important industry, and much is being invested in developing it. Most tourists come from the region, but the number of visitors from the West is increasing.

Foreign trade

Syria normally runs a trade deficit, which is partly covered by fees from transit oil pipelines and by remittances from Syrians abroad, and which the authorities have sought to reduce through strict import restrictions. Petroleum and petroleum products are the most important export goods, followed by cotton and textiles, as well as fruit and vegetables. The Soviet Union was an important trading partner until 1991; since then most trade has been with Europe (especially France, Italy and Germany), with the neighboring countries Lebanon, Turkey and Iraq, and, further afield, with China and Korea.

Transport and communications

Syria has a relatively well-developed road network of approximately 47,000 km and a railway network of approximately 2,460 km. There are rail links in several parts of the country, as well as to Turkey and Jordan (the Hejaz line from Damascus to Amman); there was formerly also a link to Lebanon.

Note: the capital of Syria is Damascus, with a population of 2.6 million (2010 estimate).
Other major cities include Aleppo with a population of 1.7 million, Homs with 890,000, Hama with 546,000 and Latakia with 371,000 (2010 estimates). The main ports are Latakia, Baniyas and Tartus, which have had excess capacity in the absence of transit trade, especially from Iraq. Damascus, Aleppo and Latakia have international airports.
The VPS is an extra saddle piece equipped at the factory with a short bayonet socket. This means that the valve can be screwed directly into the saddle piece, ensuring a good finish between the saddle piece and the valve. The VPS can be used with the following valves: KI, KIR, KU and KSU.

Related documents: No program is available for download for this product.
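The compatibility list above lends itself to a trivial lookup. The following sketch is purely hypothetical: the valve model codes come from the product description, but the data structure and function name are invented for illustration and are not part of any manufacturer's software.

```python
# Hypothetical compatibility check for the VPS saddle piece.
# The model codes (KI, KIR, KU, KSU) come from the product text;
# everything else here is an illustrative assumption.

VPS_COMPATIBLE_VALVES = {"KI", "KIR", "KU", "KSU"}

def fits_vps(valve_model: str) -> bool:
    """Return True if the valve model screws directly into the VPS socket."""
    return valve_model.strip().upper() in VPS_COMPATIBLE_VALVES

for model in ("KI", "KSU", "KX"):
    print(f"{model}: {'compatible' if fits_vps(model) else 'not listed'}")
```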