| text (string, 16–3.64k characters) | page_title (string, 2–111 characters) | source (1 class: wikipedia) |
|---|---|---|
the passivhaus - institut ( phi ) is responsible for promoting and maintaining the passivhaus building program. the " passivhaus institute " was founded in 1996, and is based and active in darmstadt, germany. the english spelling was used for the passive house institute us ( phius ) when it formed in 2007 originally under the umbrella of the passivhaus institute. the two separated in 2012. though phi and phius sustainable design standards are different, they both share common goals for drastic energy conservation and carbon reduction through sustainable architecture design techniques and specifications to create low - energy houses and other structures with low energy building practices for the public benefit worldwide. = = see also = = list of low - energy building techniques history of passive solar building design energy - efficient landscaping = = references = =
|
Passivhaus-Institut
|
wikipedia
|
barbados was estimated at 30, 000, of which about 800 were of african descent, with the remainder mainly of english descent. by 1700, there were 15, 000 free whites and 50, 000 enslaved africans. in jamaica, although the african slave population in the 1670s and 1680s never exceeded 10, 000, by 1800 it had increased to over 300, 000. the increased implementation of slave codes or black codes, which created differential treatment between africans and the white workers and ruling planter class. in response to these codes, several slave rebellions were attempted or planned during this time, but none succeeded. the planters of the dutch colony of suriname relied heavily on african slaves to cultivate, harvest and process the commodity crops of coffee, cocoa, sugar cane and cotton plantations. the netherlands abolished slavery in suriname in 1863. many slaves escaped the plantations. with the help of the native south americans living in the adjoining rain forests, these runaway slaves established a new and unique culture in the interior that was highly successful in its own right. they were known collectively in english as maroons, in french as neg'marrons ( literally meaning " brown negroes ", that is " pale - skinned negroes " ), and in dutch as marrons. the maroons gradually developed several independent tribes through a process of ethnogenesis, as they were made up of slaves from different african ethnicities. these tribes include the saramaka, paramaka, ndyuka or aukan, kwinti, aluku or boni, and matawai. the maroons often raided plantations to recruit new members from the slaves and capture women, as well as to acquire weapons, food and supplies. they sometimes killed planters and their families in the raids. the colonists also mounted armed campaigns against the maroons, who generally escaped through the rain forest, which they knew much better than did the colonists. to end hostilities, in the 18th century the european colonial authorities signed several peace treaties with different tribes. they granted the maroons sovereign status and trade rights in their inland territories, giving them autonomy. = = = = brazil = = = = slavery in brazil began long before the first portuguese settlement was established in 1532, as members of one tribe would enslave captured members of another. later, portuguese colonists were heavily dependent on indigenous labour during the initial phases of settlement to maintain the subsistence economy, and natives were often captured by expeditions called bandeiras. the importation of african slaves began midway through the 16th century, but
|
Slavery
|
wikipedia
|
elicit two different categories of responses : an excitatory response, normally in the form of an action potential, and an inhibitory response. when a neuron is stimulated by an excitatory impulse, neuronal dendrites are bound by neurotransmitters which cause the cell to become permeable to a specific type of ion ; the type of neurotransmitter determines to which ion the neurotransmitter will become permeable. in excitatory postsynaptic potentials, an excitatory response is generated. this is caused by an excitatory neurotransmitter, normally glutamate binding to a neuron's dendrites, causing an influx of sodium ions through channels located near the binding site. this change in membrane permeability in the dendrites is known as a local graded potential and causes the membrane voltage to change from a negative resting potential to a more positive voltage, a process known as depolarization. the opening of sodium channels allows nearby sodium channels to open, allowing the change in permeability to spread from the dendrites to the cell body. if a graded potential is strong enough, or if several graded potentials occur in a fast enough frequency, the depolarization is able to spread across the cell body to the axon hillock. from the axon hillock, an action potential can be generated and propagated down the neuron's axon, causing sodium ion channels in the axon to open as the impulse travels. once the signal begins to travel down the axon, the membrane potential has already passed threshold, which means that it cannot be stopped. this phenomenon is known as an all - or - nothing response. groups of sodium channels opened by the change in membrane potential strengthen the signal as it travels away from the axon hillock, allowing it to move the length of the axon. as the depolarization reaches the end of the axon, or the axon terminal, the end of the neuron becomes permeable to calcium ions, which enters the cell via calcium ion channels. calcium causes the release of neurotransmitters stored in synaptic vesicles, which enter the synapse between two neurons known as the presynaptic and postsynaptic neurons ; if the signal from the presynaptic neuron is excitatory, it will cause the release of an excitatory neurotrans
|
Stimulus (physiology)
|
wikipedia
|
dnaa – the protein produced by the dnaa gene ; leua− – the phenotype of a leua mutant ; ampr – the ampicillin - resistance phenotype of the β - lactamase gene bla ). = = = bacterial protein name nomenclature = = = protein names are generally the same as the gene names, but the protein names are not italicized, and the first letter is upper - case. e. g. the name of rna polymerase is rpob, and this protein is encoded by rpob gene. = = vertebrate gene and protein symbol conventions = = the research communities of vertebrate model organisms have adopted guidelines whereby genes in these species are given, whenever possible, the same names as their human orthologs. the use of prefixes on gene symbols to indicate species ( e. g., " z " for zebrafish ) is discouraged. the recommended formatting of printed gene and protein symbols varies between species. = = = symbol and name = = = vertebrate genes and proteins have names ( typically strings of words ) and symbols, which are short identifiers ( typically 3 to 8 characters ). for example, the gene cytotoxic t - lymphocyte - associated protein 4 has the hgnc symbol ctla4. these symbols are usually, but not always, coined by contraction or acronymic abbreviation of the name. they are pseudo - acronyms, however, in the sense that they are complete identifiers by themselves — short names, essentially. they are synonymous with ( rather than standing for ) the gene / protein name ( or any of its aliases ), regardless of whether the initial letters " match ". for example, the symbol for the gene v - akt murine thymoma viral oncogene homolog 1, which is akt1, cannot be said to be an acronym for the name, and neither can any of its various synonyms, which include akt, pkb, prkba, and rac. thus, the relationship of a gene symbol to the gene name is functionally the relationship of a nickname to a formal name ( both are complete identifiers ) — it is not the relationship of an acronym to its expansion. in this sense they are similar to the symbols for units of measurement in the si system ( such as km for the kilometre ), in that they can be viewed as true logograms rather than just abbreviations
|
Gene nomenclature
|
wikipedia
|
acquisition companies, though not to the wide extent that the seg - y format has been modified by seismic acquisition and processing companies and oil companies using their own inhouse software. as magnetic tape technology developed, the original seg - y format using individual small data blocks for each distinct seismic trace became very inefficient in terms of tape performance so the first and subsequent revisions allowed for larger tape data blocks containing many individual traces. there have been many suggestions for including different kinds of metadata within the standard over the years, since when the standard was first proposed the processes of acquisition and processing were technologically much simpler. for example geographical positioning information either real world or relative wasn't stored in the trace header at acquisition or final processing whereas it is routine today. however, the relative simplicity of the seg - y format has meant that it has worked well for interchange of seismic data allowing anyone to read and understand data recorded in the 1970s on half inch magnetic tape as easily as reading it on modern tape media or from a disk file, as long as the standard has been adhered to. while seg - y format data is still stored on magnetic tape as a permanent archive, seg - y data is increasingly stored as disk files for online and near - line ease of access, and later format revisions allow for character and number data to be stored in native system representations as ascii and ieee. = = see also = = reflection seismology = = references = = = = external links = = downloadable seg technical standards, including seg y revisions 0, 1 and 2
|
SEG-Y
|
wikipedia
|
_ { n } } is formed only with parameters using the same subscript. surveys of dh conventions and their differences have been published. = = see also = = forward kinematics inverse kinematics kinematic chain kinematics robotics conventions mechanical systems = = references = =
|
Denavit–Hartenberg parameters
|
wikipedia
|
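A minimal sketch of the point made in the Denavit–Hartenberg entry above: the homogeneous transform between consecutive link frames is built only from the four parameters that share one subscript. The classic (distal) DH convention is assumed here, and the link lengths and joint angles are invented for the example.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform from frame n-1 to frame n in the classic
    Denavit-Hartenberg convention: Rot_z(theta) Trans_z(d) Trans_x(a) Rot_x(alpha).
    All four parameters carry the same subscript n, as the entry notes."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Example: a two-link planar arm (made-up joint angles and link lengths)
links = [  # (theta, d, a, alpha) per joint
    (np.deg2rad(30), 0.0, 0.5, 0.0),
    (np.deg2rad(45), 0.0, 0.3, 0.0),
]
T = np.eye(4)
for params in links:
    T = T @ dh_transform(*params)   # chain the per-joint transforms
print(T[:3, 3])                     # end-effector position in the base frame
```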
or satisfaction. they can also affect how they view social situations, such as who deserves blame and responsibility. = = history = = counterfactual thinking has philosophical roots and can be traced back to early philosophers such as aristotle and plato who pondered the epistemological status of subjunctive suppositions and their nonexistent but feasible outcomes. in the seventeenth century, the german philosopher leibniz argued that there could be an infinite number of alternate worlds, so long as they were not in conflict with laws of logic. the philosopher nicholas rescher ( as well as others ) has written about the interrelationship between counterfactual reasoning and modal logic. this relationship may also be exploited in literature or victorian studies, painting and poetry. ruth m. j. byrne in the rational imagination : how people create alternatives to reality ( 2005 ) proposed that the mental representations and cognitive processes that underlie the imagination of alternatives to reality are similar to those that underlie rational thought, including reasoning from counterfactual conditionals. more recently, counterfactual thinking has gained interest from a psychological perspective. cognitive scientists have examined the mental representations and cognitive processes that underlie the creation of counterfactuals. daniel kahneman and amos tversky ( 1982 ) pioneered the study of counterfactual thought, showing that people tend to think'if only'more often about exceptional events than about normal events. many related tendencies have since been examined, e. g., whether the event is an action or inaction, whether it is controllable, its place in the temporal order of events, or its causal relation to other events. social psychologists have studied cognitive functioning and counterfactuals in a larger, social context. early research on counterfactual thinking took the perspective that these kinds of thoughts were indicative of poor coping skills, psychological error or bias, and were generally dysfunctional in nature. as research developed, a new wave of insight beginning in the 1990s began taking a functional perspective, believing that counterfactual thinking served as a largely beneficial behavioral regulator. although negative feelings and biases arise, the overall benefit is positive for human behavior. = = activation = = there are two portions to counterfactual thinking : activation and content. the activation portion is whether we allow the counterfactual thought to seep into our conscious thought. the content portion creates the end scenario for the counterfactual antecedent. the activation portion raises the question of why we allow
|
Counterfactual thinking
|
wikipedia
|
fishing activities. classic examples : in lakes, piscivorous fish can dramatically reduce populations of zooplanktivorous fish, zooplanktivorous fish can dramatically alter freshwater zooplankton communities, and zooplankton grazing can in turn have large impacts on phytoplankton communities. removal of piscivorous fish can change lake water from clear to green by allowing phytoplankton to flourish. in the eel river, in northern california, fish ( steelhead and roach ) consume fish larvae and predatory insects. these smaller predators prey on midge larvae, which feed on algae. removal of the larger fish increases the abundance of algae. in pacific kelp forests, sea otters feed on sea urchins. in areas where sea otters have been hunted to extinction, sea urchins increase in abundance and decimate kelp. a recent theory, the mesopredator release hypothesis, states that the decline of top predators in an ecosystem results in increased populations of medium - sized predators ( mesopredators ). = = basic models = = the classic population equilibrium model is verhulst's 1838 growth model : $\frac{dN}{dt} = rN \left( 1 - \frac{N}{K} \right)$ where $N(t)$ represents the number of individuals at time $t$, $r$ the intrinsic growth rate and $K$ is the carrying capacity, or the maximum number of individuals that the environment can support. the individual growth model, published by von bertalanffy in 1934, can be used to model the rate at which fish grow. it exists in a number of versions, but in its simplest form it is expressed as a differential equation of length $L$ over time $t$ : $L'(t) = r_{B} \left( L_{\infty} - L(t) \right)$ where $r_{B}$ is the von bertalanffy growth rate and $L_{\infty}$ the ultimate length of the individual. schaefer published a fishery equilibrium model based on the verhulst model with an assumption of a bi - linear catch equation, often referred to as the schaefer short - term catch equation : $H(E, X) = qEX$
|
Population dynamics of fisheries
|
wikipedia
|
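A small numerical illustration of the three models quoted in the fisheries entry above. The closed-form curves below follow directly from the Verhulst and von Bertalanffy differential equations as written there; all parameter values are invented for the example.

```python
import numpy as np

# Illustrative parameter values (made up for the example)
r, K, N0 = 0.4, 1000.0, 50.0       # Verhulst: intrinsic growth rate, carrying capacity, initial stock
rB, Linf, L0 = 0.25, 80.0, 5.0     # von Bertalanffy: growth rate, ultimate length, length at t = 0
q, E = 0.01, 20.0                  # Schaefer: catchability coefficient and fishing effort

t = np.linspace(0.0, 30.0, 7)

# Closed-form solution of dN/dt = r N (1 - N/K)
N = K / (1.0 + (K - N0) / N0 * np.exp(-r * t))

# Closed-form solution of L'(t) = rB (Linf - L(t))
L = Linf - (Linf - L0) * np.exp(-rB * t)

# Schaefer short-term catch equation H(E, X) = q E X, with the stock X taken from the Verhulst curve
H = q * E * N

for ti, Ni, Li, Hi in zip(t, N, L, H):
    print(f"t={ti:5.1f}  N={Ni:7.1f}  L={Li:5.1f}  H={Hi:6.1f}")
```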
trion is a basic unit of the neural network model of cortical organization called the trion model. this unit represents a highly structured and interconnected aggregate of about a hundred neurons with an overall diameter of about 0. 7 mm. each trion has three levels of firing activity, and thus a cluster of trions can produce a complex firing pattern which changes rapidly ( millisecond scale ) according to probabilistic ( monte carlo ) rules. = = references = =
|
Trion (neural networks)
|
wikipedia
|
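A toy sketch of the kind of dynamics the trion entry above describes: a small cluster of three-level units updated once per millisecond-scale step by a probabilistic rule. The coupling matrix and the softmax-style transition rule are stand-ins chosen for illustration only; the trion model's actual interaction terms and probabilities are not given in the entry.

```python
import numpy as np

rng = np.random.default_rng(0)

n_trions = 6                     # a small cluster (each trion stands for ~100 neurons)
levels = np.array([-1, 0, 1])    # three firing levels per trion
J = rng.normal(scale=0.5, size=(n_trions, n_trions))  # stand-in coupling between trions (assumption)
np.fill_diagonal(J, 0.0)

state = rng.choice(levels, size=n_trions)
pattern = [state.copy()]
for t in range(20):              # one step per millisecond-scale update
    new_state = np.empty_like(state)
    for i in range(n_trions):
        drive = J[i] @ state
        # Monte Carlo rule: the probability of each level grows with how well it matches the drive
        logits = levels * drive
        p = np.exp(logits - logits.max())
        p /= p.sum()
        new_state[i] = rng.choice(levels, p=p)
    state = new_state
    pattern.append(state.copy())

print(np.array(pattern))         # a rapidly changing three-level firing pattern
```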
is important. such gradients affect and regulate the hydration of the body as well as blood ph, and are critical for nerve and muscle function. various mechanisms exist in living species that keep the concentrations of different electrolytes under tight control. both muscle tissue and neurons are considered electric tissues of the body. muscles and neurons are activated by electrolyte activity between the extracellular fluid or interstitial fluid, and intracellular fluid. electrolytes may enter or leave the cell membrane through specialized protein structures embedded in the plasma membrane called " ion channels ". for example, muscle contraction is dependent upon the presence of calcium ( ca2 + ), sodium ( na + ), and potassium ( k + ). without sufficient levels of these key electrolytes, muscle weakness or severe muscle contractions may occur. electrolyte balance is maintained by oral, or in emergencies, intravenous ( iv ) intake of electrolyte - containing substances, and is regulated by hormones, in general with the kidneys flushing out excess levels. in humans, electrolyte homeostasis is regulated by hormones such as antidiuretic hormones, aldosterone and parathyroid hormones. serious electrolyte disturbances, such as dehydration and overhydration, may lead to cardiac and neurological complications and, unless they are rapidly resolved, will result in a medical emergency. = = = measurement = = = measurement of electrolytes is a commonly performed diagnostic procedure, performed via blood testing with ion - selective electrodes or urinalysis by medical technologists. the interpretation of these values is somewhat meaningless without analysis of the clinical history and is often impossible without parallel measurements of renal function. the electrolytes measured most often are sodium and potassium. chloride levels are rarely measured except for arterial blood gas interpretations since they are inherently linked to sodium levels. one important test conducted on urine is the specific gravity test to determine the occurrence of an electrolyte imbalance. conductivity cells are another kind of tools used to measure the electrolyte solution's strength to conduct electricity. = = = rehydration = = = according to a study paid for by the gatorade sports science institute, electrolyte drinks containing sodium and potassium salts replenish the body's water and electrolyte concentrations after dehydration caused by exercise, excessive alcohol consumption, diaphoresis ( heavy sweating ), diarrhea, vomiting, intoxication or starvation ; the study says that athletes exercising in extreme conditions ( for
|
Electrolyte
|
wikipedia
|
lead to the identification of all the parties involved, including by managing the following : collection, analysis and management of evidence, data and materials forensic examination of important physical locations, including the death / crime scene family liaison development of a victim profile finding, interviewing and protecting witnesses international technical assistance telecommunications and other digital evidence financial issues chronology of events particular sections are dedicated to processes for interviewing witnesses and for recovering human remains. the protocol then provides a great deal of detail attesting both to the importance of, and practical guidance for, the identification of human remains. particular guidance is offered on the techniques for collecting and sampling different types of evidence, including the following : human biological evidence non - biological physical evidence digital evidence forensic accounting soil / environmental samples investigation of potentially unlawful deaths will almost always be aided by the conduct of an autopsy. in a section setting out the general principles of an autopsy the protocol provides an overview of the duties of a forensic doctor in relation to a death investigation, and then establishes the basic aims of autopsy will assist in fulfilling those duties. the aims of the autopsy, principally are : discover and record all the identifying characteristics of the deceased ( where necessary ) ; discover and record all the pathological processes, including injuries, present ; draw conclusions about the identity of the deceased ( where necessary ) ; and draw conclusions as to the cause of death and factors contributing to death. in general, the protocol establishes in various places the requirement of professional ethics for investigators, including forensic doctors. it highlights that any forensic doctor involved in an investigation has responsibilities to justice, to the relatives of the deceased, and more generally to the public. whether or not they are employed by the police or the state, forensic doctors must understand their obligations to justice ( not to the police or the state ) and to the relatives of the deceased, so that a true account is provided of the cause of death and the circumstances surrounding it. = = notes = = = = see also = = autopsy coroner istanbul protocol physicians for human rights = = further reading = = minnesota protocol on the investigation of potentially unlawful death ( 2016 ) icrc guidelines for investigating deaths in custody ( 2013 ) un principles on the effective prevention and investigation of extra - legal, arbitrary and summary executions ( 1989 ) united nations manual on the effective prevention and investigation of extra - legal, arbitrary and summary executions, u. n. doc. e / st / csdha /. 12 ( 1991 ) – via university of minnesota human rights library = = external links = = mandate of the un special rapporteur on extrajudicial
|
Minnesota Protocol
|
wikipedia
|
associated with the job - related well - being or simply whether the employee feels happiness during the work, other factors are also important. firstly, the health constraints such as being ill would force the employee absence from the work. secondly, social and families pressure can also influence the employee's decision to participate in the work. = = = employee turnover = = = employee turnover can be considered as another result derived from employee happiness. in particular, it is more likely that individual employees are able to deal with stress and passive feelings when they are in good mood. as people spend a considerable amount of time in the workplace, factors such as employee relationship, organizational culture and job performance can have a significant impact on work happiness. what is more, avey and his colleagues use a concept called psychological capital to link employee satisfaction with work related outcomes, especially turnover intention and actual turnover. however, their findings were limited due to some reasons. for example, they omitted an important factor, which was emotional stability. additionally, other researchers have pointed out that the relationship between work happiness and turnover intention is generally low, even if a dissatisfied employee is more likely to quit his / her job than the satisfied one. therefore, whether or not employee happiness can be linked with employee's turnover intention is still a moot point. = = measurement = = there are a few surveys used to measure the happiness or well - being level of people in different countries such as the world happiness report, the happy planet index and the oecd better life index. there are also surveys created to assess the job satisfaction level of employees. job satisfaction is a different concept from happiness, but it is positively correlated to happiness and subjective well - being. the main job satisfaction scales are : the job satisfaction survey ( jss ), the job descriptive index ( jdi ) and the minnesota satisfaction questionnaire ( msq ). the job satisfaction survey ( jss ) assesses nine facets of job satisfaction, as well as overall satisfaction. the facets include pay and pay raises, promotion opportunities, relationship with the immediate supervisor, fringe benefits, rewards given for good performance, rules and procedures, relationship with coworkers, type of work performed and communication within the organization. the scale contains thirty - six items and uses a summated rating scale format. the jss can provide ten scores. each of the nine subscales produce a separate score and the total of all items produces a total score. the job descriptive index ( jdi ) scale assesses five facets which are work,
|
Happiness at work
|
wikipedia
|
. but now, the organism is dependent on that interaction that emerged by chance. a new interaction has emerged in the system, and individuals who lose that interaction will be eliminated through purifying selection. the system overall has complexified, although the outcome is the same. the rise of interdependent microbial communities has been posited to be explainable through this mechanism. initially, the loss of a gene dedicated to producing an important resource for the cell would be deleterious. however, a community of microbes might have an excess of that resource. for this reason, the presence of these interspecies microbial interactions enables an otherwise deleterious mutation ( loss of a gene needed for generating an important resource ) to be acquired but without a deleterious effect on the individual. genetic drift then results in this trait ( or the loss thereof ) to spread into the population, and the population of the species in the community is now dependent on its community for survival. while the individual species has simplified, the complexity of the microbial community overall has risen due to the requirement for additional and symbiotic interactions to propagate the community as a whole. = = application = = the bqh was proposed to explain the evolution of dependencies within free - living microbial communities, but was later extended to explain nitrogen fixation, nutrient acquisition and biofilm production in microbes. more generally, it has also been used to explain gene loss via genome streamlining, cooperative interactions and evolution of communities. studies have also shown that local interactions within bacterial communities can promote the right amount of trade - off between resource production and resource limitation to stimulate mutual dependencies as proposed by bqh. this type of black queen dynamism has also been described in microbial and microbialite mats from cuatro cienegas coahuila where the particular physicochemical properties of the site have caused the microbial communities to remain practically isolated for millions of years. it has been observed that the bacteria of the genus bacilus have substantially reduced their genomes, as well as they have shown an interdependence between the bacteria of that site, which has led to the suggestion of the existence of a pangenome or holobionts. = = quorum sensing and partial privatization of goods = = quorum sensing is a regulatory process that plays a role in the management of partially privatized or mixed goods, as outlined in various studies. however, there's a scarcity of evidence to
|
Black Queen hypothesis
|
wikipedia
|
a marsquake is a quake which, much like an earthquake, is a shaking of the surface or interior of the planet mars. such quakes may occur with a shift in the planet's interior, such as the result of plate tectonics, from which most quakes on earth originate, or possibly from hotspots such as olympus mons or the tharsis montes. the detection and analysis of marsquakes are informative to probing the interior structure of mars, as well as potentially identifying whether any of mars's many volcanoes continue to be volcanically active. quakes have been observed and well - documented on the moon, and there is evidence of past quakes on venus. marsquakes were first detected but not confirmed by the viking mission in 1976. marsquakes were detected and confirmed by the insight mission in 2019. using insight data and analysis, the viking marsquakes were confirmed in 2023. compelling evidence has been found that mars has in the past been seismically more active, with clear magnetic striping over a large region of southern mars. magnetic striping on earth is often a sign of a region of particularly thin crust splitting and spreading, forming new land in the slowly separating rifts ; a prime example of this being the mid - atlantic ridge. however, no clear spreading ridge has been found in this region, suggesting that another, possibly non - seismic explanation may be needed. the 4, 000 km ( 2, 500 mi ) long canyon system, valles marineris, has been suggested to be the remnant of an ancient martian strike - slip fault. the first confirmed seismic event emanating from valles marineris, a quake with a magnitude of 4. 2, was detected by insight on 25 august 2021, proving it to be an active fault. = = detectability = = the first attempts to detect seismic activity on mars were with the viking program with two landers, viking 1 & 2 in 1976, with seismometers mounted on top of the lander. the seismometer on the viking 1 lander failed. the viking 2 seismometer collected data for 2100 hours ( 89 days ) of data over 560 sols of lander recorded. viking 2 recorded two possible marsquakes on sol 53 ( daytime during windy period ) and sol 80 ( nighttime during low wind period ). due to the inability to separate ground motion from wind - driven lander vibrations and the lack of other collaborating possible marsquakes, the sol 53 and sol 80 events could not
|
Marsquake
|
wikipedia
|
##mash sites in california, indicating considerable trade with the distant site of casa diablo hot springs in the sierra nevada. obsidian tools found in mission santa clara has shown the existence of exchange networks between various tribes in california. obsidian in california comes from 5 major locations all around the state, and when mission santa clara was built, the tribes took their obsidian tools with them and from the analysis the of the obsidian tools it showed that all 5 major location of obsidian were present. pre - columbian mesoamericans'use of obsidian was extensive and sophisticated ; including carved and worked obsidian for tools and decorative objects. mesoamericans also made a type of sword with obsidian blades mounted in a wooden body. called a macuahuitl, the weapon could inflict terrible injuries, combining the sharp cutting edge of an obsidian blade with the ragged cut of a serrated weapon. the polearm version of this weapon was called tepoztopilli. obsidian mirrors were used by some aztec priests to conjure visions and make prophecies. they were connected with tezcatlipoca, god of obsidian and sorcery, whose name can be translated from the nahuatl language as'smoking mirror '. indigenous people traded obsidian throughout the americas. each volcano and in some cases each volcanic eruption produces a distinguishable type of obsidian allowing archaeologists to use methods such as non - destructive energy dispersive x - ray fluorescence to select minor element compositions from both the artifact and geological sample to trace the origins of a particular artifact. similar tracing techniques have also allowed obsidian in greece to be identified as coming from milos, nisyros or gyali, islands in the aegean sea. obsidian cores and blades were traded great distances inland from the coast. in chile obsidian tools from chaiten volcano have been found as far away as in chan - chan 400 km ( 250 mi ) north of the volcano, and also in sites 400 km south of it. = = = oceania = = = the lapita culture, active across a large area of the pacific ocean around 1000 bc, made widespread use of obsidian tools and engaged in long distance obsidian trading. the complexity of the production technique for these tools, and the care taken in their storage, may indicate that beyond their practical use they were associated with prestige or high status. obsidian was also used on rapa nui ( easter island ) for edged tools such as mataia and the pupils of the eyes of their moai ( statues ), which were encircled by rings of bird bone. obsidian was used
|
Obsidian
|
wikipedia
|
, cosimo i de'medici, who had become ruler of the city at the age of only 17, also decided to launch a program of aqueduct and fountain building. the city had previously gotten all its drinking water from wells and reservoirs of rain water, which meant that there was little water or water pressure to run fountains. cosimo built an aqueduct large enough for the first continually - running fountain in florence, the fountain of neptune in the piazza della signoria ( 1560 – 1567 ). this fountain featured an enormous white marble statue of neptune, resembling cosimo, by sculptor bartolomeo ammannati. under the medicis, fountains were not just sources of water, but advertisements of the power and benevolence of the city's rulers. they became central elements not only of city squares, but of the new italian renaissance garden. the great medici villa at castello, built for cosimo by benedetto varchi, featured two monumental fountains on its central axis ; one showing with two bronze figures representing hercules slaying antaeus, symbolizing the victory of cosimo over his enemies ; and a second fountain, in the middle of a circular labyrinth of cypresses, laurel, myrtle and roses, had a bronze statue by giambologna which showed the goddess venus wringing her hair. the planet venus was governed by capricorn, which was the emblem of cosimo ; the fountain symbolized that he was the absolute master of florence. by the middle renaissance, fountains had become a form of theater, with cascades and jets of water coming from marble statues of animals and mythological figures. the most famous fountains of this kind were found in the villa d'este ( 1550 – 1572 ), at tivoli near rome, which featured a hillside of basins, fountains and jets of water, as well as a fountain which produced music by pouring water into a chamber, forcing air into a series of flute - like pipes. the gardens also featured giochi d'acqua, water jokes, hidden fountains which suddenly soaked visitors. between 1546 and 1549, the merchants of paris built the first renaissance - style fountain in paris, the fontaine des innocents, to commemorate the ceremonial entry of the king into the city. the fountain, which originally stood against the wall of the church of the holy innocents, as rebuilt several times and now stands in a square near les halles. it is the oldest fountain in paris. henry constructed an italian - style garden with a
|
Fountain
|
wikipedia
|
dna in human eggs from women with mitochondrial disease into the eggs of women donors who were unaffected. in such cases, ethical questions have been raised regarding biological motherhood, since the child receives genes and gene regulatory molecules from two different women. using genetic engineering in attempts to produce babies free of mitochondrial disease is controversial in some circles and raises important ethical issues. a male baby was born in mexico in 2016 from a mother with leigh syndrome using mrt. in september 2012 a public consultation was launched in the uk to explore the ethical issues involved. human genetic engineering was used on a small scale to allow infertile women with genetic defects in their mitochondria to have children. in june 2013, the united kingdom government agreed to develop legislation that would legalize the'three - person ivf'procedure as a treatment to fix or eliminate mitochondrial diseases that are passed on from mother to child. the procedure could be offered from 29 october 2015 once regulations had been established. embryonic mitochondrial transplant and protofection have been proposed as a possible treatment for inherited mitochondrial disease, and allotopic expression of mitochondrial proteins as a radical treatment for mtdna mutation load. in june 2018 australian senate's senate community affairs references committee recommended a move towards legalising mitochondrial replacement therapy ( mrt ). research and clinical applications of mrt were overseen by laws made by federal and state governments. state laws were, for the most part, consistent with federal law. in all states, legislation prohibited the use of mrt techniques in the clinic, and except for western australia, research on a limited range of mrt was permissible up to day 14 of embryo development, subject to a license being granted. in 2010, the hon. mark butler mp, then federal minister for mental health and ageing, had appointed an independent committee to review the two relevant acts : the prohibition of human cloning for reproduction act 2002 and the research involving human embryos act 2002. the committee's report, released in july 2011, recommended the existing legislation remain unchanged currently, human clinical trials are underway at gensight biologics ( clinicaltrials. gov # nct02064569 ) and the university of miami ( clinicaltrials. gov # nct02161380 ) to examine the safety and efficacy of mitochondrial gene therapy in leber's hereditary optic neuropathy. = = epidemiology = = about 1 in 4, 000 children in the united states will develop mitochondrial disease by the age of 10 years. up to 4, 000 children per
|
Mitochondrial disease
|
wikipedia
|
, lamarckism, was an influence on the soviet biologist trofim lysenko's ill - fated antagonism to mainstream genetic theory as late as the mid - 20th century. between 1835 and 1837, the zoologist edward blyth worked on the area of variation, artificial selection, and how a similar process occurs in nature. darwin acknowledged blyth's ideas in the first chapter on variation of on the origin of species. = = = darwin's theory = = = in 1859, charles darwin set out his theory of evolution by natural selection as an explanation for adaptation and speciation. he defined natural selection as the " principle by which each slight variation [ of a trait ], if useful, is preserved ". the concept was simple but powerful : individuals best adapted to their environments are more likely to survive and reproduce. as long as there is some variation between them and that variation is heritable, there will be an inevitable selection of individuals with the most advantageous variations. if the variations are heritable, then differential reproductive success leads to the evolution of particular populations of a species, and populations that evolve to be sufficiently different eventually become different species. darwin's ideas were inspired by the observations that he had made on the second voyage of hms beagle ( 1831 – 1836 ), and by the work of a political economist, thomas robert malthus, who, in an essay on the principle of population ( 1798 ), noted that population ( if unchecked ) increases exponentially, whereas the food supply grows only arithmetically ; thus, inevitable limitations of resources would have demographic implications, leading to a " struggle for existence ". when darwin read malthus in 1838 he was already primed by his work as a naturalist to appreciate the " struggle for existence " in nature. it struck him that as population outgrew resources, " favourable variations would tend to be preserved, and unfavourable ones to be destroyed. the result of this would be the formation of new species. " darwin wrote : if during the long course of ages and under varying conditions of life, organic beings vary at all in the several parts of their organisation, and i think this cannot be disputed ; if there be, owing to the high geometrical powers of increase of each species, at some age, season, or year, a severe struggle for life, and this certainly cannot be disputed ; then, considering the infinite complexity of the relations of all organic beings to each other and to their conditions of existence, causing an infinite diversity
|
Negative selection (natural selection)
|
wikipedia
|
to their surface homogeneity, which is an important factor for applying dlvo theory. the quartz surface originally has negative potential. however, the surface of the collectors was usually modified to have positive surface for the favorable deposition experiments. in some experiments, the surface collector was coated with an alginate layer with negative charge for simulating the real conditioning film in natural system. = = = result = = = it was concluded that bacterial deposition mainly occurred in a secondary energy minimum by using dlvo theory. dlvo calculation predicted an energy barrier of 140kt at 31. 6 mm ionic strength to over 2000kt at 1mm ionic strength. this data was not in agreement with the experimental data, which showed increasing deposition with increasing ionic strength. therefore, the deposit could occur at secondary minimum having the energy from 0. 09kt to 8. 1kt at 1mm and 31. 6 mm ionic strength, respectively. the conclusion was further proven by the partial release of deposited bacteria when the ionic strength decreased. because the amount of released bacteria was less than 100 %, it was suggested that bacteria could deposit at the primary minimum due to the heterogeneity of the surface collector or bacterial surface. this fact was not covered in classical dlvo theory. the presence of divalent electrolytes ( ca2 + ) can neutralize the charge surface of bacteria by the binding between ca2 + and the functional group on the oocyst surface. this resulted in an observable bacterial deposition despite the very high electrostatic repulsive energy from the dlvo prediction. the motility of bacteria also has a significant effect on the bacterial adhesion. nonmotile and motile bacteria showed different behavior in deposition experiments. at the same ionic strength, motile bacteria showed greater adhesion to the surface than nonmotile bacteria and motile bacteria can attach to the surface of the collector at high repulsive electrostatic force. it was suggested that the swimming energy of the cells could overcome the repulsive energy or they can adhere to regions of heterogeneity on the surface. the swimming capacity increase with the ionic strength and 100mm is the optimal concentration for the rotation of flagella. despite the electrostatic repulsion energy from dlvo calculation between the bacteria and surface collector, the deposition could occur due to other interactions such as the steric impact of the presence of flagella on the cell environment and the strong hydrophobicity of the cell. = = references = =
|
Bacterial adhesion in aquatic system
|
wikipedia
|
$\left( \frac{C_{S}}{C_{L}} \right) \ln(1 - f_{s}) \, e^{\left( \frac{C_{S}}{C_{L}} \ln(1 - f_{s}) \right)} = \left( \frac{C_{S}}{C_{0}} \right) (1 - f_{s}) \ln(1 - f_{s})$ such equations can be solved explicitly using the lambert w function, $W(x)$, which is defined as the inverse function of $x = W(x) e^{W(x)}$. by rearranging terms into this canonical form, the solution for $C_{L}$ becomes : $C_{L} = \frac{C_{S} \ln(1 - f_{s})}{W(z)}$ or $C_{L} = \frac{C_{0}}{(1 - f_{s})} e^{W(z)}$ where $z = \left( \frac{C_{S}}{C_{0}} \right) (1 - f_{s}) \ln(1 - f_{s})$. this application of the lambert w function is particularly valuable in modeling impurity segregation during crystal growth, which optimizes melt utilization and enhances crystal growth efficiency. for further details, see https://doi.org/10
|
Scheil equation
|
wikipedia
|
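A minimal check of the Lambert W solution quoted in the Scheil-equation entry above, using SciPy's principal-branch lambertw. The compositions and solid fraction below are illustrative values, not taken from the text; the two closed forms for the liquid composition should agree.

```python
import numpy as np
from scipy.special import lambertw

# Illustrative values only: nominal composition, solid composition, solidified fraction
c0, cs, fs = 1.0, 0.2, 0.5

ln1f = np.log(1.0 - fs)
z = (cs / c0) * (1.0 - fs) * ln1f        # argument z of the Lambert W function

w = lambertw(z).real                      # principal branch; real-valued for z >= -1/e

cl_form1 = cs * ln1f / w                  # C_L = C_S ln(1 - f_s) / W(z)
cl_form2 = c0 / (1.0 - fs) * np.exp(w)    # C_L = C_0 / (1 - f_s) * exp(W(z))

print(cl_form1, cl_form2)                 # both forms give the same liquid composition (~1.86 here)
```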
$\delta_{i} \nabla_{\mathbf{D}} \sum_{i \in S} \| x_{i} - \mathbf{D} r_{i} \|_{2}^{2} + \lambda \| r_{i} \|_{1} \}$, where $S$ is a random subset of $\{ 1 \dots K \}$ and $\delta_{i}$ is a gradient step. = = = lagrange dual method = = = an algorithm based on solving a dual lagrangian problem provides an efficient way to solve for the dictionary having no complications induced by the sparsity function. consider the following lagrangian : $\mathcal{L}(\mathbf{D}, \Lambda) = \operatorname{tr}\left( (X - \mathbf{D} R)^{T} (X - \mathbf{D} R) \right) + \sum_{j=1}^{n} \lambda_{j} \left( \sum_{i=1}^{d} \mathbf{D}_{ij}^{2} - c \right)$, where $c$ is a constraint on the norm of the atoms and $\lambda_{i}$ are the so - called dual variables forming the diagonal matrix $\Lambda$. we can then provide an analytical expression for the lagrange dual after minimization over $\mathbf{D}$ : $\mathcal{D}(\Lambda) = \min_{\mathbf{D}} \mathcal{L}(\mathbf{D}, \Lambda) = \operatorname{tr}\left( X^{T} X - X R^{T} (R R^{T} + \Lambda)^{-1} (X R^{T})^{T} - c \Lambda \right)$
|
Sparse dictionary learning
|
wikipedia
|
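A small NumPy sketch of the Lagrange dual quoted in the sparse-dictionary-learning entry above. Setting the gradient of the Lagrangian with respect to D to zero gives D = X R^T (R R^T + Λ)^{-1}; that minimizer is a standard step stated here as an assumption, since the entry is truncated. Evaluating the Lagrangian there reproduces the quoted dual value. Sizes and numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, k = 8, 6, 20                     # signal dimension, number of atoms, number of samples (made up)
X = rng.normal(size=(d, k))            # data matrix
R = rng.normal(size=(n, k))            # sparse codes, held fixed for this dictionary step
c = 1.0                                # norm constraint on each atom
lam = rng.uniform(0.1, 1.0, size=n)    # dual variables (any positive values, for illustration)
Lam = np.diag(lam)

M_inv = np.linalg.inv(R @ R.T + Lam)
XRt = X @ R.T

# Dictionary minimizing the Lagrangian for this Lambda (gradient w.r.t. D set to zero)
D_star = XRt @ M_inv

# Lagrange dual from the entry: tr( X^T X - X R^T (R R^T + Lambda)^{-1} (X R^T)^T - c Lambda )
dual = np.trace(X.T @ X) - np.trace(XRt @ M_inv @ XRt.T) - c * np.trace(Lam)

# The Lagrangian evaluated at D_star gives the same number
residual = X - D_star @ R
lagrangian = np.trace(residual.T @ residual) + np.sum(lam * (np.sum(D_star**2, axis=0) - c))
print(dual, lagrangian)                # agree up to floating-point rounding
```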
and severity of rejection ), and context ( other significant others, social - situational characteristics of the larger environment ). traits that appear to be associated with good affective copers include a differentiated sense of self, a strong sense of self - determination, and the ability to depersonalize. = = = sociocultural systems model and subtheory = = = the sociocultural systems subtheory concentrates on major causes and sociocultural correlates of interpersonal acceptance – rejection in a global context. the subtheory looks at larger sociocultural factors that influence why significant others show acceptance or rejection. larger social institutions like the economic system, family structure, and political organization tend to shape how much acceptance parents and other significant persons offer. additionally, the cultural context can also influence how children and youth perceive their acceptance or rejection, and how they react to or cope with it. the system is also bidirectional, because a culture's tendency toward acceptance or rejection may result in different institutionalized expressive systems and behaviors, which can include people's spiritual and artistic beliefs and behaviors. = = warmth dimension = = ipartheory posits that all significant interpersonal relationships fall along the warmth dimension, from interpersonal acceptance to interpersonal rejection, depending on how much love or warmth a person perceives from a significant other. the specific physical, verbal, and symbolic behaviors associated with interpersonal acceptance or rejection may differ by culture or society, but the effects of feeling acceptance or rejection remain stable across cultures. interpersonal acceptance is marked by warmth, affection, comfort, emotional support, and love which is expressed by the significant other. relationships that are high in interpersonal rejection, on the other hand, are characterized by an absence of positive feelings and may also include emotional withdrawal, as well as the presence of psychologically or physically hurtful behaviors. rejection may be experienced by any combination of four expressions : coldness or lack of affection, hostility or aggression, indifference or neglect, and undifferentiated rejection. undifferentiated rejection is based on the perception of the individual that a parent ( or an attachment figure ) or other person who is important to the individual does not care about them, want them, or love them, though there may not be any behavioral manifestations from the prior three categories of rejection. = = history = = ipartheory was developed by ronald p. rohner. he started working on issues of interpersonal acceptance and rejection as a graduate student at stanford university in 1959. while carrying out
|
Interpersonal acceptance–rejection theory
|
wikipedia
|
##dicates ( be surprised ), negative conjunctions ( without ), comparatives and superlatives, too - phrases, negative predicates ( unlikely ), some subjunctive complements, some disjunctions, imperatives, and others ( finally, only ). given that many of these environments are not strictly downward entailing, alternative licensing conditions have been proposed building on concepts such as strawson entailment and nonveridicality ( proposed by zwarts and giannakidou ). different npis may be licensed by different expressions. thus, while the npi anything is licensed by the downward entailing expression at most two of the visitors, the idiomatic npi not lift a finger ( known as a minimizer ) is not licensed by the same expression. at most two of the visitors had seen anything. * at most two of the visitors lifted a finger to help. while npis have been discovered in many languages, their distribution is subject to substantial cross - linguistic variation ; this aspect of npis is currently the subject of ongoing research in cross - linguistic semantics. = = see also = = downward entailing generalized quantifier grammatical polarity subtrigging veridicality = = notes = = = = references = = baker, c. lee ( 1970 ). " double negatives ". linguistic inquiry. 1 ( 2 ) : 169 – 186. jstor 4177551. klima, edward ( 1964 ). " negation in english ". in jerry a. fodor & jerrold j. katz ( ed. ). the structure of language. englewood cliffs : prentice hall, 246 - 323. fauconnier, gilles ( 1975 ). " polarity and the scale principle ". chicago linguistic society. vol. 11. pp. 188 – 199. giannakidou, anastasia ( 2001 ). " the meaning of free choice ". linguistics and philosophy. 24 ( 6 ) : 659 – 735. doi : 10. 1023 / a : 1012758115458. s2cid 10533949. ladusaw, william a. ( 1979 ). polarity sensitivity as inherent scope relations. ph. d. dissertation, university of texas, austin. zwarts, frans ( 1981 ). " negatief polaire uitdrukkingen i ". glot. 4 : 35 – 102. = = external links = = the polarity items
|
Polarity item
|
wikipedia
|
logical grammar or rational grammar is a term used in the history and philosophy of linguistics to refer to certain linguistic and grammatical theories that were prominent until the early 19th century and later influenced 20th - century linguistic thought. these theories were developed by scholars and philosophers who sought to establish a logical and rational basis for understanding the relationship between reality, meaning, cognition, and language. examples from the classical and modern period represent a realistic approach to linguistics, while accounts written during the age of enlightenment represent rationalism, focusing on human thought. logical, rational or general grammar was the dominant approach to language until it was supplanted by romanticism. since then, there have been attempts to revive logical grammar. the idea is today at least partially represented by categorial grammar, formal semantics, and transcendental phenomenology, = = method and history = = logical grammar consists of the analysis of the sentence into a predicate - argument structure and of a commutation test, which breaks the form down paradigmatically into layers of syntactic categories. through such procedure, formal grammar is extracted from the material. applying the rules of the grammar produces grammatical sentences, which may be recursive. = = = subject and predication = = = the foundation of logical grammar was laid out by the greek philosophers. according to plato, the task of the sentence is to make a statement about the subject by means of predication. in the sophist, he uses the example of " theaetetus is sitting " to illustrate the idea of predication. this statement involves the subject " theaetetus " and the predicate " is sitting ". plato then delves into questions about the relationship between these two elements and the nature of being and non - being. in the parmenides, plato uses examples like " theaetetus is a man " and " theaetetus is not a man " to illustrate the complexities and challenges of predication, particularly concerning the relationship between particulars and universal concepts. plato's discussions of predication in these dialogues are part of his broader exploration of metaphysics, epistemology, and the nature of reality. after plato, aristotle's syllogism relies on the concept of predication, as it forms the basis for his system of deductive reasoning. in aristotelian syllogism, predication plays a central role in establishing the relationships between different terms within categorical statements. syllogistic reasoning consists of a series of subjects ( s )
|
Logical grammar
|
wikipedia
|
a very small hole in the plasma membrane of the cell. since the microelectrode has fluid with a high h + concentration inside, relative to the outside of the electrode, there is a potential created due to the ph discrepancy between the inside and outside of the electrode. from this voltage difference, and a predetermined ph for the fluid inside the electrode, one can determine the intracellular ph ( phi ) of the cell of interest. = = = fluorescence spectroscopy = = = another way to measure intracellular ph ( phi ) is with dyes that are sensitive to ph, and fluoresce differently at various ph values. this technique, which makes use of fluorescence spectroscopy, consists of adding this special dye to the cytosol of a cell. by exciting the dye in the cell with energy from light, and measuring the wavelength of light released by the photon as it returns to its native energy state, one can determine the type of dye present, and relate that to the intracellular ph of the given cell. = = = nuclear magnetic resonance = = = in addition to using ph - sensitive electrodes and dyes to measure phi, nuclear magnetic resonance ( nmr ) spectroscopy can also be used to quantify phi. nmr, typically speaking, reveals information about the inside of a cell by placing the cell in an environment with a potent magnetic field. based on the ratio between the concentrations of protonated, compared to deprotonated, forms of phosphate compounds in a given cell, the internal ph of the cell can be determined. additionally, nmr may also be used to reveal the presence of intracellular sodium, which can also provide information about the phi. using nmr spectroscopy, it has been determined that lymphocytes maintain a constant internal ph of 7. 17± 0. 06, though, like all cells, the intracellular ph changes in the same direction as extracellular ph. = = = ph - sensitive gfps = = = to determine the ph inside organelles, ph - sensitive gfps are often used as part of a noninvasive and effective technique. by using cdna as a template along with the appropriate primers, the gfp gene can be expressed in the cytosol, and the proteins produced can target specific regions within the cell, such as the mitochondria, golgi apparatus, cytoplasm, and endoplasmic reticulum. if certain gfp mutants that are highly sensitive to ph in intra
|
Intracellular pH
|
wikipedia
|
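A worked example of the microelectrode idea in the entry above: with a known pH for the electrode filling solution, an ideal H+-selective electrode shifts its potential by one Nernstian slope (about 59 mV at 25 degrees C) per pH unit of difference, so the measured voltage can be converted to an intracellular pH. The sign convention and the numbers below are assumptions for illustration, not from the text.

```python
# Idealized Nernstian conversion of a pH-microelectrode reading to intracellular pH.
R, T, F = 8.314, 298.15, 96485.0
slope = 2.303 * R * T / F * 1000.0     # ~59.16 mV per pH unit at 25 degrees C

ph_fill = 7.00        # predetermined pH of the fluid inside the electrode
v_measured = 11.0     # measured potential difference in mV (hypothetical value)

# One pH unit of difference between the filling solution and the cytosol shifts the
# potential by one Nernstian slope, so the pH difference is v / slope (sign depends
# on how the electrode pair is referenced; a simple convention is assumed here).
ph_i = ph_fill - v_measured / slope
print(f"slope = {slope:.2f} mV/pH, intracellular pH ~ {ph_i:.2f}")
```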
in fluid dynamics, the hicks equation, sometimes also referred to as the bragg – hawthorne equation or the squire – long equation, is a partial differential equation that describes the distribution of the stream function for an axisymmetric inviscid fluid, named after william mitchinson hicks, who derived it first in 1898. the equation was also re - derived by stephen bragg and william hawthorne in 1950, by robert r. long in 1953 and by herbert squire in 1956. the hicks equation without swirl was first introduced by george gabriel stokes in 1842. the grad – shafranov equation appearing in plasma physics also takes the same form as the hicks equation. representing $(r, \theta, z)$ as coordinates in the sense of the cylindrical coordinate system with corresponding flow velocity components denoted by $(v_{r}, v_{\theta}, v_{z})$, the stream function $\psi$ that defines the meridional motion can be defined as $r v_{r} = -\frac{\partial \psi}{\partial z}, \quad r v_{z} = \frac{\partial \psi}{\partial r}$ that satisfies the continuity equation for axisymmetric flows automatically. the hicks equation is then given by $\frac{\partial^{2} \psi}{\partial r^{2}} - \frac{1}{r} \frac{\partial \psi}{\partial r} + \frac{\partial^{2} \psi}{\partial z^{2}} = r^{2} \frac{\mathrm{d} H}{\mathrm{d} \psi} - \Gamma \frac{\mathrm{d} \Gamma}{\mathrm{d} \psi}$ where $H(\psi) = \frac{p}{\rho} + \frac{1}{2} \left( v_{r}^{2} + v_{\theta}^{2} + v_{z}^{2} \right)$, $\Gamma$
|
Hicks equation
|
wikipedia
|
= = the riemannian orbit model = = shapes in computational anatomy ( ca ) are studied via the use of diffeomorphic mapping for establishing correspondences between anatomical coordinate systems. in this setting, 3 - dimensional medical images are modelled as diffeomorphic transformations of some exemplar, termed the template $I_{temp}$, resulting in the observed images being elements of the random orbit model of ca. for images these are defined as $I \in \mathcal{I} \doteq \{ I = I_{temp} \circ \varphi, \ \varphi \in \operatorname{Diff}_{V} \}$, with charts representing sub - manifolds denoted as $\mathcal{M} \doteq \{ \varphi \cdot M_{temp} : \varphi \in \operatorname{Diff}_{V} \}$. = = the riemannian metric = = the orbit of shapes and forms in computational anatomy is generated by the group action $\mathcal{M} \doteq \{ \varphi \cdot M : \varphi \in \operatorname{Diff}_{V} \}$. this is made into a riemannian orbit by introducing a metric associated to each point and associated tangent space. for this a metric is defined on the group which induces the metric on the orbit. take as the metric for computational anatomy, at each element $\varphi \in \operatorname{Diff}_{V}$ of the group of diffeomorphisms and its tangent space, $\| \dot{\varphi} \|_{\varphi} \doteq \| \dot{\varphi} \circ \varphi^{-1} \|_{V} = \| v \|_{V}$
|
Riemannian metric and Lie bracket in computational anatomy
|
wikipedia
|
relations. = = approaches in social sciences = = = = = social science = = = the social sciences in general have moved increasingly toward including quantitative frameworks for assessing causality. much of this has been described as a means of providing greater rigor to social science methodology. political science was significantly influenced by the publication of designing social inquiry, by gary king, robert keohane, and sidney verba, in 1994. king, keohane, and verba recommend that researchers apply both quantitative and qualitative methods and adopt the language of statistical inference to be clearer about their subjects of interest and units of analysis. proponents of quantitative methods have also increasingly adopted the potential outcomes framework, developed by donald rubin, as a standard for inferring causality. while much of the emphasis remains on statistical inference in the potential outcomes framework, social science methodologists have developed new tools to conduct causal inference with both qualitative and quantitative methods, sometimes called a " mixed methods " approach. advocates of diverse methodological approaches argue that different methodologies are better suited to different subjects of study. sociologist herbert smith and political scientists james mahoney and gary goertz have cited the observation of paul w. holland, a statistician and author of the 1986 article " statistics and causal inference ", that statistical inference is most appropriate for assessing the " effects of causes " rather than the " causes of effects ". qualitative methodologists have argued that formalized models of causation, including process tracing and fuzzy set theory, provide opportunities to infer causation through the identification of critical factors within case studies or through a process of comparison among several case studies. these methodologies are also valuable for subjects in which a limited number of potential observations or the presence of confounding variables would limit the applicability of statistical inference. on longer timescales, persistence studies uses causal inference to link historical events to later political, economic and social outcomes. = = = economics and political science = = = in the economic sciences and political sciences causal inference is often difficult, owing to the real world complexity of economic and political realities and the inability to recreate many large - scale phenomena within controlled experiments. causal inference in the economic and political sciences continues to see improvement in methodology and rigor, due to the increased level of technology available to social scientists, the increase in the number of social scientists and research, and improvements to causal inference methodologies throughout social sciences. despite the difficulties inherent in determining causality in economic systems, several widely employed methods exist
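as a toy illustration ( not from the source ) of the potential - outcomes framework mentioned above : with simulated potential outcomes y(0), y(1) and a randomized treatment, the difference in group means recovers the average treatment effect. all numbers, including the constant effect of 2.0, are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# simulated potential outcomes for each unit (assumption: a constant treatment effect of 2.0)
y0 = rng.normal(loc=10.0, scale=3.0, size=n)   # outcome if untreated
y1 = y0 + 2.0                                  # outcome if treated
true_ate = np.mean(y1 - y0)

# randomized assignment makes treatment independent of the potential outcomes
t = rng.integers(0, 2, size=n)
y_observed = np.where(t == 1, y1, y0)          # only one outcome is ever observed per unit

estimated_ate = y_observed[t == 1].mean() - y_observed[t == 0].mean()
print(f"true ATE {true_ate:.2f}, difference-in-means estimate {estimated_ate:.2f}")
```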
|
Causal inference
|
wikipedia
|
the global model ; lack of access to global training data makes it harder to identify unwanted biases entering the training e. g. age, gender, sexual orientation ; partial or total loss of model updates due to node failures affecting the global model ; lack of annotations or labels on the client side ; heterogeneity between processing platforms. = = variations = = a number of different algorithms for federated optimization have been proposed. = = = federated stochastic gradient descent ( fedsgd ) = = = stochastic gradient descent is an approach used in deep learning, where gradients are computed on a random subset of the total dataset and then used to make one step of the gradient descent. federated stochastic gradient descent is the analogue of this algorithm in the federated setting, but uses a random subset of the nodes, each node using all its data. the server averages the gradients in proportion to the amount of training data on each node, and uses the average to make a gradient descent step. = = = federated averaging ( fedavg ) = = = federated averaging ( fedavg ) is a generalization of fedsgd which allows nodes to perform more than one batch update on local data and to exchange updated weights rather than gradients. this reduces communication and is equivalent to averaging the weights if all nodes start with the same weights. it does not seem to hurt the resulting averaged model's performance compared to fedsgd. fedavg variations have been proposed based on adaptive optimizers such as adam and adagrad, and these tend to outperform fedavg. = = = federated learning with dynamic regularization ( feddyn ) = = = federated learning methods suffer when node datasets are distributed heterogeneously, because minimizing the node losses is then not the same as minimizing the global loss. in 2021, acar et al. introduced a solution called feddyn, which dynamically regularizes each node's loss function so that the local losses converge to the global loss. since the local losses are aligned, feddyn is robust to different heterogeneity levels and can safely perform full minimization on each device. in theory, feddyn converges to the optimum ( a stationary point for nonconvex losses ) while being agnostic to the heterogeneity level. these claims are verified with extensive experiments on various datasets. besides reducing communication, it is also beneficial to reduce
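a minimal sketch, not from any particular library, of the fedavg aggregation step described above : each node trains locally ( here the returned weight vectors and dataset sizes are simply invented ), and the server averages the weights in proportion to local dataset size.

```python
import numpy as np

def fedavg(local_weights, local_sizes):
    """weighted average of client weight vectors, weights proportional to data counts."""
    local_weights = np.asarray(local_weights, dtype=float)
    local_sizes = np.asarray(local_sizes, dtype=float)
    return (local_weights * local_sizes[:, None]).sum(axis=0) / local_sizes.sum()

# hypothetical round: three clients return updated weights for a 4-parameter model
client_weights = [
    [0.10, 0.20, 0.30, 0.40],   # client with 100 samples
    [0.12, 0.18, 0.33, 0.41],   # client with 300 samples
    [0.08, 0.25, 0.28, 0.39],   # client with 600 samples
]
client_sizes = [100, 300, 600]

global_weights = fedavg(client_weights, client_sizes)
print(global_weights)
```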
|
Federated learning
|
wikipedia
|
. 2022 : establishment of bone marrow transplant ( bmt ) unit. 2022 : establishment of linear accelerator 4 halcyon - e = = patient care at bmchrc = = bmchrc provides comprehensive cancer care to patients. the specialities include surgical oncology neuro onco surgery ortho oncology plastic & reconstructive surgery medical oncology bone marrow transplant ( bmt ) radiation oncology haematology - oncology gynecologic oncology anesthesiology in addition to the above treatment, the hospital provides numerous clinical services to the patients, including palliative care, psycho - oncology, ostomy clinic, dietary services, physiotherapy, dental care, and preventive oncology. diagnostic bmchrc provides diagnostic care to the patients through various diagnostic procedures including : nuclear medicine radiology mri pathology transfusion services microbiology = = major achievements = = in 2023, bhagwan mahaveer cancer hospital & cancer care launched cancer screening campaign cancer jaanch aapke dwar for cancer screening at an early stage. designed to identify the most common cancers in women & men includes breast cancer, cervical cancer & ovarian cancer, lung cancer, prostate cancer and blood cancer. since 2021, the bhagwan mahaveer cancer hospital has worked on several programmes. over 189 children under 14 who were suffering from any of the three forms of curable blood malignancies — acute lymphoblastic leukaemia, acute promyelocytic leukaemia, or hodgkin lymphoma - benefited from a one - of - a - kind fund. during the financial year 2020 – 2021, the hospital spent 7. 28 crore for free treatment for people under the below poverty line ( bpl ). in 2022, bmchrc provided free blood cancer treatment to 199 children under its flagship welfare programme. in 2017, the bmchrc team successfully performed a unique organ reconstruction surgery to remove oral cancer in a 42 - year - old patient. = = awards and recognition = = the institute has won awards in different categories by different organisations, bodies, and institutions over the years. 2016 : best healthcare trust provider award = = r & d and technology = = the bmchrc uses various diagnostic, treatment, aftercare, and research facilities at the hospital. diagnosis forms a significant part of the hospital's outpatient department, with the hospital providing radio diagnosis, pathology, nuclear medicine
|
Bhagwan Mahaveer Cancer Hospital and Research Centre
|
wikipedia
|
a lie groupoid. in particular, a lie pseudogroup is said to be of finite order k if it can be " reconstructed " from the space of its k - jets. = = references = = st. gołąb ( 1939 ). " über den begriff der " pseudogruppe von transformationen " ". mathematische annalen. 116 : 768 – 780. doi : 10. 1007 / bf01597390. s2cid 124962440. = = external links = = alekseevskii, d. v. ( 2001 ) [ 1994 ], " pseudo - groups ", encyclopedia of mathematics, ems press
|
Pseudogroup
|
wikipedia
|
another class of matrices for which the permanent is of particular interest is the positive - semidefinite matrices. using a technique of stockmeyer counting, their permanents can be computed within the class bpp^np, but this is considered an infeasible class in general. it is np - hard to approximate permanents of psd matrices within a subexponential factor, and it is conjectured to be bpp^np - hard. if further constraints on the spectrum are imposed, more efficient algorithms are known. one randomized algorithm is based on the model of boson sampling and uses tools from quantum optics to represent the permanent of a positive - semidefinite matrix as the expected value of a specific random variable. the latter is then approximated by its sample mean. this algorithm, for a certain set of positive - semidefinite matrices, approximates their permanent in polynomial time up to an additive error, which is more reliable than that of the standard classical polynomial - time algorithm by gurvits. = = notes = = = = references = = = = further reading = = barvinok, a. ( 2017 ), " approximating permanents and hafnians ", discrete analysis, arxiv : 1601. 07518, doi : 10. 19086 / da. 1244, s2cid 397350.
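for context, a classical randomized additive - error estimator of the kind attributed to gurvits above can be sketched as follows ( not taken from the cited papers ; it uses glynn's formula : for δ drawn uniformly from { −1, +1 }^n, the statistic ( ∏_i δ_i ) ∏_j ( Σ_i δ_i a_ij ) has expectation per ( a ) ). the brute - force permanent is included only to check the toy example, which uses made - up matrix entries.

```python
import itertools
import numpy as np

def permanent_bruteforce(a):
    """exact permanent by summing over all permutations; only feasible for tiny matrices."""
    n = a.shape[0]
    return sum(np.prod([a[i, p[i]] for i in range(n)])
               for p in itertools.permutations(range(n)))

def permanent_estimate(a, samples=200_000, seed=0):
    """unbiased randomized estimator of per(a) via glynn/gurvits; additive error only."""
    rng = np.random.default_rng(seed)
    n = a.shape[0]
    total = 0.0
    for _ in range(samples):
        delta = rng.choice([-1.0, 1.0], size=n)
        # (prod_i delta_i) * prod_j (sum_i delta_i a_ij)
        total += np.prod(delta) * np.prod(delta @ a)
    return total / samples

a = np.array([[1.0, 0.5, 0.2],
              [0.3, 1.0, 0.4],
              [0.1, 0.6, 1.0]])
print("exact:   ", permanent_bruteforce(a))
print("estimate:", permanent_estimate(a))
```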
|
Computing the permanent
|
wikipedia
|
the term 'pangenome' was defined with its current meaning by tettelin et al. in 2005 ; it derives 'pan' from the greek word παν, meaning 'whole' or 'everything', while genome is a commonly used term for an organism's complete genetic material. tettelin et al. applied the term specifically to bacteria, whose pangenome " includes a core genome containing genes present in all strains and a dispensable genome composed of genes absent from one or more strains and genes that are unique to each strain. " = = parts of the pangenome = = = = = core = = = the core is the part of the pangenome that is shared by every genome in the tested set. some authors have divided the core pangenome into a hard core, those families of homologous genes with at least one copy shared by every genome ( 100 % of genomes ), and a soft core or extended core, those families distributed above a certain threshold ( 90 % ). in a study involving the pangenomes of bacillus cereus and staphylococcus aureus, some of them isolated from the international space station, the thresholds used for segmenting the pangenomes were as follows : " cloud ", " shell ", and " core ", corresponding to gene families present in < 10 %, 10 – 95 %, and > 95 % of the genomes, respectively. the core genome size and its proportion of the pangenome depend on several factors, but especially on the phylogenetic similarity of the considered genomes. for example, the core of two identical genomes would also be the complete pangenome. the core of a genus will always be smaller than the core genome of a species. genes that belong to the core genome are often related to housekeeping functions and the primary metabolism of the lineage ; nevertheless, the core genome can also contain some genes that differentiate the species from other species of the genus, i. e. genes that may be related to pathogenicity or niche adaptation. = = = shell = = = the shell is the part of the pangenome shared by the majority of the genomes in a pangenome. there is no universally accepted threshold to define the shell genome ; some authors consider a gene family part of the shell pangenome if it is shared by more than 50 % of the genomes in the pangenome. a family can become part of the shell through several evolutionary dynamics, for example by
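the presence - fraction thresholds described above can be made concrete with a short sketch ( not from the source ; the 95 % / 10 % cut - offs follow the study cited above, and the gene - family presence table is entirely invented ).

```python
# toy presence/absence data: gene family -> set of genomes it occurs in (hypothetical)
genomes = {f"g{i:02d}" for i in range(1, 13)}   # 12 genomes
families = {
    "dnaA":  set(genomes),                                # in every genome
    "rpoB":  set(genomes),                                # in every genome
    "blaZ":  {"g01", "g03", "g04", "g07", "g09"},         # in 5 of 12 genomes
    "tnp42": {"g02"},                                     # in a single genome
}

def classify(present_in, n_genomes, core_cut=0.95, cloud_cut=0.10):
    """bin a gene family into core / shell / cloud by the fraction of genomes carrying it."""
    frac = len(present_in) / n_genomes
    if frac >= core_cut:
        return "core"
    if frac < cloud_cut:
        return "cloud"
    return "shell"

for name, present_in in families.items():
    print(name, classify(present_in, len(genomes)))
```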
|
Pan-genome
|
wikipedia
|
technologies ag ( germany ). the project was coordinated by hans uszkoreit, a professor of computational linguistics at saarland university. = = references = = = = external links = = official homepage euromatrixplus official homepage
|
Euromatrix
|
wikipedia
|
for space heating. an stes can also be used for summer cooling by storing the cold of winter underground. to cope with fluctuations in demand, zero energy buildings are frequently connected to the electricity grid, export electricity to the grid when there is a surplus, and drawing electricity when not enough electricity is being produced. other buildings may be fully autonomous. energy harvesting is most often more effective in regards to cost and resource utilization when done on a local but combined scale, for example a group of houses, cohousing, local district or village rather than an individual house basis. an energy benefit of such localized energy harvesting is the virtual elimination of electrical transmission and electricity distribution losses. on - site energy harvesting such as with roof top mounted solar panels eliminates these transmission losses entirely. energy harvesting in commercial and industrial applications should benefit from the topography of each location. however, a site that is free of shade can generate large amounts of solar powered electricity from the building's roof and almost any site can use geothermal or air - sourced heat pumps. the production of goods under net zero fossil energy consumption requires locations of geothermal, microhydro, solar, and wind resources to sustain the concept. zero - energy neighborhoods, such as the bedzed development in the united kingdom, and those that are spreading rapidly in california and china, may use distributed generation schemes. this may in some cases include district heating, community chilled water, shared wind turbines, etc. there are current plans to use zeb technologies to build entire off - the - grid or net zero energy use cities. = = = the " energy harvest " versus " energy conservation " debate = = = one of the key areas of debate in zero energy building design is over the balance between energy conservation and the distributed point - of - use harvesting of renewable energy ( solar energy, wind energy, and thermal energy ). most zero energy homes use a combination of these strategies. as a result of significant government subsidies for photovoltaic solar electric systems, wind turbines, etc., there are those who suggest that a zeb is a conventional house with distributed renewable energy harvesting technologies. entire additions of such homes have appeared in locations where photovoltaic ( pv ) subsidies are significant, but many so called " zero energy homes " still have utility bills. this type of energy harvesting without added energy conservation may not be cost effective with the current price of electricity generated with photovoltaic equipment, depending on the local price of power company electricity. the cost, energy and carbon - footprint savings from conservation (
|
Zero-energy building
|
wikipedia
|
outlet from the vessel. as the level rises the controller acts to open the valve to draw off liquid to reduce the level. similarly as the levels fall the controller acts to close the lcv to reduce outflow of fluid. some vessels store liquid until it is pumped out. the controller ( lic ) acts to start and stop the pump within a specified band. for example, it may start the pump when the level rises to 0. 6 m and stop the pump when the level falls to 0. 4 m. high and low level alarms ( lah and lal ) warn operating personnel that levels are outside predefined limits. further deviation ( lahh and lall ) initiates a shutdown either to close emergency shutdown valves ( esdv ) on the inlet to the vessel or on the liquid outlet lines. as with high and low pressure instrumentation the shutdown function comprises an independent measurement loop to prevent a common mode failure. loss of liquid level in the vessel may lead to gas blowby where high pressure gas flows to the downstream vessel through the liquid outlet line. the structural integrity of the downstream vessel can be compromised. in addition high liquid level in the vessel may lead to carryover of liquid into the gas outlet may damage downstream equipment such as gas compressors. high liquid level in a flare drum can lead to undesirable carryover of liquid to the flare. a high - high liquid level ( lshh ) in the flare drum initiates a plant shutdown. one of the problems with a significant number of technologies is that they are installed through a nozzle and are exposed to products. this can create several problems, especially when retrofitting new equipment to vessels that have already been stress relieved, as it may not be possible to fit the instrument at the location required. also, as the measuring element is exposed to the contents within the vessel, it may either attack or coat the instrument causing it to fail in service. one of the most reliable methods for measuring level is using a nuclear gauge, as it is installed outside the vessel and doesn't normally require a nozzle for bulk level measurement. the measuring element is installed outside the process and can be maintained in normal operation without taking a shutdown. shutdown is only required for an accurate calibration. = = analyser instrumentation = = a wide range of analysis instruments are used in the oil, gas and petrochemical industries. chromatography – to measure the quality of product or reactants density ( oil ) – for custody meter
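the start / stop band described above ( pump on when the level reaches 0.6 m, off when it falls back to 0.4 m ) is a simple hysteresis, or deadband, control. the sketch below is illustrative only and not taken from any real control system ; the level trajectory is made up.

```python
def pump_command(level_m, pump_running, start_level=0.6, stop_level=0.4):
    """deadband (hysteresis) logic for a level controller driving a pump."""
    if level_m >= start_level:
        return True            # start (or keep running) at or above the high set point
    if level_m <= stop_level:
        return False           # stop (or stay stopped) at or below the low set point
    return pump_running        # inside the band: hold the previous state

# simulated level trajectory in metres (invented numbers)
levels = [0.30, 0.45, 0.55, 0.62, 0.58, 0.47, 0.41, 0.39, 0.50, 0.61]
running = False
for level in levels:
    running = pump_command(level, running)
    print(f"level {level:.2f} m -> pump {'on' if running else 'off'}")
```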
|
Instrumentation in petrochemical industries
|
wikipedia
|
). these reads undergo quality control checkpoints using tools to assess sequence read quality, read trimming and host depletion to prepare the viral sequences for assembly and alignment. reference - guided de novo assembly is the most popular method for genome assembly in virome analysis. sequencing reads are broken into overlapping subsequences of a fixed length k ( k - mers ) and assembled into contigs. contigs are aligned to reference databases for sequence similarity to assign viral taxonomy to the sample. this method, however, requires prior knowledge of viral taxonomy and is greatly impacted by the lack of robust references available. current databases tend to be biased towards clinically relevant and cultivable viruses, notably reducing the analysis power. as a result, it is believed that our understanding of virus classification and taxonomy greatly underestimates the virome's true diversity. another limitation is the ability of the assembly tools to assemble low - coverage, low - abundance viruses. low - abundance viruses may end up fragmented if sequencing depth is insufficient. tools can adjust for shorter k - mer lengths to include fragmented viral reads, but this can introduce issues with contig ambiguity. this limitation leads to considerable proportions of uncharacterized viral sequencing reads, or 'viral dark matter'. new analysis software that harnesses machine learning has emerged to improve on the deficiencies of reference - database similarity approaches. = = deep learning in virome analysis = = deep learning has demonstrated advantages in many other applications within the genomics field, often surpassing traditional, state - of - the - art computational methods in terms of predictive performance, especially when trained with sufficient data. deep learning supports multitask learning, an approach where the model shares knowledge across a primary task and one or more secondary tasks, improving the versatility of tools. moreover, multi - view learning, which facilitates the integration of multiple data types – such as sequence data, dna methylation, gene expression, and more – can produce more accurate and robust predictions. virome classification and analysis present a unique challenge due to the rapid evolution of viral genomes, which often leads to high sequence divergence within a species. deep learning models attempt to address this challenge and can recognize complex patterns in viral sequence fragments while handling high - dimensional data. = = = viral identification = = = traditional database - based tools like blast rely on reference data and can struggle with highly divergent viruses that have no known homologs among previously identified genomes – these sequences are generally classified as “ unknown ”, providing little
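as a small, source - independent illustration of the k - mer step mentioned above, the sketch below decomposes reads into overlapping k - mers and finds reads that share a k - mer, which is the basic signal assemblers use to join reads into contigs ; the read sequences are invented.

```python
from collections import defaultdict

def kmers(seq, k):
    """all overlapping substrings of length k in a read."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

# hypothetical short reads; real data would come from quality-trimmed fastq files
reads = {
    "r1": "ATGGCGTACGT",
    "r2": "CGTACGTTAGC",
    "r3": "TTTTAAAACCC",
}

k = 7
index = defaultdict(set)              # k-mer -> reads containing it
for name, seq in reads.items():
    for km in kmers(seq, k):
        index[km].add(name)

overlaps = {km: names for km, names in index.items() if len(names) > 1}
print(overlaps)                       # r1 and r2 share the k-mer 'CGTACGT'
```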
|
Virome analysis
|
wikipedia
|
\[ i\hbar\,\dot{u} = hu\,, \quad u^{\dagger}(t) = u^{-1}(t)\,, \quad u(0) = 1\,. \]

alternatively, we can argue that these operators must commute if we are to obtain the correct equations of motion from the hamiltonian, just as the corresponding poisson brackets in classical theory must vanish in order to generate the correct hamilton equations. the formal solution of the field equation is :

\[ a_{\mathbf{k}\lambda}(t) = a_{\mathbf{k}\lambda}(0)\,e^{-i\omega_{k}t} + ie\sqrt{\frac{2\pi}{\hbar\omega_{k}v}}\int_{0}^{t}dt'\,\mathbf{e}_{\mathbf{k}\lambda}\cdot\dot{\mathbf{x}}(t')\,e^{i\omega_{k}(t'-t)} \]

and therefore the equation of motion for x may be written :

\[ \ddot{\mathbf{x}} + \omega_{0}^{2}\,\mathbf{x} = \frac{e}{m}\,\mathbf{e}_{0}(t) + \frac{e}{m}\,\mathbf{e}_{rr}(t) \]

where

\[ \mathbf{e}_{0}(t) = i\sum_{\mathbf{k}\lambda}\sqrt{\frac{2\pi\hbar\omega_{k}}{v}}\left[a_{\mathbf{k}\lambda}(0)\,e^{-i\omega_{k}t} - a_{\mathbf{k}\lambda}^{\dagger}(0)\,e^{i\omega_{k}t}\right]\mathbf{e}_{\mathbf{k}\lambda}
|
Zero-point energy
|
wikipedia
|
of atlantic squid ( loligo pealei ), and were among the first applications of the " voltage clamp " technique. today, most microelectrodes used for intracellular recording are glass micropipettes, with a tip diameter of < 1 micrometre, and a resistance of several megohms. the micropipettes are filled with a solution that has a similar ionic composition to the intracellular fluid of the cell. a chlorided silver wire inserted into the pipette connects the electrolyte electrically to the amplifier and signal processing circuit. the voltage measured by the electrode is compared to the voltage of a reference electrode, usually a silver chloride - coated silver wire in contact with the extracellular fluid around the cell. in general, the smaller the electrode tip, the higher its electrical resistance. so an electrode is a compromise between size ( small enough to penetrate a single cell with minimum damage to the cell ) and resistance ( low enough so that small neuronal signals can be discerned from thermal noise in the electrode tip ). maintaining healthy brain slices is pivotal for successful electrophysiological recordings. the preparation of these slices is commonly achieved with tools such as the compresstome vibratome, ensuring optimal conditions for accurate and reliable recordings. nevertheless, even with the highest standards of tissue handling, slice preparation induces rapid and robust phenotype changes of the brain's major immune cells, microglia, which must be taken into consideration when using this model. = = = voltage clamp = = = the voltage clamp technique allows an experimenter to " clamp " the cell potential at a chosen value. this makes it possible to measure how much ionic current crosses a cell's membrane at any given voltage. this is important because many of the ion channels in the membrane of a neuron are voltage - gated ion channels, which open only when the membrane voltage is within a certain range. voltage clamp measurements of current are made possible by the near - simultaneous digital subtraction of transient capacitive currents that pass as the recording electrode and cell membrane are charged to alter the cell's potential. = = = current clamp = = = the current clamp technique records the membrane potential by injecting current into a cell through the recording electrode. unlike in the voltage clamp mode, where the membrane potential is held at a level determined by the experimenter, in " current clamp " mode the membrane potential is free to vary, and the amplifier records whatever voltage the cell generates
|
Electrophysiology
|
wikipedia
|
design = = = by the turn of the 20th century, amateur advisors and publications were increasingly challenging the monopoly that the large retail companies had on interior design. english feminist author mary haweis wrote a series of widely read essays in the 1880s in which she derided the eagerness with which aspiring middle - class people furnished their houses according to the rigid models offered to them by the retailers. she advocated the individual adoption of a particular style, tailor - made to the individual needs and preferences of the customer : one of my strongest convictions, and one of the first canons of good taste, is that our houses, like the fish's shell and the bird's nest, ought to represent our individual taste and habits. the move toward decoration as a separate artistic profession, unrelated to the manufacturers and retailers, received an impetus with the 1899 formation of the institute of british decorators ; with john dibblee crace as its president, it represented almost 200 decorators around the country. by 1915, the london directory listed 127 individuals trading as interior decorators, of which 10 were women. rhoda garrett and agnes garrett were the first women to train professionally as home decorators in 1874. the importance of their work on design was regarded at the time as on a par with that of william morris. in 1876, their work – suggestions for house decoration in painting, woodwork and furniture – spread their ideas on artistic interior design to a wide middle - class audience. by 1900, the situation was described by the illustrated carpenter and builder : until recently when a man wanted to furnish he would visit all the dealers and select piece by piece of furniture.... today he sends for a dealer in art furnishings and fittings who surveys all the rooms in the house and he brings his artistic mind to bear on the subject. in america, candace wheeler was one of the first woman interior designers and helped encourage a new style of american design. she was instrumental in the development of art courses for women in a number of major american cities and was considered a national authority on home design. an important influence on the new profession was the decoration of houses, a manual of interior design written by edith wharton with architect ogden codman in 1897 in america. in the book, the authors denounced victorian - style interior decoration and interior design, especially those rooms that were decorated with heavy window curtains, victorian bric - a - brac, and overstuffed furniture. they argued that such rooms emphasized upholstery at the expense of proper space planning and architectural design and
|
Interior design
|
wikipedia
|
- k option to sort on a certain column. for example, use " - k 2 " to sort on the second column. in old versions of sort, the + 1 option made the program sort on the second column of data ( + 2 for the third, etc. ). this usage is deprecated. = = = sort on multiple fields = = = the - k m, n option lets you sort on a key that is potentially composed of multiple fields ( start at column m, end at column n ) : here the first sort is done using column 2. - k2, 2n specifies sorting on the key starting and ending with column 2, and sorting numerically. if - k2 is used instead, the sort key would begin at column 2 and extend to the end of the line, spanning all the fields in between. - k1, 1 dictates breaking ties using the value in column 1, sorting alphabetically by default. note that bob, and chad have the same quota and are sorted alphabetically in the final output. = = = sorting a pipe delimited file = = = = = = sorting a tab delimited file = = = sorting a file with tab separated values requires a tab character to be specified as the column delimiter. this illustration uses the shell's dollar - quote notation to specify the tab as a c escape sequence. = = = sort in reverse = = = the - r option just reverses the order of the sort : = = = sort in random = = = the gnu implementation has a - r - - random - sort option based on hashing ; this is not a full random shuffle because it will sort identical lines together. a true random sort is provided by the unix utility shuf. = = = sort by version = = = the gnu implementation has a - v - - version - sort option which is a natural sort of ( version ) numbers within text. two text strings that are to be compared are split into blocks of letters and blocks of digits. blocks of letters are compared alpha - numerically, and blocks of digits are compared numerically ( i. e., skipping leading zeros, more digits means larger, otherwise the leftmost digits that differ determine the result ). blocks are compared left - to - right and the first non - equal block in that loop decides which text is larger. this happens to work for ip addresses, debian package version strings and similar tasks where numbers of variable length are embedded in strings. = = see
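the block - wise comparison that version sort performs, as described above, can be imitated with a small key function ; this is an illustrative re - implementation in python, not gnu sort's actual code, and it ignores corner cases such as leading zeros in mixed positions or debian epoch syntax.

```python
import re

def version_key(text):
    """split into letter blocks and digit blocks; digit blocks compare numerically."""
    parts = re.findall(r"\d+|\D+", text)
    # a digit block sorts by its integer value; a letter block sorts lexicographically
    return [(0, int(p), "") if p.isdigit() else (1, 0, p) for p in parts]

names = ["file10.txt", "file2.txt", "file1.txt", "10.0.0.2", "10.0.0.10"]
print(sorted(names, key=version_key))
# ['10.0.0.2', '10.0.0.10', 'file1.txt', 'file2.txt', 'file10.txt']
```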
|
Sort (Unix)
|
wikipedia
|
its values. nevertheless, simply to call these anthropological films would, while true, be a little like calling things fall apart [ by chinua achebe ] an anthropological novel. they are a major contribution to our screen culture, and deserve to be seen well beyond the confines of the discipline. " lucien castaing - taylor, professor of visual arts and anthropology at harvard department of anthropology, said about the film : " an extraordinarily insightful and intimate exploration of the social and cultural landscape of india's most elite boys'boarding school. in following the boys'daily routines and dramas, the film also affords us a rare glimpse at processes of postcolonial indian identity formation. this is a wonderful teaching tool that will enhance any course dealing with issues of adolescence, education, institutional structure and'habitus ', or postcolonial elites. my students were stupefied by the eloquence, independence, and maturity of the doon school boys. " anna grimshaw, professor at the emory college of arts and sciences, emory university, reviewed the project in her book observational cinema : anthropology, film, and the exploration of social life : " in working observationally, aligning his own practice as a filmmaker with the everyday process of children's learning ( rather than commenting on a place outside of them ), macdougall attempts to generate the conditions in which his own understanding might be transformed by the agency of his subjects. through filming he found himself documenting the innovative qualities of children — their capacity to think laterally about their and others'lives. his camera revealed them to be adept at working things out in practice. " = = honours = = the film series was an honoree at the margaret mead film festival, association for asian studies, film festival of the royal anthropological institute of great britain and ireland, and gottingen international ethnographic film festival. it has been selected and screened by the society for visual anthropology and american anthropological association. = = bibliography = = grimshaw, anna ; ravetz, amanda ( 2009 ). observational cinema : anthropology, film, and the exploration of social life. indiana university press. isbn 978 - 0253221582. houtman, coral ( 2011 ). " the student author, lacanian discourse theory and'la nuit americaine'". in myer, clive ( ed. ). critical cinema : beyond the theory of practice. columbia university press. isbn 9781906660369. macdougall, david (
|
The Doon School Quintet
|
wikipedia
|
2nd rev. and enlarged ed. ). berlin heidelberg paris [ etc. ] : springer. isbn 978 - 3 - 540 - 67723 - 9. berggren, karl - fredrik ; åberg, sven, eds. ( 2001 ). quantum chaos y2k : proceedings of nobel symposium 116, bäckaskog castle, sweden, june 13 - 17, 2000. stockholm, sweden : physica scripta, the royal swedish academy of sciences. isbn 978 - 981 - 02 - 4711 - 9. reichl, linda e. ( 2004 ). the transition to chaos : conservative classical systems and quantum manifestations. institute for nonlinear science ( 2. [ new ] ed. ). new york heidelberg : springer. isbn 978 - 0 - 387 - 98788 - 0. = = external links = = quantum chaos by martin gutzwiller ( 1992 and 2008, scientific american ) quantum chaos martin gutzwiller scholarpedia 2 ( 12 ) : 3146. doi : 10. 4249 / scholarpedia. 3146 category : quantum chaos scholarpedia what is... quantum chaos by ze'ev rudnick ( january 2008, notices of the american mathematical society ) brian hayes, " the spectrum of riemannium " ; american scientist volume 91, number 4, july – august, 2003 pp. 296 – 300. discusses relation to the riemann zeta function. eigenfunctions in chaotic quantum systems by arnd bäcker. chaosbook. org
|
Quantum chaos
|
wikipedia
|
plexus, lumbosacral plexus, vertebral neuroforamina, base of skull, cranium, and pelvic bones. = = = = intracranial metastasis = = = = there are three types of intracranial metastasis : brain metastasis, dural metastasis, and leptomeningeal metastasis. brain metastasis can be single or multiple and involve any portion of the brain. metastasis to dural structures generally occurs by hematogenous spread or direct invasion from a contiguous bone. dural metastases can invade the underlying brain and cause focal edema and associated neurologic symptoms. these processes tend to cause seizures early in the course because of their cortical location. metastasis to the leptomeninges is an uncommon but well - recognized clinical presentation in cancer patients. leptomeningeal metastasis most commonly is due to breast, lung, or melanoma primary tumors. = = = = skull metastasis = = = = metastases to the skull are divided into two categories by general site : calvarium and skull base. metastases to the calvarium usually are asymptomatic. metastases to the skull base quickly become symptomatic because of their proximity to cranial nerves and vascular structures. = = = = spinal metastasis = = = = the spine most often is affected by metastatic disease involving the epidural space. this usually occurs as direct tumor spread from a vertebral body ( 85 % ) or by invasion of paravertebral masses through a neuroforamen ( 10 – 15 % ). = = mechanisms = = = = = tumor factors = = = = = = = histology = = = = seizures are common in patients with low - grade tumors such as dysembryoplastic neuroepithelial tumors, gangliogliomas, and oligodendrogliomas. the rapid growth of fast - growing high - grade brain tumors may damage the subcortical network essential for electrical transmission, whereas slow - growing tumors have been suggested to induce partial deafferentation of cortical regions, causing denervation hypersensitivity and producing an epileptogenic milieu. studies strongly suggest that genetic factors may play a role in tumor development and tumor - related epilepsy. = = = = glutamate neurotransmission =
|
Neuro-oncology
|
wikipedia
|
the abc conjecture ( also known as the oesterle – masser conjecture ) is a conjecture in number theory that arose out of a discussion of joseph oesterle and david masser in 1985. it is stated in terms of three positive integers a, b and c ( hence the name ) that are relatively prime and satisfy a + b = c. the conjecture essentially states that the product of the distinct prime factors of abc is usually not much smaller than c. a number of famous conjectures and theorems in number theory would follow immediately from the abc conjecture or its versions. mathematician dorian goldfeld described the abc conjecture as " the most important unsolved problem in diophantine analysis ". the abc conjecture originated as the outcome of attempts by oesterle and masser to understand the szpiro conjecture about elliptic curves, which involves more geometric structures in its statement than the abc conjecture. the abc conjecture was shown to be equivalent to the modified szpiro's conjecture. various attempts to prove the abc conjecture have been made, but none have gained broad acceptance. shinichi mochizuki claimed to have a proof in 2012, but the conjecture is still regarded as unproven by the mainstream mathematical community. = = formulations = = before stating the conjecture, the notion of the radical of an integer must be introduced : for a positive integer n, the radical of n, denoted rad ( n ), is the product of the distinct prime factors of n. for example, rad ( 16 ) = rad ( 2^4 ) = rad ( 2 ) = 2, rad ( 17 ) = 17, and rad ( 18 ) = rad ( 2 ⋅ 3^2 ) = 2 ⋅ 3 = 6
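a short sketch ( not from the source ) that computes the radical defined above and the quality q = log c / log rad ( abc ) for a couple of coprime triples ; 1 + 8 = 9 gives rad ( 72 ) = 6, and 2 + 3^10 · 109 = 23^5 is a known high - quality triple.

```python
from math import gcd, log

def rad(n):
    """product of the distinct prime factors of n (trial division; fine for small n)."""
    out, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            out *= p
            while n % p == 0:
                n //= p
        p += 1
    return out * (n if n > 1 else 1)

def quality(a, b):
    c = a + b
    assert gcd(a, b) == 1, "the conjecture is stated for coprime a, b"
    # a, b and c = a + b are pairwise coprime, so rad(abc) = rad(a)*rad(b)*rad(c)
    return log(c) / log(rad(a) * rad(b) * rad(c))

print(rad(16), rad(17), rad(18))          # 2 17 6, matching the examples above
print(round(quality(1, 8), 4))            # 1 + 8 = 9: quality greater than 1
print(round(quality(2, 3**10 * 109), 4))  # 2 + 3^10*109 = 23^5: quality about 1.63
```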
|
Abc conjecture
|
wikipedia
|
h ( p ) is the entropy of p ( which is the same as the cross - entropy of p with itself ). the relative entropy d_kl ( p ∥ q ) can be thought of geometrically as a statistical distance, a measure of how far the distribution q is from the distribution p. geometrically it is a divergence : an asymmetric, generalized form of squared distance. the cross - entropy h ( p, q ) is itself such a measurement ( formally a loss function ), but it cannot be thought of as a distance, since h ( p, p ) = : h ( p ) is not zero. this can be fixed by subtracting h ( p ) to make d_kl ( p ∥ q ) agree more closely with our notion of distance, as the excess loss. the resulting function is asymmetric, and while this can be symmetrized ( see § symmetrised divergence ), the asymmetric form is more useful. see § interpretations for more on the geometric interpretation. relative entropy relates to the " rate function " in the theory of large deviations. arthur hobson proved that relative entropy is the only measure of difference between probability distributions that satisfies some desired properties, which are the canonical extension to those appearing in a commonly used characterization of entropy. consequently, mutual information is the only measure of mutual dependence that obeys certain related conditions, since it can be defined in terms of kullback – leibler divergence. = = properties = = relative entropy is always non - negative,

\[ d_{\text{kl}}(p \parallel q) \geq 0, \]

a result known as gibbs' inequality, with d_kl ( p ∥ q ) equal to zero if and only if p = q as measures. in particular, if p ( dx ) = p (
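a small numerical check ( not from the source ) of the relationship described above between cross - entropy, entropy and relative entropy, h ( p, q ) = h ( p ) + d_kl ( p ∥ q ), using two made - up discrete distributions.

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])

entropy = -(p * np.log(p)).sum()         # h(p), in nats
cross_entropy = -(p * np.log(q)).sum()   # h(p, q)
kl_pq = (p * np.log(p / q)).sum()        # d_kl(p || q), non-negative by gibbs' inequality
kl_qp = (q * np.log(q / p)).sum()        # d_kl(q || p), generally different (asymmetric)

print(f"h(p)        = {entropy:.4f}")
print(f"h(p, q)     = {cross_entropy:.4f}")
print(f"d_kl(p||q)  = {kl_pq:.4f}")
print(f"h(p) + d_kl = {entropy + kl_pq:.4f}")   # equals h(p, q)
print(f"d_kl(q||p)  = {kl_qp:.4f}")
```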
|
Kullback–Leibler divergence
|
wikipedia
|
formerly cold water. chown, marcus ( june 2006 ). " why water freezes faster after heating ". new scientist. conover, emily ( 2017 ). " debate heats up over claims that hot water sometimes freezes faster than cold ". science news. 191 ( 2 ) : 14. retrieved 2 april 2018. dorsey, n. ernest ( 1948 ). " the freezing of supercooled water ". trans. am. philos. soc. 38 ( 3 ) : 247 – 326. doi : 10. 2307 / 1005602. hdl : 2027 / mdp. 39076006405018. jstor 1005602. an extensive study of freezing experiments. jeng, monwhea ( 2006 ). " the mpemba effect : when can hot water freeze faster than cold? ". american journal of physics. 74 ( 6 ) : 514 – 522. arxiv : physics / 0512262. bibcode : 2006amjph.. 74.. 514j. doi : 10. 1119 / 1. 2186331. knight, charles a. ( may 1996 ). " the mpemba effect : the freezing times of hot and cold water ". american journal of physics. 64 ( 5 ) : 524. bibcode : 1996amjph.. 64.. 524k. doi : 10. 1119 / 1. 18275. = = external links = = " heat questions ". hyperphysics. georgia state university. " mpemba effect : why hot water can freeze faster than cold ". a possible explanation of the mpemba effect tyrovolas, ilias j. ( 2019 ). " new explanation for the mpemba effect ". the 5th international electronic conference on entropy and its applications. p. 2. doi : 10. 3390 / ecea - 5 - 06658. " the mpemba effect : hot water may freeze faster than cold water ". an analysis of the mpemba effect london south bank university " the mpemba effect ". archived from the original on 9 october 2011. – history and analysis of the mpemba effect " the story of the mpemba effect told by the protagonists ". youtube. 10 january 2013. archived from the original on 12 december 2021. an historical interview with erasto b. mpemba, dr denis g. osborne and ray desouza " which freeze
|
Mpemba effect
|
wikipedia
|
diet and are found ubiquitously in plants. " while some studies have suggested flavonoids may have a role in cancer prevention, others have been inconclusive or suggested they may be harmful. = = = methionine = = = restriction of methionine has been suggested as a strategy in cancer growth control in cancers that depend on methionine for survival and proliferation. according to a 2012 review, the effect of methionine restriction on cancer has yet to be studied directly in humans and " there is still insufficient knowledge to give reliable nutritional advice ". reviews of epidemiological studies have found no association between dietary methionine and breast or pancreatic cancer risk. = = = mushrooms = = = according to cancer research uk, " there is currently no evidence that any type of mushroom or mushroom extract can prevent or cure cancer ", although research into some species continues. a 2020 review found that higher mushroom consumption is associated with lower risk of breast cancer. = = = dairy products = = = = = = whole grains = = = there is strong evidence that consumption of whole grains decreases risk of colorectal cancer. = = = saturated fat = = = = = = soy = = = the american cancer society have stated that " there is some evidence from human and lab studies that consuming traditional soy foods such as tofu may lower the risk of breast and prostate cancer, but overall the evidence is too limited to draw firm conclusions ". a 2023 review found that soy protein lowers breast cancer risk. = = = other = = = green tea consumption has no effect on cancer risk. a 2016 meta - analysis showed that women and men who drank coffee had a lower risk of liver cancer. an umbrella review of meta - analyses found that coffee was associated with a lower risk of liver and endometrial cancer. a 2014 systematic review found, " no firm evidence that vitamin d supplementation affects cancer occurrence in predominantly elderly community - dwelling women. " = = see also = = = = references = = = = external links = = " diet, healthy eating and cancer ". info. cancerresearchuk. org. cancer research uk. 23 march 2015. " epic ( european prospective investigation into cancer and nutrition ) study ". epic. iarc. fr. international agency for research on cancer : world health organization.
|
Diet and cancer
|
wikipedia
|
, operational and / or strategic tasks and initiatives. management teams are responsible for the total performance of the division they oversee with regards to day - to - day operations, delegation of tasks and the supervision of employees. the authority of these teams are based on the members position on the company's or institution's organizational chart. these management teams are constructed of managers from different divisions ( e. g. vice president of marketing, assistant director of operations ). an example of management teams are executive management teams, which consists of members at the top of the organization's hierarchy, such as chief executive officer, board of directors, board of trustees, etc., who establish the strategic initiatives that a company will undertake over a long term period ( ~ 3 – 5 years ). management teams have been effective by using their expertise to aid companies in adjusting to the current landscape of a global economy, which helps them compete with their rivals in their respective markets, produce unique initiatives that sets them apart from their rivals and empower the employees who are responsible for the success of the organization or institution. = = see also = = = = references = =
|
Team effectiveness
|
wikipedia
|
system involved in word formation. pragmatics is the study of the relationship between linguistic forms and speakers of the language, it also incorporates how speech is used to serve different functions. pragmatics can be defined as the ability to communicate one's feelings and desires to others. children's development of language also includes semantics which is the attachment of meaning to words. this happens in three stages. first, each word means an entire sentence. for example, a young child may say " mama " but the child may mean " here is mama ", " where is mama? ", or " i see mama. " in the second stage, words have meaning but do not have complete definitions. this stage occurs around age two or three. third, around age seven or eight, words have adult - like definitions and their meanings are more complete. a child learns the syntax of their language when they are able to join words together into sentences and understand multiple - word sentences said by other people. there appear to be six major stages in which a child's acquisition of syntax develops. first, is the use of sentence - like words in which the child communicates using one word with additional vocal and bodily cues. this stage usually occurs between 12 and 18 months of age. second, between 18 months to two years, there is the modification stage where children communicate concepts by modifying a topic word. the third stage, between two and three years old, involves the child using complete subject - predicate structures to communicate concepts. fourth, children make changes on basic sentence structure that enables them to communicate more complex concepts. this stage occurs between the ages of two and a half years to four years. the fifth stage of categorization involves children aged three and a half to seven years refining their sentences with more purposeful word choice that reflects their complex system of categorizing word types. finally, children use structures of language that involve more complicated syntactic relationships between the ages of five years old to ten years old. = = = sequential skills and milestones = = = infants begin with cooing and soft vowel sounds. shortly after birth, this system is developed as the infants begin to understand that their noises, or non - verbal communication, lead to a response from their caregiver. this will then progress into babbling around 5 months of age, with infants first babbling consonant and vowel sounds together that may sound like " ma " or " da ". at around 8 months of age, babbling increases to include repetition of sounds
|
Child development
|
wikipedia
|
acid sequence : a frameshift mutation is caused by insertion or deletion of a number of nucleotides that is not evenly divisible by three from a dna sequence. due to the triplet nature of gene expression by codons, the insertion or deletion can disrupt the reading frame, or the grouping of the codons, resulting in a completely different translation from the original. the earlier in the sequence the deletion or insertion occurs, the more altered the protein produced is. ( for example, the code ccu gac uac cua codes for the amino acids proline, aspartic acid, tyrosine, and leucine. if the u in ccu was deleted, the resulting sequence would be ccg acu acc uax, which would instead code for proline, threonine, threonine, and part of another amino acid or perhaps a stop codon ( where the x stands for the following nucleotide ). ) by contrast, any insertion or deletion that is evenly divisible by three is termed an in - frame mutation. a point substitution mutation results in a change in a single nucleotide and can be either synonymous or nonsynonymous. a synonymous substitution replaces a codon with another codon that codes for the same amino acid, so that the produced amino acid sequence is not modified. synonymous mutations occur due to the degenerate nature of the genetic code. if this mutation does not result in any phenotypic effects, then it is called silent, but not all synonymous substitutions are silent. ( there can also be silent mutations in nucleotides outside of the coding regions, such as the introns, because the exact nucleotide sequence is not as crucial as it is in the coding regions, but these are not considered synonymous substitutions. ) a nonsynonymous substitution replaces a codon with another codon that codes for a different amino acid, so that the produced amino acid sequence is modified. nonsynonymous substitutions can be classified as nonsense or missense mutations : a missense mutation changes a nucleotide to cause substitution of a different amino acid. this in turn can render the resulting protein nonfunctional. such mutations are responsible for diseases such as epidermolysis bullosa, sickle - cell disease, and sod1 - mediated als. on the other hand, if a missense mutation occurs in an amino acid codon that results in the
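the worked frameshift example above can be reproduced with a toy translator ( not from the source ; only the handful of codons needed for this example are included in the table ).

```python
# minimal codon table covering only the codons used in the example above
CODON_TABLE = {
    "CCU": "Pro", "CCG": "Pro",
    "GAC": "Asp", "UAC": "Tyr", "CUA": "Leu",
    "ACU": "Thr", "ACC": "Thr",
}

def translate(rna):
    """translate an rna string codon by codon; incomplete or unlisted codons show as '?'."""
    return [CODON_TABLE.get(rna[i:i + 3], "?") for i in range(0, len(rna), 3)]

original = "CCUGACUACCUA"
frameshift = original[:2] + original[3:]   # delete the U of the first codon (CCU -> CC)

print(translate(original))     # ['Pro', 'Asp', 'Tyr', 'Leu']
print(translate(frameshift))   # ['Pro', 'Thr', 'Thr', '?']  -- the reading frame has shifted
```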
|
Mutation
|
wikipedia
|
global or regional headquarters in london and are listed on the london stock exchange, and many banks and other financial institutions operate there or in edinburgh. = = see also = = = = references = = = = further reading = = porteous, bruce t. ; pradip tapadar ( december 2005 ). economic capital and financial risk management for financial services firms and conglomerates. palgrave macmillan. isbn 1 - 4039 - 3608 - 0. = = external links = = the role of the financial services sector in expanding economic opportunity | a report by christopher n. sutton and beth jenkins | john f. kennedy school of government | harvard university
|
Financial services
|
wikipedia
|
" grouping can often be supported by experimental crosses in which only certain pairs of species will produce hybrids. " the examples given below may support both uses of the term " species group. " often, such complexes do not become evident until a new species is introduced into the system, which breaks down existing species barriers. an example is the introduction of the spanish slug in northern europe, where interbreeding with the local black slug and red slug, which were traditionally considered clearly separate species that did not interbreed, shows that they may be actually just subspecies of the same species. where closely related species co - exist in sympatry, it is often a particular challenge to understand how the similar species persist without outcompeting each other. niche partitioning is one mechanism invoked to explain that. indeed, studies in some species complexes suggest that species divergence have gone in par with ecological differentiation, with species now preferring different microhabitats. similar methods also found that the amazonian frog eleutherodactylus ockendeni is actually at least three different species that diverged over 5 million years ago. a species flock may arise when a species penetrates a new geographical area and diversifies to occupy a variety of ecological niches, a process known as adaptive radiation. the first species flock to be recognized as such was the 13 species of darwin's finches on the galapagos islands described by charles darwin. = = practical implications = = = = = biodiversity estimates = = = it has been suggested that cryptic species complexes are very common in the marine environment. that suggestion came before the detailed analysis of many systems using dna sequence data but has been proven to be correct. the increased use of dna sequence in the investigation of organismal diversity ( also called phylogeography and dna barcoding ) has led to the discovery of a great many cryptic species complexes in all habitats. in the marine bryozoan celleporella hyalina, detailed morphological analyses and mating compatibility tests between the isolates identified by dna sequence analysis were used to confirm that these groups consisted of more than 10 ecologically distinct species, which had been diverging for many millions of years. = = = disease and pathogen control = = = pests, species that cause diseases and their vectors, have direct importance for humans. when they are found to be cryptic species complexes, the ecology and the virulence of each of these species need to be re - evaluated to devise appropriate control strategies as their diversity increases the capacity for more dangerous strains
|
Species complex
|
wikipedia
|
= the field k = ℚ [ x ] / ( x^6 − 2 ) = ℚ ( θ ) for θ = ζ · 2^{1/6}, where ζ is a fixed 6th root of unity, provides a rich example for constructing explicit real and complex archimedean embeddings, and non - archimedean embeddings as well. = = = archimedean places = = = here we use the standard notation r_1 and r_2 for the number of real and complex embeddings used, respectively ( see below ). calculating the archimedean places of a number field k is done as follows : let x be a primitive element of k, with minimal polynomial f ( over ℚ ). over ℝ, f will generally no longer be irreducible, but its irreducible ( real ) factors are either of degree one or two. since there are no repeated roots, there are no repeated factors. the roots r of factors of degree one are necessarily real, and replacing x by r gives an embedding of k into ℝ ; the number of such embeddings is equal to the number of real roots of f. restricting the standard absolute value on ℝ to k gives an archimedean absolute value on k ; such an absolute value is also referred to as a real place of k. on the other hand, the roots of factors of degree two are pairs of conjugate complex numbers, which allows for two conjugate em
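the recipe above ( count real roots of the minimal polynomial for real places, conjugate pairs for complex places ) can be checked numerically for this example ; the sketch below is illustrative only and uses floating - point roots of x^6 − 2.

```python
import numpy as np

# minimal polynomial of theta: x^6 - 2 (coefficients from highest degree down)
coeffs = [1, 0, 0, 0, 0, 0, -2]
roots = np.roots(coeffs)

real_roots = [r for r in roots if abs(r.imag) < 1e-9]
complex_pairs = [r for r in roots if r.imag > 1e-9]   # one representative per conjugate pair

r1 = len(real_roots)      # number of real embeddings / real places
r2 = len(complex_pairs)   # number of conjugate pairs / complex places
print("r1 =", r1, ", r2 =", r2, ", r1 + 2*r2 =", r1 + 2 * r2, "(the degree, 6)")
```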
|
Algebraic number field
|
wikipedia
|
ten thousand robots but will increase them to a million robots over a three - year period. lawyers have speculated that an increased prevalence of robots in the workplace could lead to the need to improve redundancy laws. kevin j. delaney said " robots are taking human jobs. but bill gates believes that governments should tax companies'use of them, as a way to at least temporarily slow the spread of automation and to fund other types of employment. " the robot tax would also help pay a guaranteed living wage to the displaced workers. the world bank's world development report 2019 puts forth evidence showing that while automation displaces workers, technological innovation creates more new industries and jobs on balance. = = contemporary uses = = at present, there are two main types of robots, based on their use : general - purpose autonomous robots and dedicated robots. robots can be classified by their specificity of purpose. a robot might be designed to perform one particular task extremely well, or a range of tasks less well. all robots by their nature can be re - programmed to behave differently, but some are limited by their physical form. for example, a factory robot arm can perform jobs such as cutting, welding, gluing, or acting as a fairground ride, while a pick - and - place robot can only populate printed circuit boards. = = = general - purpose autonomous robots = = = general - purpose autonomous robots can perform a variety of functions independently. general - purpose autonomous robots typically can navigate independently in known spaces, handle their own re - charging needs, interface with electronic doors and elevators and perform other basic tasks. like computers, general - purpose robots can link with networks, software and accessories that increase their usefulness. they may recognize people or objects, talk, provide companionship, monitor environmental quality, respond to alarms, pick up supplies and perform other useful tasks. general - purpose robots may perform a variety of functions simultaneously or they may take on different roles at different times of day. some such robots try to mimic human beings and may even resemble people in appearance ; this type of robot is called a humanoid robot. humanoid robots are still in a very limited stage, as no humanoid robot can, as of yet, actually navigate around a room that it has never been in. thus, humanoid robots are really quite limited, despite their intelligent behaviors in their well - known environments. = = = factory robots = = = = = = = car production = = = = over the last three decades, automobile factories have become dominated by robots. a typical
|
Cobot
|
wikipedia
|
two physical systems are in thermal equilibrium if there is no net flow of thermal energy between them when they are connected by a path permeable to heat. thermal equilibrium obeys the zeroth law of thermodynamics. a system is said to be in thermal equilibrium with itself if the temperature within the system is spatially uniform and temporally constant. systems in thermodynamic equilibrium are always in thermal equilibrium, but the converse is not always true. if the connection between the systems allows transfer of energy as'change in internal energy'but does not allow transfer of matter or transfer of energy as work, the two systems may reach thermal equilibrium without reaching thermodynamic equilibrium. = = two varieties of thermal equilibrium = = = = = relation of thermal equilibrium between two thermally connected bodies = = = the relation of thermal equilibrium is an instance of equilibrium between two bodies, which means that it refers to transfer through a selectively permeable partition of matter or work ; it is called a diathermal connection. according to lieb and yngvason, the essential meaning of the relation of thermal equilibrium includes that it is reflexive and symmetric. it is not included in the essential meaning whether it is or is not transitive. after discussing the semantics of the definition, they postulate a substantial physical axiom, that they call the " zeroth law of thermodynamics ", that thermal equilibrium is a transitive relation. they comment that the equivalence classes of systems so established are called isotherms. = = = internal thermal equilibrium of an isolated body = = = thermal equilibrium of a body in itself refers to the body when it is isolated. the background is that no heat enters or leaves it, and that it is allowed unlimited time to settle under its own intrinsic characteristics. when it is completely settled, so that macroscopic change is no longer detectable, it is in its own thermal equilibrium. it is not implied that it is necessarily in other kinds of internal equilibrium. for example, it is possible that a body might reach internal thermal equilibrium but not be in internal chemical equilibrium ; glass is an example. one may imagine an isolated system, initially not in its own state of internal thermal equilibrium. it could be subjected to a fictive thermodynamic operation of partition into two subsystems separated by nothing, no wall. one could then consider the possibility of transfers of energy as heat between the two subsystems. a long time after the fictive partition operation, the two subsystems will
|
Thermal equilibrium
|
wikipedia
|
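The passage above describes thermal equilibrium as the state in which no net heat flows between two connected bodies. The toy simulation below — with assumed heat capacities and an assumed linear exchange coefficient, neither of which comes from the text — shows two bodies relaxing to the common temperature predicted by energy conservation.

```python
# minimal sketch: two bodies joined by a path permeable only to heat approach a common
# temperature, at which point net flow stops (thermal equilibrium). the heat capacities
# c1, c2 and the exchange coefficient k are illustrative assumptions, not values from the text.

def equilibrate(t1, t2, c1=2.0, c2=1.0, k=0.5, dt=0.01, steps=5000):
    for _ in range(steps):
        dq = k * (t1 - t2) * dt   # heat flowing from body 1 to body 2
        t1 -= dq / c1             # no work, no matter transfer: only internal energy changes
        t2 += dq / c2
    return t1, t2

t1, t2 = equilibrate(350.0, 290.0)
print(round(t1, 3), round(t2, 3))                   # both approach the same value
print(round((2.0 * 350.0 + 1.0 * 290.0) / 3.0, 3))  # energy-weighted mean: 330.0
```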
, caroline, macdonald confirmed, with up to 20 or so more possible ). other potential hotspots are the result of shallow mantle material surfacing in areas of lithospheric break - up caused by tension and are thus a very different type of volcanism. estimates for the number of hotspots postulated to be fed by mantle plumes have ranged from about 20 to several thousand, with most geologists considering a few tens to exist. hawaii, reunion, yellowstone, galapagos, and iceland are some of the most active volcanic regions to which the hypothesis is applied. the plumes imaged to date vary widely in width and other characteristics, and are tilted, being not the simple, relatively narrow and purely thermal plumes many expected. only one, ( yellowstone ) has as yet been consistently modelled and imaged from deep mantle to surface. = = composition = = most hotspot volcanoes are basaltic ( e. g., hawaii, tahiti ). as a result, they are less explosive than subduction zone volcanoes, in which water is trapped under the overriding plate. where hotspots occur in continental regions, basaltic magma rises through the continental crust, which melts to form rhyolites. these rhyolites can form violent eruptions. for example, the yellowstone caldera was formed by some of the most powerful volcanic explosions in geologic history. however, when the rhyolite is completely erupted, it may be followed by eruptions of basaltic magma rising through the same lithospheric fissures ( cracks in the lithosphere ). an example of this activity is the ilgachuz range in british columbia, which was created by an early complex series of trachyte and rhyolite eruptions, and late extrusion of a sequence of basaltic lava flows. the hotspot hypothesis is now closely linked to the mantle plume hypothesis. the detailed compositional studies now possible on hotspot basalts have allowed linkage of samples over the wider areas often implicate in the later hypothesis, and its seismic imaging developments. = = contrast with subduction zone island arcs = = hotspot volcanoes are considered to have a fundamentally different origin from island arc volcanoes. the latter form over subduction zones, at converging plate boundaries. when one oceanic plate meets another, the denser plate is forced downward into a deep ocean trench. this plate, as it is subducted, releases water into the base of the over - riding plate,
|
Hotspot (geology)
|
wikipedia
|
bioenvironmental engineers ( bees ) within the united states air force ( usaf ) blend the understanding of fundamental engineering principles with a broad preventive medicine mission to identify, evaluate and recommend controls for hazards that could harm usaf airmen, employees, and their families. the information from these evaluations help bees design control measures and make recommendations that prevent illness and injury across multiple specialty areas, to include : occupational health, environmental health, radiation safety, and emergency response. bees are provided both initial and advanced instruction at the united states air force school of aerospace medicine at wright - patterson air force base in dayton, ohio. = = history = = during the 1970s, the united states air force ( usaf ) saw a need to implement measures to protect the health of their personnel. they took elements of military public health and spun off a separate arm called bioenvironmental engineering. from that point on, bioenvironmental engineering had taken the lead in protecting the health of usaf workers. the original group of bioenvironmental engineers ( bees ) came to the air force from the u. s. army in 1947 when the air force was formed. they were an outgrowth of the u. s. army sanitary corps. until 1964, air force bees were called sanitary and industrial hygiene engineers. they were medical service corps ( msc ) officers until the biomedical sciences corps ( bsc ) was created in 1965. between 1960 and 1970, the bee field grew from around 100 to 150 members. however, beginning in 1970, with the formation of the occupational safety and health administration ( osha ), the u. s. environmental protection agency ( epa ), and the nuclear regulatory commission, the career field experienced an exponential growth in federal regulations. these laws required bees to monitor air force operations for their effects on personnel and the environment. several major catastrophes and other events focused keen congressional interest on environment, safety and occupational health ( esoh ), leading to new, mandatory compliance programs. love canal, bhopal, atmospheric ozone depletion, and other incidents spawned new laws governing the installation restoration program ; hazard communication ; community - right - to - know ; process safety management ; and hazardous material inventory, control, and reduction. these have continually driven additional, corresponding requirements for bees. in the early 1980s, a major shift in functions occurred. the clinical and sanitary aspects of the bee program, ( communicable disease, sanitary surveys, vector control, and occupational medicine ) were turned over to the newly forming environmental health officers. this enabled
|
Bioenvironmental Engineering
|
wikipedia
|
in neuroscience, long - term potentiation ( ltp ) is a persistent strengthening of synapses based on recent patterns of activity. these are patterns of synaptic activity that produce a long - lasting increase in signal transmission between two neurons. the opposite of ltp is long - term depression, which produces a long - lasting decrease in synaptic strength. it is one of several phenomena underlying synaptic plasticity, the ability of chemical synapses to change their strength. as memories are thought to be encoded by modification of synaptic strength, ltp is widely considered one of the major cellular mechanisms that underlies learning and memory. ltp was discovered in the rabbit hippocampus by terje lømo in 1966 and has remained a popular subject of research since. many modern ltp studies seek to better understand its basic biology, while others aim to draw a causal link between ltp and behavioral learning. still, others try to develop methods, pharmacologic or otherwise, of enhancing ltp to improve learning and memory. ltp is also a subject of clinical research, for example, in the areas of alzheimer's disease and addiction medicine. = = history = = = = = early theories of learning = = = at the end of the 19th century, scientists generally recognized that the number of neurons in the adult brain ( roughly 100 billion ) did not increase significantly with age, giving neurobiologists good reason to believe that memories were generally not the result of new neuron production. with this realization came the need to explain how memories could form in the absence of new neurons. the spanish neuroanatomist santiago ramon y cajal was among the first to suggest a mechanism of learning that did not require the formation of new neurons. in his 1894 croonian lecture, he proposed that memories might instead be formed by strengthening the connections between existing neurons to improve the effectiveness of their communication. hebbian theory, introduced by donald hebb in 1949, echoed ramon y cajal's ideas, further proposing that cells may grow new connections or undergo metabolic and synaptic changes that enhance their ability to communicate and create a neural network of experiences : let us assume that the persistence or repetition of a reverberatory activity ( or " trace " ) tends to induce lasting cellular changes that add to its stability.... when an axon of cell a is near enough to excite a cell b and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes
|
Early long-term potentiation
|
wikipedia
|
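Hebb's postulate quoted above is often formalized as a correlational weight-update rule. The sketch below is one such common formalization (a rate-based Hebbian rule with a learning rate); the rule, activity traces, and parameters are illustrative assumptions, not a model taken from the LTP literature cited in the row.

```python
import numpy as np

# one common formalization of hebb's postulate: a synaptic weight grows in proportion to the
# correlation of pre- and postsynaptic activity. learning rate and activity traces are
# illustrative assumptions only.

rng = np.random.default_rng(0)
steps, eta = 1000, 0.01
w = 0.1                                       # initial synaptic weight

pre = rng.random(steps)                       # presynaptic activity (cell a)
post = 0.8 * pre + 0.2 * rng.random(steps)    # postsynaptic activity (cell b), partly driven by a

for a, b in zip(pre, post):
    w += eta * a * b                          # repeated co-activity strengthens the connection

print(round(w, 3))                            # the weight has grown relative to its initial value
```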
a concordancer is a computer program that automatically constructs a concordance. the output of a concordancer may serve as input to a translation memory system for computer - assisted translation, or as an early step in machine translation. concordancers are also used in corpus linguistics to retrieve alphabetically or otherwise sorted lists of linguistic data from the corpus in question, which the corpus linguist then analyzes. a number of concordancers have been published, notably oxford concordance program ( ocp ), a concordancer first released in 1981 by oxford university computing services, which claims to be used in over 200 organisations worldwide. = = see also = = cocoa ( digital humanities ) cross - reference ctags kwic language industry statistically improbable phrase = = references = =
|
Concordancer
|
wikipedia
|
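A minimal keyword-in-context (KWIC) style concordancer can be sketched in a few lines; the tokenization, window width, and sorting choice below are simplifying assumptions rather than features of any of the published concordancers named above.

```python
import re

# minimal kwic concordancer sketch: collect every occurrence of a search term from a corpus
# together with a fixed window of surrounding words, sorted by the right-hand context.
# tokenization here is deliberately naive.

def concordance(text, keyword, window=4):
    tokens = re.findall(r"\w+", text.lower())
    lines = []
    for i, tok in enumerate(tokens):
        if tok == keyword.lower():
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            lines.append((right, f"{left:>30} | {tok} | {right}"))
    return [line for _, line in sorted(lines)]

corpus = ("a concordancer is a computer program that automatically constructs a concordance. "
          "the output of a concordancer may serve as input to a translation memory system.")
for line in concordance(corpus, "concordancer"):
    print(line)
```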
physical conditions which are within the site, are not weather conditions and which an experienced contractor would have judged at the contract date to have such a small chance of occurring that it would have been unreasonable for her / him to have allowed for them. ice conditions of contract sixth edition clause 12 ( 1 ) if during the execution of the works the contractor shall encounter physical conditions ( other than weather conditions or conditions due to weather conditions ) or artificial obstructions which conditions or obstructions could not in her / his opinion reasonably have been foreseen by an experienced contractor the contractor shall as early as practicable give written notice thereof to the engineer. = = z clauses = = employers often use additional conditions of contract ( " z clauses " ) to amend or delete contract provisions relating to certain obligations, and the efficiency and reform group of the cabinet office in the uk ( formerly the ogc ) has published generic public sector z clauses for use with nec contracts. a standard z clause relating to fair payment for sub - contractors ( often labelled " z5 " ) was recommended for public sector use in 2011, and additional public sector z clauses were later published to reflect the contract termination provisions and other requirements of the public contracts regulations 2015. excessive use of z clauses has been criticised as " onerous " and " poorly drafted " ; nec guidance states that " additional conditions should be used only when absolutely necessary to accommodate special needs ". = = guidance notes and further information = = guidance notes and flow charts are published by the ice, which are supplemented by the frequently asked questions sections of the nec website. prospective users of the nec3 contract are encouraged to study the faq's in order to avoid unintended contract provisions. the often unintended option c scenario where a contractor is paid monies in excess of the target cost plus maximum share provisions is specifically not addressed in the guidance notes or frequently asked questions. other common misinterpretations are minutes of meetings as communications, deleted work and paying for correcting defects. = = footnotes = = = = external links = = nec engineering and construction contract website free nec3 & nec4 guidance notes, flow charts & publications ne consult
|
New Engineering Contract
|
wikipedia
|
this construction establishes a categorical equivalence between lattice - ordered abelian groups with strong unit and mv - algebras. an effect algebra that is lattice - ordered and has the riesz decomposition property is an mv - algebra. conversely, any mv - algebra is a lattice - ordered effect algebra with the riesz decomposition property. = = relation to łukasiewicz logic = = c. c. chang devised mv - algebras to study many - valued logics, introduced by jan łukasiewicz in 1920. in particular, mv - algebras form the algebraic semantics of łukasiewicz logic, as described below. given an mv - algebra a, an a - valuation is a homomorphism from the algebra of propositional formulas ( in the language consisting of ⊕, ¬, and 0 ) into a. formulas mapped to 1 ( that is, to ¬ 0 ) for all a - valuations are called a - tautologies. if the standard mv - algebra over [ 0, 1 ] is employed, the set of all [ 0, 1 ] - tautologies determines so - called infinite - valued łukasiewicz logic. chang's ( 1958, 1959 ) completeness theorem states that any mv - algebra equation holding in the standard mv - algebra over the interval [ 0, 1 ] will hold in every mv - algebra. algebraically, this means that the standard mv - algebra generates the variety of all mv - algebras. equivalently, chang's completeness theorem says that mv - algebras characterize infinite - valued łukasiewicz logic, defined as the set of [ 0, 1 ] - tautologies. the way the [ 0, 1 ] mv - algebra characterizes all possible mv - algebras parallels the well - known fact that identities holding in the two - element boolean algebra hold in all possible boolean algebras. moreover, mv - algebras characterize infinite - valued łukasiewicz logic in a manner analogous to the way that boolean algebras characterize classical bivalent logic ( see lindenbaum – tarski algebra ). in 1984, font, rodriguez and torrens introduced the wajsberg algebra as an alternative model for the infinite - valued łukasiewicz logic. wajsberg algebras and mv - algebras are term - equivalent. = = = mvn - algebras = = = in the 1940s, grigore moisil introduced his łukasiewicz
|
MV-algebra
|
wikipedia
|
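The standard MV-algebra on [0, 1] uses x ⊕ y = min(1, x + y) and ¬x = 1 − x (these operations are defined earlier in the article, not in the excerpt above, so they are restated here as background). The sketch below checks numerically that x ⊕ ¬x is sent to 1 by every [0, 1]-valuation — i.e. that it is a [0, 1]-tautology in the sense described above — while x ⊕ x is not.

```python
# sketch of valuations in the standard mv-algebra on [0, 1]. the operations
# x (+) y = min(1, x + y) and (not x) = 1 - x are the standard ones (restated here, since
# the excerpt assumes them); a formula is a [0,1]-tautology when every valuation sends it
# to 1 (that is, to "not 0").

def oplus(x, y):
    return min(1.0, x + y)

def lnot(x):
    return 1.0 - x

def is_tautology(formula, samples=101):
    grid = [i / (samples - 1) for i in range(samples)]
    return all(abs(formula(x) - 1.0) < 1e-12 for x in grid)

# x (+) (not x) is a classic łukasiewicz tautology ...
print(is_tautology(lambda x: oplus(x, lnot(x))))   # True
# ... while x (+) x is not: the valuation x = 0 sends it to 0.
print(is_tautology(lambda x: oplus(x, x)))         # False
```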
duhkha and presents an alternative one immediately after it, namely : duh - stha "'standing badly,'unsteady, disquieted ( lit. and fig. ) ; uneasy ", and so on. this form is also attested, and makes much better sense as the opposite of the rig veda sense of sukha, which monier - williams gives in full. = = = translation = = = the literal meaning of duhkha, as used in a general sense is " suffering " or " painful. " its exact translation depends on the context. contemporary translators of buddhist texts use a variety of english words to convey the aspects of dukh. early western translators of buddhist texts ( before the 1970s ) typically translated the pali term dukkha as " suffering. " later translators have emphasized that " suffering " is a too limited translation for the term duhkha, and have preferred to either leave the term untranslated, or to clarify that translation with terms such as anxiety, distress, frustration, unease, unsatisfactoriness, not having what one wants, having what one doesn't want, etc. in the sequence " birth is painful, " dukhka may be translated as " painful. " when related to vedana, " feeling, " dukkha ( " unpleasant, " " painful " ) is the opposite of sukkha ( " pleasure, " " pleasant " ), yet all feelings are dukkha in that they are impermanent, conditioned phenomena, which are unsatisfactory, incapable of providing lasting satisfaction. the term " unsatisfactoriness " then is often used to emphasize the unsatisfactoriness of " life under the influence of afflictions and polluted karma. " = = buddhism = = = = = early buddhism = = = duhkha is one of the three marks of existence, namely anitya ( " impermanent " ), duhkha ( " unsatisfactory " ), anatman ( without a lasting essence ). various sutras sum up how cognitive processes result in an aversion to unpleasant things and experiences ( duhkha ), forming a corrupted process together with the complementary process of clinging to and craving for pleasure ( suhkha ). this is expressed as samsara, an ongoing process of death and rebirth, but also more pointly and non - metaphysically in the process - formula of the five skandhas : birth is duh
|
Pain of paying
|
wikipedia
|
cells to produce ige antibodies, which in turn stimulate mast cells to release histamine, serotonin, and leukotriene to cause broncho - constriction, intestinal peristalsis, gastric fluid acidification to expel helminths. il - 5 from cd4 t cells will activate eosinophils to attack helminths. il - 10 suppresses th1 cells differentiation and function of dendritic cells. th2 overactivation against antigen will cause type i hypersensitivity which is an allergic reaction mediated by ige. allergic rhinitis, atopic dermatitis, and asthma belong to this category of overactivation. in addition to expressing different cytokines, th2 cells also differ from th1 cells in their cell surface glycans ( oligosaccharides ), which makes them less susceptible to some inducers of cell death. while we know about the types of cytokine patterns helper t cells tend to produce, we understand less about how the patterns themselves are decided. various evidence suggests that the type of apc presenting the antigen to the t cell has a major influence on its profile. other evidence suggests that the concentration of antigen presented to the t cell during primary activation influences its choice. the presence of some cytokines ( such as the ones mentioned above ) will also influence the response that will eventually be generated, but our understanding is nowhere near complete. = = = th17 helper cells = = = th17 helper cells are a subset of t helper cells developmentally distinct from th1 and th2 lineages. th17 cells produce interleukin 17 ( il - 17 ), a pro - inflammatory substance, as well as interleukins 21 and 22. this means that th17 cells are especially good at fighting extracellular pathogens and fungi, particularly during mucocutaneous immunity against candida spp. = = = thαβ helper cells = = = thαβ helper cells provide the host immunity against viruses. their differentiation is triggered by ifn α / β or il - 10. their key effector cytokine is il - 10. their main effector cells are nk cells as well as cd8 t cells, igg b cells, and il - 10 cd4 t cells. the key thαβ transcription factors are stat1 and stat3 as well as irfs. il - 10 from cd4 t cells activate nk cells'adcc
|
T helper cell
|
wikipedia
|
that have to be solved by the project are defined. the current and desired situations are analysed, and goals for the project are decided upon. in this phase, it is important to consider the needs of all parties, such as future users and their management. often, their expectations clash, causing problems later during development or during use of the system. = = = definition study = = = in this phase, a more in - depth study of the project is made. the organization is analysed to determine their needs and determine the impact of the system on the organization. the requirements for the system are discussed and decided upon. the feasibility of the project is determined. aspects that can be considered to determine feasibility are : advisable — are the resources ( both time and knowledge ) available to complete the project. significance — does the current system need to be replaced? technique — can the available equipment handle the requirements the system places on it? economics — are the costs of developing the system lower than the profit made from using it? organization — will the organization be able to use the new system? legal — does the new system conflict with existing laws? = = = basic design = = = in this phase, the design for the product is made. after the definition study has determined what the system needs to do, the design determines how this will be done. this often results in two documents : the functional design, or user interface design explaining what each part of the system does, and the high - level technical design, explaining how each part of the system is going to work. this phase combines the functional and technical design and only gives a broad design for the whole system. often, the architecture of the system is described here. sdm2 split this step in two parts, one for the bd phase, and one for the dd phase, in order to create a global design document. = = = detailed design = = = in this phase, the design for the product is described technically in the jargon needed for software developers ( and later, the team responsible for support of the system in the o & s phase ). after the basic design has been signed off, the technical detailed design determines how this will be developed with software. this often results in a library of source documentation : the functional design per function, and the technical design per function, explaining how each part of the system is going to work, and how they relate to each other. in sdm2, this phase elaborates on the global design by creating more detailed designs
|
Cap Gemini SDM
|
wikipedia
|
##orine ) thought to be helpful for tooth enamel strength. a few more trace elements may play some role in the health of mammals. boron and silicon are notably necessary for plants but have uncertain roles in animals. the elements aluminium and silicon, although very common in the earth's crust, are conspicuously rare in the human body. below is a periodic table highlighting nutritional elements. = = see also = = abundances of the elements ( data page ) abundance of elements in earth's crust natural abundance – isotopic abundance goldschmidt classification – geochemical classification primordial nuclide – nuclides predating the earth's formation ( found on earth ) radiative levitation – stellar phenomenon list of data references for chemical elements = = references = = = = = footnotes = = = = = = notes = = = = = = notations = = = " rare earth elements — critical resources for high technology | usgs fact sheet 087 - 02 ". geopubs. wr. usgs. gov. " imagine the universe! dictionary ". 3 december 2003. archived from the original on 3 december 2003. = = external links = = list of elements in order of abundance in the earth's crust ( only correct for the twenty most common elements ) cosmic abundance of the elements and nucleosynthesis webelements. com lists of elemental abundances for the universe, sun, meteorites, earth, ocean, streamwater, etc.
|
Abundance of the chemical elements
|
wikipedia
|
anion. in both of these the central bi atom is octahedrally coordinated with little or no distortion, in contravention to vsepr theory. the steric activity of the lone pair has long been assumed to be due to the orbital having some p character, i. e. the orbital is not spherically symmetric. more recent theoretical work shows that this is not always necessarily the case. for example, the litharge structure of pbo contrasts to the more symmetric and simpler rock - salt structure of pbs, and this has been explained in terms of pbii – anion interactions in pbo leading to an asymmetry in electron density. similar interactions do not occur in pbs. another example are some thallium ( i ) salts where the asymmetry has been ascribed to s electrons on tl interacting with antibonding orbitals. = = references = = = = external links = = chemistry guide an explanation of the inert pair effect.
|
Inert-pair effect
|
wikipedia
|
example, let f ( z ) = sin ( π z ). then one says that sin ( π z ) is of exponential type π, since π is the smallest number that bounds the growth of sin ( π z ) along the imaginary axis. so, for this example, carlson's theorem cannot apply, as it requires functions of exponential type less than π. similarly, the euler – maclaurin formula cannot be applied either, as it, too, expresses a theorem ultimately anchored in the theory of finite differences. = = formal definition = = a holomorphic function f ( z ) is said to be of exponential type σ > 0 if for every ε > 0 there exists a real - valued constant a_ε such that | f ( z ) | ≤ a_ε e^( ( σ + ε ) | z | ) for | z | → ∞, where z ∈ c. we say f ( z ) is of exponential type if f ( z ) is of exponential type σ for some σ > 0. the number τ ( f ) = σ = lim sup_{ | z | → ∞ } | z |^( −1 ) log | f ( z ) | is the exponential type of f ( z ). the
|
Exponential type
|
wikipedia
|
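The exponential type τ(f) = lim sup |z|^(−1) log |f(z)| can be estimated numerically for f(z) = sin(πz) by sampling along the imaginary axis, where its growth is maximal; the finite radii below only approximate the lim sup, so this is an illustration rather than a proof that the type is π.

```python
import cmath, math

# numerical sketch of the exponential type tau(f) = limsup |z|^(-1) * log|f(z)| for
# f(z) = sin(pi z). growth is fastest along the imaginary axis, so sampling z = i*y for
# increasing y should approach pi. radii are kept below the double-precision overflow range.

def f(z):
    return cmath.sin(math.pi * z)

for y in (5.0, 20.0, 80.0, 200.0):
    z = complex(0.0, y)
    estimate = math.log(abs(f(z))) / abs(z)
    print(f"|z| = {abs(z):6.1f}   log|f(z)|/|z| = {estimate:.6f}")

print("pi =", math.pi)   # the estimates approach pi from below
```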
nephrotoxicity and neurotoxicity. the small size ( < 100 nm ) and large surface area of functionalized nanomagnets offer advantages properties compared to hemoperfusion, which is a clinically used technique for the purification of blood and is based on surface adsorption. these advantages include high loading capacity, high selectivity towards the target compound, fast diffusion, low hydrodynamic resistance, and low dosage requirements. = = tissue engineering = = nanotechnology may be used as part of tissue engineering to help reproduce, repair, or reshape damaged tissue using suitable nanomaterial - based scaffolds and growth factors. if successful, tissue engineering may replace conventional treatments like organ transplants or artificial implants. nanoparticles such as graphene, carbon nanotubes, molybdenum disulfide and tungsten disulfide are being used as reinforcing agents to fabricate mechanically strong biodegradable polymeric nanocomposites for bone tissue engineering applications. the addition of these nanoparticles to the polymer matrix at low concentrations ( ~ 0. 2 weight % ) significantly improves in the compressive and flexural mechanical properties of polymeric nanocomposites. these nanocomposites may potentially serve as novel, mechanically strong, lightweight bone implants. for example, a flesh welder was demonstrated to fuse two pieces of chicken meat into a single piece using a suspension of gold - coated nanoshells activated by an infrared laser. this could be used to weld arteries during surgery. another example is nanonephrology, the use of nanomedicine on the kidney. the full potential and implications of nanotechnology uses within the tissue engineering are not yet fully understood, despite research spanning the past two decades. = = vaccine development = = today, a significant proportion of vaccines against viral diseases are created using nanotechnology. solid lipid nanoparticles represent a novel delivery system for some vaccines against sars - cov - 2 ( the virus that causes covid - 19 ). in recent decades, nanosized adjuvants have been widely used to enhance immune responses to targeted vaccine antigens. inorganic nanoparticles of aluminum, silica and clay, as well as organic nanoparticles based on polymers and lipids, are commonly used adjuvants within modern vaccine formulations. nanoparticles of natural polymers such as chitosan are commonly used adju
|
Nanomedicine
|
wikipedia
|
early detection of children with developmental - behavioral delays and disabilities is essential to ensure that the benefits of early intervention are maximized. = = background = = early intervention has been proven to help prevent school failure, reduce the need for expensive special education services, is associated with graduating from high school, avoiding teen pregnancy and violent crime, becoming employed when an adult, etc. recent research from head start showed that for every $ 1 spent on early intervention, society as a whole saves $ 17. 00. in the us, early intervention is guaranteed under the individuals with disabilities education act ( idea ) beginning at birth. because almost all children receive health care, primary care providers ( e. g., nurses, family medicine physicians, and pediatricians ) are charged by their various professional societies, by the centers for medicare and medicaid services, the centers for disease control, and by idea to search for difficulties and make needed referrals. so what are the methods used to detect children with difficulties and how effective are they? = = developmental - behavioral screening = = screening tools are brief measures designed to sort those who probably have problems from those who do not. screens are meant to be used on the asymptomatic and are not necessary when problems are obvious. screens do not lead to a diagnosis but rather to a probability of a problem. the kind of problem that may exist is generally not defined by a screening test. the screens used in primary care are generally broad - band in nature, meaning that they tap a range of developmental domains, typically expressive and receptive language, fine and gross motor skills, self - help, social - emotional, and for older children pre - academic and academic skills. in contrast, narrow - band screens focus only on a single condition such mental health problems, and may parse via factor scores, the probability, for example of depression and anxiety, versus attention deficits, versus disorders of conduct. typically, broad - band screens are used first and may be the only type of measure used to make referrals in primary care, referrals which are then followed up by in — depth or diagnostic testing and often with narrow - band screens used alongside them. screening measures require careful construction, research, and a high level of proof. high quality screens are ones that have been standardized ( meaning administered in exactly the same way every time ) on a large current ( meaning in the last decade ) nationally representative sample. screens must be shown to be reliable ( meaning that two different examiners get virtually the same results, and
|
Developmental-behavioral surveillance and screening
|
wikipedia
|
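The row above stresses that a screen yields a probability of a problem rather than a diagnosis. The sketch below applies Bayes' rule to show how a positive screen translates into such a probability; the prevalence, sensitivity, and specificity figures are hypothetical placeholders, not properties of any actual screening tool.

```python
# illustrative sketch only: how a positive result on a screening tool translates into a
# probability of a problem via bayes' rule. the prevalence, sensitivity, and specificity
# figures are hypothetical placeholders, not values from any actual screening measure.

def positive_predictive_value(prevalence, sensitivity, specificity):
    true_pos = prevalence * sensitivity
    false_pos = (1.0 - prevalence) * (1.0 - specificity)
    return true_pos / (true_pos + false_pos)

ppv = positive_predictive_value(prevalence=0.15, sensitivity=0.80, specificity=0.80)
print(f"probability of a true problem given a positive screen: {ppv:.2f}")   # about 0.41
```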
. in 2022, apple announced that a portion of the iphone 14 would be manufactured in tamil nadu, india, as a response to china's " zero - covid " policy that has negatively affected global supply chains for many industries. apple has stated that they plan to shift 25 % of iphone production to india by 2025. = = hardware = = apple directly sub - contracts hardware production to external oem companies, maintaining a high degree of control over the end product. the iphone contains most of the hardware parts of a typical modern smartphone. some hardware elements, such as 3d touch and the taptic engine, are unique to the iphone. the main hardware of the iphone is the touchscreen, with current models offering screens of 4. 7 inches and larger. all iphones include a rear - facing camera ; the front - facing camera dates back to the iphone 4. the iphone 7 plus introduced multiple lenses to the rear - facing camera. a range of sensors are also included on the device, such as a proximity sensor, ambient light sensor, accelerometer, gyroscopic sensor, magnetometer, facial recognition sensor or fingerprint sensor ( depending on the model ) and barometer. in 2022, apple added satellite communications to the iphone, with the release of the iphone 14 and iphone 14 pro. = = software = = = = = operating system = = = the iphone runs ios. it is based on macos's darwin and many of its userland apis, with cocoa replaced by cocoa touch, and appkit replaced by uikit. the graphics stack runs on metal, apple's low - level graphics api. the iphone comes with a set of bundled applications developed by apple, and supports downloading third - party applications through the app store. apple provides free updates to ios over - the - air, or through finder and itunes on a computer. major ios releases have historically accompanied new iphone models. the most recent version is ios 18. = = = app store and third - party apps = = = at wwdc 2007 on june 11, 2007, apple announced that the iphone would support third - party ajax web applications that share the look and feel of the iphone interface. on october 17, 2007, steve jobs, in an open letter posted to apple's " hot news " weblog, announced that a software development kit ( sdk ) would be made available to third - party developers in february 2008. the iphone sdk was officially announced and released on march 6, 2008. the app
|
Icophone
|
wikipedia
|
science book : the ghosts of evolution : nonsensical fruit, missing partners, and other ecological anachronisms. in shaping the book's title, barlow drew upon a 1992 essay by paul s. martin titled " the last entire earth ". martin had written : in the shadows along the trail i keep an eye out for the ghosts, the beasts of the ice age. what is the purpose of the thorns on the mesquites in my backyard in tucson? why do they and honey locusts have sugary pods so attractive to livestock? whose foot is devil's claw intended to intercept? such musings add magic to a walk and may help to liberate us from tunnel vision, the hubris of the present, the misleading notion that nature is self - evident. the honey locust mentioned in martin's excerpt is a native tree of eastern north america. because it is favored for planting along urban streets and parking lots, barlow was very familiar with it while she was working on her book in new york city. its long, curving pods became a prominent part of her book. later, other writers also popularized its lost partnership with ice age " ghosts " ( extinct fauna ). one animal - with - animal form of evolutionary anachronism also gained popular attention. as reported in the new york times, " pronghorn's speed may be legacy of past predators ", john a. byers hypothesized that the antelope - like pronghorn of america's grasslands was still running from a pleistocene ghost that had been much faster than america's native wolves. this ghost was the american cheetah. = = megafaunal dispersal syndrome = = seed dispersal syndromes are complexes of fruit traits that enable plants to disperse seeds by wind, water, or mobile animals. the kind of fruits that birds are attracted to eat are usually small, with only a thin protective skin, and the colors are red or dark shades of blue or purple. fruits categorized as mammal syndrome are bigger than bird fruits. they may possess a tough rind or husk and emit a strong odor when ripe. because mammals ( other than primates ) tend to have poor color vision, these fruits usually retain a dull coloration of brown, burnished yellow, orange, or will remain green when ripe. the megafaunal dispersal syndrome refers to those attributes of fruits that evolved in order to attract megafauna ( animals that weigh or weighed more than 44 kilograms ) as primary dispersal agents
|
Evolutionary anachronism
|
wikipedia
|
one - on - one time with clients, dangerous materials, or impromptu appointments may not work well for a parent with children at home. thus, not all professions lend themselves to work - at - home parenting. without good organization, the wahp may experience decreased productivity due to added responsibilities and unexpected interruptions. internet businesses or'virtual assistants'are well - suited as work - at - home businesses. the center for women's business research, a non - profit organization, found that generation x mothers are the most likely to work from home. the center also reports that between 1997 and 2004, employment at female - owned companies grew by 24. 2 %, more than twice the rate of the 11. 6 % logged by all businesses. types of work that wahps may engage in include remote work, freelancing on project such as articles, graphic design or consulting, or working as an independent contractor, running home - party businesses, managing companies from home, and providing business and marketing support. = = history = = the concept of the wahp has been around for as long as small businesses have. in pre - industrial societies, merchants and artisans often worked out of or close to their homes. children typically remained in the care of a parent during the day and were often present while the parents worked. societal changes in the 1800s, such as compulsory education and the industrial revolution, made working from home with children around less common. entrepreneurship saw a resurgence in the 1980s, with more of an emphasis on work - life balance. among the long - traditional groups of wahps are those professionals in private practice with home offices such as physicians, therapists, music teachers and tutors. the term wahp began gaining popularity in the late 1990s especially as the growth of the internet allowed for small business owners and entrepreneurs to have greater options for starting and running their businesses. remote work opportunities have since increased with advances in technology. in 2008, wahm magazine, a digital magazine, was established specifically for work - at - home parents, designed to address the issues of the complete lifestyle of work - at - home parents regardless of field or industry, and has a mission to validate, empower, encourage, educate and support wahps in their personal, professional and lifestyle goals. during the covid - 19 lockdowns, many parents have to juggle paid employment and full - time daycare, which is likely to limit their productivity and the anticipated benefits of working from home. however, changes in technology and firm culture have increased the likelihood of working from
|
Work at home parent
|
wikipedia
|
##bouchure = = = the water must be " blown " into the hydraulophone by way of a pump which can be hand - operated, wind - operated, water - powered, or electric. unlike woodwind instruments in which there is one mouthpiece at the entrance to the flute chamber, hydraulophones have mouthpieces at every exit port from the chamber. whereas internal ducted flutes have one fipple mechanism for the mouth of the player, along with several finger holes that share the one fipple mechanism, the hydraulophone has a separate mouth / mouthpiece for each finger hole. a typical park hydraulophone for installation in public spaces has 12 mouths, whereas a concert hydraulophone typically has 45 mouths. embouchure is controlled by way of the instrument's mouth, not the player's mouth such that the player can sing along with the hydraulophone ( i. e. a player can sing and play the instrument at the same time ). moreover, the instrument provides the unique capability of polyphonic embouchure, where a player can dynamically " sculpt " each note by the shape and position of each finger inserted into each of the mouths. for example, the sound is different when fingering the center of a water jet than when fingering the water jet near the periphery of the circular mouth's opening. = = relationships to other instruments = = = = = woodwind = = = the hydraulophone is similar to a woodwind instrument, but it runs on incompressible ( or less compressible ) fluid rather than a compressible gas - like air. in this context, hydraulophones are sometimes called " woodwater " instruments regardless of whether or not they are made of wood ( as woodwind instruments are often not made of wood ). = = = pipe organ = = = many hydraulophones include a separate water - filled pipe for each note, and have sound - production means similar to pipe organs ( but with water rather than air ), while maintaining the flute like user interface ( finger embouchure holes ). this form of hydraulophone is similar to an organ, but has water flowing through the pipes instead of air flowing through the pipes. = = = piano = = = on a concert hydraulophone, the finger holes are arranged like the keys on a piano, i. e. there is a row of uniformly spaced holes close to the player, and a row of holes that are in groups of 2,
|
Hydraulophone
|
wikipedia
|
follow pauli's principle and fermi – dirac statistics. in general, for an ensemble of non - interacting fermions, also known as a fermi gas, each particle can be treated independently with a single - fermion energy given by the purely kinetic term, e = p^2 / ( 2m ), where p is the momentum of one particle and m its mass. every possible momentum state of an electron within this volume up to the fermi momentum p_f being occupied. the degeneracy pressure at zero temperature can be computed as p = ( 2 / 3 ) ( e_tot / v ) = ( 2 / 3 ) p_f^5 / ( 10 π^2 m ħ^3 ), where v is the total volume of the system and e_tot is the total energy of the ensemble. specifically for the electron degeneracy pressure, m is substituted by the electron mass m_e and the fermi momentum is obtained from the fermi energy, so the electron degeneracy pressure is given by p_e = ( ( 3 π^2 )^( 2 / 3 ) ħ^2 / ( 5 m_e ) ) ρ_e^( 5 / 3 ), where ρ_e is the free electron density ( the number of free electrons per unit volume ). for the case of a metal, one can prove that this equation remains approximately true for temperatures lower than the fermi temperature, about 10^6 kelvins. when particle energies reach relativistic levels, a modified formula is required. the relativistic degeneracy pressure is proportional to ρ_e^( 4 / 3 ). = = examples = = = = = metals = = = for the case of electrons in crystalline solid, several approximations
|
Electron degeneracy pressure
|
wikipedia
|
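The non-relativistic formula quoted above, p_e = (3π^2)^(2/3) ħ^2 / (5 m_e) · ρ_e^(5/3), can be evaluated directly; the free-electron density below is an illustrative, copper-like order-of-magnitude value, not a number taken from the text.

```python
import math

# direct evaluation of the non-relativistic electron degeneracy pressure quoted above:
#   p_e = (3 pi^2)^(2/3) * hbar^2 / (5 m_e) * rho_e^(5/3)
# the free-electron density is an illustrative order-of-magnitude value for a metal.

hbar = 1.054571817e-34   # J*s
m_e = 9.1093837015e-31   # kg

def electron_degeneracy_pressure(rho_e):
    return (3.0 * math.pi ** 2) ** (2.0 / 3.0) * hbar ** 2 / (5.0 * m_e) * rho_e ** (5.0 / 3.0)

rho_metal = 8.5e28       # free electrons per m^3, roughly copper-like
print(f"{electron_degeneracy_pressure(rho_metal):.3e} Pa")   # on the order of 10^10 Pa
```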
level of need in the hierarchy is safety, which could be interpreted to mean adequate housing or living in a safe neighborhood. the next three levels in maslow's theory relate to intellectual and psycho - emotional needs : love and belonging, esteem ( which refers to competence and mastery ), and finally the highest order need, self - actualization. although maslow's theory is widely known, in the workplace it has proven to be a poor predictor of employee behavior. maslow theorized that people will not seek to satisfy a higher level need until their lower level needs are met. there has been little empirical support for the idea that employees in the workplace strive to meet their needs only in the hierarchical order prescribed by maslow. building on maslow's theory, clayton alderfer ( 1959 ) collapsed the levels in maslow's theory from five to three : existence, relatedness and growth. this theory, called the erg theory, does not propose that employees attempt to satisfy these needs in a strictly hierarchical manner. empirical support for this theory has been mixed. = = = = need for achievement = = = = atkinson & mcclelland's need for achievement theory is the most relevant and applicable need - based theory in the i – o psychologist's arsenal. unlike other need - based theories, which try to interpret every need, need for achievement allows the i – o psychologist to concentrate research into a tighter focus. studies show those who have a high need for achievement prefer moderate levels of risk, seek feedback, and are likely to immerse themselves in their work. achievement motivation can be broken down into three types : achievement – seeks position advancement, feedback, and sense of accomplishment authority – need to lead, make an impact and be heard by others affiliation – need for friendly social interactions and to be liked. because most individuals have a combination of these three types ( in various proportions ), an understanding of these achievement motivation characteristics can be a useful assistance to management in job placement, recruitment, etc. the theory is referred to as need for achievement because these individuals are theorized to be the most effective employees and leaders in the workplace. these individuals strive to achieve their goals and advance in the organization. they tend to be dedicated to their work and strive hard to succeed. such individuals also demonstrate a strong desire for increasing their knowledge and for feedback on their performance, often in the form of performance appraisal. the need for achievement is in many ways similar to the need for mastery and self - actualization in maslow '
|
Work motivation
|
wikipedia
|
a recognized functional safety standard. in europe, that standard is iec en 61508, or one of the industry specific standards derived from iec en 61508, or for simple systems some other standard like iso 13849. verification that the system meets the assigned sil, asil, pl or agpl by determining the probability of dangerous failure, checking minimum levels of redundancy, and reviewing systematic capability ( sc ). these three metrics have been called " the three barriers ". failure modes of a device are typically determined by failure mode and effects analysis of the system ( fmea ). failure probabilities for each failure mode are typically determined using failure mode, effects, and diagnostic analysis fmeda. conduct functional safety audits to examine and assess the evidence that the appropriate safety lifecycle management techniques were applied consistently and thoroughly in the relevant lifecycle stages of product. neither safety nor functional safety can be determined without considering the system as a whole and the environment with which it interacts. functional safety is inherently end - to - end in scope. modern systems often have software intensively commanding and controlling safety - critical functions. therefore, software functionality and correct software behavior must be part of the functional safety engineering effort to ensure acceptable safety risk at the system level. = = certifying functional safety = = any claim of functional safety for a component, subsystem or system should be independently certified to one of the recognized functional safety standards. a certified product can then be claimed to be functionally safe to a particular safety integrity level or a performance level in a specific range of applications : the certificate and the assessment report is provided to the customers describing the scope and limits of performance. = = = certification bodies = = = functional safety is a technically challenging field. certifications should be done by independent organizations with experience and strong technical depth ( electronics, programmable electronics, mechanical, and probabilistic analysis ). functional safety certification is performed by accredited certification bodies ( cb ). accreditation is awarded to a cb organization by an accreditation body ( ab ). in most countries there is one ab. in the united states, the american national standards institute ( ansi ) is the ab for functional safety accreditation. in the united kingdom, the united kingdom accreditation service ( ukas ) provides functional safety accreditation. abs are members of the international accreditation forum ( iaf ) for work in management systems, products, services, and personnel accreditation or the international laboratory accreditation cooperation ( ilac ) for laboratory testing accreditation. a multilateral recognition arrangement ( mla )
|
Functional safety
|
wikipedia
|
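The row above mentions verifying a SIL by determining the probability of dangerous failure. The sketch below uses the common simplified low-demand approximation PFDavg ≈ λ_DU · T_proof / 2 for a single-channel (1oo1) element and the IEC 61508 low-demand PFDavg bands; real verification follows the full standard (redundancy architectures, diagnostic coverage, common-cause factors, systematic capability), and the numbers used here are made up for illustration.

```python
# simplified sketch only: the common low-demand approximation pfd_avg ~ lambda_du * t_proof / 2
# for a single-channel (1oo1) element, mapped onto the iec 61508 low-demand sil bands.
# the failure rate and proof-test interval below are made-up example values.

def pfd_avg_1oo1(lambda_du_per_hour, proof_test_interval_hours):
    return lambda_du_per_hour * proof_test_interval_hours / 2.0

def sil_band_low_demand(pfd):
    if 1e-5 <= pfd < 1e-4: return 4
    if 1e-4 <= pfd < 1e-3: return 3
    if 1e-3 <= pfd < 1e-2: return 2
    if 1e-2 <= pfd < 1e-1: return 1
    return None   # outside the tabulated bands

pfd = pfd_avg_1oo1(lambda_du_per_hour=2e-7, proof_test_interval_hours=8760)  # yearly proof test
print(f"pfd_avg = {pfd:.2e}  ->  sil {sil_band_low_demand(pfd)}")            # 8.76e-04 -> sil 3
```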
information and information systems against unauthorized access, use, disclosure, disruption, modification, or destruction, ensuring confidentiality, integrity, and availability. title iii of fisma 2002 tasked nist with developing information security and risk management standards, guidelines, and requirements. the rmf, outlined in nist special publication 800 - 37 and first published in february 2010, is designed to help organizations manage cybersecurity risks and comply with various u. s. laws and regulations, including the federal information security modernization act of 2014, the privacy act of 1974, and federal information processing standards, among others. in december 2019, revision 2 of the nist special publication 800 - 37 was published, introducing a prepare step to the overall process. = = risks = = throughout its lifecycle, an information system will face various types of risk that can impact its security posture. the rmf process aids in the early identification and resolution of these risks. broadly, risks can be classified as infrastructure, project, application, information asset, business continuity, outsourcing, external, and strategic risks. infrastructure risks pertain to the reliability of computers and networks, while project risks involve budgeting, timelines, and system quality. application risks relate to system performance and capacity. information asset risks concern the potential loss or unauthorized disclosure of data. business continuity risks focus on maintaining system reliability and uptime. outsourcing risks involve the impact of third - party service providers on the system. external risks are factors beyond the information system's control that can impact the system's security. strategic risks are associated with the need for information system functions to align with the business strategy that the system supports. = = revision 2 updates = = the key objectives for the update to rmf revision 2 included the following : improve communication between risk management activities at the executive ( c - suite ) level and those at the system and operational levels ; institutionalize critical risk management preparatory activities at all levels to facilitate more effective and cost - efficient rmf execution ; demonstrate how the nist cybersecurity framework can be aligned with the rmf and implemented through established nist risk management processes ; integrate privacy risk management into the rmf to better address privacy protection responsibilities ; promote the development of trustworthy, secure software and systems by aligning system engineering processes in nist sp 800 - 160 volume 1, with relevant tasks in the rmf ; incorporate security - related supply chain risk management ( scrm ) concepts into the rmf, addressing risks such as counterfeit components, tampering, malicious code
|
Risk Management Framework
|
wikipedia
|
products of hilbert spaces and the algebras acting on them. von neumann, j. ( 1940 ), " on rings of operators iii ", annals of mathematics, second series, 41 ( 1 ) : 94 – 161, doi : 10. 2307 / 1968823, jstor 1968823. this shows the existence of factors of type iii. von neumann, j. ( 1943 ), " on some algebraical properties of operator rings ", annals of mathematics, second series, 44 ( 4 ) : 709 – 715, doi : 10. 2307 / 1969106, jstor 1969106. this shows that some apparently topological properties in von neumann algebras can be defined purely algebraically. von neumann, j. ( 1949 ), " on rings of operators. reduction theory ", annals of mathematics, second series, 50 ( 2 ) : 401 – 485, doi : 10. 2307 / 1969463, jstor 1969463. this discusses how to write a von neumann algebra as a sum or integral of factors. von neumann, john ( 1961 ), taub, a. h. ( ed. ), collected works, volume iii : rings of operators, ny : pergamon press. reprints von neumann's papers on von neumann algebras. wassermann, a. j. ( 1991 ), operators on hilbert space
|
Finite von Neumann algebra
|
wikipedia
|
##line the structure determination process, resulted in an array of technical advances. several methods developed during psi - 1 enhanced expression of recombinant proteins in systems like escherichia coli, pichia pastoris and insect cell lines. new streamlined approaches to cell cloning, expression and protein purification were also introduced, in which robotics and software platforms were integrated into the protein production pipeline to minimize required manpower, increase speed, and lower costs. = = phase 2 = = the second phase of the protein structure initiative ( psi - 2 ) lasted from july 2005 to june 2010. its goal was to use methods introduced in psi - 1 to determine a large number of proteins and continue development in streamlining the structural genomics pipeline. psi - 2 had a five - year budget of $ 325 million provided by nigms with support from the national center for research resources. by the end of this phase, the protein structure initiative had solved over 4, 800 protein structures ; over 4, 100 of these were unique. the number of sponsored research centers grew to 14 during psi - 2. four centers were selected as large scale centers, with a mandate to place 15 % effort on targets nominated by the broader research community, 15 % on targets of biomedical relevance, and 70 % on broad structural coverage ; these centers were the joint center for structural genomics ( jcsg ), the midwest center for structural genomics ( mcsg ), the northeast structural genomics consortium ( nesg ), and the new york sgx research center for structural genomics ( nysgxrc ). the new centers participating in psi - 2 included four specialized centers : accelerated technologies center for gene to 3d structure ( atcg3d ), the center for eukaryotic structural genomics ( cesg ), the center for high - throughput structural biology ( chtsb ), a branch of the structural genomics of pathogenic protozoa consortium taking that institution's place ), the center for structures of membrane proteins ( csmp ), and the new york consortium on membrane protein structure ( nycomps ). two homology modeling centers, the joint center for molecular modeling ( jcmm ) and new methods for high - resolution comparative modeling ( nmhrcm ) were also added, as well as two resource centers, the psi materials repository ( psi - mr ) and the psi structural biology knowledgebase ( sbkb ). the tb structural genomics consortium was removed from the roster of supported research centers in the transition from psi - 1 to psi - 2.
|
Protein Structure Initiative
|
wikipedia
|
it has become possible to analyze large datasets with greater accuracy and efficiency. however, method also has its own limitations, such as the lack of training data, imbalanced data, and overfitting. = = future directions = = combining various experimental and computational techniques can provide comprehensive insights into complex structures. integrating data from x - ray crystallography, nmr spectroscopy, and computational modeling enhances accuracy and reliability. continued progress in computational simulations, including quantum chemistry and molecular dynamics, will allow researchers to study larger and more complex systems, aiding in predicting and understanding novel structures. open - access databases and collaborative efforts enable researchers worldwide to share structural data, accelerating scientific progress and fostering innovation. structural chemistry can contribute to the design of eco - friendly materials and catalysts, promoting sustainable practices in the chemical industry. structural chemistry can contribute to the design of eco - friendly materials and catalysts, promoting sustainable practices in the chemical industry. recent development of metal - free nanostructured catalysts is one of the advancements in the field of structural chemistry that has the potential to drive organic transformations in a sustainable manner. = = see also = = chemical structure = = references = =
|
Structural chemistry
|
wikipedia
|
probability, embedded in a verbal description of a judgement problem, and demonstrated that people's intuitive judgement deviated from the rule. the " linda problem " below gives an example. the deviation is then explained by a heuristic. this research, called the heuristics - and - biases program, challenged the idea that human beings are rational actors and first gained worldwide attention in 1974 with the science paper " judgment under uncertainty : heuristics and biases ". although the originally proposed heuristics have been refined over time, this research program has changed the field by permanently setting its research questions. the original ideas of herbert simon were taken up in the 1990s by gerd gigerenzer and others. according to their perspective, the study of heuristics requires formal models that allow predictions of behavior to be made ex ante. their program has three aspects : what are the heuristics humans use? ( the descriptive study of the " adaptive toolbox " ) under what conditions should humans rely on a given heuristic? ( the prescriptive study of ecological rationality ) how can heuristic decision aids be designed so that they are easy to understand and execute? ( the engineering study of intuitive design, for example with human - centered design or user - centered design approaches. ) among other results, this program has shown that heuristics can lead to fast, frugal, and accurate decisions in many real - world situations that are characterized by uncertainty. these two research programs have led to two kinds of models of heuristics : formal models and informal ones. formal models describe the decision process in terms of an algorithm, which allows for mathematical proofs and computer simulations. in contrast, informal models are verbal descriptions. = = formal models of heuristics = = list of formal models of heuristics : elimination by aspects heuristic fast - and - frugal trees fluency heuristic gaze heuristic recognition heuristic satisficing similarity heuristic take - the - best heuristic tallying = = = simon's satisficing strategy = = = herbert simon's satisficing heuristic can be used to choose one alternative from a set of alternatives in situations of uncertainty. here, uncertainty means that the total set of alternatives and their consequences is not known or knowable. for instance, professional real - estate entrepreneurs rely on satisficing to decide in which location to invest to develop new commercial areas : "
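the satisficing rule described above lends itself to a very small sketch. everything concrete in it ( the option names, scores and aspiration level ) is hypothetical and only illustrates the stopping rule, not any published model.

```python
# satisficing: examine alternatives one at a time and stop at the first whose
# score meets a fixed aspiration level; the values below are hypothetical.

def satisfice(alternatives, score, aspiration):
    """return the first alternative whose score meets the aspiration level.

    if none does, return None (in practice the aspiration level might then be
    lowered and the search repeated).
    """
    for alternative in alternatives:
        if score(alternative) >= aspiration:
            return alternative
    return None

# hypothetical example: candidate locations scored by expected profitability
scores = {"riverside": 0.55, "old town": 0.72, "north gate": 0.90}
choice = satisfice(scores, score=scores.get, aspiration=0.7)
print(choice)  # "old town" -- the first option examined that clears the bar
```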
|
Heuristic (psychology)
|
wikipedia
|
72 spring attractions and numerous micro spring holes spread over the city centre. = = see also = = fountain list of spa towns oasis petroleum seep soakage spring line settlement spring supply water cycle well = = references = = = = further reading = = lamoreaux, philip e. ; tanner, judy t, eds. ( 2001 ), springs and bottled water of the world : ancient history, source, occurrence, quality and use, berlin, heidelberg, new york : springer - verlag, isbn 3 - 540 - 61841 - 4 springs of missouri, vineyard and feder, missouri department of natural resources, division of geology and land survey in cooperation with u. s. geological survey and missouri department of conservation, 1982 cohen, stan ( revised 1981 edition ), springs of the virginias : a pictorial history, charleston, west virginia : quarrier press. = = external links = = " the science of springs " " what is a spring? " find a spring
|
Spring (hydrology)
|
wikipedia
|
rate in the area around it. due to the increased mutation rate, the nearby a allele may be mutated into a new, advantageous allele, a * : - - m - - - - - - a - - -> - - m - - - - - - a * - -. the individual in which this chromosome lies will now have a selective advantage over other individuals of this species, so the allele a * will spread through the population by the normal processes of natural selection. m, due to its proximity to a *, will be dragged along into the general population. this process only works when m is very close to the allele it has mutated. a greater distance would increase the chance of recombination separating m from a *, leaving m alone with any deleterious mutations it may have caused. for this reason, the evolution of mutators is generally expected to happen largely in asexual species, where recombination cannot disrupt linkage disequilibrium. = = = neutral theory of molecular evolution = = = the neutral theory of molecular evolution assumes that most new mutations are either deleterious ( and quickly purged by selection ) or else neutral, with very few being adaptive. it also assumes that the behavior of neutral allele frequencies can be described by the mathematics of genetic drift. genetic hitchhiking has therefore been viewed as a major challenge to the neutral theory, and as an explanation for why genome - wide versions of the mcdonald – kreitman test appear to indicate a high proportion of mutations becoming fixed for reasons connected to selection. = = references = =
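a minimal sketch of the dynamics described above, assuming a deterministic haploid selection model with no recombination ( as in the asexual case discussed ). the selection coefficient, starting frequency and generation count are hypothetical.

```python
# illustrative sketch: a neutral marker m linked to a new beneficial allele a*
# rises in frequency by hitchhiking; with no recombination the m--a* chromosome
# behaves as a single selected unit. parameters are hypothetical.

s = 0.05          # assumed selective advantage of a*
p = 0.001         # initial frequency of the m--a* chromosome
generations = 300

for _ in range(generations):
    # standard haploid selection recursion: fitness 1 + s for a* carriers, 1 otherwise
    p = p * (1 + s) / (p * (1 + s) + (1 - p))

print(f"frequency of the m--a* background after {generations} generations: {p:.3f}")
```

because m sits on the same chromosome as a *, its frequency trajectory is identical to the one printed here until recombination ( absent in this sketch ) separates the two loci.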
|
Genetic hitchhiking
|
wikipedia
|
in linguistics, functional shift occurs when an existing word takes on a new syntactic function. if no change in form occurs, it is called a zero derivation. for example, the word like, formerly only used as a preposition in comparisons ( as in " eats like a pig " ), is now also used in the same way as the subordinating conjunction as in many dialects of english ( as in " sounds like he means it " ). the boundary between functional shift and conversion ( the derivation of a new word from an existing word of identical form ) is not well - defined, but it could be construed that conversion changes the lexical meaning and functional shift changes the syntactic meaning. shakespeare uses functional shift, for example using a noun to serve as a verb. researchers found that this technique allows the brain to understand what a word means before it understands the function of the word within a sentence. = = references = =
|
Functional shift
|
wikipedia
|
rather the flow remained constant in the direction that it was flowing before or showed other characteristics like turbulent separated flows. saucer blowouts show a deceleration of wind flow along the deflation basin ; the structure widens over time as reversing flows erode the sides and the blowout expands upwind. due to rapid deceleration, saucers tend to form short, wide, radial depositional slopes. when wind flow enters a saucer - shaped blowout, the wind speed decreases upon entering the blowout and accelerates at the downwind side of the formation. a zone of separation develops along the lee slope as the wind enters the blowout and decreases in speed, yet it accelerates again as it re - attaches at the basin and flows up to the depositional lobe, where sand is evacuated. although other factors also influence blowout morphology, both types basically tend to have their deflation basins eroded until they reach a non - erodible base level. a study conducted by hesp ( 1982 ) indicates that depositional length is correlated not with the eroded depth but with the blowout width. in other words, the length of the depositional lobe scales with the blowout width, at a ratio of roughly 1 : 2 to 1 : 3 in saucer blowouts and 1 : 4 in trough blowouts. = = see also = = aeolian processes – processes due to wind activity sand dune stabilization – coastal management practice redfieldia, also known as blowout grass – genus of grasses medanos – type of sand dune sand dune ecology – ecology of sand dunes sandhills ( nebraska ) – temperate grasslands, savannas, and shrublands ecoregion of nebraska, united states yardang – streamlined aeolian landform = = references = = = = external links = = the bibliography of aeolian research
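purely as arithmetic on the ratios quoted above, the sketch below converts a measured blowout width into the implied depositional - lobe lengths. the direction of the ratio ( width to lobe length ) and the 10 m width are assumptions made for the example only ; the source does not state the ratio's direction explicitly.

```python
# illustrative arithmetic only: apply the quoted 1:2-1:3 (saucer) and 1:4 (trough)
# ratios as width : lobe length -- an assumption for the sake of the example.

def lobe_length_range(width_m, ratios):
    """depositional lobe lengths implied by a list of (width, length) ratio pairs."""
    return [width_m * length / width for width, length in ratios]

hypothetical_width = 10.0  # metres, assumed
print(lobe_length_range(hypothetical_width, [(1, 2), (1, 3)]))  # saucer: [20.0, 30.0]
print(lobe_length_range(hypothetical_width, [(1, 4)]))          # trough: [40.0]
```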
|
Blowout (geomorphology)
|
wikipedia
|
from seawater becoming heated after seeping through cracks to places where hot magma is close to the seabed. the under - water hot springs may gush forth at temperatures of over 340 °c ( 640 °f ) and support unique communities of organisms in their immediate vicinity. the basis for this teeming life is chemosynthesis, a process by which microbes convert such substances as hydrogen sulfide or ammonia into organic molecules. these bacteria and archaea are the primary producers in these ecosystems and support a diverse array of life. about 350 species of organism, dominated by molluscs, polychaete worms and crustaceans, had been discovered around hydrothermal vents by the end of the twentieth century, most of them being new to science and endemic to these habitat types. besides providing locomotion opportunities for winged animals and a conduit for the dispersal of pollen grains, spores and seeds, the atmosphere can be considered to be a habitat - type in its own right. there are metabolically active microbes present that actively reproduce and spend their whole existence airborne, with hundreds of thousands of individual organisms estimated to be present in a cubic meter of air. the airborne microbial community may be as diverse as that found in soil or other terrestrial environments, however, these organisms are not evenly distributed, their densities varying spatially with altitude and environmental conditions. aerobiology has not been studied much, but there is evidence of nitrogen fixation in clouds, and less clear evidence of carbon cycling, both facilitated by microbial activity. there are other examples of extreme habitat types where specially adapted lifeforms exist ; tar pits teeming with microbial life ; naturally occurring crude oil pools inhabited by the larvae of the petroleum fly ; hot springs where the temperature may be as high as 71 °c ( 160 °f ) and cyanobacteria create microbial mats ; cold seeps where the methane and hydrogen sulfide issue from the ocean floor and support microbes and higher animals such as mussels which form symbiotic associations with these anaerobic organisms ; salt pans that harbour salt - tolerant bacteria, archaea and also fungi such as the black yeast hortaea werneckii and basidiomycete wallemia ichthyophaga ; ice sheets in antarctica which support fungi thelebolus spp., glacial ice with a variety of bacteria and fungi ; and snowfields on which algae grow. = = habitat change = = whether from natural processes or the activities of man, landscapes and their associated habitat types change over time. there
|
Habitat
|
wikipedia
|
, 6, 1, 1, 1, 1, 1, 11, 1,... ]

| quotient | convergent ( sm / half ey ) | decimal | sm / full ey | named cycle |
| --- | --- | --- | --- | --- |
| 5 ; | 5 / 1 | 5 | | pentalunex |
| 1 | 6 / 1 | 6 | 12 / 1 | semester |
| 6 | 41 / 7 | 5.857142857 | | hepton |
| 1 | 47 / 8 | 5.875 | 47 / 4 | octon |
| 1 | 88 / 15 | 5.866666667 | | tzolkinex |
| 1 | 135 / 23 | 5.869565217 | | tritos |
| 1 | 223 / 38 | 5.868421053 | 223 / 19 | saros |
| 1 | 358 / 61 | 5.868852459 | 716 / 61 | inex |
| 11 | 4161 / 709 | 5.868829337 | | selebit |
| 1 | 4519 / 770 | 5.868831169 | 4519 / 385 | square year |

... each of these is an eclipse cycle. less accurate cycles may be constructed by combinations of these. = = eclipse cycles = = this table summarizes the characteristics of various eclipse cycles, and can be computed from the numerical results of the preceding paragraphs ; cf. meeus ( 1997 ) ch. 9. more details are given in the comments below, and several notable cycles have their own pages. many other cycles have been noted, some of which have been named. the number of days given is the average. the actual number of days and fractions of days between two eclipses varies because of the variation in the speed of the moon and of the sun in the sky. the variation is less if the number of anomalistic months is near a whole number, and if the number of anomalistic years is near a whole number. ( see graphs lower down of semester and hipparchic cycle. ) any eclipse cycle, and indeed the interval between any two eclipses, can be expressed as a combination of saros ( s ) and inex ( i ) intervals. these are listed in the column " formula ". = = = notes = = = fortnight : half a synodic month ( 29. 53 days ). when there is an eclipse, there is a fair chance that at the next syzygy there will be another eclipse : the sun and moon will have moved about 15° with respect to the nodes ( the moon being opposite to where it was the previous time ), but the luminaries may still be within bounds to make an eclipse
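a short sketch, under assumed mean values for the synodic month and eclipse year, showing how the convergents in the table above can be recovered as continued - fraction approximations of the ratio of the eclipse half - year to the synodic month. the day lengths are commonly quoted mean values chosen for illustration, and convergents beyond the inex are sensitive to the exact figures used, so only the first eight are printed.

```python
from fractions import Fraction

synodic_month = Fraction("29.530588853")   # mean synodic month in days (assumed value)
eclipse_year = Fraction("346.620076")      # mean eclipse year in days (assumed value)
ratio = (eclipse_year / 2) / synodic_month # ~5.8688 synodic months per eclipse season

def convergents(x, n):
    """first n continued-fraction convergents (numerator, denominator) of a Fraction x."""
    h1, h2, k1, k2 = 1, 0, 0, 1            # h_{n-1}, h_{n-2}, k_{n-1}, k_{n-2}
    out = []
    for _ in range(n):
        a = x.numerator // x.denominator   # next partial quotient
        h, k = a * h1 + h2, a * k1 + k2
        out.append((h, k))
        h2, h1, k2, k1 = h1, h, k1, k
        if x == a:
            break
        x = 1 / (x - a)
    return out

for num, den in convergents(ratio, 8):
    print(f"{num:>4} / {den:<2} = {num / den:.9f}")
# expected: 5/1, 6/1, 41/7, 47/8, 88/15, 135/23, 223/38, 358/61 --
# pentalunex, semester, hepton, octon, tzolkinex, tritos, saros, inex
```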
|
Periodicity of solar eclipses
|
wikipedia
|
macroeconomics is a branch of economics that deals with the performance, structure, behavior, and decision - making of an economy as a whole. this includes regional, national, and global economies. macroeconomists study topics such as output / gdp ( gross domestic product ) and national income, unemployment ( including unemployment rates ), price indices and inflation, consumption, saving, investment, energy, international trade, and international finance. macroeconomics and microeconomics are the two most general fields in economics. the focus of macroeconomics is often on a country ( or larger entities like the whole world ) and how its markets interact to produce large - scale phenomena that economists refer to as aggregate variables. in microeconomics the focus of analysis is often a single market, such as whether changes in supply or demand are to blame for price increases in the oil and automotive sectors. from introductory classes in " principles of economics " through doctoral studies, the macro / micro divide is institutionalized in the field of economics. most economists identify as either macro - or micro - economists. macroeconomics is traditionally divided into topics along different time frames : the analysis of short - term fluctuations over the business cycle, the determination of structural levels of variables like inflation and unemployment in the medium ( i. e. unaffected by short - term deviations ) term, and the study of long - term economic growth. it also studies the consequences of policies targeted at mitigating fluctuations like fiscal or monetary policy, using taxation and government expenditure or interest rates, respectively, and of policies that can affect living standards in the long term, e. g. by affecting growth rates. macroeconomics as a separate field of research and study is generally recognized to start in 1936, when john maynard keynes published his the general theory of employment, interest and money, but its intellectual predecessors are much older. since world war ii, various macroeconomic schools of thought like keynesians, monetarists, new classical and new keynesian economists have made contributions to the development of the macroeconomic research mainstream. = = basic macroeconomic concepts = = macroeconomics encompasses a variety of concepts and variables, but above all the three central macroeconomic variables are output, unemployment, and inflation. : 39 besides, the time horizon varies for different types of macroeconomic topics, and this distinction is crucial for many research and policy debates. : 54 a further important dimension is that of an economy's openness, economic theory distinguishing sharply between closed economies and open economies. : 373 = =
|
Macroeconomics
|
wikipedia
|
k ), methionine ( met, m ), phenylalanine ( phe, f ), proline ( pro, p ), serine ( ser, s ), threonine ( thr, t ), tryptophan ( trp, w ), tyrosine ( tyr, y ), and valine ( val, v ). = = differences from the standard code = = = = see also = = list of all genetic codes : translation tables 1 to 16, and 21 to 31. the genetic codes database. = = references = = this article incorporates text from the united states national library of medicine, which is in the public domain.
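for convenience, the abbreviations listed in the excerpt can be collected into a small lookup table. the sketch below covers only the amino acids named above ( the full twenty - entry table continues before the excerpt's starting point ) and says nothing about the karyorelict code's actual codon assignments.

```python
# three-letter to one-letter amino-acid symbols, restricted to those named above
THREE_TO_ONE = {
    "Met": "M", "Phe": "F", "Pro": "P", "Ser": "S",
    "Thr": "T", "Trp": "W", "Tyr": "Y", "Val": "V",
}

def one_letter(code):
    """return the one-letter symbol for a three-letter code, or '?' if not listed here."""
    return THREE_TO_ONE.get(code.capitalize(), "?")

print(one_letter("trp"))  # W
```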
|
Karyorelict nuclear code
|
wikipedia
|