Decorrelation is a general term for any process that is used to reduce autocorrelation within a signal, or cross-correlation within a set of signals, while preserving other aspects of the signal. A frequently used method of decorrelation is the use of a matched linear filter to reduce the autocorrelation of a signal as far as possible. Since the minimum possible autocorrelation for a given signal energy is achieved by equalising the power spectrum of the signal to be similar to that of a white noise signal, this is often referred to as signal whitening. Most decorrelation algorithms are linear, but there are also non-linear decorrelation algorithms.

Many data compression algorithms incorporate a decorrelation stage. For example, many transform coders first apply a fixed linear transformation that would, on average, have the effect of decorrelating a typical signal of the class to be coded, prior to any later processing. This is typically a Karhunen–Loève transform, or a simplified approximation such as the discrete cosine transform. By comparison, sub-band coders do not generally have an explicit decorrelation step, but instead exploit the already-existing reduced correlation within each of the sub-bands of the signal, due to the relative flatness of each sub-band of the power spectrum in many classes of signals. Linear predictive coders can be modelled as an attempt to decorrelate signals by subtracting the best possible linear prediction from the input signal, leaving a whitened residual signal.

Decorrelation techniques can also be used for many other purposes, such as reducing crosstalk in a multi-channel signal, or in the design of echo cancellers. In image processing, decorrelation techniques can be used to enhance or stretch the colour differences found in each pixel of an image. This is generally termed 'decorrelation stretching'.[1]

The concept of decorrelation can be applied in many other fields. In neuroscience, decorrelation is used in the analysis of the neural networks in the human visual system. In cryptography, it is used in cipher design (see Decorrelation theory) and in the design of hardware random number generators.
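As an illustration of the signal-whitening idea described above, the following sketch (assuming NumPy is available; the synthetic data and mixing matrix are made up for the example) applies a ZCA-style whitening transform to a set of correlated signals so that their covariance becomes approximately the identity.

```python
import numpy as np

# A minimal sketch of ZCA ("zero-phase") whitening: a linear transform that
# decorrelates a set of signals so their covariance is (approximately) the
# identity, i.e. the "signal whitening" described above.

rng = np.random.default_rng(0)

# Synthetic correlated signals: 3 channels, 10_000 samples.
latent = rng.standard_normal((3, 10_000))
mixing = np.array([[1.0, 0.6, 0.2],
                   [0.0, 1.0, 0.5],
                   [0.0, 0.0, 1.0]])
x = mixing @ latent                      # correlated observations

x = x - x.mean(axis=1, keepdims=True)    # remove the mean first
cov = np.cov(x)                          # 3x3 covariance matrix

# Eigendecomposition of the covariance; eps guards against tiny eigenvalues.
eigvals, eigvecs = np.linalg.eigh(cov)
eps = 1e-8
whitener = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T

x_white = whitener @ x                   # decorrelated ("whitened") signals
print(np.round(np.cov(x_white), 3))      # approximately the identity matrix
```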
https://en.wikipedia.org/wiki/Decorrelation
Acar, or anautomobile, is amotor vehiclewithwheels. Most definitions of cars state that they run primarily onroads,seatone to eight people, have four wheels, and mainly transportpeoplerather thancargo.[1][2]There are around one billion cars in use worldwide.[citation needed] The French inventorNicolas-Joseph Cugnotbuilt the first steam-powered road vehicle in 1769, while the Swiss inventorFrançois Isaac de Rivazdesigned and constructed the first internal combustion-powered automobile in 1808. The modern car—a practical, marketable automobile for everyday use—was invented in 1886, when the German inventorCarl Benzpatented hisBenz Patent-Motorwagen. Commercial cars became widely available during the 20th century. The 1901Oldsmobile Curved Dashand the 1908Ford Model T, both American cars, are widely considered the first mass-produced[3][4]and mass-affordable[5][6][7]cars, respectively. Cars were rapidly adopted in the US, where they replacedhorse-drawn carriages.[8]In Europe and other parts of the world, demand for automobiles did not increase untilafter World War II.[9]In the 21st century, car usage is still increasing rapidly, especially in China, India, and othernewly industrialised countries.[10][11] Cars have controls fordriving,parking,passengercomfort, and a variety oflamps. Over the decades, additional features and controls have been added to vehicles, making them progressively more complex. These includerear-reversing cameras,air conditioning,navigation systems, andin-car entertainment. Most cars in use in the early 2020s are propelled by aninternal combustion engine, fueled by thecombustionoffossil fuels.Electric cars, which were invented early in thehistory of the car, became commercially available in the 2000s and are predicted to cost less to buy than petrol-driven cars before 2025.[12][13]The transition from fossil fuel-powered cars to electric cars features prominently in mostclimate change mitigation scenarios,[14]such asProject Drawdown's 100 actionable solutions for climate change.[15] There arecosts and benefits to car use. The costs to the individual include acquiring the vehicle, interest payments (if the car is financed), repairs andmaintenance, fuel,depreciation, driving time, parking fees, taxes, andinsurance.[16]The costs to society include resources used to produce cars and fuel, maintaining roads,land-use,road congestion,air pollution,noise pollution,public health, anddisposing of the vehicle at the end of its life.Traffic collisionsare the largest cause of injury-related deaths worldwide.[17]Personal benefits include on-demand transportation, mobility, independence, and convenience.[18]Societal benefits include economic benefits, such as job and wealth creation from theautomotive industry, transportation provision, societal well-being from leisure and travel opportunities. 
People's ability to move flexibly from place to place hasfar-reaching implications for the nature of societies.[19] TheEnglishwordcaris believed to originate fromLatincarrus/carrum"wheeled vehicle" or (viaOld North French)Middle Englishcarre"two-wheeled cart", both of which in turn derive fromGaulishkarros"chariot".[20][21]It originally referred to any wheeledhorse-drawn vehicle, such as acart,carriage, orwagon.[22]The word also occurs in other Celtic languages.[23] "Motor car", attested from 1895, is the usual formal term inBritish English.[2]"Autocar", a variant likewise attested from 1895 and literally meaning "self-propelledcar", is now considered archaic.[24]"Horseless carriage" is attested from 1895.[25] "Automobile", aclassical compoundderived fromAncient Greekautós(αὐτός) "self" and Latinmobilis"movable", entered English fromFrenchand was first adopted by theAutomobile Club of Great Britainin 1897.[26]It fell out of favour in Britain and is now used chiefly inNorth America,[27]where the abbreviated form "auto" commonly appears as an adjective in compound formations like "auto industry" and "auto mechanic".[28][29] In 1649,Hans HautschofNurembergbuilt a clockwork-driven carriage.[32][33]The first steam-powered vehicle was designed byFerdinand Verbiest, aFlemishmember of aJesuit mission in Chinaaround 1672. It was a 65-centimetre-long (26 in) scale-model toy for theKangxi Emperorthat was unable to carry a driver or a passenger.[18][34][35]It is not known with certainty if Verbiest's model was successfully built or run.[35] Nicolas-Joseph Cugnotis widely credited with building the first full-scale, self-propelled mechanical vehicle in about 1769; he created a steam-powered tricycle.[36]He also constructed two steam tractors for the French Army, one of which is preserved in theFrench National Conservatory of Arts and Crafts.[36]His inventions were limited by problems with water supply and maintaining steam pressure.[36]In 1801,Richard Trevithickbuilt and demonstrated hisPuffing Devilroad locomotive, believed by many to be the first demonstration of a steam-powered road vehicle. It was unable to maintain sufficient steam pressure for long periods and was of little practical use. The development of external combustion (steam) engines is detailed as part of the history of the car but often treated separately from the development of cars in their modern understanding. A variety of steam-powered road vehicles were used during the first part of the 19th century, includingsteam cars,steam buses,phaetons, andsteam rollers. In the United Kingdom, sentiment against them led to theLocomotive Actsof 1865. In 1807,Nicéphore Niépceand his brother Claude created what was probably the world's firstinternal combustion engine(which they called aPyréolophore), but installed it in a boat on the riverSaonein France.[37]Coincidentally, in 1807, the Swiss inventorFrançois Isaac de Rivazdesigned his own "de Rivaz internal combustion engine", and used it to develop the world's first vehicle to be powered by such an engine. 
The Niépces' Pyréolophore was fuelled by a mixture ofLycopodium powder(dried spores of theLycopodiumplant), finely crushed coal dust and resin that were mixed with oil, whereas de Rivaz used a mixture ofhydrogenandoxygen.[37]Neither design was successful, as was the case with others, such asSamuel Brown,Samuel Morey, andEtienne Lenoir,[38]who each built vehicles (usually adapted carriages or carts) powered by internal combustion engines.[39] In November 1881, French inventorGustave Trouvédemonstrated a three-wheeled car powered by electricity at theInternational Exposition of Electricity.[40]Although several other German engineers (includingGottlieb Daimler,Wilhelm Maybach, andSiegfried Marcus) were working on cars at about the same time, the year 1886 is regarded as the birth year of the modern car—a practical, marketable automobile for everyday use—when the GermanCarl Benzpatented hisBenz Patent-Motorwagen; he is generally acknowledged as the inventor of the car.[39][41][42] In 1879, Benz was granted a patent for his first engine, which had been designed in 1878. Many of his other inventions made the use of the internal combustion engine feasible for powering a vehicle. His firstMotorwagenwas built in 1885 inMannheim, Germany. He was awarded the patent for its invention as of his application on 29 January 1886 (under the auspices of his major company,Benz & Cie., which was founded in 1883). Benz began promotion of the vehicle on 3 July 1886, and about 25 Benz vehicles were sold between 1888 and 1893, when his first four-wheeler was introduced along with a cheaper model. They also were powered withfour-strokeengines of his own design. Emile Roger of France, already producing Benz engines under license, now added the Benz car to his line of products. Because France was more open to the early cars, initially more were built and sold in France through Roger than Benz sold in Germany. In August 1888,Bertha Benz, the wife and business partner of Carl Benz, undertook the firstroad tripby car, to prove the road-worthiness of her husband's invention.[43] In 1896, Benz designed and patented the first internal-combustionflat engine, calledboxermotor. During the last years of the 19th century, Benz was the largest car company in the world with 572 units produced in 1899 and, because of its size, Benz & Cie., became ajoint-stock company. The first motor car in central Europe and one of the first factory-made cars in the world, was produced by Czech company Nesselsdorfer Wagenbau (later renamed toTatra) in 1897, thePräsidentautomobil. Daimler and Maybach foundedDaimler Motoren Gesellschaft(DMG) inCannstattin 1890, and sold their first car in 1892 under the brand nameDaimler. It was a horse-drawn stagecoach built by another manufacturer, which they retrofitted with an engine of their design. By 1895, about 30 vehicles had been built by Daimler and Maybach, either at the Daimler works or in the Hotel Hermann, where they set up shop after disputes with their backers. Benz, Maybach, and the Daimler team seem to have been unaware of each other's early work. They never worked together; by the time of the merger of the two companies, Daimler and Maybach were no longer part of DMG. Daimler died in 1900 and later that year, Maybach designed an engine namedDaimler-Mercedesthat was placed in a specially ordered model built to specifications set byEmil Jellinek. This was a production of a small number of vehicles for Jellinek to race and market in his country. 
Two years later, in 1902, a new model DMG car was produced and the model was named Mercedes after the Maybach engine, which generated 35 hp. Maybach quit DMG shortly thereafter and opened a business of his own. Rights to theDaimlerbrand name were sold to other manufacturers. In 1890,Émile LevassorandArmand Peugeotof France began producing vehicles with Daimler engines, and so laid the foundation of theautomotive industry in France. In 1891,Auguste Doriotand his Peugeot colleague Louis Rigoulot completed the longest trip by a petrol-driven vehicle when their self-designed and built Daimler poweredPeugeot Type 3completed 2,100 kilometres (1,300 mi) fromValentigneyto Paris and Brest and back again. They were attached to the firstParis–Brest–Parisbicycle race, but finished six days after the winning cyclist,Charles Terront. The first design for an American car with a petrol internal combustion engine was made in 1877 byGeorge SeldenofRochester, New York. Selden applied for a patent for a car in 1879, but the patent application expired because the vehicle was never built. After a delay of 16 years and a series of attachments to his application, on 5 November 1895, Selden was granted a US patent (U.S. patent 549,160) for atwo-strokecar engine,which hindered, more than encouraged, development of cars in the United States. His patent was challenged byHenry Fordand others, and overturned in 1911. In 1893, the first running, petrol-drivenAmerican carwas built and road-tested by theDuryea brothersofSpringfield, Massachusetts. The first public run of theDuryea Motor Wagontook place on 21 September 1893, on Taylor Street inMetro CenterSpringfield.[44][45]Studebaker, subsidiary of a long-established wagon and coach manufacturer, started to build cars in 1897[46]: 66and commenced sales of electric vehicles in 1902 and petrol vehicles in 1904.[47] In Britain, there had been several attempts to build steam cars with varying degrees of success, withThomas Ricketteven attempting a production run in 1860.[48]Santlerfrom Malvern is recognised by the Veteran Car Club of Great Britain as having made the first petrol-driven car in the country in 1894,[49]followed byFrederick William Lanchesterin 1895, but these were both one-offs.[49]The first production vehicles in Great Britain came from theDaimler Company, a company founded byHarry J. Lawsonin 1896, after purchasing the right to use the name of the engines. Lawson's company made its first car in 1897, and they bore the name Daimler.[49] In 1892, German engineerRudolf Dieselwas granted a patent for a "New Rational Combustion Engine". In 1897, he built the firstdiesel engine.[39]Steam-, electric-, and petrol-driven vehicles competed for a few decades, with petrol internal combustion engines achieving dominance in the 1910s. Although variouspistonless rotary enginedesigns have attempted to compete with the conventionalpistonandcrankshaftdesign, onlyMazda's version of theWankel enginehas had more than very limited success. All in all, it is estimated that over 100,000 patents created the modern automobile and motorcycle.[50] Large-scale,production-linemanufacturing of affordable cars was started byRansom Oldsin 1901 at hisOldsmobilefactory inLansing, Michigan, and based upon stationaryassembly linetechniques pioneered byMarc Isambard Brunelat thePortsmouth Block Mills, England, in 1802. 
The assembly line style of mass production and interchangeable parts had been pioneered in the US byThomas Blanchardin 1821, at theSpringfield ArmoryinSpringfield, Massachusetts.[51]This concept was greatly expanded byHenry Ford, beginning in 1913 with the world's firstmovingassembly line for cars at theHighland Park Ford Plant. As a result, Ford's cars came off the line in 15-minute intervals, much faster than previous methods, increasing productivity eightfold, while using less manpower (from 12.5 manhours to 1 hour 33 minutes).[52]It was so successful,paintbecame a bottleneck. OnlyJapan blackwould dry fast enough, forcing the company to drop the variety of colours available before 1913, until fast-dryingDucolacquerwas developed in 1926. This is the source of Ford'sapocryphalremark, "any color as long as it's black".[52]In 1914, an assembly line worker could buy a Model T with four months' pay.[52] Ford's complex safety procedures—especially assigning each worker to a specific location instead of allowing them to roam about—dramatically reduced the rate of injury.[53]The combination of high wages and high efficiency is called "Fordism" and was copied by most major industries. The efficiency gains from the assembly line also coincided with the economic rise of the US. The assembly line forced workers to work at a certain pace with very repetitive motions which led to more output per worker while other countries were using less productive methods. In the automotive industry, its success was dominating, and quickly spread worldwide seeing the founding of Ford France and Ford Britain in 1911, Ford Denmark 1923, Ford Germany 1925; in 1921,Citroënwas the first native European manufacturer to adopt the production method. Soon, companies had to have assembly lines, or risk going bankrupt; by 1930, 250 companies which did not, had disappeared.[52] Development of automotive technology was rapid, due in part to the hundreds of small manufacturers competing to gain the world's attention. Key developments included electricignitionand the electric self-starter (both byCharles Kettering, for theCadillacMotor Company in 1910–1911), independentsuspension, and four-wheel brakes. Since the 1920s, nearly all cars have been mass-produced to meet market needs, so marketing plans often have heavily influenced car design. It wasAlfred P. Sloanwho established the idea of different makes of cars produced by one company, called theGeneral Motors Companion Make Program, so that buyers could "move up" as their fortunes improved. Reflecting the rapid pace of change, makes shared parts with one another so larger production volume resulted in lower costs for each price range. For example, in the 1930s,LaSalles, sold byCadillac, used cheaper mechanical parts made byOldsmobile; in the 1950s,Chevroletshared bonnet, doors, roof, and windows withPontiac; by the 1990s, corporatepowertrainsand sharedplatforms(with interchangeablebrakes, suspension, and other parts) were common. 
Even so, only major makers could afford high costs, and even companies with decades of production, such asApperson,Cole,Dorris,Haynes, or Premier, could not manage: of some two hundred American car makers in existence in 1920, only 43 survived in 1930, and with theGreat Depression, by 1940, only 17 of those were left.[52] In Europe, much the same would happen.Morrisset up its production line atCowleyin 1924, and soon outsold Ford, while beginning in 1923 to follow Ford's practice ofvertical integration, buyingHotchkiss'British subsidiary (engines),Wrigley(gearboxes), and Osberton (radiators), for instance, as well as competitors, such asWolseley: in 1925, Morris had 41 per cent of total British car production. Most British small-car assemblers, fromAbbeytoXtra, had gone under. Citroën did the same in France, coming to cars in 1919; between them and other cheap cars in reply such asRenault's 10CV andPeugeot's5CV, they produced 550,000 cars in 1925, andMors,Hurtu, and others could not compete.[52]Germany's first mass-manufactured car, theOpel 4PSLaubfrosch(Tree Frog), came off the line atRüsselsheimin 1924, soon making Opel the top car builder in Germany, with 37.5 per cent of the market.[52] In Japan, car production was very limited before World War II. Only a handful of companies were producing vehicles in limited numbers, and these were small, three-wheeled for commercial uses, likeDaihatsu, or were the result of partnering with European companies, likeIsuzubuilding theWolseley A-9in 1922.Mitsubishiwas also partnered withFiatand built theMitsubishi Model Abased on a Fiat vehicle.Toyota,Nissan,Suzuki,Mazda, andHondabegan as companies producing non-automotive products before the war, switching to car production during the 1950s. Kiichiro Toyoda's decision to takeToyoda Loom Worksinto automobile manufacturing would create what would eventually becomeToyota Motor Corporation, the largest automobile manufacturer in the world.Subaru, meanwhile, was formed from a conglomerate of six companies who banded together asFuji Heavy Industries, as a result of having been broken up underkeiretsulegislation. Most cars in use in the early 2020s run onpetrolburnt in aninternal combustion engine(ICE). Some cities ban older more polluting petrol-driven cars and some countries plan to ban sales in future. However, some environmental groups say thisphase-out of fossil fuel vehiclesmust be brought forwards to limit climate change. Production of petrol-fuelled cars peaked in 2017.[55][56] Other hydrocarbon fossil fuels also burnt bydeflagration(rather thandetonation) in ICE cars includediesel,autogas, andCNG. Removal offossil fuel subsidies,[57][58]concerns aboutoil dependence, tighteningenvironmental lawsand restrictions ongreenhouse gas emissionsare propelling work on alternative power systems for cars. This includeshybrid vehicles,plug-in electric vehiclesandhydrogen vehicles. Out of all cars sold in 2021, nine per cent were electric, and by the end of that year there were more than 16 millionelectric carson the world's roads.[59]Despite rapid growth, less than two per cent of cars on the world's roads werefully electricandplug-in hybridcars by the end of 2021.[59]Cars for racing orspeed recordshave sometimes employedjetorrocketengines, but these are impractical for common use.Oil consumptionhas increased rapidly in the 20th and 21st centuries because there are more cars; the1980s oil gluteven fuelled the sales of low-economy vehicles inOECDcountries. 
TheBRICcountries are adding to this consumption.[citation needed] In almost all hybrid (evenmild hybrid) and pure electric carsregenerative brakingrecovers and returns to a battery some energy which would otherwise be wasted by friction brakes getting hot.[60]Although all cars must have friction brakes (frontdisc brakesand either disc ordrum rear brakes[61]) for emergency stops, regenerative braking improves efficiency, particularly in city driving.[62] Cars are equipped with controls used for driving, passenger comfort, and safety, normally operated by a combination of the use of feet and hands, and occasionally by voice on 21st-century cars. These controls include asteering wheel, pedals for operating the brakes and controlling the car's speed (and, in a manual transmission car, a clutch pedal), a shift lever or stick for changing gears, and a number of buttons and dials for turning on lights, ventilation, and other functions. Modern cars' controls are now standardised, such as the location for the accelerator and brake, but this was not always the case. Controls are evolving in response to new technologies, for example, theelectric carand the integration of mobile communications. Some of the original controls are no longer required. For example, all cars once had controls for the choke valve, clutch,ignition timing, and a crank instead of an electricstarter. However, new controls have also been added to vehicles, making them more complex. These includeair conditioning,navigation systems, andin-car entertainment. Another trend is the replacement of physical knobs and switches by secondary controls with touchscreen controls such asBMW'siDriveandFord'sMyFord Touch. Another change is that while early cars' pedals were physically linked to the brake mechanism and throttle, in the early 2020s, cars have increasingly replaced these physical linkages with electronic controls. Cars are typically equipped with interior lighting which can be toggled manually or be set to light up automatically with doors open, anentertainment systemwhich originated fromcar radios, sidewayswindowswhich can be lowered or raised electrically (manually on earlier cars), and one or multipleauxiliary power outletsfor supplying portable appliances such asmobile phones, portable fridges,power inverters, and electrical air pumps from the on-board electrical system.[63][64][a]More costly upper-class andluxury carsare equipped with features earlier such as massage seats andcollision avoidance systems.[65][66] Dedicated automotive fuses and circuit breakersprevent damage fromelectrical overload. Cars are typically fitted with multiple types of lights. These includeheadlights, which are used to illuminate the way ahead and make the car visible to other users, so that the vehicle can be used at night; in some jurisdictions,daytime running lights; red brake lights to indicate when the brakes are applied; amber turn signal lights to indicate the turn intentions of the driver; white-coloured reverse lights to illuminate the area behind the car (and indicate that the driver will be or is reversing); and on some vehicles, additional lights (e.g., side marker lights) to increase the visibility of the car. Interior lights on the ceiling of the car are usually fitted for the driver and passengers. Some vehicles also have a boot light and, more rarely, an engine compartment light. 
During the late 20th and early 21st century, cars increased in weight due to batteries,[68]modern steel safety cages, anti-lock brakes, airbags, and "more-powerful—if more efficient—engines"[69]and, as of 2019[update], typically weigh between 1 and 3 tonnes (1.1 and 3.3 short tons; 0.98 and 2.95 long tons).[70]Heavier cars are safer for the driver from a crash perspective, but more dangerous for other vehicles and road users.[69]The weight of a car influences fuel consumption and performance, with more weight resulting in increased fuel consumption and decreased performance. TheWuling Hongguang Mini EV, a typicalcity car, weighs about 700 kilograms (1,500 lb). Heavier cars include SUVs and extended-length SUVs like theSuburban. Cars have also become wider.[71] Some places tax heavier cars more:[72]as well as improving pedestrian safety this can encourage manufacturers to use materials such as recycledaluminiuminstead of steel.[73]It has been suggested that one benefit of subsidisingcharging infrastructureis that cars can use lighter batteries.[74] Most cars are designed to carry multiple occupants, often with four or five seats. Cars with five seats typically seat two passengers in the front and three in the rear.Full-size carsand largesport utility vehiclescan often carry six, seven, or more occupants depending on the arrangement of the seats. On the other hand,sports carsare most often designed with only two seats. Utility vehicles likepickup trucks, combine seating with extra cargo or utility functionality. The differing needs for passenger capacity and their luggage or cargo space has resulted in the availability of a large variety of body styles to meet individual consumer requirements that include, among others, thesedan/saloon,hatchback,station wagon/estate,coupe, andminivan. Traffic collisions are the largest cause of injury-related deaths worldwide.[17]Mary Wardbecame one of the first documented car fatalities in 1869 inParsonstown, Ireland,[75]andHenry Blissone of the US's first pedestrian car casualties in 1899 in New York City.[76]There are now standard tests for safety in new cars, such as theEuroandUSNCAP tests,[77]and insurance-industry-backed tests by theInsurance Institute for Highway Safety(IIHS).[78]However, not all such tests consider the safety of people outside the car, such as drivers of other cars, pedestrians and cyclists.[79] The costs of car usage, which may include the cost of: acquiring the vehicle, repairs andauto maintenance, fuel,depreciation, driving time,parking fees, taxes, and insurance,[16]are weighed against the cost of the alternatives, and the value of the benefits—perceived and real—of vehicle usage. The benefits may include on-demand transportation, mobility, independence, and convenience,[18]andemergency power.[81]During the 1920s, cars had another benefit: "[c]ouples finally had a way to head off on unchaperoned dates, plus they had a private space to snuggle up close at the end of the night."[82] Similarly the costs to society of car use may include;maintaining roads,land use,air pollution,noise pollution,road congestion,public health, health care, and of disposing of the vehicle at the end of its life; and can be balanced against the value of the benefits to society that car use generates. Societal benefits may include: economy benefits, such as job and wealth creation, of car production and maintenance, transportation provision, society wellbeing derived from leisure and travel opportunities, and revenue generation from thetaxopportunities. 
The ability of humans to move flexibly from place to place has far-reaching implications for the nature of societies.[19] Car production and use has a large number of environmental impacts: it causes localair pollutionplastic pollutionand contributes togreenhouse gas emissionsandclimate change.[85]Cars and vans caused 10% of energy-relatedcarbon dioxideemissions in 2022.[86]As of 2023[update],electric carsproduce about half the emissions over their lifetime as diesel and petrol cars. This is set to improve as countries produce more of their electricity fromlow-carbon sources.[87]Cars consume almost a quarter of world oil production as of 2019.[55]Cities planned around cars are often less dense, which leads to further emissions, as they are lesswalkablefor instance.[85]A growing demand for large SUVs is driving up emissions from cars.[88] Cars are a major cause ofair pollution,[89]which stems fromexhaust gasin diesel and petrol cars and fromdust from brakes, tyres, and road wear. Electric cars do not produce tailpipe emissions, but are generally heavier and therefore produce slightly moreparticulate matter.[90]Heavy metalsand microplastics (from tyres) are also released into the environment, during production, use and at the end of life. Mining related to car manufacturing and oil spills both causewater pollution.[85] Animals and plants are often negatively affected by cars viahabitat destructionandfragmentationfrom the road network and pollution. Animals are also killed every year on roads by cars, referred to asroadkill.[85]More recent road developments are including significant environmental mitigation in their designs, such as green bridges (designed to allowwildlife crossings) and creatingwildlife corridors. Governments use fiscal policies, such asroad tax, to discourage the purchase and use of more polluting cars;[91]Vehicle emission standardsban the sale of new highly pollution cars.[92]Many countriesplan to stop selling fossil cars altogetherbetween 2025 and 2050.[93]Various cities have implementedlow-emission zones, banning old fossil fuel andAmsterdamis planning to ban fossil fuel cars completely.[94][95]Some cities make it easier for people to choose other forms of transport, such ascycling.[94]Many Chinese cities limit licensing of fossil fuel cars,[96] Mass production of personal motor vehicles in the United States and other developed countries with extensive territories such as Australia, Argentina, and France vastly increased individual and group mobility and greatly increased and expanded economic development in urban, suburban, exurban and rural areas.[citation needed]Growth in the popularity of cars andcommutinghas led totraffic congestion.[97]Moscow,Istanbul,Bogotá,Mexico CityandSão Paulowere the world's most congested cities in 2018 according to INRIX, a data analytics company.[98] In the United States, thetransport divideandcar dependencyresulting from domination ofcar-based transport systemspresents barriers to employment in low-income neighbourhoods,[99]with many low-income individuals and families forced to run cars they cannot afford in order to maintain their income.[100]Dependency on automobiles byAfrican Americansmay result in exposure to the hazards ofdriving while blackand other types ofracial discriminationrelated to buying, financing and insuring them.[101] Air pollution from cars increases the risk oflung cancerandheart disease. 
It can also harm pregnancies: more children areborn too earlyor with lowerbirth weight.[85]Children are extra vulnerable to air pollution, as their bodies are still developing and air pollution in children is linked to the development ofasthma,childhood cancer, and neurocognitive issues such asautism.[102][85]The growth in popularity of the car allowed cities tosprawl, therefore encouraging more travel by car, resulting in inactivity andobesity, which in turn can lead to increased risk of a variety of diseases.[103]When places are designed around cars, children have fewer opportunities to go places by themselves, and lose opportunities to become more independent.[104][85] Although intensive development of conventionalbattery electric vehiclesis continuing into the 2020s,[105]other carpropulsiontechnologies that are under development includewireless charging,[106]hydrogen cars,[107][108]and hydrogen/electric hybrids.[109]Research into alternative forms of power includes usingammoniainstead of hydrogen infuel cells.[110] New materials which may replace steel car bodies include aluminium,[111]fiberglass,carbon fiber,biocomposites, andcarbon nanotubes.[112]Telematicstechnology is allowing more and more people to share cars, on apay-as-you-gobasis, throughcar shareandcarpoolschemes. Communication is also evolving due toconnected carsystems.[113]Open-source carsare not widespread.[114] Fully autonomous vehicles, also known as driverless cars, already exist asrobotaxis[115][116]but have a long way to go before they are in general use.[117] Car-sharearrangements andcarpoolingare also increasingly popular, in the US and Europe.[118]For example, in the US, some car-sharing services have experienced double-digit growth in revenue and membership growth between 2006 and 2007. Services like car sharing offer residents to "share" a vehicle rather than own a car in already congested neighbourhoods.[119] The automotive industry designs, develops, manufactures, markets, and sells the world'smotor vehicles, more than three-quarters of which are cars. In 2020, there were 56 million cars manufactured worldwide,[120]down from 67 million the previous year.[121]Theautomotive industry in Chinaproduces by far the most (20 million in 2020), followed by Japan (seven million), then Germany, South Korea and India.[122]The largest market is China, followed by the US. Around the world, there are about a billion cars on the road;[123]they burn over a trillion litres (0.26×10^12US gal; 0.22×10^12imp gal) of petrol and diesel fuel yearly, consuming about 50exajoules(14,000TWh) of energy.[124]The numbers of cars are increasing rapidly in China and India.[125]In the opinion of some, urban transport systems based around the car have proved unsustainable, consuming excessive energy, affecting the health of populations, and delivering a declining level of service despite increasing investment. Many of these negative effects fall disproportionately on those social groups who are also least likely to own and drive cars.[126][127]Thesustainable transportmovement focuses on solutions to these problems. The car industry is also facing increasing competition from the public transport sector, as some people re-evaluate their private vehicle usage. 
In July 2021, theEuropean Commissionintroduced the "Fit for 55" legislation package, outlining crucial directives for the automotive sector's future.[128][129]According to this package, by 2035, all newly sold cars in the European market must beZero-emissions vehicles.[130][131][132] Established alternatives for some aspects of car use includepublic transportsuch as busses,trolleybusses, trains,subways,tramways,light rail, cycling, andwalking.Bicycle sharing systemshave been established in China and many European cities, includingCopenhagenandAmsterdam. Similar programmes have been developed in large US cities.[133][134]Additional individual modes of transport, such aspersonal rapid transitcould serve as an alternative to cars if they prove to be socially accepted.[135]A study which checked the costs and the benefits of introducingLow Traffic NeighbourhoodinLondonfound the benefits overpass the costs approximately by 100 times in the first 20 years and the difference is growing over time.[136] General: Effects: Mitigation:
https://en.wikipedia.org/wiki/Automobile
In cryptography, a Feistel cipher (also known as a Luby–Rackoff block cipher) is a symmetric structure used in the construction of block ciphers, named after the German-born physicist and cryptographer Horst Feistel, who did pioneering research while working for IBM; it is also commonly known as a Feistel network. A large number of block ciphers use the scheme, including the US Data Encryption Standard, the Soviet/Russian GOST and the more recent Blowfish and Twofish ciphers. In a Feistel cipher, encryption and decryption are very similar operations, and both consist of iteratively running a function called a "round function" a fixed number of times. Many modern symmetric block ciphers are based on Feistel networks.

Feistel networks were first seen commercially in IBM's Lucifer cipher, designed by Horst Feistel and Don Coppersmith in 1973. Feistel networks gained respectability when the U.S. Federal Government adopted the DES (a cipher based on Lucifer, with changes made by the NSA) in 1976. Like other components of the DES, the iterative nature of the Feistel construction makes implementing the cryptosystem in hardware easier (particularly on the hardware available at the time of DES's design).

A Feistel network uses a round function, a function which takes two inputs – a data block and a subkey – and returns one output of the same size as the data block.[1] In each round, the round function is run on half of the data to be encrypted, and its output is XORed with the other half of the data. This is repeated a fixed number of times, and the final output is the encrypted data. An important advantage of Feistel networks compared to other cipher designs such as substitution–permutation networks is that the entire operation is guaranteed to be invertible (that is, encrypted data can be decrypted), even if the round function is not itself invertible. The round function can be made arbitrarily complicated, since it does not need to be designed to be invertible.[2]: 465 [3]: 347 Furthermore, the encryption and decryption operations are very similar, even identical in some cases, requiring only a reversal of the key schedule. Therefore, the size of the code or circuitry required to implement such a cipher is nearly halved. Unlike substitution–permutation networks, Feistel networks also do not depend on a substitution box that could cause timing side-channels in software implementations.

The structure and properties of Feistel ciphers have been extensively analyzed by cryptographers. Michael Luby and Charles Rackoff analyzed the Feistel cipher construction and proved that if the round function is a cryptographically secure pseudorandom function, with K_i used as the seed, then 3 rounds are sufficient to make the block cipher a pseudorandom permutation, while 4 rounds are sufficient to make it a "strong" pseudorandom permutation (which means that it remains pseudorandom even to an adversary who gets oracle access to its inverse permutation).[4] Because of this very important result of Luby and Rackoff, Feistel ciphers are sometimes called Luby–Rackoff block ciphers. Further theoretical work has generalized the construction somewhat and given more precise bounds for security.[5][6]

Let F be the round function and let K_0, K_1, …, K_n be the sub-keys for the rounds 0, 1, …, n respectively. Then the basic operation is as follows:

Split the plaintext block into two equal pieces: (L_0, R_0).
For each round i = 0, 1, …, n, compute

L_{i+1} = R_i
R_{i+1} = L_i ⊕ F(R_i, K_i)

where ⊕ means XOR. Then the ciphertext is (R_{n+1}, L_{n+1}).

Decryption of a ciphertext (R_{n+1}, L_{n+1}) is accomplished by computing, for i = n, n−1, …, 0,

R_i = L_{i+1}
L_i = R_{i+1} ⊕ F(L_{i+1}, K_i)

Then (L_0, R_0) is the plaintext again. The diagram illustrates both encryption and decryption. Note the reversal of the subkey order for decryption; this is the only difference between encryption and decryption.

Unbalanced Feistel ciphers use a modified structure where L_0 and R_0 are not of equal lengths.[7] The Skipjack cipher is an example of such a cipher. The Texas Instruments digital signature transponder uses a proprietary unbalanced Feistel cipher to perform challenge–response authentication.[8]

The Thorp shuffle is an extreme case of an unbalanced Feistel cipher in which one side is a single bit. This has better provable security than a balanced Feistel cipher but requires more rounds.[9]

The Feistel construction is also used in cryptographic algorithms other than block ciphers. For example, the optimal asymmetric encryption padding (OAEP) scheme uses a simple Feistel network to randomize ciphertexts in certain asymmetric-key encryption schemes. A generalized Feistel algorithm can be used to create strong permutations on small domains of size not a power of two (see format-preserving encryption).[9]

Whether the entire cipher is a Feistel cipher or not, Feistel-like networks can be used as a component of a cipher's design. For example, MISTY1 is a Feistel cipher using a three-round Feistel network in its round function, Skipjack is a modified Feistel cipher using a Feistel network in its G permutation, and Threefish (part of Skein) is a non-Feistel block cipher that uses a Feistel-like MIX function.

Feistel or modified Feistel: Generalised Feistel:
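The scheme above is straightforward to express in code. The following is a minimal sketch of a balanced Feistel network, not any standardised cipher: the round function (a truncated hash of the subkey and half-block) and the subkeys are placeholders chosen only to demonstrate the structural property discussed above, namely that decryption is the same network run with the key schedule reversed. It is not a secure cipher.

```python
import hashlib

# A toy balanced Feistel network over 64-bit blocks with 32-bit halves.
HALF_BITS = 32
MASK = (1 << HALF_BITS) - 1

def round_function(half: int, subkey: int) -> int:
    # Placeholder round function F(half, subkey): truncated SHA-256 digest.
    data = subkey.to_bytes(4, "big") + half.to_bytes(4, "big")
    return int.from_bytes(hashlib.sha256(data).digest()[:4], "big")

def feistel_encrypt(block: int, subkeys: list[int]) -> int:
    left, right = (block >> HALF_BITS) & MASK, block & MASK
    for k in subkeys:
        # L_{i+1} = R_i,  R_{i+1} = L_i XOR F(R_i, K_i)
        left, right = right, left ^ round_function(right, k)
    # Final swap so decryption is the same loop with the subkeys reversed.
    return (right << HALF_BITS) | left

def feistel_decrypt(block: int, subkeys: list[int]) -> int:
    return feistel_encrypt(block, subkeys[::-1])   # identical network, reversed key schedule

subkeys = [0x01234567, 0x89ABCDEF, 0xDEADBEEF, 0xCAFEBABE]
plaintext = 0x0123456789ABCDEF
ciphertext = feistel_encrypt(plaintext, subkeys)
assert feistel_decrypt(ciphertext, subkeys) == plaintext
```

Because the round function is never inverted (only re-applied and XORed out), any function of the right size works, which is exactly the invertibility argument made above.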
https://en.wikipedia.org/wiki/Feistel_network
In statistics and machine learning, leakage (also known as data leakage or target leakage) is the use of information in the model training process which would not be expected to be available at prediction time, causing the predictive scores (metrics) to overestimate the model's utility when run in a production environment.[1]

Leakage is often subtle and indirect, making it hard to detect and eliminate. Leakage can cause a statistician or modeler to select a suboptimal model, which could be outperformed by a leakage-free model.[1]

Leakage can occur in many steps in the machine learning process. The leakage causes can be sub-classified into two possible sources of leakage for a model: features and training examples.[1]

Feature or column-wise leakage is caused by the inclusion of columns which are one of the following: a duplicate label, a proxy for the label, or the label itself. These features, known as anachronisms, will not be available when the model is used for predictions, and result in leakage if included when the model is trained.[2] For example, including a "MonthlySalary" column when predicting "YearlySalary", or "MinutesLate" when predicting "IsLate".

Row-wise leakage is caused by improper sharing of information between rows of data. Types of row-wise leakage include:

A 2023 review found data leakage to be "a widespread failure mode in machine-learning (ML)-based science", having affected at least 294 academic publications across 17 disciplines, and causing a potential reproducibility crisis.[5]

Data leakage in machine learning can be detected through various methods, focusing on performance analysis, feature examination, data auditing, and model behavior analysis. Performance-wise, unusually high accuracy or significant discrepancies between training and test results often indicate leakage.[6] Inconsistent cross-validation outcomes may also signal issues. Feature examination involves scrutinizing feature importance rankings and ensuring temporal integrity in time series data. A thorough audit of the data pipeline is crucial, reviewing pre-processing steps, feature engineering, and data splitting processes.[7] Detecting duplicate entries across dataset splits is also important.

For language models, the Min-K% method can detect the presence of data in a pretraining dataset. It presents a sentence suspected to be present in the pretraining dataset, computes the log-likelihood of each token, and then averages the lowest K% of these. If this average exceeds a threshold, the sentence is likely present.[8][9] This method is improved by comparing against a baseline of the mean and variance.[10]

Analyzing model behavior can reveal leakage. Models relying heavily on counter-intuitive features or showing unexpected prediction patterns warrant investigation. Performance degradation over time when tested on new data may suggest earlier inflated metrics due to leakage. Advanced techniques include backward feature elimination, where suspicious features are temporarily removed to observe performance changes. Using a separate hold-out dataset for final validation before deployment is advisable.[7]
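The preprocessing form of row-wise leakage can be made concrete with a short sketch (assuming scikit-learn is available; the dataset is synthetic). Fitting the scaler on the full dataset lets statistics from the test rows influence training. With simple standardisation the score inflation is usually small, but the same pattern applied to target encoding or feature selection can inflate metrics substantially.

```python
# Contrast a leaky pipeline (preprocessing fitted on all rows before the
# train/test split) with a leakage-free pipeline (split first, fit on train).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Leaky: the scaler "sees" the test rows, so information crosses the split.
X_scaled = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X_scaled, y, random_state=0)
leaky_score = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)

# Leakage-free: split first, then fit the scaler on training rows only.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_tr)
clean_score = (LogisticRegression(max_iter=1000)
               .fit(scaler.transform(X_tr), y_tr)
               .score(scaler.transform(X_te), y_te))

print(f"leaky: {leaky_score:.3f}  leakage-free: {clean_score:.3f}")
```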
https://en.wikipedia.org/wiki/Leakage_(machine_learning)
Post-Quantum Cryptography Standardization[1] is a program and competition by NIST to update their standards to include post-quantum cryptography.[2] It was announced at PQCrypto 2016.[3] 23 signature schemes and 59 encryption/KEM schemes were submitted by the initial submission deadline at the end of 2017,[4] of which 69 in total were deemed complete and proper and participated in the first round. Seven of these, of which three are signature schemes, advanced to the third round, which was announced on July 22, 2020.

On August 13, 2024, NIST released final versions of the first three post-quantum cryptography standards: FIPS 203, FIPS 204, and FIPS 205.[5]

Academic research on the potential impact of quantum computing dates back to at least 2001.[6] A NIST report published in April 2016 cites experts who acknowledge the possibility of quantum technology rendering the commonly used RSA algorithm insecure by 2030.[7] As a result, a need to standardize quantum-secure cryptographic primitives was pursued. Since most symmetric primitives are relatively easy to modify in a way that makes them quantum resistant, efforts have focused on public-key cryptography, namely digital signatures and key encapsulation mechanisms. In December 2016, NIST initiated a standardization process by announcing a call for proposals.[8]

The competition is now in its third round out of an expected four, where in each round some algorithms are discarded and others are studied more closely. NIST hopes to publish the standardization documents by 2024, but may speed up the process if major breakthroughs in quantum computing are made. It is currently undecided whether the future standards will be published as FIPS or as NIST Special Publications (SP).

Under consideration were:[9] (strikethrough means it had been withdrawn)

Candidates moving on to the second round were announced on January 30, 2019. They are:[33]

On July 22, 2020, NIST announced seven finalists ("first track"), as well as eight alternate algorithms ("second track"). The first track contains the algorithms which appear to have the most promise, and will be considered for standardization at the end of the third round. Algorithms in the second track could still become part of the standard after the third round ends.[53] NIST expects some of the alternate candidates to be considered in a fourth round. NIST also suggests it may re-open the signature category for new scheme proposals in the future.[54]

On June 7–9, 2021, NIST conducted the third PQC standardization conference, virtually.[55] The conference included candidates' updates and discussions on implementations, performance, and security issues of the candidates. A small amount of focus was spent on intellectual property concerns.

After NIST's announcement regarding the finalists and the alternate candidates, various intellectual property concerns were voiced, notably surrounding lattice-based schemes such as Kyber and NewHope. NIST holds signed statements from submitting groups clearing any legal claims, but there is still a concern that third parties could raise claims. NIST claims that they will take such considerations into account while picking the winning algorithms.[56]

During this round, some candidates were shown to be vulnerable to certain attack vectors.
This forced those candidates to adapt accordingly:

On July 5, 2022, NIST announced the first group of winners from its six-year competition.[60][61]

On July 5, 2022, NIST announced four candidates for PQC Standardization Round 4.[62]

On August 13, 2024, NIST released final versions of its first three post-quantum cryptography standards.[5] According to the release announcement:

While there have been no substantive changes made to the standards since the draft versions, NIST has changed the algorithms' names to specify the versions that appear in the three finalized standards, which are:

On March 11, 2025, NIST released HQC as the fifth algorithm for post-quantum asymmetric encryption, as used for key encapsulation/exchange.[66] The new algorithm serves as a backup for ML-KEM, the main algorithm for general encryption. HQC is based on different mathematics than ML-KEM, mitigating the risk if a weakness is found in the latter.[67] The draft standard incorporating the HQC algorithm is expected in early 2026, with the final version in 2027.

NIST received 50 submissions and deemed 40 to be complete and proper according to the submission requirements.[68] Under consideration are:[69] (strikethrough means it has been withdrawn)

NIST deemed 14 submissions to pass to the second round.[127]
https://en.wikipedia.org/wiki/NIST_Post-Quantum_Cryptography_Standardization
In mathematics, an invariant subspace of a linear mapping T : V → V, i.e. from some vector space V to itself, is a subspace W of V that is preserved by T. More generally, an invariant subspace for a collection of linear mappings is a subspace preserved by each mapping individually.

Consider a vector space V and a linear map T : V → V. A subspace W ⊆ V is called an invariant subspace for T, or equivalently, T-invariant, if T transforms any vector v ∈ W back into W. In formulas, this can be written

v ∈ W ⟹ T(v) ∈ W,

or[1] TW ⊆ W.

In this case, T restricts to an endomorphism of W:[2]

T|_W : W → W;  T|_W(w) = T(w).

The existence of an invariant subspace also has a matrix formulation. Pick a basis C for W and complete it to a basis B of V. With respect to B, the operator T has the form

T = [ T|_W  T_12 ]
    [  0    T_22 ]

for some T_12 and T_22, where T|_W here denotes the matrix of T|_W with respect to the basis C.

Any linear map T : V → V admits the following invariant subspaces: the whole space V and the zero subspace {0}. These are the improper and trivial invariant subspaces, respectively. Certain linear operators have no proper non-trivial invariant subspace: for instance, rotation of a two-dimensional real vector space. However, the axis of a rotation in three dimensions is always an invariant subspace.

If U is a 1-dimensional invariant subspace for an operator T with vector v ∈ U, then the vectors v and Tv must be linearly dependent. Thus

∀ v ∈ U  ∃ α ∈ ℝ :  Tv = αv.

In fact, the scalar α does not depend on v. The equation above formulates an eigenvalue problem. Any eigenvector for T spans a 1-dimensional invariant subspace, and vice versa. In particular, a nonzero invariant vector (i.e. a fixed point of T) spans an invariant subspace of dimension 1.

As a consequence of the fundamental theorem of algebra, every linear operator on a nonzero finite-dimensional complex vector space has an eigenvector. Therefore, every such linear operator in at least two dimensions has a proper non-trivial invariant subspace.

Determining whether a given subspace W is invariant under T is ostensibly a problem of geometric nature. Matrix representation allows one to phrase this problem algebraically. Write V as the direct sum W ⊕ W′; a suitable W′ can always be chosen by extending a basis of W. The associated projection operator P onto W has matrix representation

P = [ 1  0 ]
    [ 0  0 ]

with respect to this decomposition. A straightforward calculation shows that W is T-invariant if and only if PTP = TP.

If 1 is the identity operator, then 1 − P is the projection onto W′. The equation TP = PT holds if and only if both im(P) and im(1 − P) are invariant under T. In that case, T has matrix representation

T = [ T_11   0  ]
    [  0   T_22 ]

as a map im(P) ⊕ im(1 − P) → im(P) ⊕ im(1 − P). Colloquially, a projection that commutes with T "diagonalizes" T.

As the above examples indicate, the invariant subspaces of a given linear transformation T shed light on the structure of T.
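The projection criterion PTP = TP lends itself to a quick numerical check. The sketch below (assuming NumPy; the matrices are purely illustrative) verifies that the span of the first two standard basis vectors is invariant under a block upper-triangular operator, while another coordinate subspace is not.

```python
import numpy as np

# W is T-invariant exactly when P T P = T P, where P is a projection onto W.
# T is block upper-triangular in the standard basis, so W = span{e1, e2}
# is T-invariant.
T = np.array([[2.0, 1.0, 5.0],
              [0.0, 3.0, 4.0],
              [0.0, 0.0, 1.0]])

# Orthonormal basis for W = span{e1, e2}, as columns of B.
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
P = B @ B.T                              # orthogonal projection onto W
print(np.allclose(P @ T @ P, T @ P))     # True: W is T-invariant

# A subspace that is NOT invariant: span{e2, e3}.
B2 = np.array([[0.0, 0.0],
               [1.0, 0.0],
               [0.0, 1.0]])
P2 = B2 @ B2.T
print(np.allclose(P2 @ T @ P2, T @ P2))  # False: T sends e2 outside span{e2, e3}
```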
When V is a finite-dimensional vector space over an algebraically closed field, linear transformations acting on V are characterized (up to similarity) by the Jordan canonical form, which decomposes V into invariant subspaces of T. Many fundamental questions regarding T can be translated to questions about invariant subspaces of T.

The set of T-invariant subspaces of V is sometimes called the invariant-subspace lattice of T and written Lat(T). As the name suggests, it is a (modular) lattice, with meets and joins given by (respectively) set intersection and linear span. A minimal element in Lat(T) is said to be a minimal invariant subspace. In the study of infinite-dimensional operators, Lat(T) is sometimes restricted to only the closed invariant subspaces.

Given a collection 𝒯 of operators, a subspace is called 𝒯-invariant if it is invariant under each T ∈ 𝒯. As in the single-operator case, the invariant-subspace lattice of 𝒯, written Lat(𝒯), is the set of all 𝒯-invariant subspaces, and bears the same meet and join operations. Set-theoretically, it is the intersection

Lat(𝒯) = ⋂_{T ∈ 𝒯} Lat(T).

Let End(V) be the set of all linear operators on V. Then Lat(End(V)) = {0, V}.

Given a representation of a group G on a vector space V, we have a linear transformation T(g) : V → V for every element g of G. If a subspace W of V is invariant with respect to all these transformations, then it is a subrepresentation and the group G acts on W in a natural way. The same construction applies to representations of an algebra.

As another example, let T ∈ End(V) and Σ be the algebra generated by {1, T}, where 1 is the identity operator. Then Lat(T) = Lat(Σ). Just as the fundamental theorem of algebra ensures that every linear transformation acting on a finite-dimensional complex vector space has a non-trivial invariant subspace, the fundamental theorem of noncommutative algebra asserts that Lat(Σ) contains non-trivial elements for certain Σ.

Theorem (Burnside) — Assume V is a complex vector space of finite dimension. For every proper subalgebra Σ of End(V), Lat(Σ) contains a non-trivial element.

One consequence is that every commuting family in L(V) can be simultaneously upper-triangularized. To see this, note that an upper-triangular matrix representation corresponds to a flag of invariant subspaces, that a commuting family generates a commuting algebra, and that End(V) is not commutative when dim(V) ≥ 2.

If A is an algebra, one can define a left regular representation Φ on A: Φ(a)b = ab. Φ is a homomorphism from A to L(A), the algebra of linear transformations on A.

The invariant subspaces of Φ are precisely the left ideals of A. A left ideal M of A gives a subrepresentation of A on M.

If M is a left ideal of A, then the left regular representation Φ on M descends to a representation Φ′ on the quotient vector space A/M. If [b] denotes an equivalence class in A/M, then Φ′(a)[b] = [ab]. The kernel of the representation Φ′ is the set {a ∈ A | ab ∈ M for all b}. The representation Φ′ is irreducible if and only if M is a maximal left ideal, since a subspace V ⊂ A/M is invariant under {Φ′(a) | a ∈ A} if and only if its preimage under the quotient map, V + M, is a left ideal in A.

The invariant subspace problem concerns the case where V is a separable Hilbert space over the complex numbers, of dimension > 1, and T is a bounded operator. The problem is to decide whether every such T has a non-trivial, closed, invariant subspace. It is unsolved.

In the more general case where V is assumed to be a Banach space, Per Enflo (1976) found an example of an operator without an invariant subspace.
A concrete example of an operator without an invariant subspace was produced in 1985 by Charles Read.

Related to invariant subspaces are so-called almost-invariant halfspaces (AIHSs). A closed subspace Y of a Banach space X is said to be almost-invariant under an operator T ∈ B(X) if TY ⊆ Y + E for some finite-dimensional subspace E; equivalently, Y is almost-invariant under T if there is a finite-rank operator F ∈ B(X) such that (T + F)Y ⊆ Y, i.e. if Y is invariant (in the usual sense) under T + F. In this case, the minimum possible dimension of E (or rank of F) is called the defect.

Clearly, every finite-dimensional and finite-codimensional subspace is almost-invariant under every operator. Thus, to make things non-trivial, we say that Y is a halfspace whenever it is a closed subspace with infinite dimension and infinite codimension.

The AIHS problem asks whether every operator admits an AIHS. In the complex setting it has already been solved; that is, if X is a complex infinite-dimensional Banach space and T ∈ B(X), then T admits an AIHS of defect at most 1. It is not currently known whether the same holds if X is a real Banach space. However, some partial results have been established: for instance, any self-adjoint operator on an infinite-dimensional real Hilbert space admits an AIHS, as does any strictly singular (or compact) operator acting on a real infinite-dimensional reflexive space.
https://en.wikipedia.org/wiki/Invariant_subspace
Incomputer science, theEarley parseris analgorithmforparsingstringsthat belong to a givencontext-free language, though (depending on the variant) it may suffer problems with certain nullable grammars.[1]The algorithm, named after its inventorJay Earley, is achart parserthat usesdynamic programming; it is mainly used for parsing incomputational linguistics. It was first introduced in his dissertation[2]in 1968 (and later appeared in abbreviated, more legible form in a journal).[3] Earley parsers are appealing because they can parse all context-free languages, unlikeLR parsersandLL parsers, which are more typically used incompilersbut which can only handle restricted classes of languages. The Earley parser executes in cubic time in the general caseO(n3){\displaystyle {O}(n^{3})}, wherenis the length of the parsed string, quadratic time forunambiguous grammarsO(n2){\displaystyle {O}(n^{2})},[4]and linear time for alldeterministic context-free grammars. It performs particularly well when the rules are writtenleft-recursively. The following algorithm describes the Earley recogniser. The recogniser can be modified to create a parse tree as it recognises, and in that way can be turned into a parser. In the following descriptions, α, β, and γ represent anystringofterminals/nonterminals(including theempty string), X and Y represent single nonterminals, andarepresents a terminal symbol. Earley's algorithm is a top-downdynamic programmingalgorithm. In the following, we use Earley's dot notation: given aproductionX → αβ, the notation X → α • β represents a condition in which α has already been parsed and β is expected. Input position 0 is the position prior to input. Input positionnis the position after accepting thenth token. (Informally, input positions can be thought of as locations attokenboundaries.) For every input position, the parser generates astate set. Each state is atuple(X → α • β,i), consisting of (Earley's original algorithm included a look-ahead in the state; later research showed this to have little practical effect on the parsing efficiency, and it has subsequently been dropped from most implementations.) A state is finished when its current position is the last position of the right side of the production, that is, when there is no symbol to the right of the dot • in the visual representation of the state. The state set at input positionkis called S(k). The parser is seeded with S(0) consisting of only the top-level rule. The parser then repeatedly executes three operations:prediction,scanning, andcompletion. Duplicate states are not added to the state set, only new ones. These three operations are repeated until no new states can be added to the set. The set is generally implemented as a queue of states to process, with the operation to be performed depending on what kind of state it is. The algorithm accepts if (X → γ •, 0) ends up in S(n), where (X → γ) is the top level-rule andnthe input length, otherwise it rejects. Adapted from Speech and Language Processing[5]byDaniel Jurafskyand James H. Martin, Consider the following simple grammar for arithmetic expressions: With the input: This is the sequence of state sets: The state (P → S •, 0) represents a completed parse. This state also appears in S(3) and S(1), which are complete sentences. Earley's dissertation[6]briefly describes an algorithm for constructing parse trees by adding a set of pointers from each non-terminal in an Earley item back to the items that caused it to be recognized. 
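To make the prediction, scanning, and completion operations concrete, here is a minimal Python sketch of an Earley recogniser along the lines described above. It is an illustration only: it omits the extra bookkeeping needed for the nullable grammars mentioned earlier, it does not build parse trees, and the State tuple, grammar representation, and function name are ad hoc choices.

```python
from collections import namedtuple

# A state is (head, body, dot, origin): the rule head -> body with the dot at
# position `dot`, started at input position `origin`.
State = namedtuple("State", "head body dot origin")

def earley_recognise(grammar, start, tokens):
    """Minimal Earley recogniser.  `grammar` maps each non-terminal to a list of
    bodies (tuples of symbols); any symbol without a rule is treated as a terminal."""
    n = len(tokens)
    chart = [[] for _ in range(n + 1)]          # chart[k] is the state set S(k)

    def add(k, state):
        if state not in chart[k]:               # duplicate states are not added
            chart[k].append(state)

    for body in grammar[start]:                 # seed S(0) with the top-level rule
        add(0, State(start, tuple(body), 0, 0))

    for k in range(n + 1):
        i = 0
        while i < len(chart[k]):                # the state set doubles as a work queue
            st = chart[k][i]
            i += 1
            if st.dot < len(st.body):
                sym = st.body[st.dot]
                if sym in grammar:              # prediction
                    for body in grammar[sym]:
                        add(k, State(sym, tuple(body), 0, k))
                elif k < n and tokens[k] == sym:  # scanning
                    add(k + 1, State(st.head, st.body, st.dot + 1, st.origin))
            else:                               # completion
                for waiting in chart[st.origin]:
                    if (waiting.dot < len(waiting.body)
                            and waiting.body[waiting.dot] == st.head):
                        add(k, State(waiting.head, waiting.body,
                                     waiting.dot + 1, waiting.origin))

    # accept if a finished top-level rule spanning the whole input is in S(n)
    return any(st.head == start and st.dot == len(st.body) and st.origin == 0
               for st in chart[n])

# Tiny ambiguous grammar: S -> S '+' S | 'a'
grammar = {"S": [("S", "+", "S"), ("a",)]}
print(earley_recognise(grammar, "S", ["a", "+", "a"]))   # True
print(earley_recognise(grammar, "S", ["a", "+"]))        # False
```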
However, Tomita noticed[7] that this pointer-based construction does not take into account the relations between symbols. For example, given the grammar S → SS | b and the string bbb, it only records that each S can match one or two b's, and thus produces spurious derivations for bb and bbbb as well as the two correct derivations for bbb. Another method[8] is to build the parse forest as parsing proceeds, augmenting each Earley item with a pointer to a shared packed parse forest (SPPF) node labelled with a triple (s, i, j), where s is a symbol or an LR(0) item (a production rule with a dot) and i and j give the section of the input string derived by this node. A node's contents are either a pair of child pointers giving a single derivation, or a list of "packed" nodes, each containing a pair of pointers and representing one derivation. SPPF nodes are unique (there is only one with a given label), but may contain more than one derivation for ambiguous parses. So even if an operation does not add an Earley item (because it already exists), it may still add a derivation to the item's parse forest. SPPF nodes are never labelled with a completed LR(0) item: instead they are labelled with the symbol that is produced, so that all derivations are combined under one node regardless of which alternative production they come from. Philippe McLean and R. Nigel Horspool, in their paper "A Faster Earley Parser", combine Earley parsing with LR parsing and achieve an improvement of an order of magnitude.
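The SPPF node structure described above can be sketched as a small data type. The following Python sketch is only an illustration of the labelling and packing scheme (one node per (s, i, j) label, several packed families for ambiguous derivations); the class and method names are invented for the example and do not come from any particular parser implementation.

```python
from dataclasses import dataclass, field

@dataclass
class SPPFNode:
    """A shared packed parse forest node labelled (s, i, j): s is a symbol or a
    dotted LR(0) item, and i and j delimit the input span this node derives."""
    label: tuple
    families: list = field(default_factory=list)   # each family is one packed derivation
    _keys: set = field(default_factory=set)        # labels of families already recorded

    def add_family(self, *children):
        """Record one derivation; adding the same derivation again is a no-op,
        mirroring the rule that an existing item may still gain a new derivation."""
        key = tuple(child.label for child in children)
        if key not in self._keys:
            self._keys.add(key)
            self.families.append(children)

class SPPF:
    """Node store guaranteeing that there is only one node per label."""
    def __init__(self):
        self._nodes = {}

    def node(self, s, i, j):
        if (s, i, j) not in self._nodes:
            self._nodes[(s, i, j)] = SPPFNode((s, i, j))
        return self._nodes[(s, i, j)]
```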
https://en.wikipedia.org/wiki/Earley_parser
High-availability clusters(also known asHA clusters,fail-over clusters) are groups ofcomputersthat supportserverapplicationsthat can be reliably utilized witha minimum amount of down-time. They operate by usinghigh availability softwareto harnessredundantcomputers in groups orclustersthat provide continued service when system components fail. Without clustering, if a server running a particular application crashes, the application will be unavailable until the crashed server is fixed. HA clustering remedies this situation by detecting hardware/software faults, and immediately restarting the application on another system without requiring administrative intervention, a process known asfailover. As part of this process, clustering software may configure the node before starting the application on it. For example, appropriate file systems may need to be imported and mounted, network hardware may have to be configured, and some supporting applications may need to be running as well.[1] HA clusters are often used for criticaldatabases, file sharing on a network, business applications, and customer services such aselectronic commercewebsites. HA cluster implementations attempt to build redundancy into a cluster to eliminate single points of failure, including multiple network connections and data storage which is redundantly connected viastorage area networks. HA clusters usually use aheartbeatprivate network connection which is used to monitor the health and status of each node in the cluster. One subtle but serious condition all clustering software must be able to handle issplit-brain, which occurs when all of the private links go down simultaneously, but the cluster nodes are still running. If that happens, each node in the cluster may mistakenly decide that every other node has gone down and attempt to start services that other nodes are still running. Having duplicate instances of services may cause data corruption on the shared storage. HA clusters often also usequorumwitness storage (local or cloud) to avoid this scenario. A witness device cannot be shared between two halves of a split cluster, so in the event that all cluster members cannot communicate with each other (e.g., failed heartbeat), if a member cannot access the witness, it cannot become active. Not every application can run in a high-availability cluster environment, and the necessary design decisions need to be made early in the software design phase. In order to run in a high-availability cluster environment, an application must satisfy at least the following technical requirements, the last two of which are critical to its reliable function in a cluster, and are the most difficult to satisfy fully: The most common size for an HA cluster is a two-node cluster, since that is the minimum required to provide redundancy, but many clusters consist of many more, sometimes dozens of nodes. The attached diagram is a good overview of a classic HA cluster, with the caveat that it does not make any mention of quorum/witness functionality (see above). Such configurations can sometimes be categorized into one of the following models: The termslogical hostorcluster logical hostis used to describe thenetwork addressthat is used to access services provided by the cluster. This logical host identity is not tied to a single cluster node. It is actually a network address/hostname that is linked with the service(s) provided by the cluster. 
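The interaction of heartbeat monitoring, failover, and a quorum witness can be sketched with a toy model. The following Python code is purely illustrative and does not reflect any particular clustering product's API; all class, method, and constant names are invented for the example.

```python
import time

HEARTBEAT_TIMEOUT = 5.0   # seconds without a heartbeat before the peer is presumed dead

class Witness:
    """Toy quorum witness: at most one node may hold it at a time."""
    def __init__(self):
        self.owner = None

    def try_acquire(self, node_name):
        if self.owner in (None, node_name):
            self.owner = node_name
            return True
        return False

class ClusterNode:
    """Toy model of one node in a two-node HA cluster with a quorum witness."""
    def __init__(self, name, witness):
        self.name = name
        self.witness = witness
        self.last_peer_heartbeat = time.monotonic()
        self.active = False

    def on_heartbeat(self):
        """Called whenever a heartbeat arrives over the private network."""
        self.last_peer_heartbeat = time.monotonic()

    def check_peer(self):
        """Called periodically; decides whether to take over the service."""
        peer_silent = time.monotonic() - self.last_peer_heartbeat > HEARTBEAT_TIMEOUT
        if peer_silent and not self.active:
            # The peer looks dead, but this could also be a split-brain situation in
            # which only the heartbeat links failed.  Only the node that can still
            # lock the shared witness is allowed to go active.
            if self.witness.try_acquire(self.name):
                self.start_services()
            else:
                print(f"{self.name}: cannot lock the witness, staying passive")

    def start_services(self):
        self.active = True
        print(f"{self.name}: importing file systems, configuring network, starting application")

w = Witness()
a, b = ClusterNode("node-a", w), ClusterNode("node-b", w)
a.last_peer_heartbeat -= 10      # simulate: node-a has not heard from node-b for a while
b.last_peer_heartbeat -= 10      # ... and vice versa (a split-brain scenario)
a.check_peer()                   # node-a locks the witness and fails over
b.check_peer()                   # node-b cannot lock the witness and stays passive
```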
If a cluster node running a database goes down, the database is restarted on another cluster node. HA clusters usually use all available techniques to make the individual systems and the shared infrastructure as reliable as possible. Such measures help minimize the chances that a failover between systems will be required; during a failover, the service provided is unavailable for at least a little while, so measures to avoid failover are preferred. Systems that handle failures in distributed computing use different strategies to recover from a failure. For instance, the Apache Cassandra API Hector defines three ways to configure failover.
https://en.wikipedia.org/wiki/High-availability_cluster
Crowdfixing is a specific form of crowdsourcing in which people gather to repair and maintain the public spaces of their local community. Its main aim is to counter the deterioration of public places. Crowdfixing actions include (but are not limited to) cleaning flashmobs, mowing, repairing structures, and removing unsafe elements. Placemaking, a concept that originated in the 1960s and focused on the planning, management and design of public places, provided the philosophical background to the crowdfixing movement. According to placemaking, all the resources needed to create community-friendly, enjoyable public spaces and keep them in good condition are available in modern times, but decision-making processes exclude citizens' preferences. Crowdfixing promotes the idea of public spaces as belonging to the local community, in opposition to the concept of areas merely administered and owned by the state. Crowdfixing also tries to create better conditions for people to interact by providing online tools and mechanisms that improve communication and allow participants to organise the different stages required to fix a public space.
https://en.wikipedia.org/wiki/Crowdfixing
A source port is a software project based on the source code of a game engine that allows the game to be played on operating systems or computing platforms with which the game was not originally compatible. Source ports are often created by fans after the original developer hands maintenance of a game over to the community by releasing its source code to the public (see List of commercial video games with later released source code). In cases where the original source was never formally released by the game's developers, the source code used to create a source port must be obtained through reverse engineering. The term was coined after the release of the source code to Doom. Due to copyright issues concerning the sound library used by the original DOS version, id Software released only the source code of the Linux version of the game.[1][2] Since the majority of Doom players were DOS users, the first step for a fan project was to port the Linux source code to DOS.[3] A source port typically includes only the engine portion of the game and requires that the game's data files already be present on the user's system. Source ports are similar to unofficial patches in that neither changes the original gameplay; projects that do are, by definition, mods. Many source ports nevertheless add support for gameplay mods, usually as an option (e.g. DarkPlaces consists of a source port engine and a gameplay mod that are even distributed separately[4]). While the primary goal of any source port is compatibility with newer hardware, many projects support other enhancements. Common examples include support for higher video resolutions and different aspect ratios, hardware-accelerated renderers (OpenGL and/or Direct3D), enhanced input support (including the ability to map controls onto additional input devices), 3D character models (in the case of 2.5D games), higher-resolution textures, support for replacing MIDI with digital audio (MP3, Ogg Vorbis, etc.), and enhanced multiplayer support using the Internet. Several source ports have been created for various games specifically to address online multiplayer support. Most older games were not created to take advantage of the Internet and the low-latency, high-bandwidth connections available to computer gamers today. Furthermore, old games may use outdated network protocols to create multiplayer connections, such as the IPX protocol instead of the Internet Protocol. Another problem was games that required a specific IP address for connecting with another player, which made it difficult to quickly find a group of strangers to play with, the way that online games are most commonly played today. To address this shortcoming, source ports such as Skulltag added "lobbies", essentially integrated chat rooms in which players can meet and post the location of games they are hosting or may wish to join. Similar facilities may be found in newer games and online game services such as Valve's Steam, Blizzard's battle.net, and GameSpy Arcade. If the source code of a piece of software is not available, alternative approaches to achieving portability are emulation, engine remakes, and static recompilation.
https://en.wikipedia.org/wiki/Source_port
A SPARQCode is a matrix code (or two-dimensional bar code) encoding standard based on the physical QR Code definition created by the Japanese corporation Denso-Wave. The QR Code standard, as defined by Denso-Wave in ISO/IEC 18004, covers the physical encoding of a binary data stream.[1] However, it does not specify how that data stream should be interpreted at the application layer, for example when decoding URLs, phone numbers, and other data types. NTT Docomo has established de facto standards for encoding some data types, such as URLs and contact information, in Japan, but not all applications in other countries adhere to these conventions, as documented by the open-source project "zxing" for QR Code data types.[2][3] The SPARQCode encoding standard specifies a convention for a number of data types. The SPARQCode convention also recommends, but does not require, the inclusion of visual pictograms to denote the type of encoded data. Use of the SPARQCode is free of any license. The term SPARQCode itself is a trademark of MSKYNET, which has chosen to make it available royalty-free.[4]
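As an illustration of application-layer payload conventions of the kind discussed above, the snippet below encodes a URL and a MECARD-style contact string (a convention commonly attributed to NTT DoCoMo) into QR codes using the third-party Python package qrcode. The field values are made up, and the snippet does not reproduce the specific SPARQCode conventions themselves, which are not spelled out in this passage.

```python
# Requires the third-party package: pip install qrcode[pil]
import qrcode

# A URL payload: most decoders treat text beginning with http(s):// as a link.
qrcode.make("https://example.com").save("url_code.png")

# A contact payload in the MECARD convention, one of the de facto
# application-layer conventions referred to above (field values are made up).
mecard = "MECARD:N:Doe,Jane;TEL:+15551234567;EMAIL:jane@example.com;;"
qrcode.make(mecard).save("contact_code.png")
```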
https://en.wikipedia.org/wiki/SPARQCode
A chatbot is a software application or web interface that is designed to mimic human conversation through text or voice interactions.[1][2][3] Modern chatbots are typically online and use generative artificial intelligence systems that are capable of maintaining a conversation with a user in natural language and simulating the way a human would behave as a conversational partner. Such chatbots often use deep learning and natural language processing, but simpler chatbots have existed for decades. This list of chatbots is a general overview of notable chatbot applications and web interfaces.
https://en.wikipedia.org/wiki/List_of_chatbots
Incomputability theory, aprimitive recursive functionis, roughly speaking, a function that can be computed by acomputer programwhoseloopsare all"for" loops(that is, an upper bound of the number of iterations of every loop is fixed before entering the loop). Primitive recursive functions form a strictsubsetof thosegeneral recursive functionsthat are alsototal functions. The importance of primitive recursive functions lies in the fact that mostcomputable functionsthat are studied innumber theory(and more generally in mathematics) are primitive recursive. For example,additionanddivision, thefactorialandexponential function, and the function which returns thenth prime are all primitive recursive.[1]In fact, for showing that a computable function is primitive recursive, it suffices to show that itstime complexityis bounded above by a primitive recursive function of the input size.[2]It is hence not particularly easy to devise acomputable functionthat isnotprimitive recursive; some examples are shown in section§ Limitationsbelow. The set of primitive recursive functions is known asPRincomputational complexity theory. A primitive recursive function takes a fixed number of arguments, each anatural number(nonnegative integer: {0, 1, 2, ...}), and returns a natural number. If it takesnarguments it is calledn-ary. The basic primitive recursive functions are given by theseaxioms: More complex primitive recursive functions can be obtained by applying theoperationsgiven by these axioms: Interpretation: Theprimitive recursive functionsare the basic functions and those obtained from the basic functions by applying these operations a finite number of times. A definition of the 2-ary functionAdd{\displaystyle Add}, to compute the sum of its arguments, can be obtained using the primitive recursion operatorρ{\displaystyle \rho }. To this end, the well-known equations are "rephrased in primitive recursive function terminology": In the definition ofρ(g,h){\displaystyle \rho (g,h)}, the first equation suggests to chooseg=P11{\displaystyle g=P_{1}^{1}}to obtainAdd(0,y)=g(y)=y{\displaystyle Add(0,y)=g(y)=y}; the second equation suggests to chooseh=S∘P23{\displaystyle h=S\circ P_{2}^{3}}to obtainAdd(S(x),y)=h(x,Add(x,y),y)=(S∘P23)(x,Add(x,y),y)=S(Add(x,y)){\displaystyle Add(S(x),y)=h(x,Add(x,y),y)=(S\circ P_{2}^{3})(x,Add(x,y),y)=S(Add(x,y))}. Therefore, the addition function can be defined asAdd=ρ(P11,S∘P23){\displaystyle Add=\rho (P_{1}^{1},S\circ P_{2}^{3})}. As a computation example, GivenAdd{\displaystyle Add}, the 1-ary functionAdd∘(P11,P11){\displaystyle Add\circ (P_{1}^{1},P_{1}^{1})}doubles its argument,(Add∘(P11,P11))(x)=Add(x,x)=x+x{\displaystyle (Add\circ (P_{1}^{1},P_{1}^{1}))(x)=Add(x,x)=x+x}. In a similar way as addition, multiplication can be defined byMul=ρ(C01,Add∘(P23,P33)){\displaystyle Mul=\rho (C_{0}^{1},Add\circ (P_{2}^{3},P_{3}^{3}))}. This reproduces the well-known multiplication equations: and The predecessor function acts as the "opposite" of the successor function and is recursively defined by the rulesPred(0)=0{\displaystyle Pred(0)=0}andPred(S(n))=n{\displaystyle Pred(S(n))=n}. A primitive recursive definition isPred=ρ(C00,P12){\displaystyle Pred=\rho (C_{0}^{0},P_{1}^{2})}. As a computation example, The limited subtraction function (also called "monus", and denoted "−.{\displaystyle {\stackrel {.}{-}}}") is definable from the predecessor function. 
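The constructions above translate directly into ordinary code. The following Python sketch is an illustration only (the helper names S, C, P, compose and rho are not standard): it implements the basic functions, composition, and the primitive recursion operator ρ as a bounded loop, and then defines Add, Mul and Pred exactly as in the text.

```python
def S(x):                     # successor
    return x + 1

def C(k):                     # constant function C_k (arity is ignored in this sketch)
    return lambda *args: k

def P(i):                     # projection P_i, 1-indexed as in the article
    return lambda *args: args[i - 1]

def compose(f, *gs):          # composition: (f o (g1, ..., gm))(x...) = f(g1(x...), ..., gm(x...))
    return lambda *args: f(*(g(*args) for g in gs))

def rho(g, h):
    """Primitive recursion operator:
       rho(g, h)(0, y...)     = g(y...)
       rho(g, h)(x + 1, y...) = h(x, rho(g, h)(x, y...), y...)
    Implemented as a bounded loop: the number of iterations is fixed by x on entry."""
    def f(x, *ys):
        acc = g(*ys)
        for i in range(x):
            acc = h(i, acc, *ys)
        return acc
    return f

Add  = rho(P(1), compose(S, P(2)))            # Add = rho(P_1^1, S o P_2^3)
Mul  = rho(C(0), compose(Add, P(2), P(3)))    # Mul = rho(C_0^1, Add o (P_2^3, P_3^3))
Pred = rho(C(0), P(1))                        # Pred = rho(C_0^0, P_1^2)

print(Add(2, 3), Mul(2, 3), Pred(5))          # 5 6 4
```

Limited subtraction ("monus") is obtained from Pred by one more application of ρ.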
It satisfies the equations Since the recursion runs over the second argument, we begin with a primitive recursive definition of the reversed subtraction,RSub(y,x)=x−.y{\displaystyle RSub(y,x)=x{\stackrel {.}{-}}y}. Its recursion then runs over the first argument, so its primitive recursive definition can be obtained, similar to addition, asRSub=ρ(P11,Pred∘P23){\displaystyle RSub=\rho (P_{1}^{1},Pred\circ P_{2}^{3})}. To get rid of the reversed argument order, then defineSub=RSub∘(P22,P12){\displaystyle Sub=RSub\circ (P_{2}^{2},P_{1}^{2})}. As a computation example, In some settings it is natural to consider primitive recursive functions that take as inputs tuples that mix numbers withtruth values(that ist{\displaystyle t}for true andf{\displaystyle f}for false),[citation needed]or that produce truth values as outputs.[4]This can be accomplished by identifying the truth values with numbers in any fixed manner. For example, it is common to identify the truth valuet{\displaystyle t}with the number1{\displaystyle 1}and the truth valuef{\displaystyle f}with the number0{\displaystyle 0}. Once this identification has been made, thecharacteristic functionof a setA{\displaystyle A}, which always returns1{\displaystyle 1}or0{\displaystyle 0}, can be viewed as a predicate that tells whether a number is in the setA{\displaystyle A}. Such an identification of predicates with numeric functions will be assumed for the remainder of this article. As an example for a primitive recursive predicate, the 1-ary functionIsZero{\displaystyle IsZero}shall be defined such thatIsZero(x)=1{\displaystyle IsZero(x)=1}ifx=0{\displaystyle x=0}, andIsZero(x)=0{\displaystyle IsZero(x)=0}, otherwise. This can be achieved by definingIsZero=ρ(C10,C02){\displaystyle IsZero=\rho (C_{1}^{0},C_{0}^{2})}. Then,IsZero(0)=ρ(C10,C02)(0)=C10(0)=1{\displaystyle IsZero(0)=\rho (C_{1}^{0},C_{0}^{2})(0)=C_{1}^{0}(0)=1}and e.g.IsZero(8)=ρ(C10,C02)(S(7))=C02(7,IsZero(7))=0{\displaystyle IsZero(8)=\rho (C_{1}^{0},C_{0}^{2})(S(7))=C_{0}^{2}(7,IsZero(7))=0}. Using the propertyx≤y⟺x−.y=0{\displaystyle x\leq y\iff x{\stackrel {.}{-}}y=0}, the 2-ary functionLeq{\displaystyle Leq}can be defined byLeq=IsZero∘Sub{\displaystyle Leq=IsZero\circ Sub}. ThenLeq(x,y)=1{\displaystyle Leq(x,y)=1}ifx≤y{\displaystyle x\leq y}, andLeq(x,y)=0{\displaystyle Leq(x,y)=0}, otherwise. As a computation example, Once a definition ofLeq{\displaystyle Leq}is obtained, the converse predicate can be defined asGeq=Leq∘(P22,P12){\displaystyle Geq=Leq\circ (P_{2}^{2},P_{1}^{2})}. Then,Geq(x,y)=Leq(y,x){\displaystyle Geq(x,y)=Leq(y,x)}is true (more precisely: has value 1) if, and only if,x≥y{\displaystyle x\geq y}. The 3-ary if-then-else operator known from programming languages can be defined byIf=ρ(P22,P34){\displaystyle {\textit {If}}=\rho (P_{2}^{2},P_{3}^{4})}. Then, for arbitraryx{\displaystyle x}, and That is,If(x,y,z){\displaystyle {\textit {If}}(x,y,z)}returns the then-part,y{\displaystyle y}, if the if-part,x{\displaystyle x}, is true, and the else-part,z{\displaystyle z}, otherwise. Based on theIf{\displaystyle {\textit {If}}}function, it is easy to define logical junctors. For example, definingAnd=If∘(P12,P22,C02){\displaystyle And={\textit {If}}\circ (P_{1}^{2},P_{2}^{2},C_{0}^{2})}, one obtainsAnd(x,y)=If(x,y,0){\displaystyle And(x,y)={\textit {If}}(x,y,0)}, that is,And(x,y){\displaystyle And(x,y)}is trueif, and only if, bothx{\displaystyle x}andy{\displaystyle y}are true (logical conjunctionofx{\displaystyle x}andy{\displaystyle y}). 
Similarly,Or=If∘(P12,C12,P22){\displaystyle Or={\textit {If}}\circ (P_{1}^{2},C_{1}^{2},P_{2}^{2})}andNot=If∘(P11,C01,C11){\displaystyle Not={\textit {If}}\circ (P_{1}^{1},C_{0}^{1},C_{1}^{1})}lead to appropriate definitions ofdisjunctionandnegation:Or(x,y)=If(x,1,y){\displaystyle Or(x,y)={\textit {If}}(x,1,y)}andNot(x)=If(x,0,1){\displaystyle Not(x)={\textit {If}}(x,0,1)}. Using the above functionsLeq{\displaystyle Leq},Geq{\displaystyle Geq}andAnd{\displaystyle And}, the definitionEq=And∘(Leq,Geq){\displaystyle Eq=And\circ (Leq,Geq)}implements the equality predicate. In fact,Eq(x,y)=And(Leq(x,y),Geq(x,y)){\displaystyle Eq(x,y)=And(Leq(x,y),Geq(x,y))}is true if, and only if,x{\displaystyle x}equalsy{\displaystyle y}. Similarly, the definitionLt=Not∘Geq{\displaystyle Lt=Not\circ Geq}implements the predicate "less-than", andGt=Not∘Leq{\displaystyle Gt=Not\circ Leq}implements "greater-than". Exponentiationandprimality testingare primitive recursive. Given primitive recursive functionse{\displaystyle e},f{\displaystyle f},g{\displaystyle g}, andh{\displaystyle h}, a function that returns the value ofg{\displaystyle g}whene≤f{\displaystyle e\leq f}and the value ofh{\displaystyle h}otherwise is primitive recursive. By usingGödel numberings, the primitive recursive functions can be extended to operate on other objects such as integers andrational numbers. If integers are encoded by Gödel numbers in a standard way, the arithmetic operations including addition, subtraction, and multiplication are all primitive recursive. Similarly, if the rationals are represented by Gödel numbers then thefieldoperations are all primitive recursive. The following examples and definitions are fromKleene (1952, pp. 222–231). Many appear with proofs. Most also appear with similar names, either as proofs or as examples, inBoolos, Burgess & Jeffrey (2002, pp. 63–70) they add the logarithm lo(x, y) or lg(x, y) depending on the exact derivation. In the following the mark " ' ", e.g. a', is the primitive mark meaning "the successor of", usually thought of as " +1", e.g. a +1 =defa'. The functions 16–20 and #G are of particular interest with respect to converting primitive recursive predicates to, and extracting them from, their "arithmetical" form expressed asGödel numbers. The broader class ofpartial recursive functionsis defined by introducing anunbounded search operator. The use of this operator may result in apartial function, that is, a relation withat mostone value for each argument, but does not necessarily haveanyvalue for any argument (seedomain). An equivalent definition states that a partial recursive function is one that can be computed by aTuring machine. A total recursive function is a partial recursive function that is defined for every input. Every primitive recursive function is total recursive, but not all total recursive functions are primitive recursive. TheAckermann functionA(m,n) is a well-known example of a total recursive function (in fact, provable total), that is not primitive recursive. There is a characterization of the primitive recursive functions as a subset of the total recursive functions using the Ackermann function. 
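Continuing the illustrative sketch above, and reusing its rho, compose, P, C, Add and Pred helpers, the predicate and connective definitions of the preceding paragraphs can be written as follows; again, this is an illustration, not standard library code.

```python
# Reusing rho, compose, P, C, Add and Pred from the sketch above.
IsZero = rho(C(1), C(0))                        # IsZero = rho(C_1^0, C_0^2)
RSub   = rho(P(1), compose(Pred, P(2)))         # RSub(y, x) = x monus y
Sub    = compose(RSub, P(2), P(1))              # Sub(x, y)  = x monus y (limited subtraction)

Leq = compose(IsZero, Sub)                      # x <= y  iff  x monus y = 0
Geq = compose(Leq, P(2), P(1))
If  = rho(P(2), P(3))                           # If(x, y, z) = y if x is non-zero, else z
And = compose(If, P(1), P(2), C(0))             # And(x, y) = If(x, y, 0)
Or  = compose(If, P(1), C(1), P(2))             # Or(x, y)  = If(x, 1, y)
Not = compose(If, P(1), C(0), C(1))             # Not(x)    = If(x, 0, 1)
Eq  = compose(And, Leq, Geq)

print(Sub(5, 3), Sub(3, 5))                     # 2 0
print(Leq(2, 7), Eq(4, 4), Eq(4, 5), Not(0))    # 1 1 0 1
```

All of these definitions use only composition and the bounded-loop recursion operator; the Ackermann-function characterization mentioned above makes precise how far this pattern reaches.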
This characterization states that a function is primitive recursiveif and only ifthere is a natural numbermsuch that the function can be computed by a Turingmachine that always haltswithin A(m,n) or fewer steps, wherenis the sum of the arguments of the primitive recursive function.[5] An important property of the primitive recursive functions is that they are arecursively enumerablesubset of the set of alltotal recursive functions(which is not itself recursively enumerable). This means that there is a single computable functionf(m,n) that enumerates the primitive recursive functions, namely: fcan be explicitly constructed by iteratively repeating all possible ways of creating primitive recursive functions. Thus, it is provably total. One can use adiagonalizationargument to show thatfis not recursive primitive in itself: had it been such, so would beh(n) =f(n,n)+1. But if this equals some primitive recursive function, there is anmsuch thath(n) =f(m,n) for alln, and thenh(m) =f(m,m), leading to contradiction. However, the set of primitive recursive functions is not thelargestrecursively enumerable subset of the set of all total recursive functions. For example, the set of provably total functions (in Peano arithmetic) is also recursively enumerable, as one can enumerate all the proofs of the theory. While all primitive recursive functions are provably total, the converse is not true. Primitive recursive functions tend to correspond very closely with our intuition of what a computable function must be. Certainly the initial functions are intuitively computable (in their very simplicity), and the two operations by which one can create new primitive recursive functions are also very straightforward. However, the set of primitive recursive functions does not include every possible total computable function—this can be seen with a variant ofCantor's diagonal argument. This argument provides a total computable function that is not primitive recursive. A sketch of the proof is as follows: Now define the "evaluator function"ev{\displaystyle ev}with two arguments, byev(i,j)=fi(j){\displaystyle ev(i,j)=f_{i}(j)}. Clearlyev{\displaystyle ev}is total and computable, since one can effectively determine the definition offi{\displaystyle f_{i}}, and being a primitive recursive functionfi{\displaystyle f_{i}}is itself total and computable, sofi(j){\displaystyle f_{i}(j)}is always defined and effectively computable. However a diagonal argument will show that the functionev{\displaystyle ev}of two arguments is not primitive recursive. This argument can be applied to any class of computable (total) functions that can be enumerated in this way, as explained in the articleMachine that always halts. Note however that thepartialcomputable functions (those that need not be defined for all arguments) can be explicitly enumerated, for instance by enumerating Turing machine encodings. Other examples of total recursive but not primitive recursive functions are known: Instead ofCnk{\displaystyle C_{n}^{k}}, alternative definitions use just one 0-aryzero functionC00{\displaystyle C_{0}^{0}}as a primitive function that always returns zero, and built the constant functions from the zero function, the successor function and the composition operator. Robinson[6]considered various restrictions of the recursion rule. 
One is the so-callediteration rulewhere the functionhdoes not have access to the parametersxi(in this case, we may assume without loss of generality that the functiongis just the identity, as the general case can be obtained by substitution): He proved that the class of all primitive recursive functions can still be obtained in this way. Another restriction considered by Robinson[6]ispure recursion, wherehdoes not have access to the induction variabley: Gladstone[7]proved that this rule is enough to generate all primitive recursive functions. Gladstone[8]improved this so that even the combination of these two restrictions, i.e., thepure iterationrule below, is enough: Further improvements are possible: Severin[9]prove that even the pure iteration rulewithout parameters, namely suffices to generate allunaryprimitive recursive functions if we extend the set of initial functions with truncated subtractionx ∸ y. We getallprimitive recursive functions if we additionally include + as an initial function. Some additional forms of recursion also define functions that are in fact primitive recursive. Definitions in these forms may be easier to find or more natural for reading or writing.Course-of-values recursiondefines primitive recursive functions. Some forms ofmutual recursionalso define primitive recursive functions. The functions that can be programmed in theLOOP programming languageare exactly the primitive recursive functions. This gives a different characterization of the power of these functions. The main limitation of the LOOP language, compared to aTuring-complete language, is that in the LOOP language the number of times that each loop will run is specified before the loop begins to run. An example of a primitive recursive programming language is one that contains basic arithmetic operators (e.g. + and −, or ADD and SUBTRACT), conditionals and comparison (IF-THEN, EQUALS, LESS-THAN), and bounded loops, such as the basicfor loop, where there is a known or calculable upper bound to all loops (FOR i FROM 1 TO n, with neither i nor n modifiable by the loop body). No control structures of greater generality, such aswhile loopsor IF-THEN plusGOTO, are admitted in a primitive recursive language. TheLOOP language, introduced in a 1967 paper byAlbert R. MeyerandDennis M. Ritchie,[10]is such a language. Its computing power coincides with the primitive recursive functions. A variant of the LOOP language isDouglas Hofstadter'sBlooPinGödel, Escher, Bach. Adding unbounded loops (WHILE, GOTO) makes the languagegeneral recursiveandTuring-complete, as are all real-world computer programming languages. The definition of primitive recursive functions implies that their computation halts on every input (after a finite number of steps). On the other hand, thehalting problemisundecidablefor general recursive functions. The primitive recursive functions are closely related to mathematicalfinitism, and are used in several contexts in mathematical logic where a particularly constructive system is desired.Primitive recursive arithmetic(PRA), a formal axiom system for the natural numbers and the primitive recursive functions on them, is often used for this purpose. PRA is much weaker thanPeano arithmetic, which is not a finitistic system. Nevertheless, many results innumber theoryand inproof theorycan be proved in PRA. 
For example,Gödel's incompleteness theoremcan be formalized into PRA, giving the following theorem: Similarly, many of the syntactic results in proof theory can be proved in PRA, which implies that there are primitive recursive functions that carry out the corresponding syntactic transformations of proofs. In proof theory andset theory, there is an interest in finitisticconsistency proofs, that is, consistency proofs that themselves are finitistically acceptable. Such a proof establishes that the consistency of a theoryTimplies the consistency of a theorySby producing a primitive recursive function that can transform any proof of an inconsistency fromSinto a proof of an inconsistency fromT. One sufficient condition for a consistency proof to be finitistic is the ability to formalize it in PRA. For example, many consistency results in set theory that are obtained byforcingcan be recast as syntactic proofs that can be formalized in PRA. Recursive definitionshad been used more or less formally in mathematics before, but the construction of primitive recursion is traced back toRichard Dedekind's theorem 126 of hisWas sind und was sollen die Zahlen?(1888). This work was the first to give a proof that a certain recursive construction defines a unique function.[11][12][13] Primitive recursive arithmeticwas first proposed byThoralf Skolem[14]in 1923. The current terminology was coined byRózsa Péter(1934) afterAckermannhad proved in 1928 that the function which today is named after him was not primitive recursive, an event which prompted the need to rename what until then were simply called recursive functions.[12][13]
https://en.wikipedia.org/wiki/Primitive_recursive_function
The following is a list ofweb serviceprotocols.
https://en.wikipedia.org/wiki/List_of_web_service_protocols
Indecision theory, theodds algorithm(orBruss algorithm) is a mathematical method for computing optimal strategies for a class of problems that belong to the domain ofoptimal stoppingproblems. Their solution follows from theodds strategy, and the importance of the odds strategy lies in its optimality, as explained below. The odds algorithm applies to a class of problems calledlast-successproblems. Formally, the objective in these problems is to maximize the probability of identifying in a sequence of sequentially observed independent events the last event satisfying a specific criterion (a "specific event"). This identification must be done at the time of observation. No revisiting of preceding observations is permitted. Usually, a specific event is defined by the decision maker as an event that is of true interest in the view of "stopping" to take a well-defined action. Such problems are encountered in several situations. Two different situations exemplify the interest in maximizing the probability to stop on a last specific event. Consider a sequence ofn{\displaystyle n}independent events. Associate with this sequence another sequence of independent eventsI1,I2,…,In{\displaystyle I_{1},\,I_{2},\,\dots ,\,I_{n}}with values 1 or 0. HereIk=1{\displaystyle \,I_{k}=1}, called a success, stands for the event that the kth observation is interesting (as defined by the decision maker), andIk=0{\displaystyle \,I_{k}=0}for non-interesting. These random variablesI1,I2,…,In{\displaystyle I_{1},\,I_{2},\,\dots ,\,I_{n}}are observed sequentially and the goal is to correctly select the last success when it is observed. Letpk=P(Ik=1){\displaystyle \,p_{k}=P(\,I_{k}\,=1)}be the probability that the kth event is interesting. Further letqk=1−pk{\displaystyle \,q_{k}=\,1-p_{k}}andrk=pk/qk{\displaystyle \,r_{k}=p_{k}/q_{k}}. Note thatrk{\displaystyle \,r_{k}}represents theoddsof the kth event turning out to be interesting, explaining the name of the odds algorithm. The odds algorithm sums up the odds in reverse order until this sum reaches or exceeds the value 1 for the first time. If this happens at indexs, it savessand the corresponding sum If the sum of the odds does not reach 1, it setss= 1. At the same time it computes The output is The odds strategy is the rule to observe the events one after the other and to stop on the first interesting event from indexsonwards (if any), wheresis the stopping threshold of output a. The importance of the odds strategy, and hence of the odds algorithm, lies in the following odds theorem. The odds theorem states that The odds algorithm computes the optimalstrategyand the optimalwin probabilityat the same time. Also, the number of operations of the odds algorithm is (sub)linear in n. Hence no quicker algorithm can possibly exist for all sequences, so that the odds algorithm is, at the same time, optimal as an algorithm. Bruss 2000devised the odds algorithm, and coined its name. It is also known as Bruss algorithm (strategy). Free implementations can be found on the web. Applications reach from medical questions inclinical trialsover sales problems,secretary problems,portfolioselection, (one way) search strategies, trajectory problems and theparking problemto problems in online maintenance and others. There exists, in the same spirit, an Odds Theorem for continuous-time arrival processes withindependent incrementssuch as thePoisson process(Bruss 2000). 
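The odds algorithm itself is short enough to state as code. The following Python sketch computes the threshold s and the win probability of the odds strategy from a list of success probabilities p_k; the function name and the secretary-problem example at the end (one of the applications mentioned above, in its last-success formulation) are illustrative choices.

```python
def odds_algorithm(p):
    """Bruss' odds algorithm.  Given the success probabilities p[0..n-1] (the p_k above),
    return the stopping threshold s (1-indexed) and the win probability of the odds
    strategy: stop on the first success observed at index s or later.
    Assumes p_k < 1 for every index that actually gets summed."""
    n = len(p)
    s, running_sum = 1, 0.0
    for k in range(n, 0, -1):                    # sum the odds r_k = p_k / q_k in reverse order
        running_sum += p[k - 1] / (1.0 - p[k - 1])
        if running_sum >= 1.0:                   # first index where the reverse sum reaches 1
            s = k
            break
    R = sum(p[k - 1] / (1.0 - p[k - 1]) for k in range(s, n + 1))   # R_s: odds summed from s to n
    Q = 1.0
    for k in range(s, n + 1):                    # Q_s: product of q_k from s to n
        Q *= 1.0 - p[k - 1]
    return s, Q * R                              # win probability of the odds strategy

# Last-success form of the classical secretary problem with n = 10 candidates:
# the k-th candidate is a record ("interesting") with probability p_k = 1/k.
p = [1.0 / k for k in range(1, 11)]
s, win = odds_algorithm(p)
print(s, round(win, 4))    # 4 0.3987: observe 3 candidates, then stop at the next record
```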
In some cases, the odds are not necessarily known in advance (as in Example 2 above) so that the application of the odds algorithm is not directly possible. In this case each step can usesequential estimatesof the odds. This is meaningful, if the number of unknown parameters is not large compared with the number n of observations. The question of optimality is then more complicated, however, and requires additional studies. Generalizations of the odds algorithm allow for different rewards for failing to stop and wrong stops as well as replacing independence assumptions by weaker ones (Ferguson 2008). Bruss & Paindaveine 2000discussed a problem of selecting the lastk{\displaystyle k}successes. Tamaki 2010proved a multiplicative odds theorem which deals with a problem of stopping at any of the lastℓ{\displaystyle \ell }successes. A tight lower bound of win probability is obtained byMatsui & Ano 2014. Matsui & Ano 2017discussed a problem of selectingk{\displaystyle k}out of the lastℓ{\displaystyle \ell }successes and obtained a tight lower bound of win probability. Whenℓ=k=1,{\displaystyle \ell =k=1,}the problem is equivalent to Bruss' odds problem. Ifℓ=k≥1,{\displaystyle \ell =k\geq 1,}the problem is equivalent to that inBruss & Paindaveine 2000. A problem discussed byTamaki 2010is obtained by settingℓ≥k=1.{\displaystyle \ell \geq k=1.} A player is allowedr{\displaystyle r}choices, and he wins if any choice is the last success. For classical secretary problem,Gilbert & Mosteller 1966discussed the casesr=2,3,4{\displaystyle r=2,3,4}. The odds problem withr=2,3{\displaystyle r=2,3}is discussed byAno, Kakinuma & Miyoshi 2010. For further cases of odds problem, seeMatsui & Ano 2016. An optimal strategy for this problem belongs to the class of strategies defined by a set of threshold numbers(a1,a2,...,ar){\displaystyle (a_{1},a_{2},...,a_{r})}, wherea1>a2>⋯>ar{\displaystyle a_{1}>a_{2}>\cdots >a_{r}}. Specifically, imagine that you haver{\displaystyle r}letters of acceptance labelled from1{\displaystyle 1}tor{\displaystyle r}. You would haver{\displaystyle r}application officers, each holding one letter. You keep interviewing the candidates and rank them on a chart that every application officer can see. Now officeri{\displaystyle i}would send their letter of acceptance to the first candidate that is better than all candidates1{\displaystyle 1}toai{\displaystyle a_{i}}. (Unsent letters of acceptance are by default given to the last applicants, the same as in the standard secretary problem.) Whenr=2{\displaystyle r=2},Ano, Kakinuma & Miyoshi 2010showed that the tight lower bound of win probability is equal toe−1+e−32.{\displaystyle e^{-1}+e^{-{\frac {3}{2}}}.}For general positive integerr{\displaystyle r},Matsui & Ano 2016proved that the tight lower bound of win probability is the win probability of thesecretary problem variant where one must pick the top-k candidates using just k attempts. Whenr=3,4,5{\displaystyle r=3,4,5}, tight lower bounds of win probabilities are equal toe−1+e−32+e−4724{\displaystyle e^{-1}+e^{-{\frac {3}{2}}}+e^{-{\frac {47}{24}}}},e−1+e−32+e−4724+e−27611152{\displaystyle e^{-1}+e^{-{\frac {3}{2}}}+e^{-{\frac {47}{24}}}+e^{-{\frac {2761}{1152}}}}ande−1+e−32+e−4724+e−27611152+e−41626371474560,{\displaystyle e^{-1}+e^{-{\frac {3}{2}}}+e^{-{\frac {47}{24}}}+e^{-{\frac {2761}{1152}}}+e^{-{\frac {4162637}{1474560}}},}respectively. For further numerical cases forr=6,...,10{\displaystyle r=6,...,10}, and an algorithm for general cases, seeMatsui & Ano 2016.
https://en.wikipedia.org/wiki/Odds_algorithm
ARM Instruction Set Simulator, also known as ARMulator, is one of the software development tools provided by the development systems business unit of ARM Limited to all users of ARM-based chips. It owes its heritage to the early development of the instruction set by Sophie Wilson. Part of this heritage is still visible in the provision of a Tube BBC Micro model in ARMulator. ARMulator is written in C and provides more than just an instruction set simulator; it provides a virtual platform for system emulation. It comes ready to emulate an ARM processor and certain ARM coprocessors. If the processor is part of an embedded system, licensees may extend ARMulator to add their own implementations of the additional hardware to the ARMulator model. ARMulator provides a number of services to help with time-based behaviour and event scheduling, and it ships with examples of memory-mapped and coprocessor expansions. In this way, licensees can use ARMulator to emulate their entire embedded system. A key limitation of ARMulator is that it can only simulate a single ARM CPU at a time, although almost all ARM cores up to ARM11 are available. Performance of ARMulator is good for the technology employed: roughly 1000 host (PC) instructions are executed per simulated ARM instruction, which meant emulated speeds of about 1 MHz were typical on PCs of the mid to late 1990s. Accuracy is also good, although it is classed as cycle-count accurate rather than cycle accurate, because the ARM pipeline is not fully modelled (although register interlocks are). Resolution is to an instruction; as a consequence, when single-stepping, the register interlocks are ignored and different cycle counts are returned than if the program had simply run, which was unavoidable. Testing ARMulator was a time-consuming challenge, with the full ARM architecture validation suites being employed; at over 1 million lines of C code it was a fairly hefty product. ARMulator allows runtime debugging using either armsd (ARM Symbolic Debugger) or either of the graphical debuggers that shipped in the SDT and later ADS products. ARMulator suffered from being an invisible tool with a text-file configuration (armul.conf) that many found complex to configure. ARMulator II formed the basis for the high-accuracy, cycle-callable co-verification models of ARM processors; these CoVs models (see Cycle Accurate Simulator) were the basis of many co-verification systems for ARM processors. ARMulator was available on a very broad range of platforms through its life, including Mac, RISC OS platforms, DEC Alpha, HP-UX, Solaris, SunOS, Windows, and Linux. In the mid-1990s there was reluctance to support Windows platforms; pre-Windows 95 it was a relatively challenging platform. Through the late 1990s and early 2000s support was removed for all but Solaris, Windows and Linux, although the code base remained littered with pragmas such as #ifdef RISCOS. ARMulator II shipped in early ARM toolkits as well as the later SDT 2.5, SDT 2.5.1, ADS 1.0, ADS 1.1, ADS 1.2, and RVCT 1.0, and also separately as RVISS. Special models were produced during the development of CPUs, notably the ARM9E, ARM10 and ARM11; these models helped with architectural decisions such as Thumb-2 and TrustZone. ARMulator has been gradually phased out and has been replaced by just-in-time compilation-based high-performance CPU and system models (see FastSim link below). ARMulator I was made open source and is the basis for the GNU version of ARMulator.
Key differences in the GNU version are in the memory interface and services; the instruction decode is also implemented differently. The GNU ARMulator is available as part of the GDB debugger in the ARM GNU Tools. Mentor Graphics' Seamless is the market-leading co-verification system of this kind, supporting many ARM cores as well as many other CPUs. Key contributors to ARMulator II were Mike Williams, Louise Jameson, Charles Lavender, Donald Sinclair, Chris Lamb and Rebecca Bryan (who worked on ARMulator first as an engineer and later as product manager). Significant input was also made by Allan Skillman, who was working on ARM co-verification models at the time. A key contributor to ARMulator I was Dave Jaggar.
https://en.wikipedia.org/wiki/ARMulator
Reed's law is the assertion of David P. Reed that the utility of large networks, particularly social networks, can scale exponentially with the size of the network.[1] The reason for this is that the number of possible sub-groups of network participants is 2^N − N − 1, where N is the number of participants. This grows much more rapidly than either the number of participants, N, or the number of possible pairwise connections, N(N − 1)/2 (as in Metcalfe's law), so that even if the utility of groups available to be joined is very small on a per-group basis, eventually the network effect of potential group membership can dominate the overall economics of the system. Given a set A of N people, it has 2^N possible subsets. This is not difficult to see, since we can form each possible subset by simply choosing, for each element of A, one of two possibilities: whether or not to include that element. However, this count includes the (one) empty set and the N singletons, which are not properly subgroups. So 2^N − N − 1 subsets remain, which is still exponential, like 2^N. Reed set out the argument in "The Law of the Pack" (Harvard Business Review, February 2001, pp. 23–24). Reed's law is often mentioned when explaining the competitive dynamics of internet platforms. Because the law states that a network becomes more valuable when people can easily form subgroups to collaborate, and this value increases exponentially with the number of connections, a business platform that reaches a sufficient number of members can generate network effects that dominate the overall economics of the system.[2] Other analysts of network value functions, including Andrew Odlyzko, have argued that both Reed's law and Metcalfe's law[3] overstate network value because they fail to account for the restrictive impact of human cognitive limits on network formation. According to this argument, the research around Dunbar's number implies a limit on the number of inbound and outbound connections a human in a group-forming network can manage, so that the actual maximum-value structure is much sparser than the set of subsets measured by Reed's law or the complete graph measured by Metcalfe's law.
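The growth comparison can be made concrete with a few lines of code. The following Python snippet, included only as an illustration, tabulates for a few network sizes N the linear count of participants, the number of possible pairwise connections N(N − 1)/2 underlying Metcalfe's law, and the 2^N − N − 1 non-trivial subgroups underlying Reed's law.

```python
print(f"{'N':>3} {'participants':>12} {'pairs':>10} {'subgroups':>14}")
for N in (5, 10, 20, 30):
    pairs = N * (N - 1) // 2          # Metcalfe's law scaling
    subgroups = 2**N - N - 1          # Reed's law scaling
    print(f"{N:>3} {N:>12} {pairs:>10} {subgroups:>14}")
# For N = 30 there are already 1,073,741,793 possible subgroups versus 435 pairs.
```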
https://en.wikipedia.org/wiki/Reed%27s_law
ThePact of Forgetting(Spanish:Pacto del Olvido) is the political decision by both leftist and rightist parties of Spain to avoid confronting directly the legacy ofFrancoismafter the death ofFrancisco Francoin 1975.[1]The Pact of Forgetting was an attempt to move on from theCivil Warand subsequent repression and to concentrate on the future of Spain.[2]In making a smooth transition from autocracy and totalitarianism to democracy, the Pact ensured that there were no prosecutions for persons responsible for human rights violations or similar crimes committed during the Francoist period. On the other hand, Francoist public memorials, such as the mausoleum of theValley of the Fallen, fell into disuse for official occasions.[3]Also, the celebration of "Day of Victory" during the Franco era was changed to "Armed Forces Day" so respect was paid to bothNationalistandRepublicanparties of the Civil War. The pact underpinned thetransition to democracyof the 1970s[4]and ensured that difficult questions about the recent past were suppressed for fear of endangering 'national reconciliation' and the restoration of liberal-democratic freedoms. Responsibility for the Spanish Civil War, and for the repression that followed, was not to be placed upon any particular social or political group. "In practice, this presupposed suppressing painful memories derived from the post civil war division of the population into 'victors' and 'vanquished'".[5]While many historians accept that the pact served a purpose at the time of transition,[6]there is more controversy as to whether it should still be adhered to. Paul Preston takes the view that Franco had time to impose his own version of history, which still prevents contemporary Spain from "looking upon its recent violent past in an open and honest way".[7]In 2006, two-thirds of Spaniards favored a "fresh investigation into the war".[8] "It is estimated that 400,000 people spent time in prisons, camps, or forced labor battalions".[9]Some historians believe that the repression committed by the Francoist State was most severe and prevalent in the immediate years after theSpanish Civil Warand through the 1940s. During this time of the repression, there was an escalation of torture, illegal detention, and execution. This style of repression remained frequent until the end of theSpanish State. Especially during 1936–1939, Nationalist Forces seized control of cities and towns in the Franco-led military coup and would hunt down any protesters or those who were labeled as a threat to the government and believed to sympathize with the Republican cause.[10]"Waves of these individuals were condemned on mere hearsay without trial, loaded onto trucks, taken to deserted areas outside city boundaries, summarily shot, and buried in mass, shallow graves that began dotting the Spanish countryside in the wake of the advancing Nationalist."[11] Advances in DNA technology gave scope for the identification of the remains of Republicans executed by Franco supporters. The year 2000 saw the foundation of theAssociation for the Recovery of Historical Memorywhich grew out of the quest by a sociologist,Emilio Silva-Barrera, to locate and identify the remains of his grandfather, who was shot by Franco's forces in 1936. Such projects have been the subject of political debate in Spain, and are referenced for example in the 2021 filmParallel Mothers. There have been other notable references to the Civil War in the arts since the year 2000 (for example,Javier Cercas' 2001 novelSoldiers of Salamis). 
However, the subject of the Civil War had not been "off limits" in the arts in previous decades; for example, Francoist repression is referenced in the 1973 filmSpirit of the Beehive,[citation needed]and arguably[by whom?]the pact is mainly a political construct. The clearest and most explicit expression of the Pact is theSpanish 1977 Amnesty Law.[12] The Pact was challenged by the socialist government elected in 2004, which under prime ministerJose Luis Rodriguez Zapateropassed theHistorical Memory Law. Among other measures, the Historical Memory Law rejected the legitimacy of laws passed and trials conducted by the Francoist regime. The Law repealed some Francoist laws and ordered the removal of remainingsymbols of Francoismfrom public buildings.[8] The Historical Memory Law has been criticised by some on the left (for not going far enough) and also by some on the right (for example, as a form of "vengeance").[13]After thePartido Populartook power in 2011 it did not repeal the Historical Memory Law, but it closed the government office dedicated to the exhumation of victims of Francoist repression.[14]UnderMariano Rajoy, the government was not willing to spend public money on exhumations in Spain,[15]although the Partido Popular supported the repatriation of the remains of Spanish soldiers who fought in theBlue Divisionfor Hitler. In 2010 there was a judicial controversy pertaining to the 1977 Spanish Amnesty Law. Spanish judgeBaltasar Garzónchallenged the Pact of Forgetting by saying that those who committedcrimes against humanityduring theSpanish Stateare not subject to the amnesty law or statutes of limitation. Relatives of those who were executed or went missing during the Franco regime demanded justice for their loved ones. Some of those who were targeted and buried in mass graves during the Franco regime were teachers, farmers, shop owners, women who did not marry in church and those on the losing side of war.[16]However, the Spanish Supreme Court challenged the investigations by Garzón. They investigated the judge for alleged abuse of power, knowingly violating the amnesty law, following a complaint from Miguel Bernard, the secretary general of a far-right group in Spain called "Manos Limpias". Bernard had criticized Garzón by saying:[17] [Garzón] cannot prosecute Francoism. It's already history, and only historians can judge that period. He uses justice for his own ego. He thought that, by prosecuting Francoism, he could become the head of the International Criminal Court and even win the Nobel Peace Prize. Although Garzón was eventually cleared of abuse of power in this instance, the Spanish judiciary upheld the Amnesty Law, discontinuing his investigations into Francoist crimes.[7] In 2022 theDemocratic Memory Lawenacted by the government ofPedro Sánchezfurther dealt with the legacy of Francoism and included measures such as to make the government responsible for exhuming and identifying the bodies of those killed by the fascist regime and buried in unmarked graves, to create an official register of victims and to remove a number of remaining Francoist symbols from the country. TheUnited Nationshas repeatedly urged Spain to repeal the amnesty law, for example in 2012,[18]and most recently in 2013.[19]This is on the basis that under international law amnesties do not apply to crimes against humanity. 
According to theInternational Covenant on Civil and Political Rights, Article 7, "no one shall be subjected to torture or to cruel,inhuman or degrading treatmentor punishment".[20]Furthermore, Judge Garzón had drawn attention to Article 15, which does not admit political exceptions to punishing individuals for criminal acts. It has also been argued that crimes during the Franco era, or at least those of the Civil War period, were not yet illegal. This is because international law regarding crimes of humanity developed in the aftermath of the Second World War and for crimes prior to that period the principle ofnullum crimen sine lege, or "no crime without a law", could be said to apply.[20] In 2013, an Argentinian judge was investigating Franco-era crimes under the international legal principle ofuniversal justice.[19][21] In Poland, which underwent a laterdemocratic transition, the Spanish agreement not to prosecute politically-motivated wrongdoing juridically and not to use the past in daily politics was seen as the example to follow.[22]In the 1990s theprogressivemedia hailed the Spanish model, which reportedly refrained from revanchism and from the vicious circle of "settling accounts".[23]The issue was highly related to the debate on "decommunization" in general and on "lustration" in particular; the latter was about measures intended against individuals involved in the pre-1989 regime. Liberal and left-wing media firmly opposed any such plan, and they referred the Spanish pattern as the civilized way of moving from one political system to another.[24]In a debate about transition from communism, held by two opinion leadersVaclav HavelandAdam Michnik, the Spanish model was highly recommended.[25]Later, the policies of prime minister Zapatero were viewed as dangerous "playing with fire",[26]and pundits ridiculed him as the one who was "rattling with skeletons pulled from cupboards" and "winning the civil war lost years ago"; they compared him toJarosław Kaczyński[27]and leaders of allegedly sectarian, fanatically anti-communist, nationalistic, Catholic groupings.[28]However, during the 2010s the left-wing media were gradually abandoning their early criticism of prime minister Zapatero;[29]they were rather agonizing about Rajoy and his strategy to park the "historical memory" politics in obscurity.[30]With the threat of "lustration" now gone, progressist authors have effectively made a U-turn; currently they are rather skeptical about the alleged "pact of forgetting"[31]and advocate the need to make further legislative steps advanced by theSánchezgovernment on the path towards "democratic memory".[32]The Polish right, which in the 1990s was rather muted about the solution adopted in Spain, since then remains consistently highly critical about the "historical memory" politics of bothPSOEandPPgovernments.[33]
https://en.wikipedia.org/wiki/Pact_of_forgetting
In common usage,randomnessis the apparent or actual lack of definitepatternorpredictabilityin information.[1][2]A random sequence of events,symbolsor steps often has noorderand does not follow an intelligible pattern or combination. Individual random events are, by definition, unpredictable, but if there is a knownprobability distribution, the frequency of different outcomes over repeated events (or "trials") is predictable.[note 1]For example, when throwing twodice, the outcome of any particular roll is unpredictable, but a sum of 7 will tend to occur twice as often as 4. In this view, randomness is not haphazardness; it is a measure of uncertainty of an outcome. Randomness applies to concepts of chance,probability, andinformation entropy. The fields of mathematics, probability, and statistics use formal definitions of randomness, typically assuming that there is some 'objective' probability distribution. In statistics, arandom variableis an assignment of a numerical value to each possible outcome of anevent space. This association facilitates the identification and the calculation of probabilities of the events. Random variables can appear inrandom sequences. Arandom processis a sequence of random variables whose outcomes do not follow adeterministicpattern, but follow an evolution described byprobability distributions. These and other constructs are extremely useful inprobability theoryand the variousapplications of randomness. Randomness is most often used instatisticsto signify well-defined statistical properties.Monte Carlo methods, which rely on random input (such as fromrandom number generatorsorpseudorandom number generators), are important techniques in science, particularly in the field ofcomputational science.[3]By analogy,quasi-Monte Carlo methodsusequasi-random number generators. Random selection, when narrowly associated with asimple random sample, is a method of selecting items (often called units) from a population where the probability of choosing a specific item is the proportion of those items in the population. For example, with a bowl containing just 10 red marbles and 90 blue marbles, a random selection mechanism would choose a red marble with probability 1/10. A random selection mechanism that selected 10 marbles from this bowl would not necessarily result in 1 red and 9 blue. In situations where a population consists of items that are distinguishable, a random selection mechanism requires equal probabilities for any item to be chosen. That is, if the selection process is such that each member of a population, say research subjects, has the same probability of being chosen, then we can say the selection process is random.[2] According toRamsey theory, pure randomness (in the sense of there being no discernible pattern) is impossible, especially for large structures. MathematicianTheodore Motzkinsuggested that "while disorder is more probable in general, complete disorder is impossible".[4]Misunderstanding this can lead to numerousconspiracy theories.[5]Cristian S. Caludestated that "given the impossibility of true randomness, the effort is directed towards studying degrees of randomness".[6]It can be proven that there is infinite hierarchy (in terms of quality or strength) of forms of randomness.[6] In ancient history, the concepts of chance and randomness were intertwined with that of fate. Many ancient peoples threwdiceto determine fate, and this later evolved into games of chance. 
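As a quick numerical check of the two-dice claim above, here is a minimal Python sketch (assuming two fair six-sided dice and an arbitrary trial count); the estimated frequency of a sum of 7 should come out at roughly twice that of 4.

```python
import random
from collections import Counter

def roll_two_dice(trials=1_000_000, seed=42):
    """Simulate sums of two fair six-sided dice and count each outcome."""
    rng = random.Random(seed)
    return Counter(rng.randint(1, 6) + rng.randint(1, 6) for _ in range(trials))

counts = roll_two_dice()
total = sum(counts.values())
print("P(sum = 7) ~", counts[7] / total)   # close to 6/36
print("P(sum = 4) ~", counts[4] / total)   # close to 3/36
print("ratio      ~", counts[7] / counts[4])  # close to 2
```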
Most ancient cultures used various methods ofdivinationto attempt to circumvent randomness and fate.[7][8]Beyondreligionandgames of chance, randomness has been attested forsortitionsince at least ancientAthenian democracyin the form of akleroterion.[9] The formalization of odds and chance was perhaps earliest done by the Chinese of 3,000 years ago. The Greek philosophers discussed randomness at length, but only in non-quantitative forms. It was only in the 16th century that Italian mathematicians began to formalize the odds associated with various games of chance. The invention ofcalculushad a positive impact on the formal study of randomness. In the 1888 edition of his bookThe Logic of Chance,John Vennwrote a chapter onThe conception of randomnessthat included his view of the randomness of the digits ofpi(π), by using them to construct arandom walkin two dimensions.[10] The early part of the 20th century saw a rapid growth in the formal analysis of randomness, as various approaches to the mathematical foundations of probability were introduced. In the mid-to-late-20th century, ideas ofalgorithmic information theoryintroduced new dimensions to the field via the concept ofalgorithmic randomness. Although randomness had often been viewed as an obstacle and a nuisance for many centuries, in the 20th century computer scientists began to realize that thedeliberateintroduction of randomness into computations can be an effective tool for designing better algorithms. In some cases, suchrandomized algorithmseven outperform the best deterministic methods.[11] Many scientific fields are concerned with randomness: In the 19th century, scientists used the idea of random motions of molecules in the development ofstatistical mechanicsto explain phenomena inthermodynamicsandthe properties of gases. According to several standard interpretations ofquantum mechanics, microscopic phenomena are objectively random.[12]That is, in an experiment that controls all causally relevant parameters, some aspects of the outcome still vary randomly. For example, if a single unstableatomis placed in a controlled environment, it cannot be predicted how long it will take for the atom to decay—only the probability of decay in a given time.[13]Thus, quantum mechanics does not specify the outcome of individual experiments, but only the probabilities.Hidden variable theoriesreject the view that nature contains irreducible randomness: such theories posit that in the processes that appear random, properties with a certain statistical distribution are at work behind the scenes, determining the outcome in each case. Themodern evolutionary synthesisascribes the observed diversity of life to random geneticmutationsfollowed bynatural selection. The latter retains some random mutations in thegene pooldue to the systematically improved chance for survival and reproduction that those mutated genes confer on individuals who possess them. The location of the mutation is not entirely random however as e.g. biologically important regions may be more protected from mutations.[14][15][16] Several authors also claim that evolution (and sometimes development) requires a specific form of randomness, namely the introduction of qualitatively new behaviors. 
Instead of the choice of one possibility among several pre-given ones, this randomness corresponds to the formation of new possibilities.[17][18] The characteristics of an organism arise to some extent deterministically (e.g., under the influence of genes and the environment), and to some extent randomly. For example, thedensityoffrecklesthat appear on a person's skin is controlled by genes and exposure to light; whereas the exact location ofindividualfreckles seems random.[19] As far as behavior is concerned, randomness is important if an animal is to behave in a way that is unpredictable to others. For instance, insects in flight tend to move about with random changes in direction, making it difficult for pursuing predators to predict their trajectories. The mathematical theory ofprobabilityarose from attempts to formulate mathematical descriptions of chance events, originally in the context ofgambling, but later in connection with physics.Statisticsis used to infer an underlyingprobability distributionof a collection of empirical observations. For the purposes ofsimulation, it is necessary to have a large supply ofrandom numbers—or means to generate them on demand. Algorithmic information theorystudies, among other topics, what constitutes arandom sequence. The central idea is that a string ofbitsis random if and only if it is shorter than any computer program that can produce that string (Kolmogorov randomness), which means that random strings are those that cannot becompressed. Pioneers of this field includeAndrey Kolmogorovand his studentPer Martin-Löf,Ray Solomonoff, andGregory Chaitin. For the notion of infinite sequence, mathematicians generally acceptPer Martin-Löf's semi-eponymous definition: An infinite sequence is random if and only if it withstands all recursively enumerable null sets.[20]The other notions of random sequences include, among others, recursive randomness and Schnorr randomness, which are based on recursively computable martingales. It was shown byYongge Wangthat these randomness notions are generally different.[21] Randomness occurs in numbers such aslog(2)andpi. The decimal digits of pi constitute an infinite sequence and "never repeat in a cyclical fashion." Numbers like pi are also considered likely to benormal: Pi certainly seems to behave this way. In the first six billion decimal places of pi, each of the digits from 0 through 9 shows up about six hundred million times. Yet such results, conceivably accidental, do not prove normality even in base 10, much less normality in other number bases.[22] In statistics, randomness is commonly used to createsimple random samples. This allows surveys of completely random groups of people to provide realistic data that is reflective of the population. Common methods of doing this include drawing names out of a hat or using a random digit chart (a large table of random digits). In information science, irrelevant or meaningless data is considered noise. Noise consists of numerous transient disturbances, with a statistically randomized time distribution. Incommunication theory, randomness in a signal is called "noise", and is opposed to that component of its variation that is causally attributable to the source, the signal. 
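The compression-based view of algorithmic randomness mentioned above can be illustrated, very loosely, with an ordinary general-purpose compressor. This is only a heuristic sketch: zlib is a crude stand-in for Kolmogorov complexity and not a formal randomness test, and the inputs below are arbitrary examples.

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size divided by original size; values near (or above) 1
    indicate data this particular compressor cannot exploit."""
    return len(zlib.compress(data, 9)) / len(data)

patterned = b"0123456789" * 10_000     # highly regular string
random_ish = os.urandom(100_000)       # bytes from the OS entropy source

print("patterned :", round(compression_ratio(patterned), 4))   # far below 1
print("random-ish:", round(compression_ratio(random_ish), 4))  # about 1, or slightly above
```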
In terms of the development of random networks, for communication randomness rests on the two simple assumptions ofPaul ErdősandAlfréd Rényi, who said that there were a fixed number of nodes and this number remained fixed for the life of the network, and that all nodes were equal and linked randomly to each other.[clarification needed][23] Therandom walk hypothesisconsiders that asset prices in an organizedmarketevolve at random, in the sense that the expected value of their change is zero but the actual value may turn out to be positive or negative. More generally, asset prices are influenced by a variety of unpredictable events in the general economic environment. Random selection can be an official method to resolvetiedelections in some jurisdictions.[24]Its use in politics originates long ago. Many offices inancient Athenswere chosen by lot instead of modern voting. Randomness can be seen as conflicting with thedeterministicideas of some religions, such as those where the universe is created by an omniscient deity who is aware of all past and future events. If the universe is regarded to have a purpose, then randomness can be seen as impossible. This is one of the rationales for religious opposition toevolution, which states thatnon-randomselection is applied to the results of random genetic variation. HinduandBuddhistphilosophies state that any event is the result of previous events, as is reflected in the concept ofkarma. As such, this conception is at odds with the idea of randomness, and any reconciliation between both of them would require an explanation.[25] In some religious contexts, procedures that are commonly perceived as randomizers are used for divination.Cleromancyuses the casting of bones or dice to reveal what is seen as the will of the gods. In most of its mathematical, political, social and religious uses, randomness is used for its innate "fairness" and lack of bias. Politics:Athenian democracywas based on the concept ofisonomia(equality of political rights), and used complex allotment machines to ensure that the positions on the ruling committees that ran Athens were fairly allocated.Allotmentis now restricted to selecting jurors in Anglo-Saxon legal systems, and in situations where "fairness" is approximated byrandomization, such as selectingjurorsand militarydraftlotteries. Games: Random numbers were first investigated in the context ofgambling, and many randomizing devices, such asdice,shuffling playing cards, androulettewheels, were first developed for use in gambling. The ability to produce random numbers fairly is vital to electronic gambling, and, as such, the methods used to create them are usually regulated by governmentGaming Control Boards. Random drawings are also used to determinelotterywinners. In fact, randomness has been used for games of chance throughout history, and to select out individuals for an unwanted task in a fair way (seedrawing straws). Sports: Some sports, includingAmerican football, usecoin tossesto randomly select starting conditions for games orseedtied teams forpostseason play. TheNational Basketball Associationuses a weightedlotteryto order teams in its draft. Mathematics: Random numbers are also employed where their use is mathematically important, such as sampling foropinion pollsand for statistical sampling inquality controlsystems. Computational solutions for some types of problems use random numbers extensively, such as in theMonte Carlo methodand ingenetic algorithms. 
Medicine: Random allocation of a clinical intervention is used to reduce bias in controlled trials (e.g.,randomized controlled trials). Religion: Although not intended to be random, various forms ofdivinationsuch ascleromancysee what appears to be a random event as a means for a divine being to communicate their will (see alsoFree willandDeterminismfor more). It is generally accepted that there exist three mechanisms responsible for (apparently) random behavior in systems: The manyapplications of randomnesshave led to many different methods for generating random data. These methods may vary as to how unpredictable orstatistically randomthey are, and how quickly they can generate random numbers. Before the advent of computationalrandom number generators, generating large amounts of sufficiently random numbers (which is important in statistics) required a lot of work. Results would sometimes be collected and distributed asrandom number tables. There are many practical measures of randomness for a binary sequence. These include measures based on frequency,discrete transforms,complexity, or a mixture of these, such as the tests by Kak, Phillips, Yuen, Hopkins, Beth and Dai, Mund, and Marsaglia and Zaman.[26] Quantum nonlocalityhas been used to certify the presence of genuine or strong form of randomness in a given string of numbers.[27] Popular perceptions of randomness are frequently mistaken, and are often based on fallacious reasoning or intuitions. This argument is, "In a random selection of numbers, since all numbers eventually appear, those that have not come up yet are 'due', and thus more likely to come up soon." This logic is only correct if applied to a system where numbers that come up are removed from the system, such as whenplaying cardsare drawn and not returned to the deck. In this case, once a jack is removed from the deck, the next draw is less likely to be a jack and more likely to be some other card. However, if the jack is returned to the deck, and the deck is thoroughly reshuffled, a jack is as likely to be drawn as any other card. The same applies in any other process where objects are selected independently, and none are removed after each event, such as the roll of a die, a coin toss, or mostlotterynumber selection schemes. Truly random processes such as these do not have memory, which makes it impossible for past outcomes to affect future outcomes. In fact, there is no finite number of trials that can guarantee a success. In a random sequence of numbers, a number may be said to be cursed because it has come up less often in the past, and so it is thought that it will occur less often in the future. A number may be assumed to be blessed because it has occurred more often than others in the past, and so it is thought likely to come up more often in the future. This logic is valid only if the randomisation might be biased, for example if a die is suspected to be loaded then its failure to roll enough sixes would be evidence of that loading. If the die is known to be fair, then previous rolls can give no indication of future events. In nature, events rarely occur with a frequency that is knowna priori, so observing outcomes to determine which events are more probable makes sense. However, it is fallacious to apply this logic to systems designed and known to make all outcomes equally likely, such as shuffled cards, dice, and roulette wheels. In the beginning of a scenario, one might calculate the probability of a certain event. 
However, as soon as one gains more information about the scenario, one may need to re-calculate the probability accordingly. For example, when being told that a woman has two children, one might be interested in knowing if either of them is a girl, and if yes, the probability that the other child is also a girl. Considering the two events independently, one might expect that the probability that the other child is female is ½ (50%), but by building aprobability spaceillustrating all possible outcomes, one would notice that the probability is actually only ⅓ (33%). To be sure, the probability space does illustrate four ways of having these two children: boy-boy, girl-boy, boy-girl, and girl-girl. But once it is known that at least one of the children is female, this rules out the boy-boy scenario, leaving only three ways of having the two children: boy-girl, girl-boy, girl-girl. From this, it can be seen only ⅓ of these scenarios would have the other child also be a girl[28](seeBoy or girl paradoxfor more). In general, by using a probability space, one is less likely to miss out on possible scenarios, or to neglect the importance of new information. This technique can be used to provide insights in other situations such as theMonty Hall problem, a game show scenario in which a car is hidden behind one of three doors, and two goats are hidden asbooby prizesbehind the others. Once the contestant has chosen a door, the host opens one of the remaining doors to reveal a goat, eliminating that door as an option. With only two doors left (one with the car, the other with another goat), the player must decide to either keep their decision, or to switch and select the other door. Intuitively, one might think the player is choosing between two doors with equal probability, and that the opportunity to choose another door makes no difference. However, an analysis of the probability spaces would reveal that the contestant has received new information, and that changing to the other door would increase their chances of winning.[28]
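Both conditional-probability puzzles above are easy to check by brute-force simulation. The sketch below assumes the standard formulations (each child independently a boy or a girl with probability ½; a Monty Hall host who always opens an unchosen door hiding a goat); the trial count is arbitrary.

```python
import random

rng = random.Random(0)
TRIALS = 200_000

# Boy-or-girl paradox: among two-child families with at least one girl,
# how often is the other child also a girl?
both_girls = at_least_one_girl = 0
for _ in range(TRIALS):
    kids = [rng.choice("BG") for _ in range(2)]
    if "G" in kids:
        at_least_one_girl += 1
        if kids == ["G", "G"]:
            both_girls += 1
print("P(both girls | at least one girl) ~", both_girls / at_least_one_girl)  # ~1/3

# Monty Hall: the host always opens an unchosen door that hides a goat.
def monty_hall(switch: bool) -> bool:
    car = rng.randrange(3)
    pick = rng.randrange(3)
    opened = next(d for d in range(3) if d != pick and d != car)
    if switch:
        pick = next(d for d in range(3) if d != pick and d != opened)
    return pick == car

print("stay   wins ~", sum(monty_hall(False) for _ in range(TRIALS)) / TRIALS)  # ~1/3
print("switch wins ~", sum(monty_hall(True) for _ in range(TRIALS)) / TRIALS)   # ~2/3
```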
https://en.wikipedia.org/wiki/Randomness
Thesuffix-onym(fromAncient Greek:ὄνυμα,lit.'name') is abound morpheme, that is attached to the end of aroot word, thus forming a newcompound wordthat designates a particularclassofnames. Inlinguisticterminology, compound words that are formed with suffix -onym are most commonly used as designations for variousonomasticclasses. Most onomastic terms that are formed with suffix -onym areclassical compounds, whose word roots are taken fromclassical languages(Greek and Latin).[1][2] For example, onomastic terms liketoponymandlinguonymare typical classical (or neoclassical) compounds, formed from suffix-onymand classical (Greek and Latin) root words (Ancient Greek:τόπος/ place;Latin:lingua/ language). In some compounds, the-onymmorpheme has been modified by replacing (or dropping) the "o". In the compounds likeananymandmetanym, the correct forms (anonymandmetonym) were pre-occupied by other meanings. Other, late 20th century examples, such ashypernymandcharacternym, are typically redundantneologisms, for which there are more traditional words formed with the full-onym(hyperonymandcharactonym). The English suffix-onymis from theAncient Greeksuffix-ώνυμον(ōnymon), neuter of the suffixώνυμος(ōnymos), having a specified kind of name, from the Greekὄνομα(ónoma),Aeolic Greekὄνυμα (ónyma), "name". The form-ōnymosis that taken byónomawhen it is the end component of abahuvrihicompound, but in English its use is extended totatpuruṣacompounds. The suffix is found in many modern languages with various spellings. Examples are:Dutchsynoniem,GermanSynonym,Portuguesesinónimo,Russianсиноним (sinonim),Polishsynonim,Finnishsynonyymi,Indonesiansinonim,Czechsynonymum. According to a 1988 study[3]of words ending in-onym, there are four discernible classes of-onymwords: (1) historic, classic, or, for want of better terms, naturally occurring or common words; (2) scientific terminology, occurring in particular in linguistics, onomastics, etc.; (3) language games; and (4)nonce words. Older terms are known to gain new, sometimes contradictory, meanings (e.g.,eponymandcryptonym). In many cases, two or more words describe the same phenomenon, but no precedence is discernible (e.g.,necronymandpenthonym). New words are sometimes created, the meaning of which duplicating existing terms. On occasion, new words are formed with little regard to historical principles.
https://en.wikipedia.org/wiki/-onym
This article discusses the methods and results of comparing different electoral systems. There are two broad ways to compare voting systems: Voting methods can be evaluated by measuring their accuracy under random simulated elections aiming to be faithful to the properties of elections in real life. The first such evaluation was conducted by Chamberlin and Cohen in 1978, who measured the frequency with which certain non-Condorcet systems elected Condorcet winners.[1] The Marquis de Condorcet viewed elections as analogous to jury votes where each member expresses an independent judgement on the quality of candidates. Candidates differ in terms of their objective merit, but voters have imperfect information about the relative merits of the candidates. Such jury models are sometimes known as valence models. Condorcet and his contemporary Laplace demonstrated that, in such a model, voting theory could be reduced to probability by finding the expected quality of each candidate.[2] The jury model implies several natural concepts of accuracy for voting systems under different models: However, Condorcet's model is based on the extremely strong assumption of independent errors, i.e. voters will not be systematically biased in favor of one group of candidates or another. This is usually unrealistic: voters tend to communicate with each other, form parties or political ideologies, and engage in other behaviors that can result in correlated errors. Duncan Black proposed a one-dimensional spatial model of voting in 1948, viewing elections as ideologically driven.[4] His ideas were later expanded by Anthony Downs.[5] Voters' opinions are regarded as positions in a space of one or more dimensions; candidates have positions in the same space; and voters choose candidates in order of proximity (measured under Euclidean distance or some other metric). Spatial models imply a different notion of merit for voting systems: the more acceptable the winning candidate may be as a location parameter for the voter distribution, the better the system. A political spectrum is a one-dimensional spatial model. Neutral voting models try to minimize the number of parameters, as an application of the nothing-up-my-sleeve principle. The most common such model is the impartial anonymous culture model (or Dirichlet model). These models assume voters assign each candidate a utility completely at random (from a uniform distribution). Tideman and Plassmann conducted a study which showed that a two-dimensional spatial model gave a reasonable fit to 3-candidate reductions of a large set of electoral rankings. Jury models, neutral models, and one-dimensional spatial models were all inadequate.[6] They looked at Condorcet cycles in voter preferences (an example of which is A being preferred to B by a majority of voters, B to C and C to A) and found that the number of them was consistent with small-sample effects, concluding that "voting cycles will occur very rarely, if at all, in elections with many voters." 
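A minimal sketch of this kind of check, under the impartial culture assumption described above (every ranking of three candidates equally likely, independently for each voter; electorate sizes and trial counts arbitrary): it estimates how often an election has no Condorcet winner, i.e. how often a majority-rule cycle arises under that deliberately unstructured model.

```python
import itertools
import random

def has_condorcet_winner(rankings, n_candidates=3):
    """True if some candidate beats every other in pairwise majority votes."""
    for c in range(n_candidates):
        if all(sum(r.index(c) < r.index(d) for r in rankings) * 2 > len(rankings)
               for d in range(n_candidates) if d != c):
            return True
    return False

rng = random.Random(0)
orders = list(itertools.permutations(range(3)))
for n_voters in (25, 201, 1001):          # odd electorate sizes avoid pairwise ties
    trials = 2000
    cycles = sum(
        not has_condorcet_winner([list(rng.choice(orders)) for _ in range(n_voters)])
        for _ in range(trials))
    print(f"{n_voters:5d} voters: cycle frequency ~ {cycles / trials:.3f}")
# The frequency is small but not negligible under this model; as noted above,
# real-world preference data exhibits far fewer cycles than impartial culture predicts.
```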
The relevance of sample size had been studied previously by Gordon Tullock, who argued graphically that although finite electorates will be prone to cycles, the area in which candidates may give rise to cycling shrinks as the number of voters increases.[7] A utilitarian model views voters as ranking candidates in order of utility. The rightful winner, under this model, is the candidate who maximizes overall social utility. A utilitarian model differs from a spatial model in several important ways: It follows from the last property that no voting system which gives equal influence to all voters is likely to achieve maximum social utility. Extreme cases of conflict between the claims of utilitarianism and democracy are referred to as the 'tyranny of the majority'. See Laslier's, Merlin's, and Nurmi's comments in Laslier's write-up.[8] James Mill seems to have been the first to claim the existence of an a priori connection between democracy and utilitarianism – see the Stanford Encyclopedia article.[9] Suppose that the ith candidate in an election has merit x_i (we may assume that x_i ~ N(0, σ²)[10]), and that voter j's level of approval for candidate i may be written as x_i + ε_ij (we will assume that the ε_ij are i.i.d. N(0, τ²)). We assume that a voter ranks candidates in decreasing order of approval. We may interpret ε_ij as the error in voter j's valuation of candidate i and regard a voting method as having the task of finding the candidate of greatest merit. Each voter will rank the better of two candidates higher than the less good with a determinate probability p (which under the normal model outlined here is equal to 1/2 + (1/π)·tan⁻¹(σ/τ), as can be confirmed from a standard formula for Gaussian integrals over a quadrant[citation needed]). Condorcet's jury theorem shows that so long as p > 1/2, the majority vote of a jury will be a better guide to the relative merits of two candidates than is the opinion of any single member. Peyton Young showed that three further properties apply to votes between arbitrary numbers of candidates, suggesting that Condorcet was aware of the first and third of them.[11] Robert F. Bordley constructed a 'utilitarian' model which is a slight variant of Condorcet's jury model.[12] He viewed the task of a voting method as that of finding the candidate who has the greatest total approval from the electorate, i.e. the highest sum of individual voters' levels of approval. This model makes sense even with σ² = 0, in which case p takes the value 1/2 + (1/π)·tan⁻¹(1/(n − 1)), where n is the number of voters. He performed an evaluation under this model, finding as expected that the Borda count was most accurate. A simulated election can be constructed from a distribution of voters in a suitable space. The illustration shows voters satisfying a bivariate Gaussian distribution centred on O. There are 3 randomly generated candidates, A, B and C. The space is divided into 6 segments by 3 lines, with the voters in each segment having the same candidate preferences. The proportion of voters ordering the candidates in any way is given by the integral of the voter distribution over the associated segment. The proportions corresponding to the 6 possible orderings of candidates determine the results yielded by different voting systems. Those which elect the best candidate, i.e. 
the candidate closest to O (who in this case is A), are considered to have given a correct result, and those which elect someone else have exhibited an error. By looking at results for large numbers of randomly generated candidates the empirical properties of voting systems can be measured. The evaluation protocol outlined here is modelled on the one described by Tideman and Plassmann.[6]Evaluations of this type are commonest for single-winner electoral systems.Ranked votingsystems fit most naturally into the framework, but other types of ballot (such aFPTPandApproval voting) can be accommodated with lesser or greater effort. The evaluation protocol can be varied in a number of ways: One of the main uses of evaluations is to compare the accuracy of voting systems when voters vote sincerely. If an infinite number of voters satisfy a Gaussian distribution, then the rightful winner of an election can be taken to be the candidate closest to the mean/median, and the accuracy of a method can be identified with the proportion of elections in which the rightful winner is elected. Themedian voter theoremguarantees that all Condorcet systems will give 100% accuracy (and the same applies toCoombs' method[14]). Evaluations published in research papers use multidimensional Gaussians, making the calculation numerically difficult.[1][15][16][17]The number of voters is kept finite and the number of candidates is necessarily small. The computation is much more straightforward in a single dimension, which allows an infinite number of voters and an arbitrary numbermof candidates. Results for this simple case are shown in the first table, which is directly comparable with Table 5 (1000 voters, medium dispersion) of the cited paper by Chamberlin and Cohen. The candidates were sampled randomly from the voter distribution and a single Condorcet method (Minimax) was included in the trials for confirmation. The relatively poor performance of theAlternative vote(IRV) is explained by the well known and common source of error illustrated by the diagram, in which the election satisfies a univariate spatial model and the rightful winner B will be eliminated in the first round. A similar problem exists in all dimensions. An alternative measure of accuracy is the average distance of voters from the winner (in which smaller means better). This is unlikely to change the ranking of voting methods, but is preferred by people who interpret distance as disutility. The second table shows the average distance (in standard deviations)minus2π{\displaystyle {\sqrt {\tfrac {2}{\pi }}}}(which is the average distance of a variate from the centre of a standard Gaussian distribution) for 10 candidates under the same model. James Green-Armytage et al. published a study in which they assessed the vulnerability of several voting systems to manipulation by voters.[18]They say little about how they adapted their evaluation for this purpose, mentioning simply that it "requires creative programming". An earlier paper by the first author gives a little more detail.[19] The number of candidates in their simulated elections was limited to 3. This removes the distinction between certain systems; for instanceBlack's methodand theDasgupta-Maskin methodare equivalent on 3 candidates. The conclusions from the study are hard to summarise, but theBorda countperformed badly;Minimaxwas somewhat vulnerable; and IRV was highly resistant. 
The authors showed that limiting any method to elections with no Condorcet winner (choosing the Condorcet winner when there was one) would never increase its susceptibility totactical voting. They reported that the 'Condorcet-Hare' system which uses IRV as a tie-break for elections not resolved by the Condorcet criterion was as resistant to tactical voting as IRV on its own and more accurate. Condorcet-Hare is equivalent toCopeland's methodwith an IRV tie-break in elections with 3 candidates. Some systems, and the Borda count in particular, are vulnerable when the distribution of candidates is displaced relative to the distribution of voters. The attached table shows the accuracy of the Borda count (as a percentage) when an infinite population of voters satisfies a univariate Gaussian distribution andmcandidates are drawn from a similar distribution offset byxstandard distributions. Red colouring indicates figures which are worse than random. Recall that all Condorcet methods give 100% accuracy for this problem. (And notice that the reduction in accuracy asxincreases is not seen when there are only 3 candidates.) Sensitivity to the distribution of candidates can be thought of as a matter either of accuracy or of resistance to manipulation. If one expects that in the course of things candidates will naturally come from the same distribution as voters, then any displacement will be seen as attempted subversion; but if one thinks that factors determining the viability of candidacy (such as financial backing) may be correlated with ideological position, then one will view it more in terms of accuracy. Published evaluations take different views of the candidate distribution. Some simply assume that candidates are drawn from the same distribution as voters.[16][18]Several older papers assume equal means but allow the candidate distribution to be more or less tight than the voter distribution.[20][1]A paper by Tideman and Plassmann approximates the relationship between candidate and voter distributions based on empirical measurements.[15]This is less realistic than it may appear, since it makes no allowance for the candidate distribution to adjust to exploit any weakness in the voting system. A paper by James Green-Armytage looks at the candidate distribution as a separate issue, viewing it as a form of manipulation and measuring the effects of strategic entry and exit. Unsurprisingly he finds the Borda count to be particularly vulnerable.[19] The task of a voting system under a spatial model is to identify the candidate whose position most accurately represents the distribution of voter opinions. This amounts to choosing a location parameter for the distribution from the set of alternatives offered by the candidates. Location parameters may be based on the mean, the median, or the mode; but since ranked preference ballots provide only ordinal information, the median is the only acceptable statistic. This can be seen from the diagram, which illustrates two simulated elections with the same candidates but different voter distributions. In both cases the mid-point between the candidates is the 51st percentile of the voter distribution; hence 51% of voters prefer A and 49% prefer B. If we consider a voting method to be correct if it elects the candidate closest to themedianof the voter population, then since the median is necessarily slightly to the left of the 51% line, a voting method will be considered to be correct if it elects A in each case. 
The mean of the teal distribution is also slightly to the left of the 51% line, but the mean of the orange distribution is slightly to the right. Hence if we consider a voting method to be correct if it elects the candidate closest to themeanof the voter population, then a method will not be able to obtain full marks unless it produces different winners from the same ballots in the two elections. Clearly this will impute spurious errors to voting methods. The same problem will arise for any cardinal measure of location; only the median gives consistent results. The median is not defined for multivariate distributions but the univariate median has a property which generalizes conveniently. The median of a distribution is the position whose average distance from all points within the distribution is smallest. This definition generalizes to thegeometric medianin multiple dimensions. The distance is often defined as a voter'sdisutility function. If we have a set of candidates and a population of voters, then it is not necessary to solve the computationally difficult problem of finding the geometric median of the voters and then identify the candidate closest to it; instead we can identify the candidate whose average distance from the voters is minimized. This is the metric which has been generally deployed since Merrill onwards;[20]see also Green-Armytage and Darlington.[19][16] The candidate closest to the geometric median of the voter distribution may be termed the 'spatial winner'. Data from real elections can be analysed to compare the effects of different systems, either by comparing between countries or by applying alternative electoral systems to the real election data. The electoral outcomes can be compared throughdemocracy indices, measures ofpolitical fragmentation,voter turnout,[21][22]political efficacyand various economic and judicial indicators. The practical criteria to assess real elections include the share ofwasted votes, the complexity ofvote counting,proportionalityof the representation elected based on parties' shares of votes, andbarriers to entryfor new political movements.[23]Additional opportunities for comparison of real elections arise throughelectoral reforms. A Canadian example of such an opportunity is seen in the City of Edmonton (Canada), which went fromfirst-past-the-post votingin1917 Alberta general electionto five-memberplurality block votingin1921 Alberta general election, to five-membersingle transferable votingin1926 Alberta general election, then to FPTP again in1959 Alberta general election. One party swept all the Edmonton seats in 1917, 1921 and 1959. Under STV in 1926, two Conservatives, one Liberal, one Labour and one United Farmers MLA were elected. Traditionally the merits of different electoral systems have been argued by reference to logical criteria. These have the form ofrules of inferencefor electoral decisions, licensing the deduction, for instance, that "ifEandE' are elections such thatR(E,E'), and ifAis the rightful winner ofE, thenAis the rightful winner ofE' ". The absolute criteria state that, if the set of ballots is a certain way, a certain candidate must or must not win. These are criteria that state that, if a certain candidate wins in one circumstance, the same candidate must (or must not) win in a related circumstance. These are criteria which relate to the process of counting votes and determining a winner. These are criteria that relate to a voter's incentive to use certain forms of strategy. 
They could also be considered as relative result criteria; however, unlike the criteria in that section, these criteria are directly relevant to voters; the fact that a method passes these criteria can simplify the process of figuring out one's optimal strategic vote. Ballots are broadly distinguishable into two categories,cardinalandordinal, where cardinal ballots request individual measures of support for each candidate and ordinal ballots request relative measures of support. A few methods do not fall neatly into one category, such as STAR, which asks the voter to give independent ratings for each candidate, but uses both the absolute and relative ratings to determine the winner. Comparing two methods based on ballot type alone is mostly a matter of voter experience preference, unless the ballot type is connected back to one of the other mathematical criterion listed here. Criterion A is "stronger" than B if satisfying A implies satisfying B. For instance, the Condorcet criterion is stronger than the majority criterion, because all majority winners are Condorcet winners. Thus, any voting method that satisfies the Condorcet criterion must satisfy the majority criterion. The following table shows which of the above criteria are met by several single-winner methods. Not every criterion is listed. type The concerns raised above are used bysocial choice theoriststo devise systems that are accurate and resistant to manipulation. However, there are also practical reasons why one system may be more socially acceptable than another, which fall under the fields ofpublic choiceandpolitical science.[8][16]Important practical considerations include: Other considerations includebarriers to entryto thepolitical competition[28]and likelihood ofgridlocked government.[29] Multi-winner electoral systems at their best seek to produce assemblies representative in a broader sense than that of making the same decisions as would be made by single-winner votes. They can also be route to one-party sweeps of a city's seats, if a non-proportional system, such asplurality block votingorticket voting, is used. Evaluating the performance of multi-winner voting methods requires different metrics than are used for single-winner systems. The following have been proposed. The following table shows which of the above criteria are met by several multiple winner methods.
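Stepping back to the simulation-based evaluations described earlier in this article, the following is a minimal one-dimensional version of that protocol (Gaussian voters and candidates, sincere distance-based rankings, all parameter values chosen arbitrarily). Plurality and the Borda count are scored by how often they elect the candidate closest to the median voter, which for sincere voters in one dimension is also the Condorcet winner by the median voter theorem.

```python
import random
import statistics

def run_election(rng, n_voters=1000, n_candidates=5):
    voters = [rng.gauss(0, 1) for _ in range(n_voters)]
    candidates = [rng.gauss(0, 1) for _ in range(n_candidates)]

    # "Rightful" winner under the spatial model: closest to the median voter.
    median = statistics.median(voters)
    best = min(range(n_candidates), key=lambda c: abs(candidates[c] - median))

    plurality = [0] * n_candidates
    borda = [0] * n_candidates
    for v in voters:
        ranking = sorted(range(n_candidates), key=lambda c: abs(candidates[c] - v))
        plurality[ranking[0]] += 1
        for place, c in enumerate(ranking):
            borda[c] += n_candidates - 1 - place
    return (max(range(n_candidates), key=plurality.__getitem__) == best,
            max(range(n_candidates), key=borda.__getitem__) == best)

rng = random.Random(1)
results = [run_election(rng) for _ in range(2000)]
print("plurality accuracy:", sum(p for p, _ in results) / len(results))
print("Borda accuracy:    ", sum(b for _, b in results) / len(results))
```

As in the published evaluations cited above, ranked methods that respect the median voter tend to outperform plurality in this setting; the exact percentages depend on the arbitrary parameters chosen here.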
https://en.wikipedia.org/wiki/Voting_system_criterion
Radical democracyis a type ofdemocracythat advocates the radical extension ofequalityandliberty.[1]Radical democracy is concerned with a radical extension of equality andfreedom, following the idea that democracy is an unfinished, inclusive, continuous and reflexive process.[1] Within radical democracy there are three distinct strands, as articulated by Lincoln Dahlberg.[1]These strands can be labeled as agonistic, deliberative and autonomist. The first and most noted strand of radical democracy is theagonisticperspective, which is associated with the work of Laclau and Mouffe. Radical democracy was articulated byErnesto LaclauandChantal Mouffein their bookHegemony and Socialist Strategy: Towards a Radical Democratic Politics, written in 1985. They argue thatsocial movementswhich attempt to createsocial and political changeneed a strategy which challengesneoliberalandneoconservativeconcepts ofdemocracy.[2]This strategy is to expand theliberaldefinition of democracy, based onfreedomandequality, to includedifference.[2] According to Laclau and Mouffe "Radical democracy" means "the root of democracy".[3]Laclau and Mouffe claim thatliberal democracyanddeliberative democracy, in their attempts to build consensus, oppress differing opinions, races, classes, genders, and worldviews.[2]In the world, in a country, and in a social movement there are many (a plurality of) differences which resist consensus. Radical democracy is not only accepting of difference,dissentand antagonisms, but is dependent on it.[2]Laclau and Mouffe argue based on the assumption that there are oppressivepowerrelations that exist in society and that those oppressive relations should be made visible, re-negotiated and altered.[4]By building democracy around difference and dissent, oppressive power relations existing in societies are able to come to the forefront so that they can be challenged.[2] The second strand,deliberative, is mostly associated with the work ofJürgen Habermas. This strand of radical democracy is opposed to the agonistic perspective of Laclau and Mouffe. Habermas argues that political problems surrounding the organization of life can be resolved bydeliberation.[5]That is, people coming together and deliberating on the best possible solution. This type of radical democracy is in contrast with the agonistic perspective based on consensus and communicative means: there is a reflexive critical process of coming to the best solution.[5]Equality and freedom are at the root of Habermas' deliberative theory. The deliberation is established throughinstitutionsthat can ensure free and equal participation of all.[5]Habermas is aware of the fact that different cultures, world-views and ethics can lead to difficulties in the deliberative process. Despite this fact he argues that the communicative reason can create a bridge between opposing views and interests.[5] The third strand of radical democracy is theautonomiststrand, which is associated with left-communist and post-Marxist ideas. The difference between this type of radical democracy and the two noted above is the focus on "the community".[1]Thecommunityis seen as the pure constituted power instead of the deliberative rational individuals or the agonistic groups as in the first two strands. 
The community resembles a "plural multitude" (of people) instead of theworking classin traditional Marxist theory.[1]This plural multitude is the pure constituted power and reclaims this power by searching and creating mutual understandings within the community.[1]This strand of radical democracy challenges the traditional thinking about equality and freedom in liberal democracies by stating that individual equality can be found in the singularities within the multitude, equality overall is created by an all-inclusive multitude and freedom is created by restoring the multitude in its pure constituted power.[1]This strand of radical democracy is often a term used to refer to the post-Marxist perspectives ofItalian radicalism– for examplePaolo Virno. Laclau and Mouffe have argued for radical agonistic democracy, where different opinions and worldviews are not oppressed by the search for consensus in liberal and deliberative democracy. As this agonistic perspective has been most influential in academic literature, it has been subject to most criticisms on the idea of radical democracy. Brockelman for example argues that the theory of radical democracy is anUtopian idea.[15]Political theory, he argues, should not be used as offering a vision of a desirable society. In the same vein, it is argued that radical democracy might be useful at the local level, but does not offer a realistic perception ofdecision-makingon the national level.[16]For example, people might know what they want to see changing in their town and feel the urge to participate in the decision-making process of future local policy. Developing an opinion about issues at the local level often does not require specific skills or education. Deliberation in order to combat the problem ofgroupthink, in which the view of the majority dominates over the view of the minority, can be useful in this setting. However, people might not be skilled enough or willing to decide about national or international problems. A radical democracy approach for overcoming the flaws of democracy is, it is argued, not suitable for levels higher than the local one. Habermas and Rawls have argued for radical deliberative democracy, where consensus and communicative means are at the root of politics. However, some scholars identify multiple tensions between participation and deliberation. Three of these tensions are identified byJoshua Cohen, a student of the philosopherJohn Rawls:[17] However, the concept of radical democracy is seen in some circles as colonial in nature due to its reliance on a western notion of democracy.[18]It is argued that liberal democracy is viewed by the West as the only legitimate form of governance.[19] Since Laclau and Mouffe argued for a radical democracy, many other theorists and practitioners have adapted and changed the term.[2]For example,bell hooksandHenry Girouxhave all written about the application of radical democracy in education. 
In Hook's bookTeaching to Transgress: Education as the practice of freedomshe argues for education where educators teach students to go beyond the limits imposed against racial, sexual and class boundaries in order to "achieve the gift of freedom".[20]Paulo Freire's work, although initiated decades before Laclau and Mouffe, can also be read through similar lenses.[21][22][23]Theorists such asPaul ChattertonandRichard JF Dayhave written about the importance of radical democracy within some of the autonomous movements in Latin America (namely the EZLN—Zapatista Army of National Liberationin Mexico, the MST—Landless Workers' Movementin Brazil, and thePiquetero—Unemployed Workers Movement in Argentina) although the term radical democracy is used differently in these contexts.[24][25] With the rise of the internet in the years after the development of various strands of radical democracy theory, the relationship between the internet and the theory has been increasingly focused upon. The internet is regarded as an important aspect of radical democracy, as it provides a means for communication which is central to every approach to the theory. The internet is believed to reinforce both the theory of radical democracy and the actual possibility of radical democracy through three distinct ways:[26] This last point refers to the concept of aradical public spherewhere voice in thepolitical debateis given to otherwise oppressed ormarginalized groups.[27]Approached from the radical democracy theory, the expression of such views on the internet can be understood asonline activism. In current liberal representative democracies, certain voices and interests are always favored above others. Through online activism, excluded opinions and views can still be articulated. In this way, activists contribute to the ideal of a heterogeneity of positions. However, the digital age does not necessarily contribute to the notion of radical democracy. Social media platforms possess the opportunity of shutting down certain, often radical, voices. This is counterproductive to radical democracy[28]
https://en.wikipedia.org/wiki/Radical_democracy
In mathematics and computer science, optimal radix choice is the problem of choosing the base, or radix, that is best suited for representing numbers. Various proposals have been made to quantify the relative costs of using different radices in representing numbers, especially in computer systems. One formula is the number of digits needed to express it in that base, multiplied by the base (the number of possible values each digit could have). This expression also arises in questions regarding organizational structure, networking, and other fields. The cost of representing a number N in a given base b can be defined as E(b, N) = b · ⌊log_b N + 1⌋, where we use the floor function ⌊·⌋ and the base-b logarithm log_b. If both b and N are positive integers, then the quantity E(b, N) is equal to the number of digits needed to express the number N in base b, multiplied by base b.[1] This quantity thus measures the cost of storing or processing the number N in base b if the cost of each "digit" is proportional to b. A base with a lower average E(b, N) is therefore, in some senses, more efficient than a base with a higher average value. For example, 100 in decimal has three digits, so its cost of representation is 10 × 3 = 30, while its binary representation has seven digits (1100100₂), so the analogous calculation gives 2 × 7 = 14. Likewise, in base 3 its representation has five digits (10201₃), for a value of 3 × 5 = 15, and in base 36 (2S₃₆) one finds 36 × 2 = 72. If the number is imagined to be represented by a combination lock or a tally counter, in which each wheel has b digit faces, from 0, 1, ..., b − 1, and having ⌊log_b(N) + 1⌋ wheels, then E(b, N) is the total number of digit faces needed to inclusively represent any integer from 0 to N. 
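A direct implementation of this digit-cost measure reproduces the figures in the example above (14, 15, 30 and 72 for bases 2, 3, 10 and 36 when N = 100); a minimal sketch:

```python
def radix_cost(b: int, n: int) -> int:
    """E(b, N) = b * floor(log_b N + 1): the number of base-b digits of N, times b."""
    digits = 1
    while n >= b:        # count digits by repeated division (avoids float log issues)
        n //= b
        digits += 1
    return b * digits

for base in (2, 3, 10, 36):
    print(base, radix_cost(base, 100))
# prints: 2 14, 3 15, 10 30, 36 72
```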
The quantity E(b, N) for large N can be approximated as E(b, N) ≈ b · log_b N = (b / ln b) · ln N. The asymptotically best value is obtained for base 3, since b / ln(b) attains a minimum for b = 3 in the positive integers: 3 / ln 3 ≈ 2.731, compared with 2 / ln 2 = 4 / ln 4 ≈ 2.885. For base 10, we have 10 / ln 10 ≈ 4.343. The closely related continuous optimization problem of finding the maximum of the function f(x) = x^(1/x), or equivalently, on taking logs and inverting, minimizing x / ln x for continuous rather than integer values of x, was posed and solved by Jakob Steiner in 1850.[2] The solution is Euler's number e ≈ 2.71828, the base of the natural logarithm, for which e / ln e = e ≈ 2.71828. Translating this solution back to Steiner's formulation, e^(1/e) ≈ 1.44467 is the unique maximum of f(x) = x^(1/x).[3] This analysis has sometimes been used to argue that, in some sense, "base e is the most economical base for the representation and storage of numbers", despite the difficulty in understanding what that might mean in practice.[4] This topic appears in Underwood Dudley's Mathematical Cranks. One of the eccentrics discussed in the book argues that e is the best base, based on a muddled understanding of Steiner's calculus problem, and with a greatly exaggerated sense of how important the choice of radix is.[5] The values of E(b, N) for bases b₁ and b₂ may be compared for a large value of N: E(b₁, N) / E(b₂, N) ≈ (b₁ / ln b₁) / (b₂ / ln b₂). Choosing e for b₂ gives E(b, N) / E(e, N) ≈ b / (e · ln b). The average E(b, N) of various bases up to several arbitrary numbers (avoiding proximity to powers of 2 through 12 and e) are given in the table below. Also shown are the values relative to that of base e. E(1, N) of any number N is just N, making unary the most economical for the first few integers, but this no longer holds as N climbs to infinity. (Table not reproduced: averages of E(b, N) are tabulated for N = 1 to 6, N = 1 to 43, N = 1 to 182, and N = 1 to 5329.) One result of the relative economy of base 3 is that ternary search trees offer an efficient strategy for retrieving elements of a database.[6] A similar analysis suggests that the optimum design of a large telephone menu system to minimise the number of menu choices that the average customer must listen to (i.e. the product of the number of choices per menu and the number of menu levels) is to have three choices per menu.[1] In a d-ary heap, a priority queue data structure based on d-ary trees, the worst-case number of comparisons per operation in a heap containing n elements is d · log_d n (up to lower-order terms), the same formula used above. It has been suggested that choosing d = 3 or d = 4 may offer optimal performance in practice.[7] Brian Hayes suggests that E(b, N) may be the appropriate measure for the complexity of an Interactive voice response menu: in a tree-structured phone menu with n outcomes and r choices per step, the time to traverse the menu is proportional to the product of r (the time to present the choices at each step) with log_r n (the number of choices that need to be made to determine the outcome). 
From this analysis, the optimal number of choices per step in such a menu is three. The 1950 reference High-Speed Computing Devices describes a particular situation using contemporary technology. Each digit of a number would be stored as the state of a ring counter composed of several triodes. Whether vacuum tubes or thyratrons, the triodes were the most expensive part of a counter. For small radices r less than about 7, a single digit required r triodes.[8] (Larger radices required 2r triodes arranged as r flip-flops, as in ENIAC's decimal counters.)[9] So the number of triodes in a numerical register with n digits was r·n. In order to represent numbers up to 10⁶, the following numbers of tubes were needed: The authors conclude: Under these assumptions, the radix 3, on the average, is the most economical choice, closely followed by radices 2 and 4. These assumptions are, of course, only approximately valid, and the choice of 2 as a radix is frequently justified on more complete analysis. Even with the optimistic assumption that 10 triodes will yield a decimal ring, radix 10 leads to about one and one-half times the complexity of radix 2, 3, or 4. This is probably significant despite the shallow nature of the argument used here.[10]
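The asymptotic cost b/ln b, and the equivalent menu cost r·log_r n for an r-way menu with n outcomes, can be tabulated in a few lines; the sketch below (with an arbitrary n) shows that both are minimised at 3 among the integers.

```python
import math

n_outcomes = 1000   # arbitrary illustrative menu size
print(" r    r/ln r    r*log_r(n)")
for r in range(2, 11):
    asymptotic = r / math.log(r)
    menu_cost = r * math.log(n_outcomes, r)   # choices per level * number of levels
    print(f"{r:2d}   {asymptotic:7.3f}   {menu_cost:10.1f}")
# Both columns reach their minimum at r = 3; r = 2 and r = 4 tie for second place.
```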
https://en.wikipedia.org/wiki/Radix_economy
Parallel tempering, in physics and statistics, is a computer simulation method typically used to find the lowest energy state of a system of many interacting particles. It addresses the problem that at high temperatures, one may have a stable state different from low temperature, whereas simulations at low temperatures may become "stuck" in a metastable state. It does this by using the fact that the high temperature simulation may visit states typical of both stable and metastable low temperature states. More specifically, parallel tempering (also known as replica exchange MCMC sampling) is a simulation method aimed at improving the dynamic properties of Monte Carlo method simulations of physical systems, and of Markov chain Monte Carlo (MCMC) sampling methods more generally. The replica exchange method was originally devised by Robert Swendsen and J. S. Wang,[1] then extended by Charles J. Geyer,[2] and later developed further by Giorgio Parisi,[3] Koji Hukushima and Koji Nemoto,[4] and others.[5][6] Y. Sugita and Y. Okamoto also formulated a molecular dynamics version of parallel tempering; this is usually known as replica-exchange molecular dynamics or REMD.[7] Essentially, one runs N copies of the system, randomly initialized, at different temperatures. Then, based on the Metropolis criterion, one exchanges configurations at different temperatures. The idea of this method is to make configurations at high temperatures available to the simulations at low temperatures and vice versa. This results in a very robust ensemble which is able to sample both low and high energy configurations. In this way, thermodynamical properties such as the specific heat, which is in general not well computed in the canonical ensemble, can be computed with great precision. Typically a Monte Carlo simulation using a Metropolis–Hastings update consists of a single stochastic process that evaluates the energy of the system and accepts/rejects updates based on the temperature T. At high temperatures updates that change the energy of the system are comparatively more probable. When the system is highly correlated, updates are rejected and the simulation is said to suffer from critical slowing down. If we were to run two simulations at temperatures separated by a ΔT, we would find that if ΔT is small enough, then the energy histograms obtained by collecting the values of the energies over a set of Monte Carlo steps N will create two distributions that will somewhat overlap. The overlap can be defined by the area of the histograms that falls over the same interval of energy values, normalized by the total number of samples. For ΔT = 0 the overlap should approach 1. Another way to interpret this overlap is to say that system configurations sampled at temperature T₁ are likely to appear during a simulation at T₂. Because the Markov chain should have no memory of its past, we can create a new update for the system composed of the two systems at T₁ and T₂. At a given Monte Carlo step we can update the global system by swapping the configuration of the two systems, or alternatively trading the two temperatures. The update is accepted according to the Metropolis–Hastings criterion with probability p = min(1, exp[(β₁ − β₂)(E₁ − E₂)]), where β_i = 1/(k_B T_i) and E_i is the current energy of the configuration at temperature T_i, and otherwise the update is rejected. The detailed balance condition has to be satisfied by ensuring that the reverse update has to be equally likely, all else being equal. 
This can be ensured by appropriately choosing regular Monte Carlo updates or parallel tempering updates with probabilities that are independent of the configurations of the two systems or of the Monte Carlo step.[8] This update can be generalized to more than two systems. By a careful choice of temperatures and number of systems one can achieve an improvement in the mixing properties of a set of Monte Carlo simulations that exceeds the extra computational cost of running parallel simulations. There are other considerations: increasing the number of different temperatures can have a detrimental effect, as one can think of the 'lateral' movement of a given system across temperatures as a diffusion process. Setup is important, as there must be practical histogram overlap to achieve a reasonable probability of lateral moves. The parallel tempering method can be used as a kind of super simulated annealing that does not need restarting, since a system at high temperature can feed new local optimizers to a system at low temperature, allowing tunneling between metastable states and improving convergence to a global optimum.
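A minimal sketch of the replica-swap step described above, assuming two replicas whose energy can be evaluated by a user-supplied function: the swap is accepted with the Metropolis probability min{1, exp[(β1 − β2)(E1 − E2)]}, with units in which kB = 1. The double-well energy function and temperature ladder below are placeholders for illustration, not part of any particular library.

```python
import math
import random

def maybe_swap(config1, config2, beta1, beta2, energy):
    """Attempt a parallel-tempering swap between two replicas.

    config1 is currently simulated at inverse temperature beta1, config2 at
    beta2.  The swap is accepted with probability
    min(1, exp[(beta1 - beta2) * (E1 - E2)]), which preserves detailed
    balance for the joint ensemble.
    """
    e1, e2 = energy(config1), energy(config2)
    log_p = (beta1 - beta2) * (e1 - e2)
    if log_p >= 0 or random.random() < math.exp(log_p):
        return config2, config1, True   # configurations trade temperatures
    return config1, config2, False

# Toy example: one-dimensional "configurations" in a double-well potential.
energy = lambda x: (x * x - 1.0) ** 2
temperatures = [0.1, 0.5, 2.5]            # a small temperature ladder
betas = [1.0 / t for t in temperatures]
replicas = [random.uniform(-2, 2) for _ in temperatures]

# One round of swap attempts between neighbouring temperatures.
for i in range(len(replicas) - 1):
    replicas[i], replicas[i + 1], accepted = maybe_swap(
        replicas[i], replicas[i + 1], betas[i], betas[i + 1], energy)
    print(f"swap {i}<->{i + 1} accepted: {accepted}")
```

In practice the swap attempt is interleaved with ordinary Metropolis updates of each replica at its own temperature, and neighbouring temperatures are chosen close enough that the energy histograms overlap, as discussed above.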
https://en.wikipedia.org/wiki/Parallel_tempering
Hypertargeting refers to the ability to deliver advertising content to specific interest-based segments in a network. MySpace coined the term in November 2007[1] with the launch of their SelfServe advertising solution (later called myAds[2]), described on their site as "enabling online marketers to tap into self-expressed user information to target campaigns like never before." Hypertargeting is also the ability on social network sites to target ads based on very specific criteria. This is an important step towards precision performance marketing. The first MySpace HyperTarget release offered advertisers the ability to direct their ads to 10 categories self-identified by users in their profiles, including music, sports, and movies. In July 2007 the targeting options expanded to 100 subcategories. Rather than simply targeting movie lovers, for example, advertisers could send ads based on preferred genres like horror, romance, or comedy. By January 2010, MySpace HyperTarget involved 5 algorithms across 1,000 segments. According to an article by Harry Gold in online publisher ClickZ,[3] the general field of hypertargeting draws information from three sources. Facebook, a popular social network, offers an ad targeting service through their Social Ads platform. Ads can be hypertargeted to users based on keywords from their profiles, pages they're fans of, events they responded to, or applications used. Some of these examples involve the use of behavioral targeting.[4] By 2009, hypertargeting had become an accepted industry term.[5] In 2010, the International Consumer Electronics Show (CES), the world's largest consumer technology tradeshow, dedicated three sessions to the topic.[citation needed]
https://en.wikipedia.org/wiki/Hypertargeting
Apotentially unwanted program(PUP) orpotentially unwanted application(PUA) is software that a user may perceive as unwanted or unnecessary. It is used as a subjective tagging criterion by security and parental control products. Such software may use an implementation that can compromise privacy or weaken the computer's security. Companies often bundle a wanted program download with a wrapper application and may offer to install an unwanted application, and in some cases without providing a clear opt-out method. Antivirus companies define the software bundled as potentially unwanted programs[1][2]which can include software that displaysintrusive advertising(adware), or tracks the user's Internet usage to sell information to advertisers (spyware), injects its own advertising into web pages that a user looks at, or uses premium SMS services to rack up charges for the user.[3][1]A growing number of open-source software projects have expressed dismay at third-party websites wrapping their downloads with unwanted bundles, without the project's knowledge or consent. Nearly every third-party free download site bundles their downloads with potentially unwanted software.[4]The practice is widely considered unethical because it violates the security interests of users without their informed consent. Some unwanted software bundles install aroot certificateon a user's device, which allows hackers to intercept private data such as banking details, without a browser giving security warnings. TheUnited States Department of Homeland Securityhas advised removing an insecure root certificate, because they make computers vulnerable to seriouscyberattacks.[5]Software developers and security experts recommend that people always download the latest version from the official project website, or a trusted package manager or app store. Historically, the first big companies working with potentially unwanted programs for creating revenue came up in the US in the mid-2000s, such asZango. These activities declined after the companies were investigated, and in some cases indicted, by authorities for invasive and harmful installs.[6] A major industry, dedicated to creating revenue by foisting potentially unwanted programs, has grown among the Israeli software industry and is frequently referred to asDownload Valley. These companies are responsible for a large part of the download and install tools,[7]which place unwanted, additional software on users' systems.[8][9][10] Unwanted programs have increased in recent years, and one study in 2014 classified unwanted programs as comprising 24.77% of totalmalwareinfections.[11]This malware includes adware according toGoogle.[12][13]Many programs include unwanted browser add-ons that track which websites a user goes to in order to sell this information to advertisers, or add advertising into web pages.[14]Five percent of computer browser visits to Google-owned websites are altered by computer programs that inject their own ads into pages.[15][16][17]Researchers have identified 50,870 Google Chrome extensions and 34,407 programs that inject ads. Thirty-eight percent of extensions and 17 percent of programs were catalogued asmalicious software, the rest being potentially unwantedadware-type applications. 
Some Google Chrome extension developers have sold extensions they made to third-party companies who silently push unwanted updates that incorporate previously non-existent adware into the extensions.[18][19][20] Spywareprograms install aproxy serveron a person's computer that monitors all web traffic passing through it, tracking user interests to build up a profile and sell that profile to advertisers. Superfishis an advertising injector that creates its ownroot certificatein a computer operating system, allowing the tool to inject advertising into encrypted Google search pages and track the history of a user's search queries. In February 2015, theUnited States Department of Homeland Securityadvised uninstalling Superfish and its associatedroot certificatefromLenovocomputers, because they make computers vulnerable to serious cyberattacks, including interception of passwords and sensitive data being transmitted through browsers.[5][21]Heise Securityrevealed that the Superfish certificate is included in bundled downloads with a number of applications from companies includingSAY MediaandLavasoft'sAd-Aware Web Companion.[22] Many companies usebrowser hijackingto modify a user's home page and search page, to force Internet hits to a particular website and make money from advertisers.[citation needed]Some companies steal the cookies in a user's browser,hijackingtheir connections to websites they are logged into, and performing actions using their account, without the user's knowledge or consent (like installing Android apps). Users withdial-up Internet accessuse modems in their computer to connect to the Internet, and these have been targeted by fraudulent applications that usedsecurity holesin theoperating systemto dial premium numbers. ManyAndroiddevices are targeted by malware that usepremium SMSservices to rack up charges for users.[23][24][25] A few classes of software are usually installed knowingly by the user and do not show any automated abusive behavior. However, the Enterprise controlling the computer or the antivirus vendor may consider the program unwanted due to the activities they allow. Peer-to-peer file sharingprograms are sometimes labelled as PUA and deleted due to their alleged links to piracy. In March 2021, Windows Defender started removinguTorrentandqBittorrent, causing widespread user confusion. Microsoft has since updated the PUA database to flag torrent clients on enterprise installations only.[26] Keygensnot tainted by actual malware are also commonly tagged as PUA due to piracy.[27] In 2015, research byEmsisoftsuggested that all free download providers bundled their downloads with potentially unwanted software, and that Download.com was the worst offender.[4]Lowell Heddings expressed dismay that "Sadly, even on Google all the top results for most open source and freeware are just ads for really terrible sites that are bundling crapware,adware, andmalwareon top of the installer."[28] In December 2011Gordon Lyonpublished his strong dislike of the wayDownload.comhad started bundlinggraywarewith their installation managers and concerns over the bundled software, causing many people to spread the post on social networks, and a few dozen media reports. 
The main problem is the confusion between Download.com-offered content[29][30]and software offered by original authors; the accusations included deception as well as copyright and trademark violation.[30] In 2014,The RegisterandUS-CERTwarned that via Download.com's "foistware", an "attacker may be able to download and execute arbitrary code".[31] Manyopen-source softwaredevelopers have expressed frustration and dismay that their work is being packaged by companies that profit from their work by usingsearch advertisingto occupy the first result on a search page. Increasingly, these pages are offering bundled installers that include unwanted software, and confuse users by presenting the bundled software as an official download page endorsed by the open source project. As of early 2016 this is no longer the case.[32]Ownership of SourceForge transferred to SourceForge Media, LLC, a subsidiary of BIZX, LLC (BIZX).[33]After the sale they removed the DevShare program, which means bundled installers are no longer available. In November 2013,GIMP, a free image manipulation program, removed its download fromSourceForge, citing misleading download buttons that can potentially confuse customers, as well as SourceForge's own Windows installer, which bundles third-party offers. In a statement, GIMP called SourceForge a once "useful and trustworthy place to develop and host FLOSS applications" that now faces "a problem with the ads they allow on their sites ..."[34]In May 2015, the GIMP for Windows SourceForge project was transferred to the ownership of the "SourceForge Editorial Staff" account and adware downloads were re-enabled.[35]The same happened to the developers ofnmap.[36][37] In May 2015 SourceForge took control of projects which had migrated to other hosting sites and replaced the project downloads with adware-laden downloads.[38] Gordon Lyonhas lost control of theNmapSourceForgepage, with SourceForge taking over the project's page. Lyon stated "So far they seem to be providing just the official Nmap files (as long as you don't click on the fake download buttons) and we haven't caught them trojaning Nmap the way they did with GIMP. But we certainly don't trust them one bit! Sourceforge is pulling the same scheme that CNet Download.com tried back when they started circling the drain".[36][37] VideoLANhas expressed dismay that users searching for their product see search advertising from websites that offer "bundled" downloads that includeunwanted programs, while VideoLAN lacks resources to sue the many companies abusing their trademarks.[28][39][40][41][42]
https://en.wikipedia.org/wiki/Potentially_unwanted_program
printf is a shell command that formats and outputs text like the same-named C function. It is available in a variety of Unix and Unix-like systems. Some shells implement the command as a builtin and some provide it as a utility program.[2] The command has similar syntax and semantics as the library function. The command outputs text to standard output[3] as specified by a format string and a list of values. Characters of the format string are copied to the output verbatim except when a format specifier is found, which causes a value to be output per the specifier. The command has some aspects unlike the library function. In addition to the library function format specifiers, %b causes the command to expand backslash escape sequences (for example \n for newline), and %q outputs an item that can be used as shell input.[3] The value used for an unmatched specifier (too few values) is an empty string for %s or 0 for a numeric specifier. If there are more values than specifiers, then the command restarts processing the format string from its beginning. The command is part of the X/Open Portability Guide since issue 4 of 1992. It was inherited into the first version of POSIX.1 and the Single Unix Specification.[4] It first appeared in 4.3BSD-Reno.[5] The implementation bundled in GNU Core Utilities was written by David MacKenzie. It has an extension %q for escaping strings in POSIX-shell format.[3]
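The format-reuse behaviour described above (the format string restarts when more values than specifiers are supplied) can be observed by invoking the standalone utility directly. The snippet below is a small demonstration that assumes a Unix-like system with a printf executable on the PATH and simply shells out to it from Python.

```python
import subprocess

# With two specifiers per pass, six arguments make printf
# reprocess the format string three times.
result = subprocess.run(
    ["printf", "%s=%d\n", "a", "1", "b", "2", "c", "3"],
    capture_output=True, text=True, check=True)
print(result.stdout, end="")
# Expected output:
# a=1
# b=2
# c=3
```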
https://en.wikipedia.org/wiki/Printf_(Unix)
Scholastic Corporationis an American multinational publishing, education, and media company that publishes and distributes books, comics, and educational materials for schools, teachers, parents, children, and other educational institutions. Products are distributed via retail and online sales and through schools viareading clubsand book fairs.Clifford the Big Red Dog, a character created byNorman Bridwellin 1963, is the mascot of Scholastic. Scholastic was founded in 1920 by Maurice R. Robinson nearPittsburgh, Pennsylvaniato be a publisher of youth magazines. The first publication wasThe Western Pennsylvania Scholastic. It coveredhigh school sportsand social activities; the four-page magazine debuted on October 22, 1920, and was distributed in 50 high schools.[3]More magazines followed for Scholastic Magazines.[3][4]In 1948, Scholastic entered the book club business.[5]In the 1960s, scholastic international publishing locations were added in England 1964, New Zealand 1964, and Sydney 1968.[6]Also in the 1960s, Scholastic entered the book publishing business. In the 1970s, Scholastic created its TV entertainment division.[3]From 1975 until his death in 2021,Richard Robinson, son of the corporation's founder, was CEO and president.[7]Scholastic began trading onNASDAQon May 12, 1987. In 2000, Scholastic purchasedGrolierfor US$400 million.[8][9]Scholastic became involved in a video collection in 2001. In February 2012, Scholastic boughtWeekly Reader PublishingfromReader's Digest Association, and announced in July 2012 that it planned to discontinue separate issues ofWeekly Readermagazines after more than a century of publication, and co-branded the magazines asScholastic News/Weekly Reader.[10]Scholastic sold READ 180 to Houghton Mifflin Harcourt in 2015. in December 2015, Scholastic launched the Scholastic Reads Podcasts. On October 22, 2020, Scholastic celebrated its 100th anniversary. In 2005, Scholastic developed FASTT Math with Tom Snyder to help students with their proficiency with math skills, specifically being multiplication, division, addition, and subtraction through a series of games and memorization quizzes gauging the student's progress.[11]In 2013, Scholastic developed System 44 with Houghton Mifflin Harcourt to help students encourage reading skills. In 2011, Scholastic developed READ 180 withHoughton Mifflin Harcourtto help students understand their reading skills.[12] The business has three segments: Children's Book Publishing and Distribution, Education Solutions, and International. Scholastic holds the perpetual US publishing rights to theHarry PotterandHunger Gamesbook series.[13][14]Scholastic is the world's largest publisher and distributor of children's books and print and digital educational materials for pre-K to grade 12.[15]In addition toHarry PotterandThe Hunger Games, Scholastic is known for its school book clubs and book fairs, classroom magazines such asScholastic NewsandScience World, and popular book series:Clifford the Big Red Dog,The Magic School Bus,Goosebumps,Horrible Histories,Captain Underpants,Animorphs,The Baby-Sitters Club, andI Spy. Scholastic also publishes instructional reading and writing programs, and offers professional learning and consultancy services for school improvement.Clifford the Big Red Dogis the official mascot of Scholastic.[16] The Scholastic Art & Writing awards was Founded in 1923 by Maurice R. 
Robinson. The Scholastic Art & Writing Awards,[17] administered by the Alliance for Young Artists & Writers, is a competition which recognizes talented young artists and writers from across the United States.[18] The success and enduring legacy of the Scholastic Art & Writing Awards can be attributed in part to its well-planned and executed marketing initiatives. These efforts have allowed the competition to adapt to the changing times, connect with a wider audience, and continue its mission of nurturing the creative potential of the nation's youth. Scholastic Reference publishes reference books.[31][32] Scholastic Entertainment (formerly Scholastic Productions and Scholastic Media) is a corporate division[33] led by Deborah Forte since 1995. It covers "all forms of media and consumer products, and is comprised of four main groups – Productions, Marketing & Consumer Products, Interactive, and Audio." Weston Woods is its production studio, acquired in 1996, as was Soup2Nuts (best known for Dr. Katz, Professional Therapist, Science Court and Home Movies) from 2001 to 2015 before shutting down.[34] Scholastic has produced audiobooks such as the Caldecott/Newbery Collection;[35] Scholastic has been involved with several television programs and feature films based on its books. In 1985, Scholastic Productions teamed up with Karl-Lorimar Home Video, a home video unit of Lorimar Productions, to form the Scholastic-Lorimar Home Video line, under which Scholastic produced made-for-video programming that became a best-selling video line for kids. After the two-year pact expired, Scholastic teamed up with Family Home Entertainment, a leading independent family video distributor and a label of International Video Entertainment, to distribute made-for-video programming for the next three years.[36] Scholastic Book Fairs began in 1981. Scholastic provides book fair products to schools, which then conduct the book fairs. Schools can elect to receive books, supplies and equipment or a portion of the proceeds from the book fair.[37] In the United States, during fiscal 2024, revenue from the book fairs channel ($541.6 million) accounted for more than half of the company's revenue in the "Total Children's Book Publishing and Distribution" segment ($955.2 million),[38] and schools earned over $200 million in proceeds in cash and incentive credits.[39] In October 2023, Scholastic created a separate category for books dealing with "race, LGBTQ and other issues related to diversity", allowing schools to opt out of carrying these types of books. Scholastic defended the move, citing legislation in multiple states seeking to ban books dealing with LGBTQ issues or race.[40] After public backlash from educators, authors, and free speech advocacy groups, Scholastic reversed course, saying the new category would be discontinued, writing: "It is unsettling that the current divisive landscape in the U.S.
is creating an environment that could deny any child access to books, or that teachers could be penalized for creating access to all stories for their students".[41][42] Scholastic Book Fairs have been criticized for spurring unnecessary purchases, highlighting economic inequality among students, and disruption of school activities and facilities.[43][44] Scholasticbook clubsare offered at schools in many countries. Typically, teachers administer the program to the students in their own classes, but in some cases, the program is administered by a central contact for the entire school. Within Scholastic, Reading Clubs is a separate unit (compared to, e.g., Education). Reading clubs are arranged by age/grade.[45]Book club operators receive "Classroom Funds" redeemable only for Scholastic Corporation products.[46][47][48] In January 2025, claims of a data breach affecting Scholastic came from a group calling themselves Puppygirl Hacker Polycule.[49]The breach affected an estimated 8 million customers consisting of names, email addresses, phone numbers, and home addresses. The breach was provided toHave I Been Pwned?in an effort to inform customers.[50]
https://en.wikipedia.org/wiki/Scholastic_Corporation
Apen register, ordialed number recorder(DNR), is a device that records allnumberscalled from a particulartelephoneline.[1]The term has come to include any device or program that performs similar functions to an original pen register, including programsmonitoringInternetcommunications. The United States statutes governing pen registers are codified under18 U.S.C., Chapter 206. The termpen registeroriginally referred to a device for recordingtelegraphsignals on a strip of paper.Samuel F. B. Morse's 1840 telegraph patent described such a register as consisting of aleverholding an armature on one end, opposite anelectromagnet, with afountain pen,pencilor other marking instrument on the other end, and a clockwork mechanism to advance a paper recording tape under the marker.[2] The termtelegraph registercame to be a generic term for such a recording device in the later 19th century.[3]Where the record was made in ink with a pen, the termpen registeremerged. By the end of the 19th century, pen registers were widely used to record pulsed electrical signals in many contexts. For example, one fire-alarm system used a "double pen-register",[4]and another used a "single or multiple pen register".[5] Aspulse dialingcame into use fortelephone exchanges, pen registers had obvious applications as diagnostic instruments for recording sequences of telephone dial pulses. In the United States, the clockwork-powered Bunnell pen register remained in use into the 1960s.[6] After the introduction oftone dialing, any instrument that could be used to record the numbers dialed from a telephone came to be defined as a pen register. Title 18 of theUnited States Codedefines a pen register as: a device or process which records or decodes dialing, routing, addressing, or signaling information transmitted by an instrument or facility from which a wire or electronic communication is transmitted, provided, however, that such information shall not include the contents of any communication, but such term does not include any device or process used by a provider or customer of a wire or electronic communication service for billing, or recording as an incident to billing, for communications services provided by such provider or any device or process used by a provider or customer of a wire communication service for cost accounting or other like purposes in the ordinary course of its business[7] This is the current definition of a pen register, as amended by passage of the 2001USA PATRIOT Act. The original statutory definition of a pen register was created in 1984 as part of theElectronic Communications Privacy Act, which defined a "Pen Register" as: A device which records or decodes electronic or other impulses which identify the numbers called or otherwise transmitted on the telephone line to which such device is dedicated. A pen register is similar to atrap and trace device. A trap and trace device would show what numbers had called a specific telephone, i.e., allincomingphone numbers. A pen register rather would show what numbers a phone had called, i.e. alloutgoingphone numbers. The two terms are often used in concert, especially in the context of Internet communications. They are often jointly referred to as "Pen Register or Trap and Trace devices" to reflect the fact that the same program will probably do both functions in the modern era, and the distinction is not that important. The term "pen register" is often used to describe both pen registers and trap and trace devices.[8] InKatz v. 
United States(1967), theUnited States Supreme Courtestablished its "reasonable expectation of privacy" test. It overturnedOlmstead v. United States(1928) and held that warrantlesswiretapswereunconstitutionalsearches, because there was a reasonable expectation that the communication would beprivate. From then on, the government was required to get awarrantto execute a wiretap. Twelve years later the Supreme Court held that a pen register is not a search because the "petitioner voluntarily conveyed numerical information to the telephone company."Smith v. Maryland, 442 U.S. 735, 744 (1979). Since thedefendanthad disclosed the dialed numbers to the telephone company so they could connect his call, he did not have a reasonable expectation of privacy in the numbers he dialed. The court did not distinguish between disclosing the numbers to a human operator or just the automatic equipment used by the telephone company. The Smith decision left pen registers completely outsideconstitutionalprotection. If there was to be any privacy protection, it would have to be enacted by Congress as statutoryprivacy law[citation needed].[1] TheElectronic Communications Privacy Act(ECPA) was passed in 1986 (Pub. L. No. 99-508, 100 Stat. 1848). There were three main provisions or Titles to the ECPA. Title III created the Pen Register Act, which included restrictions on private and law enforcement uses of pen registers. Private parties were generally restricted from using them unless they met one of the exceptions, which included an exception for the business providing the communication if it needed to do so to ensure the proper functioning of its business. Forlaw enforcement agenciesto get a pen register approved forsurveillance, they must get acourt orderfrom a judge. According to 18 U.S.C. § 3123(a)(1), the "court shall enter anex parteorder authorizing the installation and use of a pen register or trap and trace device anywhere within the United States, if the court finds that the attorney for the Government has certified to the court that the information likely to be obtained by such installation and use is relevant to an ongoing criminal investigation".[9]Thus, a government attorney only needs to certify that information will "likely" be obtained in relation to an 'ongoingcriminal investigation'. This is the lowest requirement for receiving a court order under any of the ECPA's three titles. This is because inSmith v. Maryland, the Supreme Court ruled that use of a pen register does not constitute asearch. The ruling held that only the content of a conversation should receive full constitutional protection under theright to privacy; since pen registers do not intercept conversation, they do not pose as much threat to this right. Some have argued that the government should be required to present "specific and articulable facts" showing that the information to be gathered is relevant and material to an ongoing investigation. This is the standard used by Title II of the ECPA with regard to the contents of stored communications. Others, such asDaniel J. Solove,Petricia Bellia, andDierdre Mulligan, believe thatprobable causeand a warrant should be necessary.[10][11][12]Paul Ohmargues thatstandard of proofshould be replaced/reworked for electronic communications altogether.[13] The Pen Register Act did not include anexclusionary rule. While there werecivil remediesfor violations of the Act, evidence gained in violation of the Act can still be used against a defendant in court. 
There have also been calls for Congress to add anexclusionary ruleto the Pen Register Act, as this would make it more analogous to traditionalFourth Amendmentprotections. The penalty for violating the Pen Register Act is a misdemeanor, and it carries a prison sentence of not more than one year.[14] Section 216 of the 2001USA PATRIOT Actexpanded the definition of a pen register to include devices or programs that provide an analogous function with Internet communications. Prior to the Patriot Act, it was unclear whether or not the definition of a pen register, which included very specific telephone terminology,[15]could apply to Internet communications. Most courts and law enforcement personnel operated under the assumption that it did, however, theClinton administrationhad begun to work on legislation to make that clear, and onemagistratejudge inCaliforniadid rule that the language was too telephone-specific to apply toInternet surveillance. The Pen Register Statute is a privacy act. As there is no constitutional protection for information divulged to a third party under the Supreme Court's expectation of privacy test, and the routing information for phone and Internet communications are divulged to the company providing the communication, the absence or inapplicability of the statute would leave the routing information for those communications completely unprotected from government surveillance. The government also has an interest in making sure the Pen Register Act exists and applies to Internet communications. Without the Act, they cannot compel service providers to give them records or do Internet surveillance with their own equipment orsoftware, and the law enforcement agency, which may not have very goodtechnologicalcapabilities, will have to do the surveillance itself at its own cost. Rather than creating new laws regarding Internet surveillance, the Patriot Act simply expanded the definition of a pen register to include computer software programs doing Internet surveillance by accessing information. While not completely compatible with the technical definition of a pen register device, this was the interpretation that had been used by almost all courts and law enforcement agencies prior to the change.[15] When, in 2006, the Bush administration came under fire for having secretly collected billions of phone call details from regular Americans, ostensibly to check for calls to terror suspects, the Pen Register Act was cited, along with theStored Communications Act, as an example of how such domestic spying violated Federal law.[16] In 2013, the Obama administration sought a court order "requiring Verizon on an 'ongoing, daily basis' to give the NSA information on all telephone calls in its systems, both within the US and between the US and other countries". The order was approved on April 25, 2013, by federal JudgeRoger Vinson, member of the secretForeign Intelligence Surveillance Court(FISC), which court had been created by theForeign Intelligence Surveillance Act(FISA). The order gave the government unlimited authority to compel Verizon to collect and provide the data for a specified three-month period ending on July 19. This is the first time significant and top-secret documents have been revealed exposing the continuation of the practice on a massive scale under U.S. PresidentBarack Obama. 
According toThe Guardian, "it is not known whether Verizon is the only cell-phone provider to be targeted with such an order, although previous reporting has suggested the NSA has collected cell records from all major mobile networks. It is also unclear from the leaked document whether the three-month order was a one-off or the latest in a series of similar orders".[17] On September 1, 2013, theDEA's Hemisphere Project was revealed to the public byThe New York Times. In a series ofPowerPointslides acquired through a lawsuit,AT&Tis revealed to be operating a call database going back to 1987 which the DEA has warrantless access to with no judicial oversight under "administrative subpoenas" originated by the DEA. The DEA pays AT&T to maintain employees throughout the country devoted to investigating call records through this database for the DEA. The database grows by 4 billion records per day, and presumably covers all traffic that crosses AT&T's network. Internal directives instructed participants never to reveal the project publicly, despite the fact that the project was portrayed as a "routine" part of DEA investigations; several investigations unrelated to drugs have been mentioned as using the data. When questioned on their participation,Verizon,Sprint, andT-Mobilerefused to comment on whether they were part of the project, generating fears thatpen registersandtrap and tracedevices are effectively irrelevant in the face of ubiquitous private-public-partnership surveillance with indefinite data retention.[18] Information that is legally collectible according to 2014 pen trap laws includes:[citation needed]
https://en.wikipedia.org/wiki/Pen_register
In graph theory, a division of mathematics, a median graph is an undirected graph in which every three vertices a, b, and c have a unique median: a vertex m(a,b,c) that belongs to shortest paths between each pair of a, b, and c. The concept of median graphs has long been studied, for instance by Birkhoff & Kiss (1947) or (more explicitly) by Avann (1961), but the first paper to call them "median graphs" appears to be Nebeský (1971). As Chung, Graham, and Saks write, "median graphs arise naturally in the study of ordered sets and discrete distributive lattices, and have an extensive literature".[1] In phylogenetics, the Buneman graph representing all maximum parsimony evolutionary trees is a median graph.[2] Median graphs also arise in social choice theory: if a set of alternatives has the structure of a median graph, it is possible to derive in an unambiguous way a majority preference among them.[3] Additional surveys of median graphs are given by Klavžar & Mulder (1999), Bandelt & Chepoi (2008), and Knuth (2008). Every tree is a median graph. To see this, observe that in a tree, the union of the three shortest paths between pairs of the three vertices a, b, and c is either itself a path, or a subtree formed by three paths meeting at a single central node with degree three. If the union of the three paths is itself a path, the median m(a,b,c) is equal to one of a, b, or c, whichever of these three vertices is between the other two in the path. If the subtree formed by the union of the three paths is not a path, the median of the three vertices is the central degree-three node of the subtree.[4] Additional examples of median graphs are provided by the grid graphs. In a grid graph, the coordinates of the median m(a,b,c) can be found as the median of the coordinates of a, b, and c. Conversely, it turns out that, in every median graph, one may label the vertices by points in an integer lattice in such a way that medians can be calculated coordinatewise in this way.[5] Squaregraphs, planar graphs in which all interior faces are quadrilaterals and all interior vertices have four or more incident edges, are another subclass of the median graphs.[6] A polyomino is a special case of a squaregraph and therefore also forms a median graph.[7] The simplex graph κ(G) of an arbitrary undirected graph G has a vertex for every clique (complete subgraph) of G; two vertices of κ(G) are linked by an edge if the corresponding cliques differ by one vertex of G. The simplex graph is always a median graph, in which the median of a given triple of cliques may be formed by using the majority rule to determine which vertices of the cliques to include.[8] No cycle graph of length other than four can be a median graph. Every such cycle has three vertices a, b, and c such that the three shortest paths wrap all the way around the cycle without having a common intersection. For such a triple of vertices, there can be no median. In an arbitrary graph, for each two vertices a and b, the minimal number of edges between them is called their distance, denoted by d(a,b). The interval of vertices that lie on shortest paths between a and b is defined as I(a,b) = { v | d(a,v) + d(v,b) = d(a,b) }. A median graph is defined by the property that, for every three vertices a, b, and c, these intervals intersect in a single point: I(a,b) ∩ I(a,c) ∩ I(b,c) = { m(a,b,c) }. Equivalently, for every three vertices a, b, and c one can find a vertex m(a,b,c) such that the unweighted distances in the graph satisfy the equalities d(a,b) = d(a,m(a,b,c)) + d(m(a,b,c),b), d(a,c) = d(a,m(a,b,c)) + d(m(a,b,c),c), and d(b,c) = d(b,m(a,b,c)) + d(m(a,b,c),c), and m(a,b,c) is the only vertex for which this is true.
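For the grid-graph case mentioned above, the median really is just the coordinatewise median. The following sketch computes m(a,b,c) for points of an integer grid of any dimension and checks the defining distance equalities under the Manhattan metric; it is a small plain-Python illustration, not tied to any graph library.

```python
def grid_median(a, b, c):
    """Median of three vertices of an integer grid graph: take, in every
    coordinate, the middle one of the three values."""
    return tuple(sorted(coords)[1] for coords in zip(a, b, c))

a, b, c = (0, 0), (3, 1), (1, 4)
m = grid_median(a, b, c)
print(m)  # (1, 1)

# Sanity check: m lies on a shortest (Manhattan) path between every pair,
# i.e. d(x, m) + d(m, y) = d(x, y) for each pair x, y.
dist = lambda p, q: sum(abs(pi - qi) for pi, qi in zip(p, q))
for x, y in ((a, b), (a, c), (b, c)):
    assert dist(x, m) + dist(m, y) == dist(x, y)
```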
It is also possible to define median graphs as the solution sets of2-satisfiabilityproblems, as the retracts ofhypercubes, as the graphs of finitemedian algebras, as the Buneman graphs of Helly split systems, and as the graphs of windex 2; see the sections below. Inlattice theory, the graph of afinitelatticehas a vertex for each lattice element and an edge for each pair of elements in thecovering relationof the lattice. Lattices are commonly presented visually viaHasse diagrams, which aredrawingsof graphs of lattices. These graphs, especially in the case ofdistributive lattices, turn out to be closely related to median graphs. In a distributive lattice,Birkhoff'sself-dualternarymedian operation[9] satisfies certain key axioms, which it shares with the usualmedianof numbers in the range from 0 to 1 and withmedian algebrasmore generally: The distributive law may be replaced by an associative law:[10] The median operation may also be used to define a notion of intervals for distributive lattices: The graph of a finite distributive lattice has an edge between verticesaandbwheneverI(a,b) = {a,b}.For every two verticesaandbof this graph, the intervalI(a,b)defined in lattice-theoretic terms above consists of the vertices on shortest paths fromatob, and thus coincides with the graph-theoretic intervals defined earlier. For every three lattice elementsa,b, andc,m(a,b,c) is the unique intersection of the three intervalsI(a,b),I(a,c), andI(b,c).[12]Therefore, the graph of an arbitrary finite distributive lattice is a median graph. Conversely, if a median graphGcontains two vertices 0 and 1 such that every other vertex lies on a shortest path between the two (equivalently,m(0,a,1) =afor alla), then we may define a distributive lattice in whicha∧b=m(a,0,b) anda∨b=m(a,1,b), andGwill be the graph of this lattice.[13] Duffus & Rival (1983)characterize graphs of distributive lattices directly as diameter-preserving retracts of hypercubes. More generally, every median graph gives rise to a ternary operationmsatisfying idempotence, commutativity, and distributivity, but possibly without the identity elements of a distributive lattice. Every ternary operation on a finite set that satisfies these three properties (but that does not necessarily have 0 and 1 elements) gives rise in the same way to a median graph.[14] In a median graph, a setSof vertices is said to beconvexif, for every two verticesaandbbelonging toS, the whole intervalI(a,b) is a subset ofS. Equivalently, given the two definitions of intervals above,Sis convex if it contains every shortest path between two of its vertices, or if it contains the median of every set of three points at least two of which are fromS. Observe that the intersection of every pair of convex sets is itself convex.[15] The convex sets in a median graph have theHelly property: ifFis an arbitrary family of pairwise-intersecting convex sets, then all sets inFhave a common intersection.[16]For, ifFhas only three convex setsS,T, andUin it, withain the intersection of the pairSandT,bin the intersection of the pairTandU, andcin the intersection of the pairSandU, then every shortest path fromatobmust lie withinTby convexity, and similarly every shortest path between the other two pairs of vertices must lie within the other two sets; butm(a,b,c) belongs to paths between all three pairs of vertices, so it lies within all three sets, and forms part of their common intersection. 
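Birkhoff's ternary median operation in a distributive lattice is m(a,b,c) = (a ∧ b) ∨ (a ∧ c) ∨ (b ∧ c). On the Boolean lattice of subsets, represented below as Python bitmasks purely as a convenient stand-in, this is the bitwise majority, and the lattice-theoretic interval I(a,b) = { x : m(a,x,b) = x } can be enumerated directly, which is what this sketch does.

```python
def lattice_median(a, b, c):
    """Birkhoff median (a ∧ b) ∨ (a ∧ c) ∨ (b ∧ c) on bitmask subsets,
    with ∧ as bitwise AND and ∨ as bitwise OR."""
    return (a & b) | (a & c) | (b & c)

def interval(a, b, universe_bits=4):
    """I(a, b) = { x : m(a, x, b) = x } inside a small Boolean lattice."""
    return [x for x in range(1 << universe_bits) if lattice_median(a, x, b) == x]

a, b = 0b0011, 0b0110
print([bin(x) for x in interval(a, b)])

# In a Boolean lattice, I(a, b) is exactly the set of x with a∧b ≤ x ≤ a∨b.
assert all((a & b) | x == x and (a | b) & x == x for x in interval(a, b))
```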
IfFhas more than three convex sets in it, the result follows by induction on the number of sets, for one may replace an arbitrary pair of sets inFby their intersection, using the result for triples of sets to show that the replaced family is still pairwise intersecting. A particularly important family of convex sets in a median graph, playing a role similar to that ofhalfspacesin Euclidean space, are the sets defined for each edgeuvof the graph. In words,Wuvconsists of the vertices closer touthan tov, or equivalently the verticeswsuch that some shortest path fromvtowgoes throughu. To show thatWuvis convex, letw1w2...wkbe an arbitrary shortest path that starts and ends withinWuv; thenw2must also lie withinWuv, for otherwise the two pointsm1=m(u,w1,wk) andm2=m(m1,w2...wk) could be shown (by considering the possible distances between the vertices) to be distinct medians ofu,w1, andwk, contradicting the definition of a median graph which requires medians to be unique. Thus, each successive vertex on a shortest path between two vertices ofWuvalso lies withinWuv, soWuvcontains all shortest paths between its nodes, one of the definitions of convexity. The Helly property for the setsWuvplays a key role in the characterization of median graphs as the solution of 2-satisfiability instances, below. Median graphs have a close connection to the solution sets of2-satisfiabilityproblems that can be used both to characterize these graphs and to relate them to adjacency-preserving maps of hypercubes.[17] A 2-satisfiability instance consists of a collection ofBoolean variablesand a collection ofclauses,constraintson certain pairs of variables requiring those two variables to avoid certain combinations of values. Usually such problems are expressed inconjunctive normal form, in which each clause is expressed as adisjunctionand the whole set of constraints is expressed as aconjunctionof clauses, such as A solution to such an instance is an assignment oftruth valuesto the variables that satisfies all the clauses, or equivalently that causes the conjunctive normal form expression for the instance to become true when the variable values are substituted into it. The family of all solutions has a natural structure as a median algebra, where the median of three solutions is formed by choosing each truth value to be themajority functionof the values in the three solutions; it is straightforward to verify that this median solution cannot violate any of the clauses. Thus, these solutions form a median graph, in which the neighbor of each solution is formed by negating a set of variables that are all constrained to be equal or unequal to each other. Conversely, every median graphGmay be represented in this way as the solution set to a 2-satisfiability instance. To find such a representation, create a 2-satisfiability instance in which each variable describes the orientation of one of the edges in the graph (an assignment of a direction to the edge causing the graph to becomedirectedrather than undirected) and each constraint allows two edges to share a pair of orientations only when there exists a vertexvsuch that both orientations lie along shortest paths from other vertices tov. Each vertexvofGcorresponds to a solution to this 2-satisfiability instance in which all edges are directed towardsv. Each solution to the instance must come from some vertexvin this way, wherevis the common intersection of the setsWuwfor edges directed fromwtou; this common intersection exists due to the Helly property of the setsWuw. 
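The halfspace-like sets Wuv introduced above can be computed directly from breadth-first-search distances: Wuv is the set of vertices strictly closer to u than to v. The sketch below does this for a small adjacency-list graph (a plain-Python illustration, not a specific library's API); on a 4-cycle, the smallest non-trivial median graph, each edge splits the vertices into two opposite halves.

```python
from collections import deque

def bfs_distances(adj, source):
    """Unweighted shortest-path distances from source in an adjacency-list graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return dist

def halfspace(adj, u, v):
    """W_uv: the vertices strictly closer to u than to v (u and v adjacent)."""
    du, dv = bfs_distances(adj, u), bfs_distances(adj, v)
    return {w for w in adj if du[w] < dv[w]}

# A 4-cycle: 0 - 1 - 2 - 3 - 0.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(halfspace(adj, 0, 1))  # {0, 3}
print(halfspace(adj, 1, 0))  # {1, 2}
```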
Therefore, the solutions to this 2-satisfiability instance correspond one-for-one with the vertices ofG. Aretractionof a graphGis an adjacency-preserving map fromGto one of its subgraphs.[18]More precisely, it isgraph homomorphismφ fromGto itself such that φ(v) =vfor each vertexvin the subgraph φ(G). The image of the retraction is called aretractofG. Retractions are examples ofmetric maps: the distance between φ(v) and φ(w), for everyvandw, is at most equal to the distance betweenvandw, and is equal whenevervandwboth belong to φ(G). Therefore, a retract must be anisometric subgraphofG: distances in the retract equal those inG. IfGis a median graph, anda,b, andcare an arbitrary three vertices of a retract φ(G), then φ(m(a,b,c)) must be a median ofa,b, andc, and so must equalm(a,b,c). Therefore, φ(G) contains medians of all triples of its vertices, and must also be a median graph. In other words, the family of median graphs isclosedunder the retraction operation.[19] Ahypercube graph, in which the vertices correspond to all possiblek-bitbitvectorsand in which two vertices are adjacent when the corresponding bitvectors differ in only a single bit, is a special case of ak-dimensional grid graph and is therefore a median graph. The median of three bitvectorsa,b, andcmay be calculated by computing, in each bit position, themajority functionof the bits ofa,b, andc. Since median graphs are closed under retraction, and include the hypercubes, every retract of a hypercube is a median graph. Conversely, every median graph must be the retract of a hypercube.[20]This may be seen from the connection, described above, between median graphs and 2-satisfiability: letGbe the graph of solutions to a 2-satisfiability instance; without loss of generality this instance can be formulated in such a way that no two variables are always equal or always unequal in every solution. Then the space of all truth assignments to the variables of this instance forms a hypercube. For each clause, formed as the disjunction of two variables or their complements, in the 2-satisfiability instance, one can form a retraction of the hypercube in which truth assignments violating this clause are mapped to truth assignments in which both variables satisfy the clause, without changing the other variables in the truth assignment. The composition of the retractions formed in this way for each of the clauses gives a retraction of the hypercube onto the solution space of the instance, and therefore gives a representation ofGas the retract of a hypercube. In particular, median graphs are isometric subgraphs of hypercubes, and are thereforepartial cubes. However, not all partial cubes are median graphs; for instance, a six-vertexcycle graphis a partial cube but is not a median graph. AsImrich & Klavžar (2000)describe, an isometric embedding of a median graph into a hypercube may be constructed in time O(mlogn), wherenandmare the numbers of vertices and edges of the graph respectively.[21] The problems of testing whether a graph is a median graph, and whether a graph istriangle-free, both had been well studied whenImrich, Klavžar & Mulder (1999)observed that, in some sense, they are computationally equivalent.[22]Therefore, the best known time bound for testing whether a graph is triangle-free, O(m1.41),[23]applies as well to testing whether a graph is a median graph, and any improvement in median graph testing algorithms would also lead to an improvement in algorithms for detecting triangles in graphs. 
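The correspondence with 2-satisfiability and hypercubes gives a very concrete way to compute medians: represent each solution (equivalently, each hypercube vertex) as a tuple of truth values and take the coordinatewise majority. The small sketch below verifies, by brute force over a toy 2-CNF formula chosen purely for illustration, that the majority of any three solutions is again a solution.

```python
from itertools import product, combinations

# A toy 2-CNF over variables x0..x3: each clause is a pair of literals,
# a literal being (variable index, required truth value).
clauses = [((0, True), (1, True)),
           ((1, False), (2, True)),
           ((2, True), (3, False)),
           ((0, True), (3, True))]

def satisfies(assignment):
    return all(assignment[i] == v or assignment[j] == w
               for (i, v), (j, w) in clauses)

solutions = [a for a in product((False, True), repeat=4) if satisfies(a)]

def majority_median(a, b, c):
    """Coordinatewise majority: the median in the solution-set median graph."""
    return tuple(sum(bits) >= 2 for bits in zip(a, b, c))

# The median of every triple of solutions is itself a solution.
for a, b, c in combinations(solutions, 3):
    assert majority_median(a, b, c) in solutions
print(f"{len(solutions)} solutions; all medians of triples are solutions.")
```

The assertion never fails for any 2-CNF: if the majority falsified some clause, each of its two literals would be false in at least two of the three assignments, so some assignment would falsify both, contradicting that it is a solution.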
In one direction, suppose one is given as input a graphG, and must test whetherGis triangle-free. FromG, construct a new graphHhaving as vertices each set of zero, one, or two adjacent vertices ofG. Two such sets are adjacent inHwhen they differ by exactly one vertex. An equivalent description ofHis that it is formed by splitting each edge ofGinto a path of two edges, and adding a new vertex connected to all the original vertices ofG. This graphHis by construction a partial cube, but it is a median graph only whenGis triangle-free: ifa,b, andcform a triangle inG, then {a,b}, {a,c}, and {b,c} have no median inH, for such a median would have to correspond to the set {a,b,c}, but sets of three or more vertices ofGdo not form vertices inH. Therefore,Gis triangle-free if and only ifHis a median graph. In the case thatGis triangle-free,His itssimplex graph. An algorithm to test efficiently whetherHis a median graph could by this construction also be used to test whetherGis triangle-free. This transformation preserves the computational complexity of the problem, for the size ofHis proportional to that ofG. The reduction in the other direction, from triangle detection to median graph testing, is more involved and depends on the previous median graph recognition algorithm ofHagauer, Imrich & Klavžar (1999), which tests several necessary conditions for median graphs in near-linear time. The key new step involves using abreadth first searchto partition the graph's vertices into levels according to their distances from some arbitrarily chosen root vertex, forming a graph from each level in which two vertices are adjacent if they share a common neighbor in the previous level, and searching for triangles in these graphs. The median of any such triangle must be a common neighbor of the three triangle vertices; if this common neighbor does not exist, the graph is not a median graph. If all triangles found in this way have medians, and the previous algorithm finds that the graph satisfies all the other conditions for being a median graph, then it must actually be a median graph. This algorithm requires, not just the ability to test whether a triangle exists, but a list of all triangles in the level graph. In arbitrary graphs, listing all triangles sometimes requires Ω(m3/2) time, as some graphs have that many triangles, however Hagauer et al. show that the number of triangles arising in the level graphs of their reduction is near-linear, allowing the Alon et al. fast matrix multiplication based technique for finding triangles to be used. Phylogenyis the inference ofevolutionary treesfrom observed characteristics ofspecies; such a tree must place the species at distinct vertices, and may have additionallatent vertices, but the latent vertices are required to have three or more incident edges and must also be labeled with characteristics. A characteristic isbinarywhen it has only two possible values, and a set of species and their characteristics exhibitperfect phylogenywhen there exists an evolutionary tree in which the vertices (species and latent vertices) labeled with any particular characteristic value form a contiguous subtree. If a tree with perfect phylogeny is not possible, it is often desired to find one exhibitingmaximum parsimony, or equivalently, minimizing the number of times the endpoints of a tree edge have different values for one of the characteristics, summed over all edges and all characteristics. 
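The reduction just described, from triangle detection to median-graph recognition, is easy to carry out mechanically: take as vertices of H the subsets of V(G) with at most two elements that induce cliques (equivalently, subdivide every edge of G and add one apex vertex joined to all original vertices). The sketch below builds H this way for a small example graph; it illustrates the construction only and is not a median-graph recognition algorithm.

```python
from itertools import combinations

def reduction_graph(vertices, edges):
    """Build H from G: vertices of H are the cliques of G of size <= 2
    (the empty set, the singletons, and the edges); two such sets are
    adjacent in H when they differ by exactly one vertex."""
    h_vertices = [frozenset()] \
        + [frozenset({v}) for v in vertices] \
        + [frozenset(e) for e in edges]
    h_edges = [(s, t) for s, t in combinations(h_vertices, 2)
               if len(s ^ t) == 1]
    return h_vertices, h_edges

# G is a 4-cycle, which is triangle-free, so H should be a median graph.
vertices = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
h_vertices, h_edges = reduction_graph(vertices, edges)
print(len(h_vertices), "vertices and", len(h_edges), "edges in H")
```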
Buneman (1971)described a method for inferring perfect phylogenies for binary characteristics, when they exist. His method generalizes naturally to the construction of a median graph for any set of species and binary characteristics, which has been called themedian networkorBuneman graph[24]and is a type ofphylogenetic network. Every maximum parsimony evolutionary tree embeds into the Buneman graph, in the sense that tree edges follow paths in the graph and the number of characteristic value changes on the tree edge is the same as the number in the corresponding path. The Buneman graph will be a tree if and only if a perfect phylogeny exists; this happens when there are no two incompatible characteristics for which all four combinations of characteristic values are observed. To form the Buneman graph for a set of species and characteristics, first, eliminate redundant species that are indistinguishable from some other species and redundant characteristics that are always the same as some other characteristic. Then, form a latent vertex for every combination of characteristic values such that every two of the values exist in some known species. In the example shown, there are small brown tailless mice, small silver tailless mice, small brown tailed mice, large brown tailed mice, and large silver tailed mice; the Buneman graph method would form a latent vertex corresponding to an unknown species of small silver tailed mice, because every pairwise combination (small and silver, small and tailed, and silver and tailed) is observed in some other known species. However, the method would not infer the existence of large brown tailless mice, because no mice are known to have both the large and tailless traits. Once the latent vertices are determined, form an edge between every pair of species or latent vertices that differ in a single characteristic. One can equivalently describe a collection of binary characteristics as asplit system, afamily of setshaving the property that thecomplement setof each set in the family is also in the family. This split system has a set for each characteristic value, consisting of the species that have that value. When the latent vertices are included, the resulting split system has theHelly property: every pairwise intersecting subfamily has a common intersection. In some sense median graphs are characterized as coming from Helly split systems: the pairs (Wuv,Wvu) defined for each edgeuvof a median graph form a Helly split system, so if one applies the Buneman graph construction to this system no latent vertices will be needed and the result will be the same as the starting graph.[25] Bandelt et al. (1995)andBandelt, Macaulay & Richards (2000)describe techniques for simplified hand calculation of the Buneman graph, and use this construction to visualize human genetic relationships.
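A rough sketch of the Buneman construction summarised above, using the mouse example from the text: species are rows of binary characteristics, latent vertices are generated for every combination of values in which each pair of values co-occurs in some known species, and edges join vertices that differ in a single characteristic. This is an illustrative reconstruction of the procedure as described here, not code from the cited papers.

```python
from itertools import combinations, product

# Characteristics: size (0=small, 1=large), colour (0=brown, 1=silver),
# tail (0=tailless, 1=tailed).  The five known species from the example:
species = {
    "small brown tailless": (0, 0, 0),
    "small silver tailless": (0, 1, 0),
    "small brown tailed": (0, 0, 1),
    "large brown tailed": (1, 0, 1),
    "large silver tailed": (1, 1, 1),
}

observed = set(species.values())
k = 3  # number of binary characteristics

def pair_supported(v):
    """Keep v only if every pair of its values is seen together in some species."""
    return all(any(s[i] == v[i] and s[j] == v[j] for s in observed)
               for i, j in combinations(range(k), 2))

vertices = [v for v in product((0, 1), repeat=k) if pair_supported(v)]
latent = [v for v in vertices if v not in observed]
edges = [(u, v) for u, v in combinations(vertices, 2)
         if sum(a != b for a, b in zip(u, v)) == 1]

print("latent vertices:", latent)   # [(0, 1, 1)]: small silver tailed mice
print("Buneman graph edges:", len(edges))
```

As in the text, the only latent vertex is the unknown small silver tailed mouse, while no large tailless vertex is created because that pair of values is never observed together.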
https://en.wikipedia.org/wiki/Median_graph
An algorithm is fundamentally a set of rules or defined procedures that is typically designed and used to solve a specific problem or a broad set of problems. Broadly, algorithms define process(es), sets of rules, or methodologies that are to be followed in calculations, data processing, data mining, pattern recognition, automated reasoning or other problem-solving operations. With the increasing automation of services, more and more decisions are being made by algorithms. Some general examples are: risk assessments, anticipatory policing, and pattern recognition technology.[1] The following is a list of well-known algorithms along with one-line descriptions for each.
https://en.wikipedia.org/wiki/List_of_algorithms#Parsing
Adatabase catalogof adatabaseinstance consists ofmetadatain which definitions ofdatabase objectssuch asbase tables,views(virtualtables),synonyms,value ranges,indexes,users, and user groups are stored.[1][2]It is anarchitectureproduct that documents the database's content anddata quality.[3] TheSQLstandard specifies a uniform means to access the catalog, called theINFORMATION_SCHEMA, but not alldatabasesfollow this, even if they implement other aspects of the SQL standard. For an example of database-specificmetadataaccess methods, seeOracle metadata. Thisdatabase-related article is astub. You can help Wikipedia byexpanding it.
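For databases that do implement the standard catalog views, the metadata can be read with ordinary SQL against INFORMATION_SCHEMA. The snippet below is a generic sketch using the Python DB-API; the connection object and driver are placeholders, and the exact set of catalog views exposed varies by product.

```python
# Generic DB-API sketch: list base tables and views from the standard catalog.
# The connection is a placeholder for whatever driver the database provides
# (for example psycopg2.connect(...) for PostgreSQL); INFORMATION_SCHEMA
# support differs between products.

def list_catalog_objects(connection):
    cursor = connection.cursor()
    try:
        cursor.execute(
            "SELECT table_schema, table_name, table_type "
            "FROM information_schema.tables "
            "ORDER BY table_schema, table_name"
        )
        for schema, name, table_type in cursor.fetchall():
            print(f"{schema}.{name}: {table_type}")
    finally:
        cursor.close()

# Usage with a hypothetical driver and connection string:
# import psycopg2
# list_catalog_objects(psycopg2.connect("dbname=example"))
```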
https://en.wikipedia.org/wiki/Database_catalog
High Capacity Color Barcode(HCCB) is a technology developed byMicrosoftfor encoding data in a2D "barcode"using clusters of colored triangles instead of the square pixels conventionally associated with 2D barcodes orQR codes.[1]Data density is increased by using a palette of 4 or 8 colors for the triangles, although HCCB also permits the use of black and white when necessary. It has been licensed by the ISAN International Agency for use in itsInternational Standard Audiovisual Numberstandard,[2]and serves as the basis for the Microsoft Tagmobile taggingapplication. The technology was created byGavin Jancke, an engineering director atMicrosoft Research. Quoted by BBC News in 2007, he said that HCCB was not intended to replace conventionalbarcodes. "'It's more of a 'partner' barcode", he said. "TheUPCbarcodes will always be there. Ours is more of a niche barcode where you want to put a lot of information in a small space."[3] HCCB uses a grid of colored triangles to encode data. Depending on the target use, the grid size (total number of symbols), symbol density (the printed size of the triangles), and symbol count (number of colors used) can be varied. HCCB can use an eight-, four-, or two-color (black-and-white) palette. Microsoft claims that laboratory tests using standard off-the-shelf printers and scanners have yielded readable eight-color HCCBs equivalent to approximately 3,500 characters per square inch.[1][3] Microsoft Tagis a discontinued but still available implementation of High Capacity Color Barcode (HCCB) using 4 colors in a 5 x 10 grid. Additionally, the code works in monochrome.[4]The print size can be varied to allow reasonable reading by a mobile camera phone; for example, a Tag on a real estate sign might be printed large enough to be read from a car driving by, whereas a Tag in a magazine could be smaller because the reader would likely be nearer. A Microsoft Tag is essentially a machine readableweb link, analogous to aURL shorteninglink: when read, the Tag application sends the HCCB data to a Microsoft server, which then returns the publisher's intended URL. The Tag reader then directs the user'smobile browserto the appropriate website. Because of this redirection, Microsoft is also able to track users and provide Taganalyticsto publishers. When the platform was released, creation of tags for both commercial and noncommercial use was free as were the associated analytics.[5]In 2013, the process for creating new accounts was transferred to Scanbuy, which said that "A free plan will also be offered from ScanLife with the same basic features", although additional features may be available at extra cost.[6] Users can download the free Microsoft Tag reader application to their Internet-capable mobile device with camera, launch the reader and read a tag using their phone’s camera. Depending on the scenario, this triggers the intended content to be displayed. SomeGPS-equipped phones can, at the user's option, send coordinate data along with the HCCB data, allowing location-specific information to be returned (e.g. for a restaurant advertisement, a navigational map to the nearest location could be shown).[7] The Microsoft Tag application gives people the ability to use a mobile phone's on-board camera to take a picture of a tag, and be directed to information in any form, such as text,vCard,URL, Online Photos, Online Video or contact details for the publisher. 
Two-dimensional tags can be used to transform traditional marketing media (for example, print advertising, billboards, packaging, and merchandising in stores or on LCDs) into gateways for accessing information online. Tags can be applied as gateways from any type of media to an internet site or online media. The Microsoft Tag reader application is a free download for an Internet-capable mobile device with a camera. The Microsoft Tag reader is compatible with Internet-capable mobile devices, including many based on the Windows Phone 7, Windows Mobile, BlackBerry, Java, Android, Symbian S60, iPhone and Java ME platforms.[8] On August 19, 2013, Microsoft sent out an email notice that the Microsoft Tag service would be terminated two years later, on August 19, 2015. Scanbuy, a company founded in 2000 by Olivier Attia, was selected to support Microsoft Tag technology on the ScanLife platform beginning September 18, 2013.
https://en.wikipedia.org/wiki/High_Capacity_Color_Barcode
Inlinear algebra, aToeplitz matrixordiagonal-constant matrix, named afterOtto Toeplitz, is amatrixin which each descending diagonal from left to right is constant. For instance, the following matrix is a Toeplitz matrix: Anyn×n{\displaystyle n\times n}matrixA{\displaystyle A}of the form is aToeplitz matrix. If thei,j{\displaystyle i,j}element ofA{\displaystyle A}is denotedAi,j{\displaystyle A_{i,j}}then we have A Toeplitz matrix is not necessarilysquare. A matrix equation of the form is called aToeplitz systemifA{\displaystyle A}is a Toeplitz matrix. IfA{\displaystyle A}is ann×n{\displaystyle n\times n}Toeplitz matrix, then the system has at most only2n−1{\displaystyle 2n-1}unique values, rather thann2{\displaystyle n^{2}}. We might therefore expect that the solution of a Toeplitz system would be easier, and indeed that is the case. Toeplitz systems can be solved by algorithms such as theSchur algorithmor theLevinson algorithminO(n2){\displaystyle O(n^{2})}time.[1][2]Variants of the latter have been shown to be weakly stable (i.e. they exhibitnumerical stabilityforwell-conditionedlinear systems).[3]The algorithms can also be used to find thedeterminantof a Toeplitz matrix inO(n2){\displaystyle O(n^{2})}time.[4] A Toeplitz matrix can also be decomposed (i.e. factored) inO(n2){\displaystyle O(n^{2})}time.[5]The Bareiss algorithm for anLU decompositionis stable.[6]An LU decomposition gives a quick method for solving a Toeplitz system, and also for computing the determinant. Theconvolutionoperation can be constructed as a matrix multiplication, where one of the inputs is converted into a Toeplitz matrix. For example, the convolution ofh{\displaystyle h}andx{\displaystyle x}can be formulated as: This approach can be extended to computeautocorrelation,cross-correlation,moving averageetc. A bi-infinite Toeplitz matrix (i.e. entries indexed byZ×Z{\displaystyle \mathbb {Z} \times \mathbb {Z} })A{\displaystyle A}induces alinear operatoronℓ2{\displaystyle \ell ^{2}}. The induced operator isboundedif and only if the coefficients of the Toeplitz matrixA{\displaystyle A}are the Fourier coefficients of someessentially boundedfunctionf{\displaystyle f}. In such cases,f{\displaystyle f}is called thesymbolof the Toeplitz matrixA{\displaystyle A}, and the spectral norm of the Toeplitz matrixA{\displaystyle A}coincides with theL∞{\displaystyle L^{\infty }}norm of its symbol. Theproofcan be found as Theorem 1.1 of Böttcher and Grudsky.[8]
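As a hedged sketch of the points above, the snippet below builds a Toeplitz matrix with SciPy, solves a Toeplitz system with the Levinson-recursion-based solver, and reproduces a convolution as a Toeplitz matrix–vector product. The particular vectors are arbitrary example data, not taken from the article.

```python
import numpy as np
from scipy.linalg import toeplitz, solve_toeplitz

# A Toeplitz matrix is determined by its first column c and first row r.
c = np.array([4.0, 1.0, 0.5, 0.25])   # first column
r = c.copy()                          # first row (a symmetric, well-conditioned example)
A = toeplitz(c, r)

# Solving A x = b in O(n^2) time with the Levinson-style Toeplitz solver.
b = np.array([1.0, 2.0, 3.0, 4.0])
x = solve_toeplitz((c, r), b)
assert np.allclose(A @ x, b)

# Convolution as multiplication by a Toeplitz matrix built from the filter h:
h = np.array([1.0, 2.0, 3.0])        # "filter"
v = np.array([4.0, 5.0, 6.0, 7.0])   # "signal"
n = len(v)
T = toeplitz(np.r_[h, np.zeros(n - 1)], np.r_[h[0], np.zeros(n - 1)])
assert np.allclose(T @ v, np.convolve(h, v))
```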
https://en.wikipedia.org/wiki/Toeplitz_matrix
Informal semantics,homogeneityis the phenomenon wherepluralexpressions that seem to mean "all" negate to "none" rather than "not all". For example, theEnglishsentence "Robin read the books" requires Robin to have read all of the books, while "Robin didn't read the books" requires her to have read none of them. Neither sentence is true if she read exactly half of the books. Homogeneity effects have been observed in a variety of languages includingJapanese,Russian, andHungarian. Semanticists have proposed a variety of explanations for homogeneity, often involving a combination ofpresupposition,plural quantification, andtrivalent logics. Because analogous effects have been observed withconditionalsand othermodalexpressions, some semanticists have proposed that these phenomena involve pluralities ofpossible worlds. Homogeneous interpretations arise when apluralexpression seems to mean "all" whenassertedbut "none" whennegated. For example, theEnglishsentence in (1a) is typically interpreted to mean that Robin read all the books, while (1b) is interpreted to mean that she read none of them. This is a puzzle since (1b) would merely mean that some books went unread if "the books" expresseduniversal quantification, as it appears to do in the positive sentence.[1][2] Homogeneous readings are also possible with other expressions includingconjunctionsandbare plurals. For instance, (2a) means that Robin read both books while (2b) means that she read neither; example (3a) means that in general Robin likes books while (3b) means that in general she does not.[1] Homogeneity effects have been studied in a variety of languages including English,Russian,JapaneseandHungarian. For instance, the Hungarian example in (4) behaves analogously to the English one in (1b).[3] Homogeneity can be suspended in certain circumstances. For instance, the definite plurals in (1) lose their homogeneous interpretation when an overt universal quantifier is inserted, as shown in (5).[1] Additionally, the conjunctions in (3) lose their homogeneous interpretation when theconnectivereceivesfocus.[3] Homogeneity is important to semantic theory in part because it results in apparenttruth valuegaps. For example, neither of the sentences in (1) are assertable if Robin read exactly half of the relevant books. As a result, some linguists have attempted to provide unified analyses with other gappy phenomena such aspresupposition,scalar implicature,free choice inferences, andvagueness.[1]Homogeneity effects have been argued to appear withsemantic typesother than individuals. For instance, negated conditionals and modals have been argued to show similar effects, potentially suggesting that they refer to pluralities ofpossible worlds.[1][4]
https://en.wikipedia.org/wiki/Homogeneity_(semantics)
Intelecommunication, aBerger codeis a unidirectionalerror detecting code, named after its inventor, J. M. Berger. Berger codes can detect all unidirectional errors. Unidirectional errors are errors that only flip ones into zeroes or only zeroes into ones, such as in asymmetric channels. Thecheck bitsof Berger codes are computed by counting all the zeroes in the information word, and expressing that number in natural binary. If the information word consists ofn{\displaystyle n}bits, then the Berger code needsk=⌈log2⁡(n+1)⌉{\displaystyle k=\lceil \log _{2}(n+1)\rceil }"check bits", giving a Berger code of length k+n. (In other words, thek{\displaystyle k}check bits are enough to check up ton=2k−1{\displaystyle n=2^{k}-1}information bits). Berger codes can detect any number of one-to-zero bit-flip errors, as long as no zero-to-one errors occurred in the same code word. Similarly, Berger codes can detect any number of zero-to-one bit-flip errors, as long as no one-to-zero bit-flip errors occur in the same code word. Berger codes cannot correct any error. Like all unidirectional error detecting codes, Berger codes can also be used indelay-insensitivecircuits. As stated above, Berger codes detectanynumber of unidirectional errors. For agiven code word, if the only errors that have occurred are that some (or all) bits with value 1 have changed to value 0, then this transformation will be detected by the Berger code implementation. To understand why, consider that there are three such cases: For case 1, the number of 0-valued bits in the information section will, by definition of the error, increase. Therefore, our Berger check code will be lower than the actual 0-bit-count for the data, and so the check will fail. For case 2, the number of 0-valued bits in the information section have stayed the same, but the value of the check data has changed. Since we know some 1s turned into 0s, but no 0s have turned into 1s (that's how we defined the error model in this case), the encoded binary value of the check data will go down (e.g., from binary 1011 to 1010, or to 1001, or 0011). Since the information data has stayed the same, it has the same number of zeros it did before, and that will no longer match the mutated check value. For case 3, where bits have changed in both the information and the check sections, notice that the number of zeros in the information section hasgone up, as described for case 1, and the binary value stored in the check portion hasgone down, as described for case 2. Therefore, there is no chance that the two will end up mutating in such a way as to become a different valid code word. A similar analysis can be performed, and is perfectly valid, in the case where the only errors that occur are that some 0-valued bits change to 1. Therefore, if all the errors that occur on a specific codeword all occur in the same direction, these errors will be detected. For the next code word being transmitted (for instance), the errors can go in the opposite direction, and they will still be detected, as long as they all go in the same direction as each other. Unidirectional errors are common in certain situations. For instance, inflash memory, bits can more easily be programmed to a 0 than can be reset to a 1.
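A minimal sketch of the encoding and checking rules described above: count the zeros in the information word and append that count, in binary, as the check part. The helper names are my own, not from any standard library.

```python
from math import ceil, log2

def berger_encode(info_bits: str) -> str:
    """Append the zero-count of the information word, in binary, as check bits."""
    n = len(info_bits)
    k = ceil(log2(n + 1))               # number of check bits
    zeros = info_bits.count("0")
    return info_bits + format(zeros, f"0{k}b")

def berger_check(codeword: str, n: int) -> bool:
    """Return True if the check part equals the zero-count of the information part."""
    info, check = codeword[:n], codeword[n:]
    return info.count("0") == int(check, 2)

word = berger_encode("1101001")         # 7 information bits -> 3 check bits
assert berger_check(word, 7)

# A unidirectional 1->0 error in the information part increases the zero count,
# while the stored check value can only stay equal or decrease, so it is detected.
corrupted = "1100001" + word[7:]        # one 1 flipped to 0
assert not berger_check(corrupted, 7)
```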
https://en.wikipedia.org/wiki/Berger_code
User modelingis the subdivision ofhuman–computer interactionwhich describes the process of building up and modifying a conceptual understanding of the user. The main goal of user modeling is customization andadaptation of systemsto the user's specific needs. The system needs to "say the 'right' thing at the 'right' time in the 'right' way".[1]To do so it needs an internal representation of the user. Another common purpose is modeling specific kinds of users, including modeling of their skills anddeclarative knowledge, for use in automatic software-tests.[2]User-models can thus serve as a cheaper alternative touser testingbut should not replaceuser testing. A user model is the collection and categorization ofpersonal dataassociated with a specific user. A user model is a (data) structure that is used to capture certain characteristics about an individual user, and auser profileis the actual representation in a given user model. The process of obtaining the user profile is called user modeling.[3]Therefore, it is the basis for any adaptive changes to the system's behavior. Which data is included in the model depends on the purpose of the application. It can include personal information such as users' names and ages, their interests, their skills and knowledge, their goals and plans, their preferences and their dislikes or data about their behavior and their interactions with the system. There are different design patterns for user models, though often a mixture of them is used.[2][4] Information about users can begatheredin several ways. There are three main methods: Though the first method is a good way to quickly collect main data it lacks the ability to automatically adapt to shifts in users' interests. It depends on the users' readiness to give information and it is unlikely that they are going to edit their answers once the registration process is finished. Therefore, there is a high likelihood that the user models are not up to date. However, this first method allows the users to have full control over the collected data about them. It is their decision which information they are willing to provide. This possibility is missing in the second method. Adaptive changes in a system that learns users' preferences and needs only by interpreting their behavior might appear a bit opaque to the users, because they cannot fully understand and reconstruct why the system behaves the way it does.[5]Moreover, the system is forced to collect a certain amount of data before it is able to predict the users' needs with the required accuracy. Therefore, it takes a certain learning time before a user can benefit from adaptive changes. However, afterwards these automatically adjusted user models allow a quite accurate adaptivity of the system. The hybrid approach tries to combine the advantages of both methods. Through collecting data by directly asking its users it gathers a first stock of information which can be used for adaptive changes. By learning from the users' interactions it can adjust the user models and reach more accuracy. Yet, the designer of the system has to decide, which of these information should have which amount of influence and what to do with learned data that contradicts some of the information given by a user. Once a system has gathered information about a user it can evaluate that data by preset analytical algorithm and then start to adapt to the user's needs. These adaptations may concern every aspect of the system's behavior and depend on the system's purpose. 
Information and functions can be presented according to the user's interests, knowledge or goals by displaying only relevant features, hiding information the user does not need, making proposals what to do next and so on. One has to distinguish betweenadaptive and adaptable systems.[1]In an adaptable system the user can manually change the system's appearance, behavior or functionality by actively selecting the corresponding options. Afterwards the system will stick to these choices. In anadaptive systema dynamic adaption to the user is automatically performed by the system itself, based on the built user model. Thus, an adaptive system needs ways to interpret information about the user in order to make these adaptations. One way to accomplish this task is implementing rule-based filtering. In this case a set of IF... THEN... rules is established that covers theknowledge baseof the system.[2]The IF-conditions can check for specific user-information and if they match the THEN-branch is performed which is responsible for the adaptive changes. Another approach is based oncollaborative filtering.[2][5]In this case information about a user is compared to that of other users of the same systems. Thus, if characteristics of the current user match those of another, the system can make assumptions about the current user by presuming that he or she is likely to have similar characteristics in areas where the model of the current user is lacking data. Based on these assumption the system then can perform adaptive changes. A certain number of representation formats and standards are available for representing the users in computer systems,[8]such as:
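As a hedged illustration of the rule-based filtering approach sketched above, the toy example below stores one user's profile as a dictionary and applies IF–THEN rules to decide on adaptive changes. The profile fields and rule contents are invented for the example and do not come from any particular system.

```python
# Toy user profile: the concrete representation of the user model for one user.
profile = {"expertise": "novice", "interests": ["photography"], "age": 34}

# IF ... THEN ... rules: (condition over the profile, adaptation to perform).
rules = [
    (lambda p: p["expertise"] == "novice",
     "show step-by-step tutorials and hide advanced settings"),
    (lambda p: "photography" in p["interests"],
     "promote image-editing features on the start page"),
    (lambda p: p["age"] >= 65,
     "increase the default font size"),
]

# Fire every rule whose IF-condition matches the profile.
adaptations = [action for condition, action in rules if condition(profile)]
for a in adaptations:
    print("adapt:", a)
```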
https://en.wikipedia.org/wiki/User_modeling
Tile, Inc.(stylized astile) is an Americanconsumer electronicscompany which producestracking devicesthat users can attach to their belongings such as keys and backpacks. A companionmobile appforAndroidandiOSallows users to track the devices usingBluetooth 4.0in order to locate lost items or to view their last detected location.[1]The first devices were delivered in 2013. In September 2015, Tile launched a newer line of hardware that includes functionality to assist users in locating smartphones, as well as other feature upgrades.[2][3]In August 2017, two new versions of the Tile were launched, the Tile Sport and Tile Style.[4]As of 2019[update], Tile's hardware offerings consist of the Pro, Mate, Slim, and Sticker.[5] Since September 2018, formerGoProexecutive C. J. Prober has been theCEOof Tile after he replaced co-founder Mike Farley.[6]In November 2021,Life360agreed to acquire Tile in a $205 million acquisition, and is expected to integrate the two services.[7] Tile manufactures hardware devices, "Tiles", that can be attached to items such as keychains. By attaching the device, a user can later use the Tile app to help locate the item if it is lost.[8]The Tile application usesBluetooth Low Energy4.0 radio technology to locate Tiles within a 100 foot (30 meters) range, depending on the model.[9]Each Tile comes with a built-in speaker, and the user is able to trigger the device to play a sound to aid in the location of items at close range. The second generation of Tile devices produce sound at a volume of 90 decibels,[10]which is three times as loud as the previous generation of products.[11]The second generation also added a "Find My Phone" feature, which can be used to produce a sound on the user's paired smartphone when the user presses a button on the Tile device.[10] The Tile app can locate Tiles beyond the 100 foot (30 m) Bluetooth range by using "crowd GPS". If a Tile device is reported as lost and comes within range of any smartphone running the Tile app, the nearby user's app will send the item's owner an anonymous update of the lost item's location.[9][12][13][14]Users can also share their Tiles with others, which allows both participants to locate shared Tiles.[15] Tile's first generation products have built-in batteries with a battery life of about one year. Owners of these devices were automatically notified when the batteries were nearing depletion and were eligible to receive a discount on a replacement product.[16]Users could then return Tiles with depleted batteries in order for them to be recycled.[17][18]In October 2018, the Tile Mate and Tile Pro were redesigned to have user-replaceable batteries.[19]These models have lower water-resistance ratings than models that require factory battery replacement.[20] Tile's developers used Selfstarter, anopen sourcewebsite platform, tocrowdfundthe project through pre-orders.[21] As of July 7, 2013, Tile had raised overUS$2.6millionby selling preordered Tiles directly to 50,000 backers through their website.[22] In 2014, Tile raised additionalSeries A fundingof US$13 million led byGGV Capitaland a further US$3 million fromKhosla Venturesin 2015.[23][24] In May 2020, Tile sought assistance from theEuropean Unionin a dispute it had withAppleregarding the provision of its services on Apple devices. It claimed that its app was not activated on Apple devices while the Find My service provided by Apple is activated automatically. 
Apple denied the allegation.[25] In September 2020, Tile joined the Coalition for App Fairness, which aims to secure better conditions for the inclusion of apps in app stores.[26] In 2024, a computer hacker acquired the credentials of a suspected former Tile employee and gained access to the company's internal tool that processes location data for law enforcement, as well as customer data such as names, addresses, emails, telephone numbers, and order information. Other compromised functions included changing the email address linked to a particular device and creating administrative users. Tile said only its customer support platform, not the service platform, was breached, and that it had disabled the credentials to prevent further unauthorized access.[27]
https://en.wikipedia.org/wiki/Tile_(company)
Scientific consensusis the generally held judgment, position, and opinion of themajorityor thesupermajorityofscientistsin aparticular fieldof study at any particular time.[1][2] Consensus is achieved throughscholarly communicationatconferences, thepublicationprocess, replication ofreproducibleresults by others, scholarlydebate,[3][4][5][6]andpeer review. A conference meant to create a consensus is termed as a consensus conference.[7][8][9]Such measures lead to a situation in which those within the discipline can often recognize such a consensus where it exists; however, communicating to outsiders that consensus has been reached can be difficult, because the "normal" debates through which science progresses may appear to outsiders as contestation.[10]On occasion, scientific institutes issue position statements intended to communicate a summary of the science from the "inside" to the "outside" of the scientific community, or consensus review articles[11]orsurveys[12]may be published. In cases where there is little controversy regarding the subject under study, establishing the consensus can be quite straightforward. Popular or political debate on subjects that are controversial within the public sphere but not necessarily controversial within the scientific community may invoke scientific consensus: note such topics asevolution,[13][14]climate change,[15]the safety ofgenetically modified organisms,[16]or the lack of a link betweenMMR vaccinations and autism.[10] Scientific consensus is related to (and sometimes used to mean)convergent evidence, that is, the concept that independent sources of evidence converge on a conclusion.[17][18] There are many philosophical and historical theories as to how scientific consensus changes over time. Because the history of scientific change is extremely complicated, and because there is a tendency to project "winners" and "losers" onto the past in relation to thecurrentscientific consensus, it is very difficult to come up with accurate and rigorous models for scientific change.[19]This is made exceedingly difficult also in part because each of the various branches of science functions in somewhat different ways with different forms of evidence and experimental approaches.[20][21] Most models of scientific change rely on new data produced by scientificexperiment.Karl Popperproposed that since no amount of experiments could everprovea scientific theory, but a single experiment coulddisproveone, science should be based onfalsification.[22]Whilst this forms a logical theory for science, it is in a sense "timeless" and does not necessarily reflect a view on how science should progress over time. Among the most influential challengers of this approach wasThomas Kuhn, who argued instead that experimentaldataalways provide some data which cannot fit completely into a theory, and that falsification alone did not result in scientific change or an undermining of scientific consensus. He proposed that scientific consensus worked in the form of "paradigms", which were interconnected theories and underlying assumptions about the nature of the theory itself which connected various researchers in a given field. Kuhn argued that only after the accumulation of many "significant" anomalies would scientific consensus enter a period of "crisis". At this point, new theories would be sought out, and eventually one paradigm would triumph over the old one – a series ofparadigm shiftsrather than a linear progression towards truth. 
Kuhn's model also emphasized more clearly the social and personal aspects of theory change, demonstrating through historical examples that scientific consensus was never truly a matter of pure logic or pure facts.[23]However, these periods of 'normal' and 'crisis' science are not mutually exclusive. Research shows that these are different modes of practice, more than different historical periods.[10] Perception of whether a scientific consensus exists on a given issue, and how strong that conception is, has been described as a "gateway belief" upon which other beliefs and then action are based.[28] In public policy debates, the assertion that there exists a consensus of scientists in a particular field is often used as an argument for the validity of a theory. Similarly arguments for alackof scientific consensus are often used to support doubt about the theory.[citation needed] For example, thescientific consensus on the causes of global warmingis thatglobal surface temperatureshave increased in recent decades and that the trend is caused primarily by human-inducedemissions of greenhouse gases.[29][30][31]Thehistorian of scienceNaomi Oreskespublished an article inSciencereporting that a survey of the abstracts of 928 science articles published between 1993 and 2003 showed none which disagreed explicitly with the notion ofanthropogenic global warming.[29]In an editorial published inThe Washington Post, Oreskes stated that those who opposed these scientific findings are amplifying the normal range of scientific uncertainty about any facts into an appearance that there is a great scientific disagreement, or a lack of scientific consensus.[32]Oreskes's findings were replicated by other methods that require no interpretation.[10] The theory ofevolution through natural selectionis also supported by an overwhelming scientific consensus; it is one of the most reliable and empirically tested theories in science.[33][34]Opponents of evolution claim that there is significant dissent on evolution within the scientific community.[35]Thewedge strategy, a plan to promoteintelligent design, depended greatly on seeding and building on public perceptions of absence of consensus on evolution.[36] The inherentuncertainty in science, where theories are neverprovenbut can only bedisproven(seefalsifiability), poses a problem for politicians, policymakers, lawyers, and business professionals. Where scientific or philosophical questions can often languish in uncertainty for decades within their disciplinary settings, policymakers are faced with the problems of making sound decisions based on the currently available data, even if it is likely not a final form of the "truth". The tricky part is discerning what is close enough to "final truth". For example, social action against smoking probably came too long after science was 'pretty consensual'.[10] Certain domains, such as the approval of certain technologies for public consumption, can have vast and far-reaching political, economic, and human effects should things run awry with the predictions of scientists. However, insofar as there is an expectation that policy in a given field reflect knowable and pertinent data and well-accepted models of the relationships between observable phenomena, there is little good alternative for policy makers than to rely on so much of what may fairly be called 'the scientific consensus' in guiding policy design and implementation, at least in circumstances where the need for policy intervention is compelling. 
While science cannot supply 'absolute truth' (or even its complement 'absolute error'), its utility is bound up with the capacity to guide policy in the direction of increased public good and away from public harm. Seen in this way, the demand that policy rely only on what is proven to be "scientific truth" would be a prescription for policy paralysis and would amount in practice to advocacy of acceptance of all of the quantified and unquantified costs and risks associated with policy inaction.[10] No part of policy formation on the basis of the ostensible scientific consensus precludes persistent review, either of the relevant scientific consensus or of the tangible results of policy. Indeed, the same reasons that drove reliance upon the consensus drive the continued evaluation of this reliance over time – and the adjustment of policy as needed.[citation needed]
https://en.wikipedia.org/wiki/Scientific_consensus
Multi-agent reinforcement learning (MARL)is a sub-field ofreinforcement learning. It focuses on studying the behavior of multiple learning agents that coexist in a shared environment.[1]Each agent is motivated by its own rewards, and does actions to advance its own interests; in some environments these interests are opposed to the interests of other agents, resulting in complexgroup dynamics. Multi-agent reinforcement learning is closely related togame theoryand especiallyrepeated games, as well asmulti-agent systems. Its study combines the pursuit of finding ideal algorithms that maximize rewards with a more sociological set of concepts. While research in single-agent reinforcement learning is concerned with finding the algorithm that gets the biggest number of points for one agent, research in multi-agent reinforcement learning evaluates and quantifies social metrics, such as cooperation,[2]reciprocity,[3]equity,[4]social influence,[5]language[6]and discrimination.[7] Similarly tosingle-agent reinforcement learning, multi-agent reinforcement learning is modeled as some form of aMarkov decision process (MDP). Fix a set of agentsI={1,...,N}{\displaystyle I=\{1,...,N\}}. We then define: In settings withperfect information, such as the games ofchessandGo, the MDP would be fully observable. In settings with imperfect information, especially in real-world applications likeself-driving cars, each agent would access an observation that only has part of the information about the current state. In the partially observable setting, the core model is the partially observablestochastic gamein the general case, and thedecentralized POMDPin the cooperative case. When multiple agents are acting in a shared environment their interests might be aligned or misaligned. MARL allows exploring all the different alignments and how they affect the agents' behavior: When two agents are playing azero-sum game, they are in pure competition with each other. Many traditional games such aschessandGofall under this category, as do two-player variants of video games likeStarCraft. Because each agent can only win at the expense of the other agent, many complexities are stripped away. There is no prospect of communication or social dilemmas, as neither agent is incentivized to take actions that benefit its opponent. TheDeep Blue[8]andAlphaGoprojects demonstrate how to optimize the performance of agents in pure competition settings. One complexity that is not stripped away in pure competition settings isautocurricula. As the agents' policy is improved usingself-play, multiple layers of learning may occur. MARL is used to explore how separate agents with identical interests can communicate and work together. Pure cooperation settings are explored in recreationalcooperative gamessuch asOvercooked,[9]as well as real-world scenarios inrobotics.[10] In pure cooperation settings all the agents get identical rewards, which means that social dilemmas do not occur. In pure cooperation settings, oftentimes there are an arbitrary number of coordination strategies, and agents converge to specific "conventions" when coordinating with each other. The notion of conventions has been studied in language[11]and also alluded to in more general multi-agent collaborative tasks.[12][13][14][15] Most real-world scenarios involving multiple agents have elements of both cooperation and competition. 
For example, when multipleself-driving carsare planning their respective paths, each of them has interests that are diverging but not exclusive: Each car is minimizing the amount of time it's taking to reach its destination, but all cars have the shared interest of avoiding atraffic collision.[17] Zero-sum settings with three or more agents often exhibit similar properties to mixed-sum settings, since each pair of agents might have a non-zero utility sum between them. Mixed-sum settings can be explored using classicmatrix gamessuch asprisoner's dilemma, more complexsequential social dilemmas, and recreational games such asAmong Us,[18]Diplomacy[19]andStarCraft II.[20][21] Mixed-sum settings can give rise to communication and social dilemmas. As ingame theory, much of the research in MARL revolves aroundsocial dilemmas, such asprisoner's dilemma,[22]chickenandstag hunt.[23] While game theory research might focus onNash equilibriaand what an ideal policy for an agent would be, MARL research focuses on how the agents would learn these ideal policies using a trial-and-error process. Thereinforcement learningalgorithms that are used to train the agents are maximizing the agent's own reward; the conflict between the needs of the agents and the needs of the group is a subject of active research.[24] Various techniques have been explored in order to induce cooperation in agents: Modifying the environment rules,[25]adding intrinsic rewards,[4]and more. Social dilemmas like prisoner's dilemma, chicken and stag hunt are "matrix games". Each agent takes only one action from a choice of two possible actions, and a simple 2x2 matrix is used to describe the reward that each agent will get, given the actions that each agent took. In humans and other living creatures, social dilemmas tend to be more complex. Agents take multiple actions over time, and the distinction between cooperating and defecting is not as clear cut as in matrix games. The concept of asequential social dilemma (SSD)was introduced in 2017[26]as an attempt to model that complexity. There is ongoing research into defining different kinds of SSDs and showing cooperative behavior in the agents that act in them.[27] An autocurriculum[28](plural: autocurricula) is a reinforcement learning concept that's salient in multi-agent experiments. As agents improve their performance, they change their environment; this change in the environment affects themselves and the other agents. The feedback loop results in several distinct phases of learning, each depending on the previous one. The stacked layers of learning are called an autocurriculum. Autocurricula are especially apparent in adversarial settings,[29]where each group of agents is racing to counter the current strategy of the opposing group. TheHide and Seek gameis an accessible example of an autocurriculum occurring in an adversarial setting. In this experiment, a team of seekers is competing against a team of hiders. Whenever one of the teams learns a new strategy, the opposing team adapts its strategy to give the best possible counter. When the hiders learn to use boxes to build a shelter, the seekers respond by learning to use a ramp to break into that shelter. The hiders respond by locking the ramps, making them unavailable for the seekers to use. The seekers then respond by "box surfing", exploiting aglitchin the game to penetrate the shelter. Each "level" of learning is an emergent phenomenon, with the previous level as its premise. 
This results in a stack of behaviors, each dependent on its predecessor. Autocurricula in reinforcement learning experiments are compared to the stages of the evolution of life on Earth and the development of human culture. A major stage in evolution happened 2–3 billion years ago, when photosynthesizing life forms started to produce massive amounts of oxygen, changing the balance of gases in the atmosphere.[30] In the next stages of evolution, oxygen-breathing life forms evolved, eventually leading up to land mammals and human beings. These later stages could only happen after the photosynthesis stage made oxygen widely available. Similarly, human culture could not have gone through the Industrial Revolution in the 18th century without the resources and insights gained from the agricultural revolution at around 10,000 BC.[31] Multi-agent reinforcement learning has been applied to a variety of use cases in science and industry. It has also been used in research into AI alignment: the relationship between the different agents in a MARL setting can be compared to the relationship between a human and an AI agent, and research efforts at the intersection of these two fields attempt to simulate possible conflicts between a human's intentions and an AI agent's actions, and then explore which variables could be changed to prevent these conflicts.[45][46] There are some inherent difficulties in multi-agent deep reinforcement learning.[47] The environment is no longer stationary, so the Markov property is violated: transitions and rewards do not depend only on the current state of an agent.
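Tying the matrix-game and learning threads above together, here is a hedged sketch of two independent, stateless Q-learners playing an iterated prisoner's dilemma. The payoff values, learning rate, and exploration schedule are arbitrary illustrative choices, not a reference implementation of any published MARL algorithm.

```python
import random

# Prisoner's dilemma payoff matrix: action 0 = cooperate, 1 = defect.
PAYOFF = {(0, 0): (3, 3), (0, 1): (0, 5), (1, 0): (5, 0), (1, 1): (1, 1)}

ALPHA, EPSILON, EPISODES = 0.1, 0.1, 5000
q = [[0.0, 0.0], [0.0, 0.0]]   # q[agent][action]; single-state (bandit-style) learners

def choose(agent):
    if random.random() < EPSILON:
        return random.randrange(2)                       # explore
    return max(range(2), key=lambda a: q[agent][a])      # exploit

for _ in range(EPISODES):
    actions = (choose(0), choose(1))
    rewards = PAYOFF[actions]
    for agent in range(2):
        a = actions[agent]
        # Each agent maximizes only its own reward (independent learning).
        q[agent][a] += ALPHA * (rewards[agent] - q[agent][a])

print("Q-values:", q)   # defection typically ends up preferred, illustrating the dilemma
```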
https://en.wikipedia.org/wiki/Multi-agent_reinforcement_learning
In distributed data storage, a P-Grid is a self-organizing structured peer-to-peer system which can accommodate arbitrary key distributions (and hence support lexicographic key ordering and range queries) while still providing storage load-balancing and efficient search by using randomized routing. P-Grid abstracts a trie and resolves queries based on prefix matching. The actual topology has no hierarchy. Queries are resolved by matching prefixes, and this also determines the choice of routing table entries. Each peer, for each level of the trie, autonomously maintains routing entries chosen randomly from the complementary sub-trees.[2] In fact, multiple entries are maintained for each level at each peer to provide fault tolerance (as well as potentially for query-load management). For diverse reasons including fault tolerance and load balancing, multiple peers are responsible for each leaf node in the P-Grid tree. These are called replicas. The replica peers maintain an independent replica sub-network and use gossip-based communication to keep the replica group up to date.[3] The redundancy in both the replication of key-space partitions and the routing network is together called structural replication. A query is resolved by forwarding it based on prefix matching.[citation needed] P-Grid partitions the key-space at a granularity adaptive to the load at that part of the key-space. Consequently, it is possible to realize a P-Grid overlay network where each peer has a similar storage load even for non-uniform load distributions. Such a network arguably provides search of keys as efficient as that of traditional distributed hash tables (DHTs). Note that, in contrast to P-Grid, DHTs work efficiently only for uniform load distributions.[4] Hence a lexicographic order-preserving function can be used to generate the keys while still realizing a load-balanced P-Grid network that supports efficient search of exact keys. Moreover, because lexicographic ordering is preserved, range queries can be performed efficiently and precisely on P-Grid. The trie structure of P-Grid allows different range query strategies, processed serially or in parallel, trading off message overheads against query resolution latency.[5] Simple vector-based data storage architectural frameworks are also subject to variable query limitations within the P-Grid environment.[6]
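To make the prefix-matching idea above concrete, here is a toy simulation (not the actual P-Grid protocol) in which each peer is responsible for a binary key prefix and keeps, for each level, a reference to a peer from the complementary subtree; a query is forwarded at the first bit where it leaves the current peer's prefix. All class and variable names are invented for the sketch.

```python
# Toy peers covering the prefixes 00, 01, 10, 11 of a binary key space.
class Peer:
    def __init__(self, path):
        self.path = path          # the key prefix this peer is responsible for
        self.routing = {}         # level -> a peer from the complementary subtree

    def route(self, key, hops=0):
        for level, bit in enumerate(self.path):
            if key[level] != bit:                      # key leaves our prefix here
                return self.routing[level].route(key, hops + 1)
        return self.path, hops                         # key matches our whole prefix

peers = {p: Peer(p) for p in ("00", "01", "10", "11")}
# Level 0 entry: a peer whose first bit differs; level 1: same first bit, second differs.
for p in peers.values():
    p.routing[0] = peers[("1" if p.path[0] == "0" else "0") + "0"]
    p.routing[1] = peers[p.path[0] + ("1" if p.path[1] == "0" else "0")]

print(peers["00"].route("110"))   # -> ('11', 2): resolved in two hops by prefix matching
```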
https://en.wikipedia.org/wiki/P-Grid
The following outline is provided as an overview of and topical guide to information technology: Information technology (IT) – microelectronics-based combination of computing and telecommunications technology used to treat information, including the acquisition, processing, storage and dissemination of vocal, pictorial, textual and numerical information. It is defined by the Information Technology Association of America (ITAA) as "the study, design, development, implementation, support or management of computer-based information systems, particularly toward software applications and computer hardware." There are different names for this field at different periods or in different contexts. Some of these names are: Third-party commercial organizations and vendor-neutral interest groups that sponsor certifications include: General certification of software practitioners has struggled. The ACM had a professional certification program in the early 1980s, which was discontinued due to lack of interest. Today, the IEEE is certifying software professionals, but only about 500 people had passed the exam as of March 2005[update].
https://en.wikipedia.org/wiki/Outline_of_information_technology
Inmathematical logicandcomputer science, ageneral recursive function,partial recursive function, orμ-recursive functionis apartial functionfromnatural numbersto natural numbers that is "computable" in an intuitive sense – as well as in aformal one. If the function is total, it is also called atotal recursive function(sometimes shortened torecursive function).[1]Incomputability theory, it is shown that the μ-recursive functions are precisely the functions that can be computed byTuring machines[2][4](this is one of the theorems that supports theChurch–Turing thesis). The μ-recursive functions are closely related toprimitive recursive functions, and their inductive definition (below) builds upon that of the primitive recursive functions. However, not every total recursive function is a primitive recursive function—the most famous example is theAckermann function. Other equivalent classes of functions are the functions oflambda calculusand the functions that can be computed byMarkov algorithms. The subset of alltotalrecursive functions with values in{0,1}is known incomputational complexity theoryas thecomplexity class R. Theμ-recursive functions(orgeneral recursive functions) are partial functions that take finite tuples of natural numbers and return a single natural number. They are the smallest class of partial functions that includes the initial functions and is closed under composition, primitive recursion, and theminimization operatorμ. The smallest class of functions including the initial functions and closed under composition and primitive recursion (i.e. without minimisation) is the class ofprimitive recursive functions. While all primitive recursive functions are total, this is not true of partial recursive functions; for example, the minimisation of the successor function is undefined. The primitive recursive functions are a subset of the total recursive functions, which are a subset of the partial recursive functions. For example, theAckermann functioncan be proven to be total recursive, and to be non-primitive. Primitive or "basic" functions: Operators (thedomain of a functiondefined by an operator is the set of the values of the arguments such that every function application that must be done during the computation provides a well-defined result): Intuitively, minimisation seeks—beginning the search from 0 and proceeding upwards—the smallest argument that causes the function to return zero; if there is no such argument, or if one encounters an argument for whichfis not defined, then the search never terminates, andμ(f){\displaystyle \mu (f)}is not defined for the argument(x1,…,xk).{\displaystyle (x_{1},\ldots ,x_{k}).} While some textbooks use the μ-operator as defined here,[5][6]others[7][8]demand that the μ-operator is applied tototalfunctionsfonly. Although this restricts the μ-operator as compared to the definition given here, the class of μ-recursive functions remains the same, which follows from Kleene's Normal Form Theorem (seebelow).[5][6]The only difference is, that it becomes undecidable whether a specific function definition defines a μ-recursive function, as it is undecidable whether a computable (i.e. μ-recursive) function is total.[7] Thestrong equalityrelation≃{\displaystyle \simeq }can be used to compare partial μ-recursive functions. This is defined for all partial functionsfandgso that holds if and only if for any choice of arguments either both functions are defined and their values are equal or both functions are undefined. 
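The unbounded minimization operator can be written down directly in Python as an illustration of the definition above; if no argument yields zero, or the inner function diverges first, the search simply never returns, mirroring partiality. The example predicate and helper names are chosen only for demonstration.

```python
def mu(f):
    """Return g(x1,...,xk) = the least z such that f(z, x1,...,xk) == 0.

    The search is unbounded: if no such z exists (or f diverges before one is
    found), the call does not terminate, mirroring a partial recursive function.
    """
    def minimized(*args):
        z = 0
        while f(z, *args) != 0:
            z += 1
        return z
    return minimized

# Example: the integer square root via minimization of a primitive recursive predicate.
# f(z, x) = 0 exactly when (z + 1)^2 > x, so mu(f)(x) = floor(sqrt(x)).
isqrt = mu(lambda z, x: 0 if (z + 1) ** 2 > x else 1)
assert [isqrt(x) for x in range(10)] == [0, 1, 1, 1, 2, 2, 2, 2, 2, 3]
```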
Examples not involving the minimization operator can be found atPrimitive recursive function#Examples. The following examples are intended just to demonstrate the use of the minimization operator; they could also be defined without it, albeit in a more complicated way, since they are all primitive recursive. The following examples define general recursive functions that are not primitive recursive; hence they cannot avoid using the minimization operator. A general recursive function is calledtotal recursive functionif it is defined for every input, or, equivalently, if it can be computed by atotal Turing machine. There is no way to computably tell if a given general recursive function is total - seeHalting problem. In theequivalence of models of computability, a parallel is drawn betweenTuring machinesthat do not terminate for certain inputs and an undefined result for that input in the corresponding partial recursive function. The unbounded search operator is not definable by the rules of primitive recursion as those do not provide a mechanism for "infinite loops" (undefined values). Anormal form theoremdue to Kleene says that for eachkthere are primitive recursive functionsU(y){\displaystyle U(y)\!}andT(y,e,x1,…,xk){\displaystyle T(y,e,x_{1},\ldots ,x_{k})\!}such that for any μ-recursive functionf(x1,…,xk){\displaystyle f(x_{1},\ldots ,x_{k})\!}withkfree variables there is anesuch that The numbereis called anindexorGödel numberfor the functionf.[10]: 52–53A consequence of this result is that any μ-recursive function can be defined using a single instance of the μ operator applied to a (total) primitive recursive function. Minskyobserves theU{\displaystyle U}defined above is in essence the μ-recursive equivalent of theuniversal Turing machine: To construct U is to write down the definition of a general-recursive function U(n, x) that correctly interprets the number n and computes the appropriate function of x. to construct U directly would involve essentially the same amount of effort,and essentially the same ideas, as we have invested in constructing the universal Turing machine[11] A number of different symbolisms are used in the literature. An advantage to using the symbolism is a derivation of a function by "nesting" of the operators one inside the other is easier to write in a compact form. In the following the string of parameters x1, ..., xnis abbreviated asx: Example: Kleene gives an example of how to perform the recursive derivation of f(b, a) = b + a (notice reversal of variables a and b). He starts with 3 initial functions He arrives at:
https://en.wikipedia.org/wiki/%CE%9C-recursive_function
Instatistics,simple linear regression(SLR) is alinear regressionmodel with a singleexplanatory variable.[1][2][3][4][5]That is, it concerns two-dimensional sample points withone independent variable and one dependent variable(conventionally, thexandycoordinates in aCartesian coordinate system) and finds a linear function (a non-verticalstraight line) that, as accurately as possible, predicts the dependent variable values as a function of the independent variable. The adjectivesimplerefers to the fact that the outcome variable is related to a single predictor. It is common to make the additional stipulation that theordinary least squares(OLS) method should be used: the accuracy of each predicted value is measured by its squaredresidual(vertical distance between the point of the data set and the fitted line), and the goal is to make the sum of these squared deviations as small as possible. In this case, the slope of the fitted line is equal to thecorrelationbetweenyandxcorrected by the ratio of standard deviations of these variables. The intercept of the fitted line is such that the line passes through the center of mass(x,y)of the data points. Consider themodelfunction which describes a line with slopeβandy-interceptα. In general, such a relationship may not hold exactly for the largely unobserved population of values of the independent and dependent variables; we call the unobserved deviations from the above equation theerrors. Suppose we observendata pairs and call them{(xi,yi),i= 1, ...,n}. We can describe the underlying relationship betweenyiandxiinvolving this error termεiby This relationship between the true (but unobserved) underlying parametersαandβand the data points is called a linear regression model. The goal is to find estimated valuesα^{\displaystyle {\widehat {\alpha }}}andβ^{\displaystyle {\widehat {\beta }}}for the parametersαandβwhich would provide the "best" fit in some sense for the data points. As mentioned in the introduction, in this article the "best" fit will be understood as in theleast-squaresapproach: a line that minimizes thesum of squared residuals(see alsoErrors and residuals)ε^i{\displaystyle {\widehat {\varepsilon }}_{i}}(differences between actual and predicted values of the dependent variabley), each of which is given by, for any candidate parameter valuesα{\displaystyle \alpha }andβ{\displaystyle \beta }, In other words,α^{\displaystyle {\widehat {\alpha }}}andβ^{\displaystyle {\widehat {\beta }}}solve the followingminimization problem: where theobjective functionQis: By expanding to get a quadratic expression inα{\displaystyle \alpha }andβ,{\displaystyle \beta ,}we can derive minimizing values of the function arguments, denotedα^{\displaystyle {\widehat {\alpha }}}andβ^{\displaystyle {\widehat {\beta }}}:[6] α^=y¯−(β^x¯),β^=∑i=1n(xi−x¯)(yi−y¯)∑i=1n(xi−x¯)2=∑i=1nΔxiΔyi∑i=1nΔxi2{\displaystyle {\begin{aligned}{\widehat {\alpha }}&={\bar {y}}-({\widehat {\beta }}\,{\bar {x}}),\\[5pt]{\widehat {\beta }}&={\frac {\sum _{i=1}^{n}(x_{i}-{\bar {x}})(y_{i}-{\bar {y}})}{\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}}}={\frac {\sum _{i=1}^{n}\Delta x_{i}\Delta y_{i}}{\sum _{i=1}^{n}\Delta x_{i}^{2}}}\end{aligned}}} Here we have introduced The above equations are efficient to use if the mean of the x and y variables (x¯andy¯{\displaystyle {\bar {x}}{\text{ and }}{\bar {y}}}) are known. 
If the means are not known at the time of calculation, it may be more efficient to use the expanded version of theα^andβ^{\displaystyle {\widehat {\alpha }}{\text{ and }}{\widehat {\beta }}}equations. These expanded equations may be derived from the more generalpolynomial regressionequations[7][8]by defining the regression polynomial to be of order 1, as follows. [n∑i=1nxi∑i=1nxi∑i=1nxi2][α^β^]=[∑i=1nyi∑i=1nyixi]{\displaystyle {\begin{bmatrix}n&\sum _{i=1}^{n}x_{i}\\\sum _{i=1}^{n}x_{i}&\sum _{i=1}^{n}x_{i}^{2}\end{bmatrix}}{\begin{bmatrix}{\widehat {\alpha }}\\{\widehat {\beta }}\end{bmatrix}}={\begin{bmatrix}\sum _{i=1}^{n}y_{i}\\\sum _{i=1}^{n}y_{i}x_{i}\end{bmatrix}}} The abovesystem of linear equationsmay be solved directly, or stand-alone equations forα^andβ^{\displaystyle {\widehat {\alpha }}{\text{ and }}{\widehat {\beta }}}may be derived by expanding the matrix equations above. The resultant equations are algebraically equivalent to the ones shown in the prior paragraph, and are shown below without proof.[9][7] α^=∑i=1nyi∑i=1nxi2−∑i=1nxi∑i=1nxiyin∑i=1nxi2−(∑i=1nxi)2β^=n∑i=1nxiyi−∑i=1nxi∑i=1nyin∑i=1nxi2−(∑i=1nxi)2{\displaystyle {\begin{aligned}&\qquad {\widehat {\alpha }}={\frac {\sum _{i=1}^{n}y_{i}\sum _{i=1}^{n}x_{i}^{2}-\sum _{i=1}^{n}x_{i}\sum _{i=1}^{n}x_{i}y_{i}}{n\sum _{i=1}^{n}x_{i}^{2}-(\sum _{i=1}^{n}x_{i})^{2}}}\\\\&\qquad {\widehat {\beta }}={\frac {n\sum _{i=1}^{n}x_{i}y_{i}-\sum _{i=1}^{n}x_{i}\sum _{i=1}^{n}y_{i}}{n\sum _{i=1}^{n}x_{i}^{2}-(\sum _{i=1}^{n}x_{i})^{2}}}\\&\qquad \end{aligned}}} The solution can be reformulated using elements of thecovariance matrix:β^=sx,ysx2=rxysysx{\displaystyle {\widehat {\beta }}={\frac {s_{x,y}}{s_{x}^{2}}}=r_{xy}{\frac {s_{y}}{s_{x}}}} where Substituting the above expressions forα^{\displaystyle {\widehat {\alpha }}}andβ^{\displaystyle {\widehat {\beta }}}into the original solution yields This shows thatrxyis the slope of the regression line of thestandardizeddata points (and that this line passes through the origin). Since−1≤rxy≤1{\displaystyle -1\leq r_{xy}\leq 1}then we get that if x is some measurement and y is a followup measurement from the same item, then we expect that y (on average) will be closer to the mean measurement than it was to the original value of x. This phenomenon is known asregressions toward the mean. Generalizing thex¯{\displaystyle {\bar {x}}}notation, we can write a horizontal bar over an expression to indicate the average value of that expression over the set of samples. For example: This notation allows us a concise formula forrxy: Thecoefficient of determination("R squared") is equal torxy2{\displaystyle r_{xy}^{2}}when the model is linear with a single independent variable. Seesample correlation coefficientfor additional details. By multiplying all members of the summation in the numerator by :(xi−x¯)(xi−x¯)=1{\displaystyle {\begin{aligned}{\frac {(x_{i}-{\bar {x}})}{(x_{i}-{\bar {x}})}}=1\end{aligned}}}(thereby not changing it): We can see that the slope (tangent of angle) of the regression line is the weighted average of(yi−y¯)(xi−x¯){\displaystyle {\frac {(y_{i}-{\bar {y}})}{(x_{i}-{\bar {x}})}}}that is the slope (tangent of angle) of the line that connects the i-th point to the average of all points, weighted by(xi−x¯)2{\displaystyle (x_{i}-{\bar {x}})^{2}}because the further the point is the more "important" it is, since small errors in its position will affect the slope connecting it to the center point more. 
Givenβ^=tan⁡(θ)=dy/dx→dy=dx×β^{\displaystyle {\widehat {\beta }}=\tan(\theta )=dy/dx\rightarrow dy=dx\times {\widehat {\beta }}}withθ{\displaystyle \theta }the angle the line makes with the positive x axis, we haveyintersection=y¯−dx×β^=y¯−dy{\displaystyle y_{\rm {intersection}}={\bar {y}}-dx\times {\widehat {\beta }}={\bar {y}}-dy}[remove orclarification needed] In the above formulation, notice that eachxi{\displaystyle x_{i}}is a constant ("known upfront") value, while theyi{\displaystyle y_{i}}are random variables that depend on the linear function ofxi{\displaystyle x_{i}}and the random termεi{\displaystyle \varepsilon _{i}}. This assumption is used when deriving the standard error of the slope and showing that it isunbiased. In this framing, whenxi{\displaystyle x_{i}}is not actually arandom variable, what type of parameter does the empirical correlationrxy{\displaystyle r_{xy}}estimate? The issue is that for each value i we'll have:E(xi)=xi{\displaystyle E(x_{i})=x_{i}}andVar(xi)=0{\displaystyle Var(x_{i})=0}. A possible interpretation ofrxy{\displaystyle r_{xy}}is to imagine thatxi{\displaystyle x_{i}}defines a random variable drawn from theempirical distributionof the x values in our sample. For example, if x had 10 values from thenatural numbers: [1,2,3...,10], then we can imagine x to be aDiscrete uniform distribution. Under this interpretation allxi{\displaystyle x_{i}}have the same expectation and some positive variance. With this interpretation we can think ofrxy{\displaystyle r_{xy}}as the estimator of thePearson's correlationbetween the random variable y and the random variable x (as we just defined it). Description of the statistical properties of estimators from the simple linear regression estimates requires the use of astatistical model. The following is based on assuming the validity of a model under which the estimates are optimal. It is also possible to evaluate the properties under other assumptions, such asinhomogeneity, but this is discussed elsewhere.[clarification needed] The estimatorsα^{\displaystyle {\widehat {\alpha }}}andβ^{\displaystyle {\widehat {\beta }}}areunbiased. To formalize this assertion we must define a framework in which these estimators are random variables. We consider the residualsεias random variables drawn independently from some distribution with mean zero. In other words, for each value ofx, the corresponding value ofyis generated as a mean responseα+βxplus an additional random variableεcalled theerror term, equal to zero on average. Under such interpretation, the least-squares estimatorsα^{\displaystyle {\widehat {\alpha }}}andβ^{\displaystyle {\widehat {\beta }}}will themselves be random variables whose means will equal the "true values"αandβ. This is the definition of an unbiased estimator. Since the data in this context is defined to be (x,y) pairs for every observation, themean responseat a given value ofx, sayxd, is an estimate of the mean of theyvalues in the population at thexvalue ofxd, that isE^(y∣xd)≡y^d{\displaystyle {\hat {E}}(y\mid x_{d})\equiv {\hat {y}}_{d}\!}. The variance of the mean response is given by:[11] This expression can be simplified to wheremis the number of data points. To demonstrate this simplification, one can make use of the identity Thepredicted responsedistribution is the predicted distribution of the residuals at the given pointxd. 
So the variance is given by The second line follows from the fact thatCov⁡(yd,[α^+β^xd]){\displaystyle \operatorname {Cov} \left(y_{d},\left[{\hat {\alpha }}+{\hat {\beta }}x_{d}\right]\right)}is zero because the new prediction point is independent of the data used to fit the model. Additionally, the termVar⁡(α^+β^xd){\displaystyle \operatorname {Var} \left({\hat {\alpha }}+{\hat {\beta }}x_{d}\right)}was calculated earlier for the mean response. SinceVar⁡(yd)=σ2{\displaystyle \operatorname {Var} (y_{d})=\sigma ^{2}}(a fixed but unknown parameter that can be estimated), the variance of the predicted response is given by The formulas given in the previous section allow one to calculate thepoint estimatesofαandβ— that is, the coefficients of the regression line for the given set of data. However, those formulas do not tell us how precise the estimates are, i.e., how much the estimatorsα^{\displaystyle {\widehat {\alpha }}}andβ^{\displaystyle {\widehat {\beta }}}vary from sample to sample for the specified sample size.Confidence intervalswere devised to give a plausible set of values to the estimates one might have if one repeated the experiment a very large number of times. The standard method of constructing confidence intervals for linear regression coefficients relies on the normality assumption, which is justified if either: The latter case is justified by thecentral limit theorem. Under the first assumption above, that of the normality of the error terms, the estimator of the slope coefficient will itself be normally distributed with meanβand varianceσ2/∑(xi−x¯)2,{\displaystyle \sigma ^{2}\left/\sum (x_{i}-{\bar {x}})^{2}\right.,}whereσ2is the variance of the error terms (seeProofs involving ordinary least squares). At the same time the sum of squared residualsQis distributed proportionally toχ2withn− 2degrees of freedom, and independently fromβ^{\displaystyle {\widehat {\beta }}}. This allows us to construct at-value where is the unbiasedstandard errorestimator of the estimatorβ^{\displaystyle {\widehat {\beta }}}. Thist-value has aStudent'st-distribution withn− 2degrees of freedom. Using it we can construct a confidence interval forβ: at confidence level(1 −γ), wheretn−2∗{\displaystyle t_{n-2}^{*}}is the(1−γ2)-th{\displaystyle \scriptstyle \left(1\;-\;{\frac {\gamma }{2}}\right){\text{-th}}}quantile of thetn−2distribution. For example, ifγ= 0.05then the confidence level is 95%. Similarly, the confidence interval for the intercept coefficientαis given by at confidence level (1 −γ), where The confidence intervals forαandβgive us the general idea where these regression coefficients are most likely to be. For example, in theOkun's lawregression shown here the point estimates are The 95% confidence intervals for these estimates are In order to represent this information graphically, in the form of the confidence bands around the regression line, one has to proceed carefully and account for the joint distribution of the estimators. 
It can be shown[12]that at confidence level (1 −γ) the confidence band has hyperbolic form given by the equation When the model assumed the intercept is fixed and equal to 0 (α=0{\displaystyle \alpha =0}), the standard error of the slope turns into: With:ε^i=yi−y^i{\displaystyle {\hat {\varepsilon }}_{i}=y_{i}-{\hat {y}}_{i}} The alternative second assumption states that when the number of points in the dataset is "large enough", thelaw of large numbersand thecentral limit theorembecome applicable, and then the distribution of the estimators is approximately normal. Under this assumption all formulas derived in the previous section remain valid, with the only exception that the quantilet*n−2ofStudent'stdistribution is replaced with the quantileq*of thestandard normal distribution. Occasionally the fraction⁠1/n−2⁠is replaced with⁠1/n⁠. Whennis large such a change does not alter the results appreciably. This data set gives average masses for women as a function of their height in a sample of American women of age 30–39. Although theOLSarticle argues that it would be more appropriate to run a quadratic regression for this data, the simple linear regression model is applied here instead. There aren= 15 points in this data set. Hand calculations would be started by finding the following five sums: These quantities would be used to calculate the estimates of the regression coefficients, and their standard errors. The 0.975 quantile of Student'st-distribution with 13 degrees of freedom ist*13= 2.1604, and thus the 95% confidence intervals forαandβare Theproduct-moment correlation coefficientmight also be calculated: In SLR, there is an underlying assumption that only the dependent variable contains measurement error; if the explanatory variable is also measured with error, then simple regression is not appropriate for estimating the underlying relationship because it will be biased due toregression dilution. Other estimation methods that can be used in place of ordinary least squares includeleast absolute deviations(minimizing the sum of absolute values of residuals) and theTheil–Sen estimator(which chooses a line whoseslopeis themedianof the slopes determined by pairs of sample points). Deming regression(total least squares) also finds a line that fits a set of two-dimensional sample points, but (unlike ordinary least squares, least absolute deviations, and median slope regression) it is not really an instance of simple linear regression, because it does not separate the coordinates into one dependent and one independent variable and could potentially return a vertical line as its fit. can lead to a model that attempts to fit the outliers more than the data. Line fittingis the process of constructing astraight linethat has the best fit to a series of data points. Several methods exist, considering: Sometimes it is appropriate to force the regression line to pass through the origin, becausexandyare assumed to be proportional. For the model without the intercept term,y=βx, the OLS estimator forβsimplifies to Substituting(x−h,y−k)in place of(x,y)gives the regression through(h,k): where Cov and Var refer to the covariance and variance of the sample data (uncorrected for bias). The last form above demonstrates how moving the line away from the center of mass of the data points affects the slope.
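The no-intercept estimator and the shift to an arbitrary point (h, k) discussed above can be sketched as follows; the data and the point (h, k) are invented for illustration.

```python
# Regression forced through the origin (y = beta * x): the OLS estimator
# reduces to sum(x*y) / sum(x*x).  Synthetic, roughly proportional data.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(1, 10, 20)
y = 3.0 * x + rng.normal(0, 0.5, size=x.size)

beta_no_intercept = np.sum(x * y) / np.sum(x * x)

# Regression through an arbitrary point (h, k) is the same computation
# applied to the shifted data (x - h, y - k).
h, k = 2.0, 6.0
beta_through_hk = np.sum((x - h) * (y - k)) / np.sum((x - h) ** 2)

print(beta_no_intercept, beta_through_hk)
```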
https://en.wikipedia.org/wiki/Simple_linear_regression
Aposition weight matrix (PWM), also known as aposition-specific weight matrix (PSWM)orposition-specific scoring matrix (PSSM), is a commonly used representation ofmotifs(patterns) in biological sequences. PWMs are often derived from a set of aligned sequences that are thought to be functionally related and have become an important part of many software tools for computational motif discovery. A PWM has one row for each symbol of the alphabet (4 rows fornucleotidesinDNAsequences or 20 rows foramino acidsinproteinsequences) and one column for each position in the pattern. In the first step in constructing a PWM, a basic position frequency matrix (PFM) is created by counting the occurrences of each nucleotide at each position. From the PFM, a position probability matrix (PPM) can now be created by dividing that former nucleotide count at each position by the number of sequences, thereby normalising the values. Formally, given a setXofNaligned sequences of lengthl, the elements of the PPMMare calculated: wherei∈{\displaystyle \in }(1,...,N),j∈{\displaystyle \in }(1,...,l),kis the set of symbols in the alphabet andI(a=k)is anindicator functionwhereI(a=k)is 1 ifa=kand 0 otherwise. For example, given the following DNA sequences: GAGGTAAACTCCGTAAGTCAGGTTGGAACAGTCAGTTAGGTCATTTAGGTACTGATGGTAACTCAGGTATACTGTGTGAGTAAGGTAAGT The corresponding PFM is: Therefore, the resulting PPM is:[1] Both PPMs and PWMs assumestatistical independencebetween positions in the pattern, as the probabilities for each position are calculated independently of other positions. From the definition above, it follows that the sum of values for a particular position (that is, summing over all symbols) is 1. Each column can therefore be regarded as an independentmultinomial distribution. This makes it easy to calculate the probability of a sequence given a PPM, by multiplying the relevant probabilities at each position. For example, the probability of the sequenceS=GAGGTAAACgiven the above PPMMcan be calculated: Pseudocounts(orLaplace estimators) are often applied when calculating PPMs if based on a small dataset, in order to avoid matrix entries having a value of 0.[2]This is equivalent to multiplying each column of the PPM by aDirichlet distributionand allows the probability to be calculated for new sequences (that is, sequences which were not part of the original dataset). In the example above, without pseudocounts, any sequence which did not have aGin the 4th position or aTin the 5th position would have a probability of 0, regardless of the other positions. Most often the elements in PWMs are calculated as log odds. That is, the elements of a PPM are transformed using a background modelb{\displaystyle b}so that: describes howan element in the PWM (left),Mk,j{\displaystyle M_{k,j}}, can be calculated. The simplest background model assumes that each letter appears equally frequently in the dataset. That is, the value ofbk=1/|k|{\displaystyle b_{k}=1/\vert k\vert }for all symbols in the alphabet (0.25 for nucleotides and 0.05 for amino acids). Applying this transformation to the PPMMfrom above (with no pseudocounts added) gives: The−∞{\displaystyle -\infty }entries in the matrix make clear the advantage of adding pseudocounts, especially when using small datasets to constructM. The background model need not have equal values for each symbol: for example, when studying organisms with a highGC-content, the values forCandGmay be increased with a corresponding decrease for theAandTvalues. 
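The PFM/PPM construction can be illustrated with a short Python sketch. The ten 9-base sequences are the ones from the example above (the run of letters split every nine bases); the pseudocount value is an arbitrary choice.

```python
# Position frequency matrix (PFM) and position probability matrix (PPM)
# for the ten 9-base sequences in the example above.
sequences = [
    "GAGGTAAAC", "TCCGTAAGT", "CAGGTTGGA", "ACAGTCAGT", "TAGGTCATT",
    "TAGGTACTG", "ATGGTAACT", "CAGGTATAC", "TGTGTGAGT", "AAGGTAAGT",
]
alphabet = "ACGT"
N, L = len(sequences), len(sequences[0])

# PFM: raw counts of each nucleotide at each position.
pfm = {k: [sum(seq[j] == k for seq in sequences) for j in range(L)] for k in alphabet}

# PPM: divide each count by the number of sequences, optionally adding a
# pseudocount so that no entry is exactly zero.
pseudocount = 0.0   # set to e.g. 0.25 for Laplace-style smoothing
ppm = {k: [(pfm[k][j] + pseudocount) / (N + 4 * pseudocount) for j in range(L)]
       for k in alphabet}

def ppm_probability(seq):
    """Probability of a sequence under the PPM: product of per-position entries."""
    p = 1.0
    for j, base in enumerate(seq):
        p *= ppm[base][j]
    return p

print(ppm["A"])                     # the PPM row for adenine
print(ppm_probability("GAGGTAAAC")) # probability of the first example sequence
```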
When the PWM elements are calculated using log likelihoods, the score of a sequence can be calculated by adding (rather than multiplying) the relevant values at each position in the PWM. The sequence score gives an indication of how different the sequence is from a random sequence. The score is 0 if the sequence has the same probability of being a functional site and of being a random site. The score is greater than 0 if it is more likely to be a functional site than a random site, and less than 0 if it is more likely to be a random site than a functional site.[1]The sequence score can also be interpreted in a physical framework as the binding energy for that sequence. Theinformation content(IC) of a PWM is sometimes of interest, as it says something about how different a given PWM is from auniform distribution. Theself-informationof observing a particular symbol at a particular position of the motif is: The expected (average) self-information of a particular element in the PWM is then: Finally, the IC of the PWM is then the sum of the expected self-information of every element: Often, it is more useful to calculate the information content with the background letter frequencies of the sequences you are studying rather than assuming equal probabilities of each letter (e.g., the GC-content of DNA ofthermophilicbacteria range from 65.3 to 70.8,[3]thus a motif of ATAT would contain much more information than a motif of CCGG). The equation for information content thus becomes wherepj{\displaystyle p_{j}}is the background frequency for letterj{\displaystyle j}. This corresponds to theKullback–Leibler divergenceor relative entropy. However, it has been shown that when using PSSM to search genomic sequences (see below) this uniform correction can lead to overestimation of the importance of the different bases in a motif, due to the uneven distribution of n-mers in real genomes, leading to a significantly larger number of false positives.[4] There are various algorithms to scan for hits of PWMs in sequences. One example is the MATCH algorithm[5]which has been implemented in the ModuleMaster.[6]More sophisticated algorithms for fast database searching with nucleotide as well as amino acid PWMs/PSSMs are implemented in the possumsearch software.[7] The basic PWM/PSSM is unable to deal with insertions and deletions. A PSSM with additional probabilities for insertion and deletion at each position can be interpreted as ahidden Markov model. This is the approach used byPfam.[8][9]
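A sketch of the log-odds transformation, additive scoring, and information content follows. It assumes a `ppm` dictionary shaped like the one built in the previous sketch and a uniform background model; both are assumptions for illustration rather than part of the article.

```python
# Turning a PPM into a log-odds PWM against a uniform background, scoring a
# sequence by *adding* entries, and computing the motif's information content.
import math

alphabet = "ACGT"
background = {k: 0.25 for k in alphabet}      # uniform background model

def pwm_from_ppm(ppm, background):
    # Entries with probability 0 become -inf, which is why pseudocounts help.
    return {k: [math.log2(p / background[k]) if p > 0 else float("-inf")
                for p in ppm[k]]
            for k in alphabet}

def score(pwm, seq):
    # Sum of log odds: 0 means "as likely functional as random",
    # positive means more likely functional, negative more likely random.
    return sum(pwm[base][j] for j, base in enumerate(seq))

def information_content(ppm, background):
    # Relative-entropy (Kullback-Leibler) form, summed over positions.
    ic = 0.0
    for j in range(len(ppm["A"])):
        for k in alphabet:
            p = ppm[k][j]
            if p > 0:
                ic += p * math.log2(p / background[k])
    return ic
```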
https://en.wikipedia.org/wiki/Position-specific_scoring_matrix
"What the Tortoise Said to Achilles",[1]written byLewis Carrollin 1895 for the philosophical journalMind,[1]is a brief allegorical dialogue on the foundations oflogic.[1]The titlealludesto one ofZeno's paradoxes of motion,[2]in whichAchillescould never overtake thetortoisein a race. In Carroll's dialogue, the tortoise challenges Achilles to use the force of logic to make him accept the conclusion of a simple deductive argument. Ultimately, Achilles fails, because the clever tortoise leads him into aninfinite regression.[1] The discussion begins by considering the following logical argument:[1][3] The tortoise accepts premisesAandBas true but not the hypothetical: The Tortoise claims that it is not "under any logical necessity to acceptZas true". The tortoise then challenges Achilles to force it logically to acceptZas true. Instead of searching the tortoise’s reasons for not acceptingC, Achilles asks it to acceptC, which it does. After which, Achilles says: The tortoise responds, "That's another Hypothetical, isn't it? And, if I failed to see its truth, I might accept A and B and C, and still not accept Z, mightn't I?"[1][3] Again, instead of requesting reasons for not acceptingD, he asks the tortoise to acceptD. And again, it is "quite willing to grant it",[1][3]but it still refuses to accept Z. It then tells Achilles to write into his book, Following this, the Tortoise says: "until I’ve granted that [i.e.,E], of course I needn’t grant Z. So it's quite a necessary step".[1]With a touch of sadness, Achilles sees the point.[1][3] The story ends by suggesting that the list of premises continues to grow without end, but without explaining the point of the regress.[1][3] Lewis Carroll was showing that there is a regressive problem that arises frommodus ponensdeductions. Or, in words: propositionP(is true) impliesQ(is true), and givenP, thereforeQ. The regress problem arises because a prior principle is required to explain logical principles, heremodus ponens, and oncethatprinciple is explained,anotherprinciple is required to explainthatprinciple. Thus, if the argumentative chain is to continue, the argument falls into infinite regress. However, if a formal system is introduced wherebymodus ponensis simply arule of inferencedefined within the system, then it can be abided by simply by reasoning within the system. That is not to say that the user reasoning according to this formal system agrees with these rules (consider, for example, theconstructivist's rejection of thelaw of the excluded middleand thedialetheist's rejection of thelaw of noncontradiction). In this way, formalising logic as a system can be considered as a response to the problem of infinite regress:modus ponensis placed as a rule within the system, the validity ofmodus ponensis eschewed without the system. In propositional logic, the logical implication is defined as follows: P implies Q if and only if the propositionnot P or Qis atautology. Hencemodus ponens, [P ∧ (P → Q)] ⇒ Q, is a valid logical conclusion according to the definition of logical implication just stated. Demonstrating the logical implication simply translates into verifying that the compound truth table produces a tautology. But the tortoise does not accept on faith the rules of propositional logic that this explanation is founded upon. He asks that these rules, too, be subject to logical proof. The tortoise and Achilles do not agree on any definition of logical implication. In addition, the story hints at problems with the propositional solution. 
Within the system of propositional logic, no proposition or variable carries any semantic content. The moment any proposition or variable takes on semantic content, the problem arises again because semantic content runs outside the system. Thus, if the solution is to be said to work, then it is to be said to work solely within the given formal system, and not otherwise. Some logicians (Kenneth Ross, Charles Wright) draw a firm distinction between theconditional connectiveand theimplication relation. These logicians use the phrasenot p or qfor the conditional connective and the termimpliesfor an asserted implication relation. Several philosophers have tried to resolve Carroll's paradox.Bertrand Russelldiscussed the paradox briefly in§ 38 ofThe Principles of Mathematics(1903), distinguishing betweenimplication(associated with the form "ifp, thenq"), which he held to be a relation betweenunassertedpropositions, andinference(associated with the form "p, thereforeq"), which he held to be a relation betweenassertedpropositions; having made this distinction, Russell could deny that the Tortoise's attempt to treatinferringZfromAandBas equivalent to, or dependent on, agreeing to thehypothetical"IfAandBare true, thenZis true." Peter Winch, aWittgensteinianphilosopher, discussed the paradox inThe Idea of a Social Science and its Relation to Philosophy(1958), where he argued that the paradox showed that "the actual process of drawing an inference, which is after all at the heart of logic, is something which cannot be represented as a logical formula ... Learning to infer is not just a matter of being taught about explicit logical relations between propositions; it is learningto dosomething" (p. 57). Winch goes on to suggest that the moral of the dialogue is a particular case of a general lesson, to the effect that the properapplicationof rules governing a form of human activity cannot itself be summed up with a set offurtherrules, and so that "a form of human activity can never be summed up in a set of explicit precepts" (p. 53). Carroll's dialogue is apparently the first description of an obstacle toconventionalismabout logical truth,[4]later reworked in more sober philosophical terms byW.V.O. Quine.[5] Lewis Carroll (April 1895). "What the Tortoise Said to Achilles".Mind.IV(14):278–280.doi:10.1093/mind/IV.14.278. Reprinted: As audio:
https://en.wikipedia.org/wiki/What_the_Tortoise_Said_to_Achilles
In probability theory and related fields, the life-time of correlation measures the timespan over which there is appreciable autocorrelation or cross-correlation in stochastic processes. The correlation coefficient ρ, expressed as an autocorrelation function or cross-correlation function, depends on the lag-time between the times being considered. Typically such functions, ρ(t), decay to zero with increasing lag-time, but they can take values at any level of correlation: strong or weak, positive or negative. The life-time of a correlation is defined as the length of time over which the correlation coefficient stays at the strong level.[1] The durability of the correlation is determined by that signal, the strong level of correlation being separated from the weak and negative levels. The mean life-time of correlation can be used to measure how the durability of the correlation depends on the window width, where the window is the length of the time series used to calculate the correlation.
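As a rough illustration (not taken from the article), the life-time can be estimated from a sample autocorrelation function; the AR(1) test series and the threshold taken to mark "strong" correlation below are arbitrary choices.

```python
# Illustrative estimate of a correlation life-time: the longest initial run
# of lags at which the sample autocorrelation stays above a chosen "strong"
# threshold.
import numpy as np

rng = np.random.default_rng(3)
n = 5000
x = np.zeros(n)
for t in range(1, n):                       # AR(1) process with slowly decaying ACF
    x[t] = 0.9 * x[t - 1] + rng.normal()

def sample_acf(x, lag):
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x) if lag > 0 else 1.0

strong = 0.5                                # illustrative "strong correlation" level
lifetime = 0
while sample_acf(x, lifetime + 1) >= strong:
    lifetime += 1
print(lifetime)                             # number of lags the ACF stays "strong"
```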
https://en.wikipedia.org/wiki/Life-time_of_correlation
Ininformation theory,perplexityis a measure of uncertainty in the value of a sample from a discrete probability distribution. The larger the perplexity, the less likely it is that an observer can guess the value which will be drawn from the distribution. Perplexity was originally introduced in 1977 in the context ofspeech recognitionbyFrederick Jelinek,Robert Leroy Mercer, Lalit R. Bahl, andJames K. Baker.[1] The perplexityPPof a discreteprobability distributionpis a concept widely used in information theory,machine learning, and statistical modeling. It is defined as whereH(p) is theentropy(inbits) of the distribution, andxranges over theevents. The base of the logarithm need not be 2: The perplexity is independent of the base, provided that the entropy and the exponentiation use the same base. In some contexts, this measure is also referred to as the(order-1 true) diversity. Perplexity of arandom variableXmay be defined as the perplexity of the distribution over its possible valuesx. It can be thought of as a measure of uncertainty or "surprise" related to the outcomes. For a probability distributionpwhere exactlykoutcomes each have a probability of1/kand all other outcomes have a probability of zero, the perplexity of this distribution is simplyk. This is because the distribution models a fairk-sideddie, with each of thekoutcomes being equally likely. In this context, the perplexitykindicates that there is as much uncertainty as there would be when rolling a fairk-sided die. Even if a random variable has more thankpossible outcomes, the perplexity will still bekif the distribution is uniform overkoutcomes and zero for the rest. Thus, a random variable with a perplexity ofkcan be described as being "k-ways perplexed," meaning it has the same level of uncertainty as a fairk-sided die. Perplexity is sometimes used as a measure of the difficulty of a prediction problem. It is, however, generally not a straight forward representation of the relevant probability. For example, if you have two choices, one with probability 0.9, your chances of a correct guess using the optimal strategy are 90 percent. Yet, the perplexity is 2−0.9 log20.9 - 0.1 log20.1= 1.38. The inverse of the perplexity, 1/1.38 = 0.72, does not correspond to the 0.9 probability. The perplexity is the exponentiation of the entropy, a more commonly encountered quantity. Entropy measures the expected or "average" number of bits required to encode the outcome of the random variable using an optimalvariable-length code. It can also be regarded as the expected information gain from learning the outcome of the random variable, providing insight into the uncertainty and complexity of the underlying probability distribution. A model of an unknown probability distributionp, may be proposed based on a training sample that was drawn fromp. Given a proposed probability modelq, one may evaluateqby asking how well it predicts a separate test samplex1,x2, ...,xNalso drawn fromp. The perplexity of the modelqis defined as whereb{\displaystyle b}is customarily 2. Better modelsqof the unknown distributionpwill tend to assign higher probabilitiesq(xi) to the test events. Thus, they have lower perplexity because they are less surprised by the test sample. This is equivalent to saying that better models have higherlikelihoodsfor the test data, which leads to a lower perplexity value. The exponent above may be regarded as the average number of bits needed to represent a test eventxiif one uses an optimal code based onq. 
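A minimal sketch of these definitions follows, using the fair die and the 0.9/0.1 example from the text.

```python
# Perplexity of a discrete distribution as base ** H(p).  A fair k-sided die
# has perplexity k; the biased {0.9, 0.1} example works out to about 1.38.
import math

def perplexity(probs, base=2):
    entropy = -sum(p * math.log(p, base) for p in probs if p > 0)
    return base ** entropy

print(perplexity([1 / 6] * 6))    # ~6.0: a fair six-sided die
print(perplexity([0.9, 0.1]))     # ~1.38, even though the best guess is right 90% of the time
```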
Low-perplexity models do a better job of compressing the test sample, requiring few bits per test element on average because $q(x_i)$ tends to be high.

The exponent $-\frac{1}{N} \sum_{i=1}^{N} \log_b q(x_i)$ may also be interpreted as a cross-entropy, $H(\tilde{p}, q) = -\sum_x \tilde{p}(x) \log_b q(x)$, where $\tilde{p}$ denotes the empirical distribution of the test sample (i.e., $\tilde{p}(x) = n/N$ if x appeared n times in the test sample of size N). By the definition of KL divergence, it is also equal to $H(\tilde{p}) + D_{KL}(\tilde{p} \| q)$, which is $\geq H(\tilde{p})$. Consequently, the perplexity is minimized when $q = \tilde{p}$.

In natural language processing (NLP), a corpus is a structured collection of texts or documents, and a language model is a probability distribution over entire texts or documents. Consequently, in NLP the more commonly used measure is perplexity per token (word or, more frequently, sub-word), defined as $\left( \prod_{i=1}^{n} q(s_i) \right)^{-1/N}$, where $s_1, \ldots, s_n$ are the n documents in the corpus and N is the number of tokens in the corpus. This normalizes the perplexity by the length of the text, allowing for more meaningful comparisons between different texts or models.

Suppose the average text $x_i$ in the corpus has a probability of $2^{-190}$ according to the language model. This would give a model perplexity of $2^{190}$ per sentence. However, in NLP it is more common to normalize by the length of a text. Thus, if the test sample has a length of 1,000 tokens and could be coded using 7.95 bits per token, one could report a model perplexity of $2^{7.95} = 247$ per token. In other words, the model is as confused on the test data as if it had to choose uniformly and independently among 247 possibilities for each token.

There are two standard evaluation metrics for language models: perplexity and word error rate (WER). The simpler of these, WER, is the percentage of erroneously recognized words E (deletions, insertions, substitutions) relative to the total number of words N in a speech recognition task, i.e. $\mathrm{WER} = \frac{E}{N} \times 100\%$. The second metric, perplexity (per token), is an information-theoretic measure that evaluates the similarity of a proposed model m to the original distribution p. It can be computed as the inverse of the (geometric) average probability of the test set T:

$$\mathrm{PPL}(D) = \sqrt[N]{\frac{1}{m(T)}} = 2^{-\frac{1}{N} \log_2 m(T)}$$

where N is the number of tokens in test set T. This equation can be seen as the exponentiated cross-entropy, where the cross-entropy H(p; m) is approximated as

$$H(p; m) = -\frac{1}{N} \log_2 m(T)$$

Since 2007, significant advancements in language modeling have emerged, particularly with the advent of deep learning techniques. Perplexity per token, a measure that quantifies the predictive power of a language model, has remained central to evaluating models such as the dominant transformer models like Google's BERT, OpenAI's GPT-4 and other large language models (LLMs).
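Per-token perplexity can be sketched for a toy model as follows; the unigram "model" and the test tokens are invented, and a real language model would assign conditional probabilities to each token rather than fixed unigram probabilities.

```python
# Per-token perplexity of a model q on held-out tokens: the base raised to
# the average negative log-probability per token.
import math

def perplexity_per_token(q, tokens, base=2):
    n = len(tokens)
    log_sum = sum(math.log(q[t], base) for t in tokens)
    return base ** (-log_sum / n)

q = {"the": 0.4, "cat": 0.2, "sat": 0.2, "mat": 0.2}   # hypothetical unigram model
test_tokens = ["the", "cat", "sat", "the", "mat"]
print(perplexity_per_token(q, test_tokens))            # lower means less "surprised"
```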
This measure was employed to compare different models on the same dataset and to guide the optimization of hyperparameters, although it has been found sensitive to factors such as linguistic features and sentence length.[2]

Despite its pivotal role in language model development, perplexity has shown limitations, particularly as an inadequate predictor of speech recognition performance, overfitting and generalization,[3][4] raising questions about the benefits of blindly optimizing perplexity alone.

The lowest perplexity that had been published on the Brown Corpus (1 million words of American English of varying topics and genres) as of 1992 is indeed about 247 per word/token, corresponding to a cross-entropy of $\log_2 247 = 7.95$ bits per word, or 1.75 bits per letter,[5] using a trigram model. While this figure represented the state of the art (SOTA) at the time, advancements in techniques such as deep learning have led to significant improvements in perplexity on other benchmarks, such as the One Billion Word Benchmark.[6]

In the context of the Brown Corpus, simply guessing that the next word is "the" will achieve an accuracy of 7 percent, in contrast with the 1/247 = 0.4 percent that might be expected from a naive use of perplexity. This difference underscores the importance of the statistical model used and the nuanced nature of perplexity as a measure of predictiveness.[7] The guess is based on unigram statistics, not on the trigram statistics that yielded the perplexity of 247, and utilizing trigram statistics would further refine the prediction.
https://en.wikipedia.org/wiki/Perplexity
Biomedical engineering (BME) or medical engineering is the application of engineering principles and design concepts to medicine and biology for healthcare applications (e.g., diagnostic or therapeutic purposes). BME draws on the engineering, biological, and medical sciences to advance health care treatment, including diagnosis, monitoring, and therapy.[1][2] Also included under the scope of a biomedical engineer is the management of current medical equipment in hospitals while adhering to relevant industry standards. This involves procurement, routine testing, preventive maintenance, and making equipment recommendations, a role also known as a Biomedical Equipment Technician (BMET) or as a clinical engineer.

Biomedical engineering has recently emerged as its own field of study, as compared to many other engineering fields.[3] Such an evolution is common as a new field transitions from being an interdisciplinary specialization among already-established fields to being considered a field in itself. Much of the work in biomedical engineering consists of research and development, spanning a broad array of subfields (see below). Prominent biomedical engineering applications include the development of biocompatible prostheses, various diagnostic and therapeutic medical devices ranging from clinical equipment to micro-implants, imaging and diagnostic technologies such as MRI and EKG/ECG, regenerative tissue growth, and the development of pharmaceutical drugs, including biopharmaceuticals.

Bioinformatics is an interdisciplinary field that develops methods and software tools for understanding biological data. As an interdisciplinary field of science, bioinformatics combines computer science, statistics, mathematics, and engineering to analyze and interpret biological data. Bioinformatics is considered both an umbrella term for the body of biological studies that use computer programming as part of their methodology, as well as a reference to specific analysis "pipelines" that are repeatedly used, particularly in the field of genomics. Common uses of bioinformatics include the identification of candidate genes and nucleotides (SNPs). Often, such identification is made with the aim of better understanding the genetic basis of disease, unique adaptations, desirable properties (especially in agricultural species), or differences between populations. In a less formal way, bioinformatics also tries to understand the organizational principles within nucleic acid and protein sequences.

Biomechanics is the study of the structure and function of the mechanical aspects of biological systems, at any level from whole organisms to organs, cells and cell organelles,[4] using the methods of mechanics.[5]

A biomaterial is any matter, surface, or construct that interacts with living systems. As a science, biomaterials is about fifty years old. The study of biomaterials is called biomaterials science or biomaterials engineering. It has experienced steady and strong growth over its history, with many companies investing large amounts of money into the development of new products. Biomaterials science encompasses elements of medicine, biology, chemistry, tissue engineering, and materials science.

Biomedical optics combines the principles of physics, engineering, and biology to study the interaction of biological tissue and light, and how this can be exploited for sensing, imaging, and treatment.[6] It has a wide range of applications, including optical imaging, microscopy, ophthalmoscopy, spectroscopy, and therapy.
Examples of biomedical optics techniques and technologies includeoptical coherence tomography(OCT),fluorescence microscopy,confocal microscopy, andphotodynamic therapy(PDT). OCT, for example, uses light to create high-resolution, three-dimensional images of internal structures, such as theretinain the eye or thecoronary arteriesin the heart. Fluorescence microscopy involves labeling specific molecules with fluorescent dyes and visualizing them using light, providing insights into biological processes and disease mechanisms. More recently,adaptive opticsis helping imaging by correcting aberrations in biological tissue, enabling higher resolution imaging and improved accuracy in procedures such as laser surgery and retinal imaging. Tissue engineering, like genetic engineering (see below), is a major segment ofbiotechnology– which overlaps significantly with BME. One of the goals of tissue engineering is to create artificial organs (via biological material) such as kidneys, livers, for patients that need organ transplants. Biomedical engineers are currently researching methods of creating such organs. Researchers have grown solidjawbones[7]andtracheas[8]from human stem cells towards this end. Severalartificial urinary bladdershave been grown in laboratories and transplanted successfully into human patients.[9]Bioartificial organs, which use both synthetic and biological component, are also a focus area in research, such as with hepatic assist devices that use liver cells within an artificial bioreactor construct.[10] Genetic engineering, recombinant DNA technology, genetic modification/manipulation (GM) and gene splicing are terms that apply to the direct manipulation of an organism's genes. Unlike traditional breeding, an indirect method of genetic manipulation, genetic engineering utilizes modern tools such as molecular cloning and transformation to directly alter the structure and characteristics of target genes. Genetic engineering techniques have found success in numerous applications. Some examples include the improvement of crop technology (not a medical application, but seebiological systems engineering), the manufacture of synthetic human insulin through the use of modified bacteria, the manufacture of erythropoietin in hamster ovary cells, and the production of new types of experimental mice such as the oncomouse (cancer mouse) for research.[citation needed] Neural engineering(also known as neuroengineering) is a discipline that uses engineering techniques to understand, repair, replace, or enhance neural systems. Neural engineers are uniquely qualified to solve design problems at the interface of living neural tissue and non-living constructs. Neural engineering can assist with numerous things, including the future development of prosthetics. For example, cognitive neural prosthetics (CNP) are being heavily researched and would allow for a chip implant to assist people who have prosthetics by providing signals to operate assistive devices.[11] Pharmaceutical engineeringis an interdisciplinary science that includes drug engineering, novel drug delivery and targeting, pharmaceutical technology, unit operations ofchemical engineering, and pharmaceutical analysis. It may be deemed as a part ofpharmacydue to its focus on the use of technology on chemical agents in providing better medicinal treatment. 
This is anextremely broad category—essentially covering all health care products that do not achieve their intended results through predominantly chemical (e.g., pharmaceuticals) or biological (e.g., vaccines) means, and do not involve metabolism. A medical device is intended for use in: Some examples includepacemakers,infusion pumps, theheart-lung machine,dialysismachines,artificial organs,implants,artificial limbs,corrective lenses,cochlear implants,ocular prosthetics,facial prosthetics, somato prosthetics, anddental implants. Stereolithographyis a practical example ofmedical modelingbeing used to create physical objects. Beyond modeling organs and the human body, emerging engineering techniques are also currently used in the research and development of new devices for innovative therapies,[12]treatments,[13]patient monitoring,[14]of complex diseases. Medical devices are regulated and classified (in the US) as follows (see alsoRegulation): Medical/biomedical imaging is a major segment ofmedical devices. This area deals with enabling clinicians to directly or indirectly "view" things not visible in plain sight (such as due to their size, and/or location). This can involve utilizing ultrasound, magnetism, UV, radiology, and other means. Alternatively, navigation-guided equipment utilizeselectromagnetictracking technology, such ascatheterplacement into the brain orfeeding tubeplacement systems. For example, ENvizion Medical's ENvue, an electromagnetic navigation system for enteral feeding tube placement. The system uses an external field generator and several EM passive sensors enabling scaling of the display to the patient's body contour, and a real-time view of the feeding tube tip location and direction, which helps the medical staff ensure the correct placement in theGI tract.[15] Imaging technologies are often essential to medical diagnosis, and are typically the most complex equipment found in a hospital including:fluoroscopy,magnetic resonance imaging(MRI),nuclear medicine,positron emission tomography(PET),PET-CT scans, projection radiography such asX-raysandCT scans,tomography,ultrasound,optical microscopy, andelectron microscopy. An implant is a kind of medical device made to replace and act as a missing biological structure (as compared with a transplant, which indicates transplanted biomedical tissue). The surface of implants that contact the body might be made of a biomedical material such as titanium, silicone or apatite depending on what is the most functional. In some cases, implants contain electronics, e.g. artificial pacemakers and cochlear implants. Some implants are bioactive, such as subcutaneous drug delivery devices in the form of implantable pills ordrug-eluting stents. Artificial body part replacements are one of the many applications of bionics. Concerned with the intricate and thorough study of the properties and function of human body systems, bionics may be applied to solve some engineering problems. Careful study of the different functions and processes of the eyes, ears, and other organs paved the way for improved cameras, television, radio transmitters and receivers, and many other tools. In recent years biomedical sensors based in microwave technology have gained more attention. 
Different sensors can be manufactured for specific uses in both diagnosing and monitoring disease conditions, for example microwave sensors can be used as a complementary technique to X-ray to monitor lower extremity trauma.[16]The sensor monitor the dielectric properties and can thus notice change in tissue (bone, muscle, fat etc.) under the skin so when measuring at different times during the healing process the response from the sensor will change as the trauma heals. Clinical engineeringis the branch of biomedical engineering dealing with the actual implementation ofmedical equipmentand technologies in hospitals or other clinical settings. Major roles of clinical engineers include training and supervisingbiomedical equipment technicians (BMETs), selecting technological products/services and logistically managing their implementation, working with governmental regulators on inspections/audits, and serving as technological consultants for other hospital staff (e.g. physicians, administrators, I.T., etc.). Clinical engineers also advise and collaborate with medical device producers regarding prospective design improvements based on clinical experiences, as well as monitor the progression of the state of the art so as to redirect procurement patterns accordingly. Their inherent focus onpracticalimplementation of technology has tended to keep them oriented more towardsincremental-level redesigns and reconfigurations, as opposed to revolutionary research & development or ideas that would be many years from clinical adoption; however, there is a growing effort to expand this time-horizon over which clinical engineers can influence the trajectory of biomedical innovation. In their various roles, they form a "bridge" between the primary designers and the end-users, by combining the perspectives of being both close to the point-of-use, while also trained in product and process engineering. Clinical engineering departments will sometimes hire not just biomedical engineers, but also industrial/systems engineers to help address operations research/optimization, human factors, cost analysis, etc. Also, seesafety engineeringfor a discussion of the procedures used to design safe systems. The clinical engineering department is constructed with a manager, supervisor, engineer, and technician. One engineer per eighty beds in the hospital is the ratio. Clinical engineers are also authorized to audit pharmaceutical and associated stores to monitor FDA recalls of invasive items. Rehabilitation engineeringis the systematic application of engineering sciences to design, develop, adapt, test, evaluate, apply, and distribute technological solutions to problems confronted by individuals with disabilities. Functional areas addressed through rehabilitation engineering may include mobility, communications, hearing, vision, and cognition, and activities associated with employment, independent living, education, and integration into the community.[1] While some rehabilitation engineers have master's degrees in rehabilitation engineering, usually a subspecialty of Biomedical engineering, most rehabilitation engineers have an undergraduate or graduate degrees in biomedical engineering, mechanical engineering, or electrical engineering. 
A Portuguese university provides an undergraduate degree and a master's degree in Rehabilitation Engineering and Accessibility.[7][9]Qualification to become a Rehab' Engineer in the UK is possible via a University BSc Honours Degree course such as Health Design & Technology Institute, Coventry University.[10] The rehabilitation process for people with disabilities often entails the design of assistive devices such as Walking aids intended to promote the inclusion of their users into the mainstream of society, commerce, and recreation. Regulatory issues have been constantly increased in the last decades to respond to the many incidents caused by devices to patients. For example, from 2008 to 2011, in US, there were 119 FDA recalls of medical devices classified as class I. According to U.S. Food and Drug Administration (FDA),Class I recallis associated to "a situation in which there is a reasonable probability that the use of, or exposure to, a product will cause serious adverse health consequences or death"[17] Regardless of the country-specific legislation, the main regulatory objectives coincide worldwide.[18]For example, in the medical device regulations, a product must be 1), safe 2), effective and 3), applicable to all the manufactured devices. A product is safe if patients, users, and third parties do not run unacceptable risks of physical hazards, such as injury or death, in its intended use. Protective measures must be introduced on devices that are hazardous to reduce residual risks at an acceptable level if compared with the benefit derived from the use of it. A product is effective if it performs as specified by the manufacturer in the intended use. Proof of effectiveness is achieved through clinical evaluation, compliance to performance standards or demonstrations of substantial equivalence with an already marketed device. The previous features have to be ensured for all the manufactured items of the medical device. This requires that a quality system shall be in place for all the relevant entities and processes that may impact safety and effectiveness over the whole medical device lifecycle. The medical device engineering area is among the most heavily regulated fields of engineering, and practicing biomedical engineers must routinely consult and cooperate with regulatory law attorneys and other experts. The Food and Drug Administration (FDA) is the principal healthcare regulatory authority in the United States, having jurisdiction over medicaldevices, drugs, biologics, and combinationproducts. The paramount objectives driving policy decisions by the FDA are safety and effectiveness of healthcare products that have to be assured through a quality system in place as specified under21 CFR 829 regulation. In addition, because biomedical engineers often develop devices and technologies for "consumer" use, such as physical therapy devices (which are also "medical" devices), these may also be governed in some respects by theConsumer Product Safety Commission. The greatest hurdles tend to be 510K "clearance" (typically for Class 2 devices) or pre-market "approval" (typically for drugs and class 3 devices). In the European context, safety effectiveness and quality is ensured through the "Conformity Assessment" which is defined as "the method by which a manufacturer demonstrates that its device complies with the requirements of the EuropeanMedical Device Directive". 
The directive specifies different procedures according to the class of the device ranging from the simple Declaration of Conformity (Annex VII) for Class I devices to EC verification (Annex IV), Production quality assurance (Annex V), Product quality assurance (Annex VI) and Full quality assurance (Annex II). The Medical Device Directive specifies detailed procedures for Certification. In general terms, these procedures include tests and verifications that are to be contained in specific deliveries such as the risk management file, the technical file, and the quality system deliveries. The risk management file is the first deliverable that conditions the following design and manufacturing steps. The risk management stage shall drive the product so that product risks are reduced at an acceptable level with respect to the benefits expected for the patients for the use of the device. Thetechnical filecontains all the documentation data and records supporting medical device certification. FDA technical file has similar content although organized in a different structure. The Quality System deliverables usually include procedures that ensure quality throughout all product life cycles. The same standard (ISO EN 13485) is usually applied for quality management systems in the US and worldwide. In the European Union, there are certifying entities named "Notified Bodies", accredited by the European Member States. The Notified Bodies must ensure the effectiveness of the certification process for all medical devices apart from the class I devices where a declaration of conformity produced by the manufacturer is sufficient for marketing. Once a product has passed all the steps required by the Medical Device Directive, the device is entitled to bear aCE marking, indicating that the device is believed to be safe and effective when used as intended, and, therefore, it can be marketed within the European Union area. The different regulatory arrangements sometimes result in particular technologies being developed first for either the U.S. or in Europe depending on the more favorable form of regulation. While nations often strive for substantive harmony to facilitate cross-national distribution, philosophical differences about theoptimal extentof regulation can be a hindrance; more restrictive regulations seem appealing on an intuitive level, but critics decry the tradeoff cost in terms of slowing access to life-saving developments. Directive 2011/65/EU, better known as RoHS 2 is a recast of legislation originally introduced in 2002. The original EU legislation "Restrictions of Certain Hazardous Substances in Electrical and Electronics Devices" (RoHS Directive 2002/95/EC) was replaced and superseded by 2011/65/EU published in July 2011 and commonly known as RoHS 2.RoHSseeks to limit the dangerous substances in circulation in electronics products, in particular toxins and heavy metals, which are subsequently released into the environment when such devices are recycled. The scope of RoHS 2 is widened to include products previously excluded, such as medical devices and industrial equipment. In addition, manufacturers are now obliged to provide conformity risk assessments and test reports – or explain why they are lacking. For the first time, not only manufacturers but also importers and distributors share a responsibility to ensure Electrical and Electronic Equipment within the scope of RoHS complies with the hazardous substances limits and have a CE mark on their products. 
The new International StandardIEC 60601for home healthcare electro-medical devices defining the requirements for devices used in the home healthcare environment. IEC 60601-1-11 (2010) must now be incorporated into the design and verification of a wide range of home use and point of care medical devices along with other applicable standards in the IEC 60601 3rd edition series. The mandatory date for implementation of the EN European version of the standard is June 1, 2013. The US FDA requires the use of the standard on June 30, 2013, while Health Canada recently extended the required date from June 2012 to April 2013. The North American agencies will only require these standards for new device submissions, while the EU will take the more severe approach of requiring all applicable devices being placed on the market to consider the home healthcare standard. AS/ANS 3551:2012is the Australian and New Zealand standards for the management of medical devices. The standard specifies the procedures required to maintain a wide range of medical assets in a clinical setting (e.g. Hospital).[19]The standards are based on the IEC 606101 standards. The standard covers a wide range of medical equipment management elements including, procurement, acceptance testing, maintenance (electrical safety and preventive maintenance testing) and decommissioning. Biomedical engineers require considerable knowledge of both engineering and biology, and typically have a Bachelor's (B.Sc., B.S., B.Eng. or B.S.E.) or Master's (M.S., M.Sc., M.S.E., or M.Eng.) or a doctoral (Ph.D., orMD-PhD[20][21][22]) degree in BME (Biomedical Engineering) or another branch of engineering with considerable potential for BME overlap. As interest in BME increases, many engineering colleges now have a Biomedical Engineering Department or Program, with offerings ranging from the undergraduate (B.Sc., B.S., B.Eng. or B.S.E.) to doctoral levels. Biomedical engineering has only recently been emerging asits own disciplinerather than a cross-disciplinary hybrid specialization of other disciplines; and BME programs at all levels are becoming more widespread, including theBachelor of Science in Biomedical Engineeringwhich includes enough biological science content that many students use it as a "pre-med" major in preparation formedical school. The number of biomedical engineers is expected to rise as both a cause and effect of improvements in medical technology.[23] In the U.S., an increasing number ofundergraduateprograms are also becoming recognized byABETas accredited bioengineering/biomedical engineering programs. As of 2023, 155 programs are currently accredited by ABET.[24] In Canada and Australia, accredited graduate programs in biomedical engineering are common.[25]For example,McMaster Universityoffers an M.A.Sc, an MD/PhD, and a PhD in Biomedical engineering.[26]The first CanadianundergraduateBME program was offered atUniversity of Guelphas a four-year B.Eng. program.[27]The Polytechnique in Montreal is also offering a bachelors's degree in biomedical engineering[28]as is Flinders University.[29] As with many degrees, the reputation and ranking of a program may factor into the desirability of a degree holder for either employment or graduate admission. The reputation of many undergraduate degrees is also linked to the institution's graduate or research programs, which have some tangible factors for rating, such as research funding and volume, publications and citations. 
With BME specifically, the ranking of a university's hospital and medical school can also be a significant factor in the perceived prestige of its BME department/program. Graduate educationis a particularly important aspect in BME. While many engineering fields (such as mechanical or electrical engineering) do not need graduate-level training to obtain an entry-level job in their field, the majority of BME positions do prefer or even require them.[30]Since most BME-related professions involve scientific research, such as inpharmaceuticalandmedical devicedevelopment, graduate education is almost a requirement (as undergraduate degrees typically do not involve sufficient research training and experience). This can be either a Masters or Doctoral level degree; while in certain specialties a Ph.D. is notably more common than in others, it is hardly ever the majority (except in academia). In fact, the perceived need for some kind of graduate credential is so strong that some undergraduate BME programs will actively discourage students from majoring in BME without an expressed intention to also obtain a master's degree or apply to medical school afterwards. Graduate programs in BME, like in other scientific fields, are highly varied, and particular programs may emphasize certain aspects within the field. They may also feature extensive collaborative efforts with programs in other fields (such as the university's Medical School or other engineering divisions), owing again to the interdisciplinary nature of BME. M.S. and Ph.D. programs will typically require applicants to have an undergraduate degree in BME, oranother engineeringdiscipline (plus certain life science coursework), orlife science(plus certain engineering coursework). Education in BME also varies greatly around the world. By virtue of its extensive biotechnology sector, its numerous major universities, and relatively few internal barriers, the U.S. has progressed a great deal in its development of BME education and training opportunities. Europe, which also has a large biotechnology sector and an impressive education system, has encountered trouble in creating uniform standards as the European community attempts to supplant some of the national jurisdictional barriers that still exist. Recently, initiatives such as BIOMEDEA have sprung up to develop BME-related education and professional standards.[31]Other countries, such as Australia, are recognizing and moving to correct deficiencies in their BME education.[32]Also, as high technology endeavors are usually marks of developed nations, some areas of the world are prone to slower development in education, including in BME. As with other learned professions, each state has certain (fairly similar) requirements for becoming licensed as a registeredProfessional Engineer(PE), but, in US, in industry such a license is not required to be an employee as an engineer in the majority of situations (due to an exception known as the industrial exemption, which effectively applies to the vast majority of American engineers). The US model has generally been only to require the practicing engineers offering engineering services that impact the public welfare, safety, safeguarding of life, health, or property to be licensed, while engineers working in private industry without a direct offering of engineering services to the public or other businesses, education, and government need not be licensed. 
This is notably not the case in many other countries, where a license is as legally necessary to practice engineering as it is for law or medicine. Biomedical engineering is regulated in some countries, such as Australia, but registration is typically only recommended and not required.[33]

In the UK, mechanical engineers working in the areas of medical engineering, bioengineering, or biomedical engineering can gain Chartered Engineer status through the Institution of Mechanical Engineers. The Institution also runs the Engineering in Medicine and Health Division.[34] The Institute of Physics and Engineering in Medicine (IPEM) has a panel for the accreditation of MSc courses in biomedical engineering, and Chartered Engineer status can also be sought through IPEM.

The Fundamentals of Engineering exam, the first (and more general) of two licensure examinations for most U.S. jurisdictions, now covers biology (although technically not BME). For the second exam, called the Principles and Practices, Part 2, or the Professional Engineering exam, candidates may select a particular engineering discipline's content to be tested on; there is currently no option for BME, meaning that any biomedical engineer seeking a license must prepare to take this examination in another category (which does not affect the actual license, since most jurisdictions do not recognize discipline specialties anyway). However, the Biomedical Engineering Society (BMES) is, as of 2009, exploring the possibility of seeking to implement a BME-specific version of this exam to facilitate biomedical engineers pursuing licensure.

Beyond governmental registration, certain private-sector professional/industrial organizations also offer certifications with varying degrees of prominence. One such example is the Certified Clinical Engineer (CCE) certification for clinical engineers.

In 2012 there were about 19,400 biomedical engineers employed in the US, and the field was predicted to grow by 5% (faster than average) from 2012 to 2022.[35] Biomedical engineering has the highest percentage of female engineers compared to other common engineering professions. As of 2023, there were about 19,700 biomedical engineering jobs in the US, with average pay of around $100,730 per year (roughly $48.43 an hour), and employment is projected to grow by 7% from 2023 to 2033, again faster than the average for all occupations (Bureau of Labor Statistics, U.S. Department of Labor, Occupational Outlook Handbook, "Bioengineers and Biomedical Engineers", retrieved October 27, 2024).
https://en.wikipedia.org/wiki/Biomedical_engineering
Inmathematics, amorphismis a concept ofcategory theorythat generalizes structure-preservingmapssuch ashomomorphismbetweenalgebraic structures,functionsfrom a set to another set, andcontinuous functionsbetweentopological spaces. Although many examples of morphisms are structure-preserving maps, morphisms need not to be maps, but they can be composed in a way that is similar tofunction composition. Morphisms andobjectsare constituents of acategory. Morphisms, also calledmapsorarrows, relate two objects called thesourceand thetargetof the morphism. There is apartial operation, calledcomposition, on the morphisms of a category that is defined if the target of the first object equals the source of the second object. The composition of morphisms behave like function composition (associativityof composition when it is defined, and existence of anidentity morphismfor every object). Morphisms and categories recur in much of contemporary mathematics. Originally, they were introduced forhomological algebraandalgebraic topology. They belong to the foundational tools ofGrothendieck'sscheme theory, a generalization ofalgebraic geometrythat applies also toalgebraic number theory. AcategoryCconsists of twoclasses, one ofobjectsand the other ofmorphisms. There are two objects that are associated to every morphism, thesourceand thetarget. AmorphismffromXtoYis a morphism with sourceXand targetY; it is commonly written asf:X→YorXf→Ythe latter form being better suited forcommutative diagrams. For many common categories, an object is aset(often with some additional structure) and a morphism is afunctionfrom an object to another object. Therefore, the source and the target of a morphism are often calleddomainandcodomainrespectively. Morphisms are equipped with apartial binary operation, calledcomposition. The composition of two morphismsfandgis defined precisely when the target offis the source ofg, and is denotedg∘f(or sometimes simplygf). The source ofg∘fis the source off, and the target ofg∘fis the target ofg. The composition satisfies twoaxioms: For a concrete category (a category in which the objects are sets, possibly with additional structure, and the morphisms are structure-preserving functions), the identity morphism is just theidentity function, and composition is just ordinarycomposition of functions. The composition of morphisms is often represented by acommutative diagram. For example, The collection of all morphisms fromXtoYis denotedHomC(X,Y)or simplyHom(X,Y)and called thehom-setbetweenXandY. Some authors writeMorC(X,Y),Mor(X,Y)orC(X,Y). The term hom-set is something of a misnomer, as the collection of morphisms is not required to be a set; a category whereHom(X,Y)is a set for all objectsXandYis calledlocally small. Because hom-sets may not be sets, some people prefer to use the term "hom-class". The domain and codomain are in fact part of the information determining a morphism. For example, in thecategory of sets, where morphisms are functions, two functions may be identical as sets of ordered pairs (may have the samerange), while having different codomains. The two functions are distinct from the viewpoint of category theory. Thus many authors require that the hom-classesHom(X,Y)bedisjoint. In practice, this is not a problem because if this disjointness does not hold, it can be assured by appending the domain and codomain to the morphisms (say, as the second and third components of an ordered triple). A morphismf:X→Yis called amonomorphismiff∘g1=f∘g2impliesg1=g2for all morphismsg1,g2:Z→X. 
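As an illustration (assuming the concrete category of finite sets, with functions represented as Python dicts), composition and identities can be written out directly; this small sketch is not part of the article.

```python
# The category of finite sets sketched concretely: a morphism is a dict
# mapping every element of its source to an element of its target;
# composition and identities behave exactly as the axioms require.
def identity(obj):
    return {x: x for x in obj}

def compose(g, f):
    # g after f, defined only when the target of f equals the source of g.
    return {x: g[f[x]] for x in f}

X = {1, 2}
Y = {"a", "b", "c"}
f = {1: "a", 2: "c"}                 # a morphism f : X -> Y
g = {"a": 10, "b": 10, "c": 20}      # a morphism g : Y -> {10, 20}

print(compose(g, f))                 # g . f : X -> {10, 20}
print(compose(f, identity(X)) == f)  # f . id_X == f
print(compose(identity(Y), f) == f)  # id_Y . f == f
```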
A monomorphism can be called amonofor short, and we can usemonicas an adjective.[1]A morphismfhas aleft inverseor is asplit monomorphismif there is a morphismg:Y→Xsuch thatg∘f= idX. Thusf∘g:Y→Yisidempotent; that is,(f∘g)2=f∘ (g∘f) ∘g=f∘g. The left inversegis also called aretractionoff.[1] Morphisms with left inverses are always monomorphisms, but theconverseis not true in general; a monomorphism may fail to have a left inverse. Inconcrete categories, a function that has a left inverse isinjective. Thus in concrete categories, monomorphisms are often, but not always, injective. The condition of being an injection is stronger than that of being a monomorphism, but weaker than that of being a split monomorphism. Dually to monomorphisms, a morphismf:X→Yis called anepimorphismifg1∘f=g2∘fimpliesg1=g2for all morphismsg1,g2:Y→Z. An epimorphism can be called anepifor short, and we can useepicas an adjective.[1]A morphismfhas aright inverseor is asplit epimorphismif there is a morphismg:Y→Xsuch thatf∘g= idY. The right inversegis also called asectionoff.[1]Morphisms having a right inverse are always epimorphisms, but the converse is not true in general, as an epimorphism may fail to have a right inverse. If a monomorphismfsplits with left inverseg, thengis a split epimorphism with right inversef. Inconcrete categories, a function that has a right inverse issurjective. Thus in concrete categories, epimorphisms are often, but not always, surjective. The condition of being a surjection is stronger than that of being an epimorphism, but weaker than that of being a split epimorphism. In thecategory of sets, the statement that every surjection has a section is equivalent to theaxiom of choice. A morphism that is both an epimorphism and a monomorphism is called abimorphism. A morphismf:X→Yis called anisomorphismif there exists a morphismg:Y→Xsuch thatf∘g= idYandg∘f= idX. If a morphism has both left-inverse and right-inverse, then the two inverses are equal, sofis an isomorphism, andgis called simply theinverseoff. Inverse morphisms, if they exist, are unique. The inversegis also an isomorphism, with inversef. Two objects with an isomorphism between them are said to beisomorphicor equivalent. While every isomorphism is a bimorphism, a bimorphism is not necessarily an isomorphism. For example, in the category ofcommutative ringsthe inclusionZ→Qis a bimorphism that is not an isomorphism. However, any morphism that is both an epimorphism and asplitmonomorphism, or both a monomorphism and asplitepimorphism, must be an isomorphism. A category, such as aSet, in which every bimorphism is an isomorphism is known as abalanced category. A morphismf:X→X(that is, a morphism with identical source and target) is anendomorphismofX. Asplit endomorphismis an idempotent endomorphismfiffadmits a decompositionf=h∘gwithg∘h= id. In particular, theKaroubi envelopeof a category splits every idempotent morphism. Anautomorphismis a morphism that is both an endomorphism and an isomorphism. In every category, the automorphisms of an object always form agroup, called theautomorphism groupof the object. For more examples, seeCategory theory.
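The composition and identity axioms, and the link between injectivity and monomorphisms in concrete categories, can be checked on a toy example. The following is a minimal Python sketch in which objects are finite sets and morphisms are dictionaries mapping domain elements to codomain elements; the helper names compose and identity are purely illustrative.

```python
# Illustrative sketch: objects are finite sets, morphisms are dicts
# (domain element -> codomain element), composition is function composition.

def compose(g, f):
    """g o f: apply f first, then g (defined when f's target is g's source)."""
    return {x: g[f[x]] for x in f}

def identity(obj):
    return {x: x for x in obj}

X, Y, W, Z = {1, 2}, {"a", "b", "c"}, {True, False}, {0}
f = {1: "a", 2: "b"}                     # a morphism f : X -> Y
h = {"a": True, "b": False, "c": False}  # a morphism h : Y -> W
g1, g2 = {0: 1}, {0: 2}                  # two distinct morphisms Z -> X

# Identity and associativity axioms
assert compose(identity(Y), f) == f == compose(f, identity(X))
assert compose(h, compose(f, g1)) == compose(compose(h, f), g1)

# f is injective, hence a monomorphism here: it cannot identify g1 and g2.
assert compose(f, g1) != compose(f, g2)
print("category axioms and monomorphism check passed")
```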
https://en.wikipedia.org/wiki/Morphism
Dimensional modeling(DM) is part of theBusiness Dimensional Lifecyclemethodology developed byRalph Kimballwhich includes a set of methods, techniques and concepts for use indata warehousedesign.[1]: 1258–1260[2]The approach focuses on identifying the keybusiness processeswithin a business and modelling and implementing these first before adding additional business processes, as abottom-up approach.[1]: 1258–1260An alternative approach fromInmonadvocates a top down design of the model of all the enterprise data using tools such asentity-relationship modeling(ER).[1]: 1258–1260 Dimensional modeling always uses the concepts of facts (measures), and dimensions (context). Facts are typically (but not always) numeric values that can be aggregated, and dimensions are groups of hierarchies and descriptors that define the facts. For example, sales amount is a fact; timestamp, product, register#, store#, etc. are elements of dimensions. Dimensional models are built by business process area, e.g. store sales, inventory, claims, etc. Because the differentbusiness process areasshare some but not all dimensions, efficiency in design, operation, and consistency, is achieved usingconformed dimensions, i.e. using one copy of the shared dimension across subject areas.[citation needed] Dimensional modeling does not necessarily involve a relational database. The same modeling approach, at the logical level, can be used for any physical form, such as multidimensional database or even flat files. It is oriented around understandability and performance.[citation needed] The dimensional model is built on astar-like schemaorsnowflake schema, with dimensions surrounding the fact table.[3][4]To build the schema, the following design model is used: The process of dimensional modeling builds on a 4-step design method that helps to ensure the usability of the dimensional model and the use of thedata warehouse. The basics in the design build on the actual business process which thedata warehouseshould cover. Therefore, the first step in the model is to describe the business process which the model builds on. This could for instance be a sales situation in a retail store. To describe the business process, one can choose to do this in plain text or use basicBusiness Process Model and Notation(BPMN) or other design guides like theUnified Modeling Language|UML). After describing the business process, the next step in the design is to declare the grain of the model. The grain of the model is the exact description of what the dimensional model should be focusing on. This could for instance be “An individual line item on a customer slip from a retail store”. To clarify what the grain means, you should pick the central process and describe it with one sentence. Furthermore, the grain (sentence) is what you are going to build your dimensions and fact table from. You might find it necessary to go back to this step to alter the grain due to new information gained on what your model is supposed to be able to deliver. The third step in the design process is to define the dimensions of the model. The dimensions must be defined within the grain from the second step of the 4-step process. Dimensions are the foundation of the fact table, and is where the data for the fact table is collected. Typically dimensions are nouns like date, store, inventory etc. These dimensions are where all the data is stored. For example, the date dimension could contain data such as year, month and weekday. 
After defining the dimensions, the next step in the process is to make keys for the fact table and to identify the numeric facts that will populate each fact table row. This step is closely related to the business users of the system, since this is where they get access to data stored in the data warehouse. Therefore, most of the fact table rows contain numerical, additive figures such as quantity or cost per unit. Dimensional normalization, or snowflaking, removes redundant attributes that are otherwise repeated in the flat, de-normalized dimension tables; the dimensions are instead joined to sub-dimensions. Snowflaking therefore has an influence on the data structure that differs from many philosophies of data warehouses,[4] in which a single data (fact) table is surrounded by multiple descriptive (dimension) tables. Developers often do not normalize dimensions, for several reasons:[5] There are, however, some arguments for why normalization can be useful.[4] It can be an advantage when part of a hierarchy is common to more than one dimension. For example, a geographic dimension may be reusable because both the customer and supplier dimensions use it. Benefits of the dimensional model are the following:[6] The benefits of dimensional models are still obtained on Hadoop and similar big data frameworks; however, some features of Hadoop require the standard approach to dimensional modelling to be adapted slightly.[citation needed]
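As a concrete illustration of the fact/dimension vocabulary above, the following is a minimal Python (pandas) sketch of a star-like schema: a sales fact table whose grain is roughly one row per store per day, joined to a date dimension and a store dimension and then aggregated. The table and column names are invented for the example and are not taken from Kimball's material.

```python
import pandas as pd

# Two dimension tables (context) and one fact table (measures).
dim_date = pd.DataFrame({
    "date_key": [20240101, 20240102],
    "year": [2024, 2024],
    "month": [1, 1],
    "weekday": ["Mon", "Tue"],
})
dim_store = pd.DataFrame({
    "store_key": [1, 2],
    "store_name": ["Downtown", "Airport"],
    "region": ["East", "West"],
})
fact_sales = pd.DataFrame({
    "date_key": [20240101, 20240101, 20240102],
    "store_key": [1, 2, 1],
    "sales_amount": [120.0, 75.5, 98.0],   # the additive, numeric fact
})

# A typical dimensional query: join facts to their dimensions, then aggregate.
report = (fact_sales
          .merge(dim_date, on="date_key")
          .merge(dim_store, on="store_key")
          .groupby(["year", "month", "region"])["sales_amount"]
          .sum()
          .reset_index())
print(report)
```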
https://en.wikipedia.org/wiki/Dimensional_modeling
Incomputing,type introspectionis the ability of a program toexaminethetypeor properties of anobjectatruntime. Someprogramming languagespossess this capability. Introspection should not be confused withreflection, which goes a step further and is the ability for a program tomanipulatethe metadata, properties, and functions of an object at runtime. Some programming languages also possess that capability (e.g.,Java,Python,Julia, andGo). InObjective-C, for example, both the generic Object and NSObject (inCocoa/OpenStep) provide themethodisMemberOfClass:which returns true if the argument to the method is an instance of the specified class. The methodisKindOfClass:analogously returns true if the argument inherits from the specified class. For example, say we have anAppleand anOrangeclass inheriting fromFruit. Now, in theeatmethod we can write Now, wheneatis called with a generic object (anid), the function will behave correctly depending on the type of the generic object. C++ supports type introspection via therun-time type information(RTTI)typeidanddynamic castkeywords. Thedynamic_castexpression can be used to determine whether a particular object is of a particular derived class. For instance: Thetypeidoperator retrieves astd::type_infoobject describing the most derived type of an object: Type introspection has been a part of Object Pascal since the original release of Delphi, which uses RTTI heavily for visual form design. In Object Pascal, all classes descend from the base TObject class, which implements basic RTTI functionality. Every class's name can be referenced in code for RTTI purposes; the class name identifier is implemented as a pointer to the class's metadata, which can be declared and used as a variable of type TClass. The language includes anisoperator, to determine if an object is or descends from a given class, anasoperator, providing a type-checked typecast, and several TObject methods. Deeper introspection (enumerating fields and methods) is traditionally only supported for objects declared in the $M+ (a pragma) state, typically TPersistent, and only for symbols defined in the published section. Delphi 2010 increased this to nearly all symbols. The simplest example of type introspection in Java is theinstanceof[1]operator. Theinstanceofoperator determines whether a particular object belongs to a particular class (or a subclass of that class, or a class that implements that interface). For instance: Thejava.lang.Class[2]class is the basis of more advanced introspection. For instance, if it is desirable to determine the actual class of an object (rather than whether it is a member of aparticularclass),Object.getClass()andClass.getName()can be used: InPHPintrospection can be done usinginstanceofoperator. For instance: Introspection can be achieved using therefandisafunctions inPerl. We can introspect the following classes and their corresponding instances: using: Much more powerful introspection in Perl can be achieved using theMooseobject system[3]and theClass::MOPmeta-objectprotocol;[4]for example, you can check if a given objectdoesaroleX: This is how you can list fully qualified names of all of the methods that can be invoked on the object, together with the classes in which they were defined: The most common method of introspection inPythonis using thedirfunction to detail the attributes of an object. For example: Also, the built-in functionstypeandisinstancecan be used to determine what an objectiswhilehasattrcan determine what an objectdoes. 
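A minimal sketch of the Python built-ins just mentioned (dir, type, isinstance and hasattr), using class names that mirror the Fruit/Apple example given earlier for Objective-C:

```python
class Fruit: ...
class Apple(Fruit): ...

a = Apple()

print(dir(a))                  # list the attributes and methods of the object
print(type(a))                 # <class '__main__.Apple'>
print(type(a).__name__)        # 'Apple'
print(isinstance(a, Fruit))    # True - Apple inherits from Fruit
print(isinstance(a, Apple))    # True
print(hasattr(a, "price"))     # False - the object has no 'price' attribute
```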
For example: Type introspection is a core feature ofRuby. In Ruby, the Object class (ancestor of every class) providesObject#instance_of?andObject#kind_of?methods for checking the instance's class. The latter returns true when the particular instance the message was sent to is an instance of a descendant of the class in question. For example, consider the following example code (you can immediately try this with theInteractive Ruby Shell): In the example above, theClassclass is used as any other class in Ruby. Two classes are created,AandB, the former is being a superclass of the latter, then one instance of each class is checked. The last expression gives true becauseAis a superclass of the class ofb. Further, you can directly ask for the class of any object, and "compare" them (code below assumes having executed the code above): InActionScript(as3), the functionflash.utils.getQualifiedClassNamecan be used to retrieve the class/type name of an arbitrary object. Alternatively, the operatoriscan be used to determine if an object is of a specific type: This second function can be used to testclass inheritanceparents as well: Like Perl, ActionScript can go further than getting the class name, but all the metadata, functions and other elements that make up an object using theflash.utils.describeTypefunction; this is used when implementingreflectionin ActionScript.
https://en.wikipedia.org/wiki/Type_introspection
VACUUM[1][2][3][4] is a set of normative guidance principles for achieving training and test dataset quality for structured datasets in data science and machine learning. The garbage-in, garbage-out principle motivates the need to address data quality but does not offer a specific solution. Unlike the majority of ad-hoc data quality assessment metrics often used by practitioners,[5] VACUUM specifies qualitative principles for data quality management and serves as a basis for defining more detailed quantitative metrics of data quality.[6] VACUUM is an acronym that stands for:
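As an illustration of the kind of quantitative metric that such qualitative principles can be used to ground, the following Python (pandas) sketch computes simple completeness and validity scores for a toy table; the column names and rules are hypothetical and are not themselves part of VACUUM.

```python
import pandas as pd

# Illustrative only: generic quantitative data-quality checks of the sort that
# qualitative principles could be used to ground.
df = pd.DataFrame({
    "age":   [34, None, 51, -2, 40],
    "email": ["a@x.com", "b@x.com", None, "not-an-email", "c@x.com"],
})

metrics = {
    # completeness: share of non-missing values per column
    "completeness": df.notna().mean().to_dict(),
    # validity: share of values satisfying a simple domain rule
    "age_in_valid_range": ((df["age"] >= 0) & (df["age"] <= 120)).mean(),
    "email_has_at_sign": df["email"].str.contains("@", na=False).mean(),
}
print(metrics)
```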
https://en.wikipedia.org/wiki/VACUUM
Mathematical optimization(alternatively spelledoptimisation) ormathematical programmingis the selection of a best element, with regard to some criteria, from some set of available alternatives.[1][2]It is generally divided into two subfields:discrete optimizationandcontinuous optimization. Optimization problems arise in all quantitative disciplines fromcomputer scienceandengineering[3]tooperations researchandeconomics, and the development of solution methods has been of interest inmathematicsfor centuries.[4][5] In the more general approach, anoptimization problemconsists ofmaximizing or minimizingareal functionby systematically choosinginputvalues from within an allowed set and computing thevalueof the function. The generalization of optimization theory and techniques to other formulations constitutes a large area ofapplied mathematics.[6] Optimization problems can be divided into two categories, depending on whether thevariablesarecontinuousordiscrete: An optimization problem can be represented in the following way: Such a formulation is called anoptimization problemor amathematical programming problem(a term not directly related tocomputer programming, but still in use for example inlinear programming– seeHistorybelow). Many real-world and theoretical problems may be modeled in this general framework. Since the following is valid: it suffices to solve only minimization problems. However, the opposite perspective of considering only maximization problems would be valid, too. Problems formulated using this technique in the fields ofphysicsmay refer to the technique asenergyminimization,[7]speaking of the value of the functionfas representing the energy of thesystembeingmodeled. Inmachine learning, it is always necessary to continuously evaluate the quality of a data model by using acost functionwhere a minimum implies a set of possibly optimal parameters with an optimal (lowest) error. Typically,Ais somesubsetof theEuclidean spaceRn{\displaystyle \mathbb {R} ^{n}}, often specified by a set ofconstraints, equalities or inequalities that the members ofAhave to satisfy. ThedomainAoffis called thesearch spaceor thechoice set, while the elements ofAare calledcandidate solutionsorfeasible solutions. The functionfis variously called anobjective function,criterion function,loss function,cost function(minimization),[8]utility functionorfitness function(maximization), or, in certain fields, anenergy functionorenergyfunctional. A feasible solution that minimizes (or maximizes) the objective function is called anoptimal solution. In mathematics, conventional optimization problems are usually stated in terms of minimization. Alocal minimumx*is defined as an element for which there exists someδ> 0such that the expressionf(x*) ≤f(x)holds; that is to say, on some region aroundx*all of the function values are greater than or equal to the value at that element. Local maxima are defined similarly. While a local minimum is at least as good as any nearby elements, aglobal minimumis at least as good as every feasible element. Generally, unless the objective function isconvexin a minimization problem, there may be several local minima. In aconvex problem, if there is a local minimum that is interior (not on the edge of the set of feasible elements), it is also the global minimum, but a nonconvex problem may have more than one local minimum not all of which need be global minima. 
A large number of algorithms proposed for solving the nonconvex problems – including the majority of commercially available solvers – are not capable of making a distinction between locally optimal solutions and globally optimal solutions, and will treat the former as actual solutions to the original problem.Global optimizationis the branch ofapplied mathematicsandnumerical analysisthat is concerned with the development of deterministic algorithms that are capable of guaranteeing convergence in finite time to the actual optimal solution of a nonconvex problem. Optimization problems are often expressed with special notation. Here are some examples: Consider the following notation: This denotes the minimumvalueof the objective functionx2+ 1, when choosingxfrom the set ofreal numbersR{\displaystyle \mathbb {R} }. The minimum value in this case is 1, occurring atx= 0. Similarly, the notation asks for the maximum value of the objective function2x, wherexmay be any real number. In this case, there is no such maximum as the objective function is unbounded, so the answer is "infinity" or "undefined". Consider the following notation: or equivalently This represents the value (or values) of theargumentxin theinterval(−∞,−1]that minimizes (or minimize) the objective functionx2+ 1(the actual minimum value of that function is not what the problem asks for). In this case, the answer isx= −1, sincex= 0is infeasible, that is, it does not belong to thefeasible set. Similarly, or equivalently represents the{x,y}pair (or pairs) that maximizes (or maximize) the value of the objective functionxcosy, with the added constraint thatxlie in the interval[−5,5](again, the actual maximum value of the expression does not matter). In this case, the solutions are the pairs of the form{5, 2kπ}and{−5, (2k+ 1)π}, wherekranges over allintegers. Operatorsarg minandarg maxare sometimes also written asargminandargmax, and stand forargument of the minimumandargument of the maximum. FermatandLagrangefound calculus-based formulae for identifying optima, whileNewtonandGaussproposed iterative methods for moving towards an optimum. The term "linear programming" for certain optimization cases was due toGeorge B. Dantzig, although much of the theory had been introduced byLeonid Kantorovichin 1939. (Programmingin this context does not refer tocomputer programming, but comes from the use ofprogramby theUnited Statesmilitary to refer to proposed training andlogisticsschedules, which were the problems Dantzig studied at that time.) Dantzig published theSimplex algorithmin 1947, and alsoJohn von Neumannand other researchers worked on the theoretical aspects of linear programming (like the theory ofduality) around the same time.[9] Other notable researchers in mathematical optimization include the following: In a number of subfields, the techniques are designed primarily for optimization in dynamic contexts (that is, decision making over time): Adding more than one objective to an optimization problem adds complexity. For example, to optimize a structural design, one would desire a design that is both light and rigid. When two objectives conflict, a trade-off must be created. There may be one lightest design, one stiffest design, and an infinite number of designs that are some compromise of weight and rigidity. The set of trade-off designs that improve upon one criterion at the expense of another is known as thePareto set. The curve created plotting weight against stiffness of the best designs is known as thePareto frontier. 
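The trade-off construction just described can be made concrete: given candidate designs scored on weight (lower is better) and stiffness (higher is better), the Pareto set is what remains after discarding every dominated design. A small NumPy sketch, with randomly generated candidates standing in for real designs:

```python
import numpy as np

# Candidate designs: columns are (weight, stiffness); lower weight and higher
# stiffness are both preferred, so the two objectives conflict.
rng = np.random.default_rng(0)
designs = rng.uniform(low=[1.0, 10.0], high=[10.0, 100.0], size=(50, 2))

def is_dominated(a, others):
    """a is dominated if some other design is no worse in both objectives
    and strictly better in at least one (minimise weight, maximise stiffness)."""
    no_worse = (others[:, 0] <= a[0]) & (others[:, 1] >= a[1])
    strictly_better = (others[:, 0] < a[0]) | (others[:, 1] > a[1])
    return np.any(no_worse & strictly_better)

pareto_set = np.array([d for i, d in enumerate(designs)
                       if not is_dominated(d, np.delete(designs, i, axis=0))])
print(len(pareto_set), "non-dominated designs out of", len(designs))
```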
A design is judged to be "Pareto optimal" (equivalently, "Pareto efficient" or in the Pareto set) if it is not dominated by any other design: If it is worse than another design in some respects and no better in any respect, then it is dominated and is not Pareto optimal. The choice among "Pareto optimal" solutions to determine the "favorite solution" is delegated to the decision maker. In other words, defining the problem as multi-objective optimization signals that some information is missing: desirable objectives are given but combinations of them are not rated relative to each other. In some cases, the missing information can be derived by interactive sessions with the decision maker. Multi-objective optimization problems have been generalized further intovector optimizationproblems where the (partial) ordering is no longer given by the Pareto ordering. Optimization problems are often multi-modal; that is, they possess multiple good solutions. They could all be globally good (same cost function value) or there could be a mix of globally good and locally good solutions. Obtaining all (or at least some of) the multiple solutions is the goal of a multi-modal optimizer. Classical optimization techniques due to their iterative approach do not perform satisfactorily when they are used to obtain multiple solutions, since it is not guaranteed that different solutions will be obtained even with different starting points in multiple runs of the algorithm. Common approaches toglobal optimizationproblems, where multiple local extrema may be present includeevolutionary algorithms,Bayesian optimizationandsimulated annealing. Thesatisfiability problem, also called thefeasibility problem, is just the problem of finding anyfeasible solutionat all without regard to objective value. This can be regarded as the special case of mathematical optimization where the objective value is the same for every solution, and thus any solution is optimal. Many optimization algorithms need to start from a feasible point. One way to obtain such a point is torelaxthe feasibility conditions using aslack variable; with enough slack, any starting point is feasible. Then, minimize that slack variable until the slack is null or negative. Theextreme value theoremofKarl Weierstrassstates that a continuous real-valued function on a compact set attains its maximum and minimum value. More generally, a lower semi-continuous function on a compact set attains its minimum; an upper semi-continuous function on a compact set attains its maximum point or view. One of Fermat's theoremsstates that optima of unconstrained problems are found atstationary points, where the first derivative or the gradient of the objective function is zero (seefirst derivative test). More generally, they may be found atcritical points, where the first derivative or gradient of the objective function is zero or is undefined, or on the boundary of the choice set. An equation (or set of equations) stating that the first derivative(s) equal(s) zero at an interior optimum is called a 'first-order condition' or a set of first-order conditions. Optima of equality-constrained problems can be found by theLagrange multipliermethod. The optima of problems with equality and/or inequality constraints can be found using the 'Karush–Kuhn–Tucker conditions'. While the first derivative test identifies points that might be extrema, this test does not distinguish a point that is a minimum from one that is a maximum or one that is neither. 
When the objective function is twice differentiable, these cases can be distinguished by checking the second derivative or the matrix of second derivatives (called theHessian matrix) in unconstrained problems, or the matrix of second derivatives of the objective function and the constraints called thebordered Hessianin constrained problems. The conditions that distinguish maxima, or minima, from other stationary points are called 'second-order conditions' (see 'Second derivative test'). If a candidate solution satisfies the first-order conditions, then the satisfaction of the second-order conditions as well is sufficient to establish at least local optimality. Theenvelope theoremdescribes how the value of an optimal solution changes when an underlyingparameterchanges. The process of computing this change is calledcomparative statics. Themaximum theoremofClaude Berge(1963) describes the continuity of an optimal solution as a function of underlying parameters. For unconstrained problems with twice-differentiable functions, somecritical pointscan be found by finding the points where thegradientof the objective function is zero (that is, the stationary points). More generally, a zerosubgradientcertifies that a local minimum has been found forminimization problems with convexfunctionsand otherlocallyLipschitz functions, which meet in loss function minimization of the neural network. The positive-negative momentum estimation lets to avoid the local minimum and converges at the objective function global minimum.[10] Further, critical points can be classified using thedefinitenessof theHessian matrix: If the Hessian ispositivedefinite at a critical point, then the point is a local minimum; if the Hessian matrix is negative definite, then the point is a local maximum; finally, if indefinite, then the point is some kind ofsaddle point. Constrained problems can often be transformed into unconstrained problems with the help ofLagrange multipliers.Lagrangian relaxationcan also provide approximate solutions to difficult constrained problems. When the objective function is aconvex function, then any local minimum will also be a global minimum. There exist efficient numerical techniques for minimizing convex functions, such asinterior-point methods. More generally, if the objective function is not a quadratic function, then many optimization methods use other methods to ensure that some subsequence of iterations converges to an optimal solution. The first and still popular method for ensuring convergence relies online searches, which optimize a function along one dimension. A second and increasingly popular method for ensuring convergence usestrust regions. Both line searches and trust regions are used in modern methods ofnon-differentiable optimization. Usually, a global optimizer is much slower than advanced local optimizers (such asBFGS), so often an efficient global optimizer can be constructed by starting the local optimizer from different starting points. To solve problems, researchers may usealgorithmsthat terminate in a finite number of steps, oriterative methodsthat converge to a solution (on some specified class of problems), orheuristicsthat may provide approximate solutions to some problems (although their iterates need not converge). Theiterative methodsused to solve problems ofnonlinear programmingdiffer according to whether theyevaluateHessians, gradients, or only function values. 
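The second-order classification just described can be carried out numerically: estimate the Hessian at a critical point and inspect the signs of its eigenvalues. A minimal NumPy sketch, using f(x, y) = x² − y², whose critical point at the origin is a saddle:

```python
import numpy as np

def hessian(f, p, h=1e-5):
    """Central finite-difference Hessian of a scalar function at point p."""
    p = np.asarray(p, dtype=float)
    n = len(p)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (f(p + e_i + e_j) - f(p + e_i - e_j)
                       - f(p - e_i + e_j) + f(p - e_i - e_j)) / (4 * h * h)
    return H

f = lambda p: p[0]**2 - p[1]**2            # critical point (gradient zero) at the origin
eigs = np.linalg.eigvalsh(hessian(f, [0.0, 0.0]))
if np.all(eigs > 0):
    print("local minimum")
elif np.all(eigs < 0):
    print("local maximum")
else:
    print("saddle point", eigs)            # here: eigenvalues ~ (+2, -2) -> saddle
```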
While evaluating Hessians (H) and gradients (G) improves the rate of convergence, for functions for which these quantities exist and vary sufficiently smoothly, such evaluations increase thecomputational complexity(or computational cost) of each iteration. In some cases, the computational complexity may be excessively high. One major criterion for optimizers is just the number of required function evaluations as this often is already a large computational effort, usually much more effort than within the optimizer itself, which mainly has to operate over the N variables. The derivatives provide detailed information for such optimizers, but are even harder to calculate, e.g. approximating the gradient takes at least N+1 function evaluations. For approximations of the 2nd derivatives (collected in the Hessian matrix), the number of function evaluations is in the order of N². Newton's method requires the 2nd-order derivatives, so for each iteration, the number of function calls is in the order of N², but for a simpler pure gradient optimizer it is only N. However, gradient optimizers need usually more iterations than Newton's algorithm. Which one is best with respect to the number of function calls depends on the problem itself. Besides (finitely terminating)algorithmsand (convergent)iterative methods, there areheuristics. A heuristic is any algorithm which is not guaranteed (mathematically) to find the solution, but which is nevertheless useful in certain practical situations. List of some well-known heuristics: Problems inrigid body dynamics(in particular articulated rigid body dynamics) often require mathematical programming techniques, since you can view rigid body dynamics as attempting to solve anordinary differential equationon a constraint manifold;[11]the constraints are various nonlinear geometric constraints such as "these two points must always coincide", "this surface must not penetrate any other", or "this point must always lie somewhere on this curve". Also, the problem of computing contact forces can be done by solving alinear complementarity problem, which can also be viewed as a QP (quadratic programming) problem. Many design problems can also be expressed as optimization programs. This application is called design optimization. One subset is theengineering optimization, and another recent and growing subset of this field ismultidisciplinary design optimization, which, while useful in many problems, has in particular been applied toaerospace engineeringproblems. This approach may be applied in cosmology and astrophysics.[12] Economicsis closely enough linked to optimization ofagentsthat an influential definition relatedly describes economicsquascience as the "study of human behavior as a relationship between ends andscarcemeans" with alternative uses.[13]Modern optimization theory includes traditional optimization theory but also overlaps withgame theoryand the study of economicequilibria. TheJournal of Economic Literaturecodesclassify mathematical programming, optimization techniques, and related topics underJEL:C61-C63. In microeconomics, theutility maximization problemand itsdual problem, theexpenditure minimization problem, are economic optimization problems. Insofar as they behave consistently,consumersare assumed to maximize theirutility, whilefirmsare usually assumed to maximize theirprofit. 
Also, agents are often modeled as beingrisk-averse, thereby preferring to avoid risk.Asset pricesare also modeled using optimization theory, though the underlying mathematics relies on optimizingstochastic processesrather than on static optimization.International trade theoryalso uses optimization to explain trade patterns between nations. The optimization ofportfoliosis an example of multi-objective optimization in economics. Since the 1970s, economists have modeled dynamic decisions over time usingcontrol theory.[14]For example, dynamicsearch modelsare used to studylabor-market behavior.[15]A crucial distinction is between deterministic and stochastic models.[16]Macroeconomistsbuilddynamic stochastic general equilibrium (DSGE)models that describe the dynamics of the whole economy as the result of the interdependent optimizing decisions of workers, consumers, investors, and governments.[17][18] Some common applications of optimization techniques inelectrical engineeringincludeactive filterdesign,[19]stray field reduction in superconducting magnetic energy storage systems,space mappingdesign ofmicrowavestructures,[20]handset antennas,[21][22][23]electromagnetics-based design. Electromagnetically validated design optimization of microwave components and antennas has made extensive use of an appropriate physics-based or empiricalsurrogate modelandspace mappingmethodologies since the discovery ofspace mappingin 1993.[24][25]Optimization techniques are also used inpower-flow analysis.[26] Optimization has been widely used in civil engineering.Construction managementandtransportation engineeringare among the main branches of civil engineering that heavily rely on optimization. The most common civil engineering problems that are solved by optimization are cut and fill of roads, life-cycle analysis of structures and infrastructures,[27]resource leveling,[28][29]water resource allocation,trafficmanagement[30]and schedule optimization. Another field that uses optimization techniques extensively isoperations research.[31]Operations research also uses stochastic modeling and simulation to support improved decision-making. Increasingly, operations research usesstochastic programmingto model dynamic decisions that adapt to events; such problems can be solved with large-scale optimization andstochastic optimizationmethods. Mathematical optimization is used in much modern controller design. High-level controllers such asmodel predictive control(MPC) or real-time optimization (RTO) employ mathematical optimization. These algorithms run online and repeatedly determine values for decision variables, such as choke openings in a process plant, by iteratively solving a mathematical optimization problem including constraints and a model of the system to be controlled. Optimization techniques are regularly used ingeophysicalparameter estimation problems. Given a set of geophysical measurements, e.g.seismic recordings, it is common to solve for thephysical propertiesandgeometrical shapesof the underlying rocks and fluids. The majority of problems in geophysics are nonlinear with both deterministic and stochastic methods being widely used. Nonlinear optimization methods are widely used inconformational analysis. 
Optimization techniques are used in many facets of computational systems biology such as model building, optimal experimental design, metabolic engineering, and synthetic biology.[32] Linear programming has been applied to calculate the maximal possible yields of fermentation products,[32] and to infer gene regulatory networks from multiple microarray datasets[33] as well as transcriptional regulatory networks from high-throughput data.[34] Nonlinear programming has been used to analyze energy metabolism[35] and has been applied to metabolic engineering and parameter estimation in biochemical pathways.[36]
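As a sketch of how linear programming can bound achievable yields, the following sets up a toy flux-balance-style problem with SciPy's linprog; the stoichiometric matrix, flux bounds, and reaction roles are invented for illustration and do not describe a real metabolic network.

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux-balance-style LP: maximise product export subject to steady-state
# mass balance S @ v = 0 and capacity bounds on each reaction flux.
S = np.array([
    [1, -1, -1,  0],    # metabolite A: produced by uptake v0, consumed by v1 and v2
    [0,  1,  0, -1],    # metabolite B: produced by v1, consumed by export v3
])
c = np.array([0, 0, 0, -1])              # maximise v[3] (export) == minimise -v[3]
bounds = [(0, 10), (0, None), (0, 5), (0, None)]
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x, -res.fun)                   # optimal flux vector and maximal yield (10.0)
```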
https://en.wikipedia.org/wiki/Optimization_(mathematics)
Intelecommunications,6Gis the designation for a futuretechnical standardof asixth-generationtechnology forwireless communications. It is the planned successor to5G(ITU-RIMT-2020), and is currently in the early stages of the standardization process, tracked by theITU-Ras IMT-2030[1]with the framework and overall objectives defined in recommendation ITU-R M.2160-0.[2][3]Similar to previous generations of thecellulararchitecture, standardization bodies such as3GPPandETSI, as well as industry groups such as theNext Generation Mobile Networks(NGMN) Alliance, are expected to play a key role in its development.[4][5][6] Numerous companies (Airtel,Anritsu,Apple,Ericsson, Fly,Huawei,Jio,Keysight,LG,Nokia,NTT Docomo,Samsung,Vi,Xiaomi), research institutes (Technology Innovation Institute, theInteruniversity Microelectronics Centre) and countries (United States, United Kingdom,European Unionmember states, Russia, China, India, Japan, South Korea, Singapore, Saudi Arabia, United Arab Emirates, Qatar, and Israel) have shown interest in 6G networks, and are expected to contribute to this effort.[7][8][9][10][11][12][13][14] 6G networks will likely be faster than previous generations,[15]thanks to further improvements in radio interface modulation and coding techniques,[2]as well as physical-layer technologies.[16]Proposals include a ubiquitous connectivity model which could include non-cellular access such as satellite and WiFi, precise location services, and a framework for distributed edge computing supporting more sensor networks, AR/VR and AI workloads.[5]Other goals include network simplification and increased interoperability, lower latency, and energy efficiency.[2][17]It should enable network operators to adopt flexible decentralizedbusiness modelsfor 6G, with localspectrum licensing, spectrum sharing, infrastructure sharing, and intelligent automated management. Some have proposed that machine-learning/AI systems can be leveraged to support these functions.[18][19][20][17][21] The NGMN alliance have cautioned that "6G must not inherently trigger a hardware refresh of 5G RAN infrastructure", and that it must "address demonstrable customer needs".[17]This reflects industry sentiment about the cost of the 5G rollout, and concern that certain applications and revenue streams have not lived up to expectations.[22][23][24]6G is expected to begin rolling out in the early 2030s, but given such concerns it is not yet clear which features and improvements will be implemented first.[25][26][27] 6G networks are expected to be developed and released by the early 2030s.[28][29]The largest number of 6G patents have been filed inChina.[30] Recent academic publications have been conceptualizing 6G and new features that may be included. Artificial intelligence (AI) is included in many predictions, from 6G supporting AI infrastructure to "AI designing and optimizing 6G architectures, protocols, and operations."[31]Another study inNature Electronicslooks to provide a framework for 6G research stating "We suggest that human-centric mobile communications will still be the most important application of 6G and the 6G network should be human-centric. Thus, high security, secrecy and privacy should be key features of 6G and should be given particular attention by the wireless research community."[32] The frequency bands for 6G are undetermined. 
Initially, terahertz was considered an important band for 6G, as indicated by the Institute of Electrical and Electronics Engineers, which stated that "Frequencies from 100 GHz to 3 THz are promising bands for the next generation of wireless communication systems because of the wide swaths of unused and unexplored spectrum."[33] One of the challenges in supporting the required high transmission speeds will be the limitation of energy consumption and associated thermal protection in the electronic circuits.[34] As of now, mid bands are being considered by the WRC for 6G/IMT-2030. In June 2021, according to a Samsung white paper, the company achieved an indoor data rate of 6 Gbit/s at a distance of 15 meters using sub-THz 6G spectrum; the following year, in 2022, it reported 12 Gbit/s at 30 meters and 2.3 Gbit/s at 120 meters.[35] In September 2023, LG successfully tested 6G transmission and reception outdoors over a distance of 500 meters.[36][37] Millimeter waves (30 to 300 GHz) and terahertz radiation (300 to 3,000 GHz) might, according to some speculations, be used in 6G. However, the wave propagation of these frequencies is much more sensitive to obstacles than the microwave frequencies (about 2 to 30 GHz) used in 5G and Wi-Fi, which are in turn more sensitive than the radio waves used in 1G, 2G, 3G and 4G. Therefore, there are concerns that those frequencies may not be commercially viable, especially considering that 5G mmWave deployments are very limited due to deployment costs. In October 2020, the Alliance for Telecommunications Industry Solutions (ATIS) launched a "Next G Alliance", an alliance consisting of AT&T, Ericsson, Telus, Verizon, T-Mobile, Microsoft, Samsung, and others that "will advance North American mobile technology leadership in 6G and beyond over the next decade."[38] In January 2022, Purple Mountain Laboratories of China claimed that its research team had achieved a world-record data rate of 206.25 gigabits per second (Gbit/s) for the first time in a lab environment within the terahertz frequency band, which is supposed to be the basis of 6G cellular technology.[39] In February 2022, Chinese researchers stated that they had achieved a record data streaming speed using vortex millimetre waves, a form of extremely high-frequency radio wave with rapidly changing spins; the researchers transmitted 1 terabyte of data over a distance of 1 km (3,300 feet) in a second. The spinning potential of radio waves was first reported by the British physicist John Henry Poynting in 1909, but making use of it proved to be difficult. Zhang and colleagues said their breakthrough was built on the hard work of many research teams across the globe over the past few decades. Researchers in Europe conducted the earliest communication experiments using vortex waves in the 1990s. A major challenge is that the size of the spinning waves increases with distance, and the weakening signal makes high-speed data transmission difficult. The Chinese team built a unique transmitter to generate a more focused vortex beam, making the waves spin in three different modes to carry more information, and developed a high-performance receiving device that could pick up and decode a huge amount of data in a split second.[40] In 2023, Nagoya University in Japan reported the successful fabrication of three-dimensional waveguides made of niobium metal,[41] a superconducting material that minimizes attenuation due to absorption and radiation, for transmission of waves in the 100 GHz frequency band, deemed useful in 6G networking.
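The propagation concern noted above can be quantified with the standard free-space path-loss (Friis) formula; even before blockage or atmospheric absorption, moving from a mid-band carrier to a sub-terahertz band adds tens of decibels of loss over the same distance. The frequencies and distance in this Python sketch are chosen only for illustration.

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB (Friis): 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Illustrative frequencies: a 5G-era mid-band carrier vs. a sub-THz candidate band.
for f in (3.5e9, 140e9):
    print(f"{f/1e9:>6.1f} GHz, 100 m: {fspl_db(100, f):.1f} dB")
# The ~32 dB of extra free-space loss at 140 GHz is one reason the higher
# candidate bands would imply much denser deployments.
```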
On November 6, 2020, China launched aLong March 6rocketwith a payload of thirteen satellites into orbit. One of the satellites reportedly served as an experimental testbed for 6G technology, which was described as "the world's first 6G satellite."[42] During rollout of5G, China bannedEricssonin favour of Chinese suppliers, primarilyHuaweiandZTE.[43][failed verification]HuaweiandZTEwere banned in many Western countries over concerns of spying.[44]This creates a risk of 6G network fragmentation.[45]Many power struggles are expected during the development of common standards.[46]In February 2024, the U.S., Australia, Canada, the Czech Republic, Finland, France, Japan, South Korea, Sweden and the U.K. released a joint statement stating that they support a set of shared principles for 6G for "open, free, global, interoperable, reliable, resilient, and secure connectivity."[47][48] 6G is considered a key technology for economic competitiveness, national security, and the functioning of society. It is a national priority in many countries and is named as priority in China'sFourteenth five-year plan.[49][50] Many countries are favouring the OpenRANapproach, where different suppliers can be integrated together and hardware and software are independent of supplier.[51] In March 2025 Australia's largest telecommunications providerTelstraannounced that 6G is expected to be rolled out in the 2030s, with a budget of $800 million AUD to upgrade existing infrastructure over four years.[52]
https://en.wikipedia.org/wiki/6G
Inphysics,relativistic angular momentumrefers to the mathematical formalisms and physical concepts that defineangular momentuminspecial relativity(SR) andgeneral relativity(GR). The relativistic quantity is subtly different from thethree-dimensionalquantity inclassical mechanics. Angular momentum is an important dynamical quantity derived from position and momentum. It is a measure of an object's rotational motion and resistance to changes in its rotation. Also, in the same way momentum conservation corresponds to translational symmetry, angular momentum conservation corresponds to rotational symmetry – the connection betweensymmetriesandconservation lawsis made byNoether's theorem. While these concepts were originally discovered in classical mechanics, they are also true and significant in special and general relativity. In terms of abstract algebra, the invariance of angular momentum, four-momentum, and other symmetries in spacetime, are described by theLorentz group, or more generally thePoincaré group. Physical quantitiesthat remain separate in classical physics arenaturally combinedin SR and GR by enforcing the postulates of relativity. Most notably, the space and time coordinates combine into thefour-position, and energy and momentum combine into thefour-momentum. The components of thesefour-vectorsdepend on theframe of referenceused, and change underLorentz transformationsto otherinertial framesoraccelerated frames. Relativistic angular momentum is less obvious. The classical definition of angular momentum is thecross productof positionxwith momentumpto obtain apseudovectorx×p, or alternatively as theexterior productto obtain a second orderantisymmetric tensorx∧p. What does this combine with, if anything? There is another vector quantity not often discussed – it is the time-varying moment of mass polar-vector (notthemoment of inertia) related to the boost of thecentre of massof the system, and this combines with the classical angular momentum pseudovector to form an antisymmetric tensor of second order, in exactly the same way as the electric field polar-vector combines with the magnetic field pseudovector to form the electromagnetic field antisymmetric tensor. For rotating mass–energy distributions (such asgyroscopes,planets,stars, andblack holes) instead of point-like particles, theangular momentum tensoris expressed in terms of thestress–energy tensorof the rotating object. In special relativity alone, in therest frameof a spinning object, there is an intrinsic angular momentum analogous to the "spin" inquantum mechanicsandrelativistic quantum mechanics, although for an extended body rather than a point particle. In relativistic quantum mechanics,elementary particleshavespinand this is an additional contribution to theorbitalangular momentum operator, yielding thetotalangular momentum tensor operator. In any case, the intrinsic "spin" addition to the orbital angular momentum of an object can be expressed in terms of thePauli–Lubanski pseudovector.[1] For reference and background, two closely related forms of angular momentum are given. Inclassical mechanics, the orbital angular momentum of a particle with instantaneous three-dimensional position vectorx= (x,y,z)and momentum vectorp= (px,py,pz), is defined as theaxial vectorL=x×p{\displaystyle \mathbf {L} =\mathbf {x} \times \mathbf {p} }which has three components, that are systematically given bycyclic permutationsof Cartesian directions (e.g. 
changextoy,ytoz,ztox, repeat)Lx=ypz−zpy,Ly=zpx−xpz,Lz=xpy−ypx.{\displaystyle {\begin{aligned}L_{x}&=yp_{z}-zp_{y}\,,\\L_{y}&=zp_{x}-xp_{z}\,,\\L_{z}&=xp_{y}-yp_{x}\,.\end{aligned}}} A related definition is to conceive orbital angular momentum as aplane element. This can be achieved by replacing the cross product by theexterior productin the language ofexterior algebra, and angular momentum becomes acontravariantsecond orderantisymmetric tensor[2]L=x∧p{\displaystyle \mathbf {L} =\mathbf {x} \wedge \mathbf {p} } or writingx= (x1,x2,x3) = (x,y,z)and momentum vectorp= (p1,p2,p3) = (px,py,pz), the components can be compactly abbreviated intensor index notationLij=xipj−xjpi{\displaystyle L^{ij}=x^{i}p^{j}-x^{j}p^{i}}where the indicesiandjtake the values 1, 2, 3. On the other hand, the components can be systematically displayed fully in a 3 × 3antisymmetric matrixL=(L11L12L13L21L22L23L31L32L33)=(0LxyLxzLyx0LyzLzxLzy0)=(0Lxy−Lzx−Lxy0LyzLzx−Lyz0)=(0xpy−ypx−(zpx−xpz)−(xpy−ypx)0ypz−zpyzpx−xpz−(ypz−zpy)0){\displaystyle {\begin{aligned}\mathbf {L} &={\begin{pmatrix}L^{11}&L^{12}&L^{13}\\L^{21}&L^{22}&L^{23}\\L^{31}&L^{32}&L^{33}\\\end{pmatrix}}={\begin{pmatrix}0&L_{xy}&L_{xz}\\L_{yx}&0&L_{yz}\\L_{zx}&L_{zy}&0\end{pmatrix}}={\begin{pmatrix}0&L_{xy}&-L_{zx}\\-L_{xy}&0&L_{yz}\\L_{zx}&-L_{yz}&0\end{pmatrix}}\\&={\begin{pmatrix}0&xp_{y}-yp_{x}&-(zp_{x}-xp_{z})\\-(xp_{y}-yp_{x})&0&yp_{z}-zp_{y}\\zp_{x}-xp_{z}&-(yp_{z}-zp_{y})&0\end{pmatrix}}\end{aligned}}} This quantity is additive, and for an isolated system, the total angular momentum of a system is conserved. In classical mechanics, the three-dimensional quantity for a particle of massmmoving with velocityu[2][3]N=m(x−tu)=mx−tp{\displaystyle \mathbf {N} =m\left(\mathbf {x} -t\mathbf {u} \right)=m\mathbf {x} -t\mathbf {p} }has thedimensionsofmass moment– length multiplied by mass. It is equal to the mass of the particle or system of particles multiplied by the distance from the space origin to thecentre of mass(COM) at the time origin (t= 0), as measured in thelab frame. There is no universal symbol, nor even a universal name, for this quantity. Different authors may denote it by other symbols if any (for exampleμ), may designate other names, and may defineNto be the negative of what is used here. The above form has the advantage that it resembles the familiarGalilean transformationfor position, which in turn is the non-relativistic boost transformation between inertial frames. This vector is also additive: for a system of particles, the vector sum is the resultant∑nNn=∑nmn(xn−tun)=(xCOM∑nmn−t∑nmnun)=Mtot(xCOM−uCOMt){\displaystyle \sum _{n}\mathbf {N} _{n}=\sum _{n}m_{n}\left(\mathbf {x} _{n}-t\mathbf {u} _{n}\right)=\left(\mathbf {x} _{\mathrm {COM} }\sum _{n}m_{n}-t\sum _{n}m_{n}\mathbf {u} _{n}\right)=M_{\text{tot}}(\mathbf {x} _{\mathrm {COM} }-\mathbf {u} _{\mathrm {COM} }t)}where the system's centre of mass position and velocity and total mass are respectivelyxCOM=∑nmnxn∑nmn,uCOM=∑nmnun∑nmn,Mtot=∑nmn.{\displaystyle {\begin{aligned}\mathbf {x} _{\mathrm {COM} }&={\frac {\sum _{n}m_{n}\mathbf {x} _{n}}{\sum _{n}m_{n}}},\\[3pt]\mathbf {u} _{\mathrm {COM} }&={\frac {\sum _{n}m_{n}\mathbf {u} _{n}}{\sum _{n}m_{n}}},\\[3pt]M_{\text{tot}}&=\sum _{n}m_{n}.\end{aligned}}} For an isolated system,Nis conserved in time, which can be seen by differentiating with respect to time. The angular momentumLis a pseudovector, butNis an "ordinary" (polar) vector, and is therefore invariant under inversion. 
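This conservation is easy to check numerically for free particles: in the following NumPy sketch (non-relativistic, with arbitrary masses, initial positions and constant velocities), N_total evaluates to the same vector at every time.

```python
import numpy as np

rng = np.random.default_rng(1)
m = rng.uniform(1.0, 5.0, size=4)             # particle masses
x0 = rng.normal(size=(4, 3))                  # positions at t = 0
u = rng.normal(size=(4, 3))                   # constant (free-particle) velocities

def N_total(t):
    x = x0 + u * t                            # free motion: x_n(t) = x_n(0) + u_n t
    return np.sum(m[:, None] * (x - t * u), axis=0)

print(N_total(0.0))
print(N_total(7.3))                           # the same vector: N is conserved
assert np.allclose(N_total(0.0), N_total(7.3))
```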
The resultantNtotfor a multiparticle system has the physical visualization that, whatever the complicated motion of all the particles are, they move in such a way that the system's COM moves in a straight line. This does not necessarily mean all particles "follow" the COM, nor that all particles all move in almost the same direction simultaneously, only that the collective motion of the particles is constrained in relation to the centre of mass. In special relativity, if the particle moves with velocityurelative to the lab frame, thenE=γ(u)m0c2,p=γ(u)m0u{\displaystyle {\begin{aligned}E&=\gamma (\mathbf {u} )m_{0}c^{2},&\mathbf {p} &=\gamma (\mathbf {u} )m_{0}\mathbf {u} \end{aligned}}}whereγ(u)=11−u⋅uc2{\displaystyle \gamma (\mathbf {u} )={\frac {1}{\sqrt {1-{\frac {\mathbf {u} \cdot \mathbf {u} }{c^{2}}}}}}}is theLorentz factorandmis the mass (i.e. the rest mass) of the particle. The corresponding relativistic mass moment in terms ofm,u,p,E, in the same lab frame isN=Ec2x−pt=mγ(u)(x−ut).{\displaystyle \mathbf {N} ={\frac {E}{c^{2}}}\mathbf {x} -\mathbf {p} t=m\gamma (\mathbf {u} )(\mathbf {x} -\mathbf {u} t).} The Cartesian components areNx=mx−pxt=Ec2x−pxt=mγ(u)(x−uxt)Ny=my−pyt=Ec2y−pyt=mγ(u)(y−uyt)Nz=mz−pzt=Ec2z−pzt=mγ(u)(z−uzt){\displaystyle {\begin{aligned}N_{x}=mx-p_{x}t&={\frac {E}{c^{2}}}x-p_{x}t=m\gamma (u)(x-u_{x}t)\\N_{y}=my-p_{y}t&={\frac {E}{c^{2}}}y-p_{y}t=m\gamma (u)(y-u_{y}t)\\N_{z}=mz-p_{z}t&={\frac {E}{c^{2}}}z-p_{z}t=m\gamma (u)(z-u_{z}t)\end{aligned}}} Consider a coordinate frameF′which moves with velocityv= (v, 0, 0)relative to another frame F, along the direction of the coincidentxx′axes. The origins of the two coordinate frames coincide at timest=t′ = 0. The mass–energyE=mc2and momentum componentsp= (px,py,pz)of an object, as well as position coordinatesx= (x,y,z)and timetin frameFare transformed toE′ =m′c2,p′ = (px′,py′,pz′),x′ = (x′,y′,z′), andt′inF′according to the Lorentz transformationst′=γ(v)(t−vxc2),E′=γ(v)(E−vpx)x′=γ(v)(x−vt),px′=γ(v)(px−vEc2)y′=y,py′=pyz′=z,pz′=pz{\displaystyle {\begin{aligned}t'&=\gamma (v)\left(t-{\frac {vx}{c^{2}}}\right)\,,\quad &E'&=\gamma (v)\left(E-vp_{x}\right)\\x'&=\gamma (v)(x-vt)\,,\quad &p_{x}'&=\gamma (v)\left(p_{x}-{\frac {vE}{c^{2}}}\right)\\y'&=y\,,\quad &p_{y}'&=p_{y}\\z'&=z\,,\quad &p_{z}'&=p_{z}\\\end{aligned}}} The Lorentz factor here applies to the velocityv, the relative velocity between the frames. This is not necessarily the same as the velocityuof an object. 
For the orbital 3-angular momentumLas a pseudovector, we haveLx′=y′pz′−z′py′=LxLy′=z′px′−x′pz′=γ(v)(Ly−vNz)Lz′=x′py′−y′px′=γ(v)(Lz+vNy){\displaystyle {\begin{aligned}L_{x}'&=y'p_{z}'-z'p_{y}'=L_{x}\\L_{y}'&=z'p_{x}'-x'p_{z}'=\gamma (v)(L_{y}-vN_{z})\\L_{z}'&=x'p_{y}'-y'p_{x}'=\gamma (v)(L_{z}+vN_{y})\\\end{aligned}}} For the x-componentLx′=y′pz′−z′py′=ypz−zpy=Lx{\displaystyle L_{x}'=y'p_{z}'-z'p_{y}'=yp_{z}-zp_{y}=L_{x}}the y-componentLy′=z′px′−x′pz′=zγ(px−vEc2)−γ(x−vt)pz=γ[zpx−zvEc2−xpz+vtpz]=γ[(zpx−xpz)+v(pzt−zEc2)]=γ(Ly−vNz){\displaystyle {\begin{aligned}L_{y}'&=z'p_{x}'-x'p_{z}'\\&=z\gamma \left(p_{x}-{\frac {vE}{c^{2}}}\right)-\gamma \left(x-vt\right)p_{z}\\&=\gamma \left[zp_{x}-z{\frac {vE}{c^{2}}}-xp_{z}+vtp_{z}\right]\\&=\gamma \left[\left(zp_{x}-xp_{z}\right)+v\left(p_{z}t-z{\frac {E}{c^{2}}}\right)\right]\\&=\gamma \left(L_{y}-vN_{z}\right)\end{aligned}}}and z-componentLz′=x′py′−y′px′=γ(x−vt)py−yγ(px−vEc2)=γ[xpy−vtpy−ypx+yvEc2]=γ[(xpy−ypx)+v(yEc2−tpy)]=γ(Lz+vNy){\displaystyle {\begin{aligned}L_{z}'&=x'p_{y}'-y'p_{x}'\\&=\gamma \left(x-vt\right)p_{y}-y\gamma \left(p_{x}-{\frac {vE}{c^{2}}}\right)\\&=\gamma \left[xp_{y}-vtp_{y}-yp_{x}+y{\frac {vE}{c^{2}}}\right]\\&=\gamma \left[\left(xp_{y}-yp_{x}\right)+v\left(y{\frac {E}{c^{2}}}-tp_{y}\right)\right]\\&=\gamma \left(L_{z}+vN_{y}\right)\end{aligned}}} In the second terms ofLy′andLz′, theyandzcomponents of the cross productv×Ncan be inferred by recognizingcyclic permutationsofvx=vandvy=vz= 0with the components ofN,−vNz=vzNx−vxNz=(v×N)yvNy=vxNy−vyNx=(v×N)z{\displaystyle {\begin{aligned}-vN_{z}&=v_{z}N_{x}-v_{x}N_{z}=\left(\mathbf {v} \times \mathbf {N} \right)_{y}\\vN_{y}&=v_{x}N_{y}-v_{y}N_{x}=\left(\mathbf {v} \times \mathbf {N} \right)_{z}\\\end{aligned}}} Now,Lxis parallel to the relative velocityv, and the other componentsLyandLzare perpendicular tov. The parallel–perpendicular correspondence can be facilitated by splitting the entire 3-angular momentum pseudovector into components parallel (∥) and perpendicular (⊥) tov, in each frame,L=L∥+L⊥,L′=L∥′+L⊥′.{\displaystyle \mathbf {L} =\mathbf {L} _{\parallel }+\mathbf {L} _{\perp }\,,\quad \mathbf {L} '=\mathbf {L} _{\parallel }'+\mathbf {L} _{\perp }'\,.} Then the component equations can be collected into the pseudovector equationsL∥′=L∥L⊥′=γ(v)(L⊥+v×N){\displaystyle {\begin{aligned}\mathbf {L} _{\parallel }'&=\mathbf {L} _{\parallel }\\\mathbf {L} _{\perp }'&=\gamma (\mathbf {v} )\left(\mathbf {L} _{\perp }+\mathbf {v} \times \mathbf {N} \right)\\\end{aligned}}} Therefore, the components of angular momentum along the direction of motion do not change, while the components perpendicular do change. By contrast to the transformations of space and time, time and the spatial coordinates change along the direction of motion, while those perpendicular do not. These transformations are true forallv, not just for motion along thexx′axes. 
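The component formulas above can be verified numerically. The following NumPy sketch (with c = 1 and arbitrarily chosen particle data) builds L and N in frame F, applies a Lorentz boost with speed v along x, recomputes L′ = r′ × p′ in F′, and checks L′x = Lx, L′y = γ(Ly − vNz), L′z = γ(Lz + vNy).

```python
import numpy as np

c = 1.0
m, u = 2.0, np.array([0.3, -0.2, 0.5])        # rest mass and 3-velocity of the particle
gamma_u = 1.0 / np.sqrt(1.0 - u @ u / c**2)
E, p = gamma_u * m * c**2, gamma_u * m * u    # energy and momentum in frame F
t, r = 1.7, np.array([0.4, 1.1, -0.9])        # coordinates of an event in frame F

L = np.cross(r, p)                            # orbital angular momentum in F
N = (E / c**2) * r - t * p                    # mass moment in F

v = 0.6                                       # boost speed along the x axis
g = 1.0 / np.sqrt(1.0 - v**2 / c**2)
r_prime = np.array([g * (r[0] - v * t), r[1], r[2]])
p_prime = np.array([g * (p[0] - v * E / c**2), p[1], p[2]])
L_prime = np.cross(r_prime, p_prime)          # angular momentum in the boosted frame F'

expected = np.array([L[0], g * (L[1] - v * N[2]), g * (L[2] + v * N[1])])
assert np.allclose(L_prime, expected)
print("L'_x = L_x, L'_y = g(L_y - v N_z), L'_z = g(L_z + v N_y) verified")
```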
ConsideringLas a tensor, we get a similar resultL⊥′=γ(v)(L⊥+v∧N){\displaystyle \mathbf {L} _{\perp }'=\gamma (\mathbf {v} )\left(\mathbf {L} _{\perp }+\mathbf {v} \wedge \mathbf {N} \right)}wherevzNx−vxNz=(v∧N)zxvxNy−vyNx=(v∧N)xy{\displaystyle {\begin{aligned}v_{z}N_{x}-v_{x}N_{z}&=\left(\mathbf {v} \wedge \mathbf {N} \right)_{zx}\\v_{x}N_{y}-v_{y}N_{x}&=\left(\mathbf {v} \wedge \mathbf {N} \right)_{xy}\\\end{aligned}}} The boost of the dynamic mass moment along thexdirection isNx′=m′x′−px′t′=NxNy′=m′y′−py′t′=γ(v)(Ny+vLzc2)Nz′=m′z′−pz′t′=γ(v)(Nz−vLyc2){\displaystyle {\begin{aligned}N_{x}'&=m'x'-p_{x}'t'=N_{x}\\N_{y}'&=m'y'-p_{y}'t'=\gamma (v)\left(N_{y}+{\frac {vL_{z}}{c^{2}}}\right)\\N_{z}'&=m'z'-p_{z}'t'=\gamma (v)\left(N_{z}-{\frac {vL_{y}}{c^{2}}}\right)\\\end{aligned}}} For the x-componentNx′=E′c2x′−t′px′=γc2(E−vpx)γ(x−vt)−γ(t−xvc2)γ(px−vEc2)=γ2[1c2(E−vpx)(x−vt)−(t−xvc2)(px−vEc2)]=γ2[Exc2−Evtc2−vpxxc2+vpxvtc2−tpx+xvc2px+tvEc2−xvc2vEc2]=γ2[Exc2−Evtc2−vpxxc2+v2c2pxt−tpx+xvc2px+tvEc2−v2c2Exc2]=γ2[(Exc2−tpx)+v2c2(pxt−Exc2)]=γ2[1−v2c2]Nx=γ21γ2Nx{\displaystyle {\begin{aligned}N_{x}'&={\frac {E'}{c^{2}}}x'-t'p_{x}'\\&={\frac {\gamma }{c^{2}}}(E-vp_{x})\gamma (x-vt)-\gamma \left(t-{\frac {xv}{c^{2}}}\right)\gamma \left(p_{x}-{\frac {vE}{c^{2}}}\right)\\&=\gamma ^{2}\left[{\frac {1}{c^{2}}}\left(E-vp_{x}\right)(x-vt)-\left(t-{\frac {xv}{c^{2}}}\right)\left(p_{x}-{\frac {vE}{c^{2}}}\right)\right]\\&=\gamma ^{2}\left[{\frac {Ex}{c^{2}}}-{\frac {Evt}{c^{2}}}-{\frac {vp_{x}x}{c^{2}}}+{\frac {vp_{x}vt}{c^{2}}}-tp_{x}+{\frac {xv}{c^{2}}}p_{x}+t{\frac {vE}{c^{2}}}-{\frac {xv}{c^{2}}}{\frac {vE}{c^{2}}}\right]\\&=\gamma ^{2}\left[{\frac {Ex}{c^{2}}}{\cancel {-{\frac {Evt}{c^{2}}}}}{\cancel {-{\frac {vp_{x}x}{c^{2}}}}}+{\frac {v^{2}}{c^{2}}}p_{x}t-tp_{x}{\cancel {+{\frac {xv}{c^{2}}}p_{x}}}{\cancel {+t{\frac {vE}{c^{2}}}}}-{\frac {v^{2}}{c^{2}}}{\frac {Ex}{c^{2}}}\right]\\&=\gamma ^{2}\left[\left({\frac {Ex}{c^{2}}}-tp_{x}\right)+{\frac {v^{2}}{c^{2}}}\left(p_{x}t-{\frac {Ex}{c^{2}}}\right)\right]\\&=\gamma ^{2}\left[1-{\frac {v^{2}}{c^{2}}}\right]N_{x}\\&=\gamma ^{2}{\frac {1}{\gamma ^{2}}}N_{x}\end{aligned}}}the y-componentNy′=E′c2y′−t′py′=1c2γ(E−vpx)y−γ(t−xvc2)py=γ[1c2(E−vpx)y−(t−xvc2)py]=γ[1c2Ey−1c2vpxy−tpy+xvc2py]=γ[(1c2Ey−tpy)+vc2(xpy−ypx)]=γ(Ny+vc2Lz){\displaystyle {\begin{aligned}N_{y}'&={\frac {E'}{c^{2}}}y'-t'p_{y}'\\&={\frac {1}{c^{2}}}\gamma (E-vp_{x})y-\gamma \left(t-{\frac {xv}{c^{2}}}\right)p_{y}\\&=\gamma \left[{\frac {1}{c^{2}}}(E-vp_{x})y-\left(t-{\frac {xv}{c^{2}}}\right)p_{y}\right]\\&=\gamma \left[{\frac {1}{c^{2}}}Ey-{\frac {1}{c^{2}}}vp_{x}y-tp_{y}+{\frac {xv}{c^{2}}}p_{y}\right]\\&=\gamma \left[\left({\frac {1}{c^{2}}}Ey-tp_{y}\right)+{\frac {v}{c^{2}}}(xp_{y}-yp_{x})\right]\\&=\gamma \left(N_{y}+{\frac {v}{c^{2}}}L_{z}\right)\end{aligned}}}and z-componentNz′=E′c2z′−t′pz′=1c2γ(E−vpx)z−γ(t−xvc2)pz=γ[1c2(E−vpx)z−(t−xvc2)pz]=γ[1c2Ez−1c2vpzz−tpz+xvc2pz]=γ[(1c2Ez−tpz)+vc2(xpz−zpx)]=γ(Nz−vc2Ly){\displaystyle {\begin{aligned}N_{z}'&={\frac {E'}{c^{2}}}z'-t'p_{z}'\\&={\frac {1}{c^{2}}}\gamma (E-vp_{x})z-\gamma \left(t-{\frac {xv}{c^{2}}}\right)p_{z}\\&=\gamma \left[{\frac {1}{c^{2}}}(E-vp_{x})z-\left(t-{\frac {xv}{c^{2}}}\right)p_{z}\right]\\&=\gamma \left[{\frac {1}{c^{2}}}Ez-{\frac {1}{c^{2}}}vp_{z}z-tp_{z}+{\frac {xv}{c^{2}}}p_{z}\right]\\&=\gamma \left[\left({\frac {1}{c^{2}}}Ez-tp_{z}\right)+{\frac {v}{c^{2}}}(xp_{z}-zp_{x})\right]\\&=\gamma \left(N_{z}-{\frac {v}{c^{2}}}L_{y}\right)\end{aligned}}} Collecting parallel and perpendicular components as 
beforeN∥′=N∥N⊥′=γ(v)(N⊥−1c2v×L){\displaystyle {\begin{aligned}\mathbf {N} _{\parallel }'&=\mathbf {N} _{\parallel }\\\mathbf {N} _{\perp }'&=\gamma (\mathbf {v} )\left(\mathbf {N} _{\perp }-{\frac {1}{c^{2}}}\mathbf {v} \times \mathbf {L} \right)\\\end{aligned}}} Again, the components parallel to the direction of relative motion do not change, those perpendicular do change. So far these are only the parallel and perpendicular decompositions of the vectors. The transformations on the full vectors can be constructed from them as follows (throughout hereLis a pseudovector for concreteness and compatibility with vector algebra). Introduce aunit vectorin the direction ofv, given byn=v/v. The parallel components are given by thevector projectionofLorNintonL∥=(L⋅n)n,N∥=(N⋅n)n{\displaystyle \mathbf {L} _{\parallel }=(\mathbf {L} \cdot \mathbf {n} )\mathbf {n} \,,\quad \mathbf {N} _{\parallel }=(\mathbf {N} \cdot \mathbf {n} )\mathbf {n} }while the perpendicular component byvector rejectionofLorNfromnL⊥=L−(L⋅n)n,N⊥=N−(N⋅n)n{\displaystyle \mathbf {L} _{\perp }=\mathbf {L} -(\mathbf {L} \cdot \mathbf {n} )\mathbf {n} \,,\quad \mathbf {N} _{\perp }=\mathbf {N} -(\mathbf {N} \cdot \mathbf {n} )\mathbf {n} }and the transformations areL′=γ(v)(L+vn×N)−(γ(v)−1)(L⋅n)nN′=γ(v)(N−vc2n×L)−(γ(v)−1)(N⋅n)n{\displaystyle {\begin{aligned}\mathbf {L} '&=\gamma (\mathbf {v} )(\mathbf {L} +v\mathbf {n} \times \mathbf {N} )-(\gamma (\mathbf {v} )-1)(\mathbf {L} \cdot \mathbf {n} )\mathbf {n} \\\mathbf {N} '&=\gamma (\mathbf {v} )\left(\mathbf {N} -{\frac {v}{c^{2}}}\mathbf {n} \times \mathbf {L} \right)-(\gamma (\mathbf {v} )-1)(\mathbf {N} \cdot \mathbf {n} )\mathbf {n} \\\end{aligned}}}or reinstatingv=vn,L′=γ(v)(L+v×N)−(γ(v)−1)(L⋅v)vv2N′=γ(v)(N−1c2v×L)−(γ(v)−1)(N⋅v)vv2{\displaystyle {\begin{aligned}\mathbf {L} '&=\gamma (\mathbf {v} )(\mathbf {L} +\mathbf {v} \times \mathbf {N} )-(\gamma (\mathbf {v} )-1){\frac {(\mathbf {L} \cdot \mathbf {v} )\mathbf {v} }{v^{2}}}\\\mathbf {N} '&=\gamma (\mathbf {v} )\left(\mathbf {N} -{\frac {1}{c^{2}}}\mathbf {v} \times \mathbf {L} \right)-(\gamma (\mathbf {v} )-1){\frac {(\mathbf {N} \cdot \mathbf {v} )\mathbf {v} }{v^{2}}}\\\end{aligned}}} These are very similar to the Lorentz transformations of theelectric fieldEandmagnetic fieldB, seeClassical electromagnetism and special relativity. Alternatively, starting from the vector Lorentz transformations of time, space, energy, and momentum, for a boost with velocityv,t′=γ(v)(t−v⋅rc2),r′=r+γ(v)−1v2(r⋅v)v−γ(v)tv,p′=p+γ(v)−1v2(p⋅v)v−γ(v)Ec2v,E′=γ(v)(E−v⋅p),{\displaystyle {\begin{aligned}t'&=\gamma (\mathbf {v} )\left(t-{\frac {\mathbf {v} \cdot \mathbf {r} }{c^{2}}}\right)\,,\\\mathbf {r} '&=\mathbf {r} +{\frac {\gamma (\mathbf {v} )-1}{v^{2}}}(\mathbf {r} \cdot \mathbf {v} )\mathbf {v} -\gamma (\mathbf {v} )t\mathbf {v} \,,\\\mathbf {p} '&=\mathbf {p} +{\frac {\gamma (\mathbf {v} )-1}{v^{2}}}(\mathbf {p} \cdot \mathbf {v} )\mathbf {v} -\gamma (\mathbf {v} ){\frac {E}{c^{2}}}\mathbf {v} \,,\\E'&=\gamma (\mathbf {v} )\left(E-\mathbf {v} \cdot \mathbf {p} \right)\,,\\\end{aligned}}}inserting these into the definitionsL′=r′×p′,N′=E′c2r′−t′p′{\displaystyle {\begin{aligned}\mathbf {L} '&=\mathbf {r} '\times \mathbf {p} '\,,&\mathbf {N} '&={\frac {E'}{c^{2}}}\mathbf {r} '-t'\mathbf {p} '\end{aligned}}}gives the transformations. 
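Following that last remark, here is a minimal numerical check (Python/NumPy, units with c = 1, arbitrary sample values): boost (t, r) and (E, p) for a boost velocity v pointing in a general direction, form L′ = r′ × p′ and N′ = (E′/c²)r′ − t′p′ directly from the definitions, and compare with the closed-form transformations quoted above.

import numpy as np

c = 1.0                                        # units with c = 1 for this sketch

def boost(v, t, r, E, p):
    """Apply the vector Lorentz boost quoted above to (t, r) and (E, p)."""
    v = np.asarray(v, dtype=float)
    v2 = v @ v
    g = 1.0 / np.sqrt(1.0 - v2 / c**2)
    t_b = g * (t - (v @ r) / c**2)
    r_b = r + (g - 1.0) / v2 * (r @ v) * v - g * t * v
    E_b = g * (E - v @ p)
    p_b = p + (g - 1.0) / v2 * (p @ v) * v - g * (E / c**2) * v
    return t_b, r_b, E_b, p_b

# arbitrary sample particle data in frame F
t, r = 0.7, np.array([1.0, -2.0, 0.5])
E, p = 2.6, np.array([0.3, 0.9, -0.4])

L = np.cross(r, p)
N = (E / c**2) * r - t * p

v = np.array([0.4, 0.1, 0.2])                  # boost velocity, not along an axis
t2, r2, E2, p2 = boost(v, t, r, E, p)

L_direct = np.cross(r2, p2)                    # definitions applied in the boosted frame
N_direct = (E2 / c**2) * r2 - t2 * p2

v2 = v @ v
g = 1.0 / np.sqrt(1.0 - v2 / c**2)
L_formula = g * (L + np.cross(v, N)) - (g - 1.0) * (L @ v) / v2 * v
N_formula = g * (N - np.cross(v, L) / c**2) - (g - 1.0) * (N @ v) / v2 * v

print(np.allclose(L_direct, L_formula), np.allclose(N_direct, N_formula))   # True True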
The orbital angular momentum in each frame areL′=r′×p′,L=r×p{\displaystyle \mathbf {L} '=\mathbf {r} '\times \mathbf {p} '\,,\quad \mathbf {L} =\mathbf {r} \times \mathbf {p} }so taking the cross product of the transformationsL′=[r+(γ−1)(r⋅n)n−γtvn]×[p+(γ−1)(p⋅n)n−γEc2vn]=[r+(γ−1)(r⋅n)n−γtvn]×p+(γ−1)(p⋅n)[r+(γ−1)(r⋅n)n−γtvn]×n−γEc2v[r+(γ−1)(r⋅n)n−γtvn]×n=r×p+(γ−1)(r⋅n)n×p−γtvn×p+(γ−1)(p⋅n)r×n−γEc2vr×n=L+[γ−1v2(r⋅v)−γt]v×p+[γ−1v2(p⋅v)−γEc2]r×v=L+v×[(γ−1v2(r⋅v)−γt)p−(γ−1v2(p⋅v)−γEc2)r]=L+v×[γ−1v2((r⋅v)p−(p⋅v)r)+γ(Ec2r−tp)]{\displaystyle {\begin{aligned}\mathbf {L} '&=\left[\mathbf {r} +(\gamma -1)(\mathbf {r} \cdot \mathbf {n} )\mathbf {n} -\gamma tv\mathbf {n} \right]\times \left[\mathbf {p} +(\gamma -1)(\mathbf {p} \cdot \mathbf {n} )\mathbf {n} -\gamma {\frac {E}{c^{2}}}v\mathbf {n} \right]\\&=[\mathbf {r} +(\gamma -1)(\mathbf {r} \cdot \mathbf {n} )\mathbf {n} -\gamma tv\mathbf {n} ]\times \mathbf {p} +(\gamma -1)(\mathbf {p} \cdot \mathbf {n} )[\mathbf {r} +(\gamma -1)(\mathbf {r} \cdot \mathbf {n} )\mathbf {n} -\gamma tv\mathbf {n} ]\times \mathbf {n} -\gamma {\frac {E}{c^{2}}}v[\mathbf {r} +(\gamma -1)(\mathbf {r} \cdot \mathbf {n} )\mathbf {n} -\gamma tv\mathbf {n} ]\times \mathbf {n} \\&=\mathbf {r} \times \mathbf {p} +(\gamma -1)(\mathbf {r} \cdot \mathbf {n} )\mathbf {n} \times \mathbf {p} -\gamma tv\mathbf {n} \times \mathbf {p} +(\gamma -1)(\mathbf {p} \cdot \mathbf {n} )\mathbf {r} \times \mathbf {n} -\gamma {\frac {E}{c^{2}}}v\mathbf {r} \times \mathbf {n} \\&=\mathbf {L} +\left[{\frac {\gamma -1}{v^{2}}}(\mathbf {r} \cdot \mathbf {v} )-\gamma t\right]\mathbf {v} \times \mathbf {p} +\left[{\frac {\gamma -1}{v^{2}}}(\mathbf {p} \cdot \mathbf {v} )-\gamma {\frac {E}{c^{2}}}\right]\mathbf {r} \times \mathbf {v} \\&=\mathbf {L} +\mathbf {v} \times \left[\left({\frac {\gamma -1}{v^{2}}}(\mathbf {r} \cdot \mathbf {v} )-\gamma t\right)\mathbf {p} -\left({\frac {\gamma -1}{v^{2}}}(\mathbf {p} \cdot \mathbf {v} )-\gamma {\frac {E}{c^{2}}}\right)\mathbf {r} \right]\\&=\mathbf {L} +\mathbf {v} \times \left[{\frac {\gamma -1}{v^{2}}}\left((\mathbf {r} \cdot \mathbf {v} )\mathbf {p} -(\mathbf {p} \cdot \mathbf {v} )\mathbf {r} \right)+\gamma \left({\frac {E}{c^{2}}}\mathbf {r} -t\mathbf {p} \right)\right]\end{aligned}}} Using thetriple productrulea×(b×c)=b(a⋅c)−c(a⋅b)(a×b)×c=(c⋅a)b−(c⋅b)a{\displaystyle {\begin{aligned}\mathbf {a} \times (\mathbf {b} \times \mathbf {c} )&=\mathbf {b} (\mathbf {a} \cdot \mathbf {c} )-\mathbf {c} (\mathbf {a} \cdot \mathbf {b} )\\(\mathbf {a} \times \mathbf {b} )\times \mathbf {c} &=(\mathbf {c} \cdot \mathbf {a} )\mathbf {b} -(\mathbf {c} \cdot \mathbf {b} )\mathbf {a} \\\end{aligned}}}gives(r×p)×v=(v⋅r)p−(v⋅p)r(v⋅r)p−(v⋅p)r=L×v{\displaystyle {\begin{aligned}(\mathbf {r} \times \mathbf {p} )\times \mathbf {v} &=(\mathbf {v} \cdot \mathbf {r} )\mathbf {p} -(\mathbf {v} \cdot \mathbf {p} )\mathbf {r} \\(\mathbf {v} \cdot \mathbf {r} )\mathbf {p} -(\mathbf {v} \cdot \mathbf {p} )\mathbf {r} &=\mathbf {L} \times \mathbf {v} \\\end{aligned}}}and along with the definition ofNwe haveL′=L+v×[γ−1v2L×v+γN]{\displaystyle \mathbf {L} '=\mathbf {L} +\mathbf {v} \times \left[{\frac {\gamma -1}{v^{2}}}\mathbf {L} \times \mathbf {v} +\gamma \mathbf {N} \right]} Reinstating the unit vectorn,L′=L+n×[(γ−1)L×n+vγN]{\displaystyle \mathbf {L} '=\mathbf {L} +\mathbf {n} \times \left[(\gamma -1)\mathbf {L} \times \mathbf {n} +v\gamma \mathbf {N} \right]} Since in the transformation there is a cross product on the left withn,n×(L×n)=L(n⋅n)−n(n⋅L)=L−n(n⋅L){\displaystyle \mathbf {n} 
\times (\mathbf {L} \times \mathbf {n} )=\mathbf {L} (\mathbf {n} \cdot \mathbf {n} )-\mathbf {n} (\mathbf {n} \cdot \mathbf {L} )=\mathbf {L} -\mathbf {n} (\mathbf {n} \cdot \mathbf {L} )}thenL′=L+(γ−1)(L−n(n⋅L))+βγcn×N=γ(L+vn×N)−(γ−1)n(n⋅L){\displaystyle \mathbf {L} '=\mathbf {L} +(\gamma -1)(\mathbf {L} -\mathbf {n} (\mathbf {n} \cdot \mathbf {L} ))+\beta \gamma c\mathbf {n} \times \mathbf {N} =\gamma (\mathbf {L} +v\mathbf {n} \times \mathbf {N} )-(\gamma -1)\mathbf {n} (\mathbf {n} \cdot \mathbf {L} )} In relativistic mechanics, the COM boost and orbital 3-space angular momentum of a rotating object are combined into a four-dimensionalbivectorin terms of thefour-positionXand thefour-momentumPof the object[4][5]M=X∧P{\displaystyle \mathbf {M} =\mathbf {X} \wedge \mathbf {P} } In componentsMαβ=XαPβ−XβPα{\displaystyle M^{\alpha \beta }=X^{\alpha }P^{\beta }-X^{\beta }P^{\alpha }}which are six independent quantities altogether. Since the components ofXandPare frame-dependent, so isM. Three componentsMij=xipj−xjpi=Lij{\displaystyle M^{ij}=x^{i}p^{j}-x^{j}p^{i}=L^{ij}}are those of the familiar classical 3-space orbital angular momentum, and the other threeM0i=x0pi−xip0=c(tpi−xiEc2)=−cNi{\displaystyle M^{0i}=x^{0}p^{i}-x^{i}p^{0}=c\,\left(tp^{i}-x^{i}{\frac {E}{c^{2}}}\right)=-cN^{i}}are the relativistic mass moment, multiplied by−c. The tensor is antisymmetric;Mαβ=−Mβα{\displaystyle M^{\alpha \beta }=-M^{\beta \alpha }} The components of the tensor can be systematically displayed as amatrixM=(M00M01M02M03M10M11M12M13M20M21M22M23M30M31M32M33)=(0−N1c−N2c−N3cN1c0L12−L31N2c−L120L23N3cL31−L230)=(0−NcNTcx∧p){\displaystyle {\begin{aligned}\mathbf {M} &={\begin{pmatrix}M^{00}&M^{01}&M^{02}&M^{03}\\M^{10}&M^{11}&M^{12}&M^{13}\\M^{20}&M^{21}&M^{22}&M^{23}\\M^{30}&M^{31}&M^{32}&M^{33}\end{pmatrix}}\\[3pt]&=\left({\begin{array}{c|ccc}0&-N^{1}c&-N^{2}c&-N^{3}c\\\hline N^{1}c&0&L^{12}&-L^{31}\\N^{2}c&-L^{12}&0&L^{23}\\N^{3}c&L^{31}&-L^{23}&0\end{array}}\right)\\[3pt]&=\left({\begin{array}{c|c}0&-\mathbf {N} c\\\hline \mathbf {N} ^{\mathrm {T} }c&\mathbf {x} \wedge \mathbf {p} \\\end{array}}\right)\end{aligned}}}in which the last array is ablock matrixformed by treatingNas arow vectorwhichmatrix transposesto thecolumn vectorNT, andx∧pas a 3 × 3antisymmetric matrix. The lines are merely inserted to show where the blocks are. Again, this tensor is additive: the total angular momentum of a system is the sum of the angular momentum tensors for each constituent of the system:Mtot=∑nMn=∑nXn∧Pn.{\displaystyle \mathbf {M} _{\text{tot}}=\sum _{n}\mathbf {M} _{n}=\sum _{n}\mathbf {X} _{n}\wedge \mathbf {P} _{n}\,.} Each of the six components forms a conserved quantity when aggregated with the corresponding components for other objects and fields. 
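As a concrete illustration of the matrix form and its block structure, the following sketch (Python/NumPy, units with c = 1, arbitrary sample values) builds M for a single particle and reads N and L off its blocks.

import numpy as np

c = 1.0                               # units with c = 1 (an arbitrary choice for this sketch)

# arbitrary sample event and energy-momentum of one particle
t, r = 0.7, np.array([1.0, -2.0, 0.5])
E, p = 3.0, np.array([0.4, 0.1, -0.3])

X = np.concatenate(([c * t], r))      # X^alpha = (ct, x, y, z)
P = np.concatenate(([E / c], p))      # P^alpha = (E/c, p_x, p_y, p_z)

M = np.outer(X, P) - np.outer(P, X)   # M^{ab} = X^a P^b - X^b P^a

L = np.cross(r, p)                    # orbital angular momentum
N = (E / c**2) * r - t * p            # mass moment

print(np.allclose(M, -M.T))                   # antisymmetry
print(np.allclose(M[0, 1:], -c * N))          # top row: M^{0i} = -c N^i
print(np.isclose(M[2, 3], L[0]),              # spatial block: M^{23} = L_x,
      np.isclose(M[3, 1], L[1]),              #                M^{31} = L_y,
      np.isclose(M[1, 2], L[2]))              #                M^{12} = L_z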
The angular momentum tensorMis indeed a tensor, the components change according to a Lorentz transformation matrix Λ, as illustrated in the usual way by tensor index notationM′αβ=X′αP′β−X′βP′α=ΛαγXγΛβδPδ−ΛβδXδΛαγPγ=ΛαγΛβδ(XγPδ−XδPγ)=ΛαγΛβδMγδ,{\displaystyle {\begin{aligned}{M'}^{\alpha \beta }&={X'}^{\alpha }{P'}^{\beta }-{X'}^{\beta }{P'}^{\alpha }\\&={\Lambda ^{\alpha }}_{\gamma }X^{\gamma }{\Lambda ^{\beta }}_{\delta }P^{\delta }-{\Lambda ^{\beta }}_{\delta }X^{\delta }{\Lambda ^{\alpha }}_{\gamma }P^{\gamma }\\&={\Lambda ^{\alpha }}_{\gamma }{\Lambda ^{\beta }}_{\delta }\left(X^{\gamma }P^{\delta }-X^{\delta }P^{\gamma }\right)\\&={\Lambda ^{\alpha }}_{\gamma }{\Lambda ^{\beta }}_{\delta }M^{\gamma \delta }\\\end{aligned}},}where, for a boost (without rotations) with normalized velocityβ=v/c, the Lorentz transformation matrix elements areΛ00=γΛi0=Λ0i=−γβiΛij=δij+γ−1β2βiβj{\displaystyle {\begin{aligned}{\Lambda ^{0}}_{0}&=\gamma \\{\Lambda ^{i}}_{0}&={\Lambda ^{0}}_{i}=-\gamma \beta ^{i}\\{\Lambda ^{i}}_{j}&={\delta ^{i}}_{j}+{\frac {\gamma -1}{\beta ^{2}}}\beta ^{i}\beta _{j}\end{aligned}}}and the covariantβiand contravariantβicomponents ofβare the same since these are just parameters. In other words, one can Lorentz-transform the four position and four momentum separately, and then antisymmetrize those newly found components to obtain the angular momentum tensor in the new frame. The transformation of boost components are M′k0=ΛkμΛ0νMμν=Λk0Λ00M00+ΛkiΛ00Mi0+Λk0Λ0jM0j+ΛkiΛ0jMij=(ΛkiΛ00−Λk0Λ0i)Mi0+ΛkiΛ0jMij{\displaystyle {\begin{aligned}M'^{k0}&={\Lambda ^{k}}_{\mu }{\Lambda ^{0}}_{\nu }M^{\mu \nu }\\&={\Lambda ^{k}}_{0}{\Lambda ^{0}}_{0}M^{00}+{\Lambda ^{k}}_{i}{\Lambda ^{0}}_{0}M^{i0}+{\Lambda ^{k}}_{0}{\Lambda ^{0}}_{j}M^{0j}+{\Lambda ^{k}}_{i}{\Lambda ^{0}}_{j}M^{ij}\\&=\left({\Lambda ^{k}}_{i}{\Lambda ^{0}}_{0}-{\Lambda ^{k}}_{0}{\Lambda ^{0}}_{i}\right)M^{i0}+{\Lambda ^{k}}_{i}{\Lambda ^{0}}_{j}M^{ij}\\\end{aligned}}}as for the orbital angular momentumM′kℓ=ΛkμΛℓνMμν=Λk0Λℓ0M00+ΛkiΛℓ0Mi0+Λk0ΛℓjM0j+ΛkiΛℓjMij=(ΛkiΛℓ0−Λk0Λℓi)Mi0+ΛkiΛℓjMij{\displaystyle {\begin{aligned}{M'}^{k\ell }&={\Lambda ^{k}}_{\mu }{\Lambda ^{\ell }}_{\nu }M^{\mu \nu }\\&={\Lambda ^{k}}_{0}{\Lambda ^{\ell }}_{0}M^{00}+{\Lambda ^{k}}_{i}{\Lambda ^{\ell }}_{0}M^{i0}+{\Lambda ^{k}}_{0}{\Lambda ^{\ell }}_{j}M^{0j}+{\Lambda ^{k}}_{i}{\Lambda ^{\ell }}_{j}M^{ij}\\&=\left({\Lambda ^{k}}_{i}{\Lambda ^{\ell }}_{0}-{\Lambda ^{k}}_{0}{\Lambda ^{\ell }}_{i}\right)M^{i0}+{\Lambda ^{k}}_{i}{\Lambda ^{\ell }}_{j}M^{ij}\end{aligned}}} The expressions in the Lorentz transformation entries areΛkiΛℓ0−Λk0Λℓi=[δki+γ−1β2βkβi](−γβℓ)−(−γβk)[δℓi+γ−1β2βℓβi]=γ[βkδℓi−βℓδki]ΛkiΛ00−Λk0Λ0i=[δki+γ−1β2βkβi]γ−(−γβk)(−γβi)=γ[δki+γ−1β2βkβi−γβkβi]=γ[δki+(γ−1β2−γ)βkβi]=γ[δki+(γ−γβ2−1β2)βkβi]=γ[δki+(γ−1−1β2)βkβi]=γδki−[γ−1β2]βkβiΛkiΛℓj=[δki+γ−1β2βkβi][δℓj+γ−1β2βℓβj]=δkiδℓj+γ−1β2δkiβℓβj+γ−1β2βkβiδℓj+γ−1β2γ−1β2βℓβjβkβi{\displaystyle {\begin{aligned}{\Lambda ^{k}}_{i}{\Lambda ^{\ell }}_{0}-{\Lambda ^{k}}_{0}{\Lambda ^{\ell }}_{i}&=\left[{\delta ^{k}}_{i}+{\frac {\gamma -1}{\beta ^{2}}}\beta ^{k}\beta _{i}\right]\left(-\gamma \beta ^{\ell }\right)-\left(-\gamma \beta ^{k}\right)\left[{\delta ^{\ell }}_{i}+{\frac {\gamma -1}{\beta ^{2}}}\beta ^{\ell }\beta _{i}\right]\\&=\gamma \left[\beta ^{k}{\delta ^{\ell }}_{i}-\beta ^{\ell }{\delta ^{k}}_{i}\right]\\{\Lambda ^{k}}_{i}{\Lambda ^{0}}_{0}-{\Lambda ^{k}}_{0}{\Lambda ^{0}}_{i}&=\left[{\delta ^{k}}_{i}+{\frac {\gamma -1}{\beta ^{2}}}\beta ^{k}\beta _{i}\right]\gamma -(-\gamma \beta ^{k})(-\gamma \beta 
^{i})\\&=\gamma \left[{\delta ^{k}}_{i}+{\frac {\gamma -1}{\beta ^{2}}}\beta ^{k}\beta _{i}-\gamma \beta ^{k}\beta ^{i}\right]\\&=\gamma \left[{\delta ^{k}}_{i}+\left({\frac {\gamma -1}{\beta ^{2}}}-\gamma \right)\beta ^{k}\beta _{i}\right]\\&=\gamma \left[{\delta ^{k}}_{i}+\left({\frac {\gamma -\gamma \beta ^{2}-1}{\beta ^{2}}}\right)\beta ^{k}\beta _{i}\right]\\&=\gamma \left[{\delta ^{k}}_{i}+\left({\frac {\gamma ^{-1}-1}{\beta ^{2}}}\right)\beta ^{k}\beta _{i}\right]\\&=\gamma {\delta ^{k}}_{i}-\left[{\frac {\gamma -1}{\beta ^{2}}}\right]\beta ^{k}\beta _{i}\\{\Lambda ^{k}}_{i}{\Lambda ^{\ell }}_{j}&=\left[{\delta ^{k}}_{i}+{\frac {\gamma -1}{\beta ^{2}}}\beta ^{k}\beta _{i}\right]\left[{\delta ^{\ell }}_{j}+{\frac {\gamma -1}{\beta ^{2}}}\beta ^{\ell }\beta _{j}\right]\\&={\delta ^{k}}_{i}{\delta ^{\ell }}_{j}+{\frac {\gamma -1}{\beta ^{2}}}{\delta ^{k}}_{i}\beta ^{\ell }\beta _{j}+{\frac {\gamma -1}{\beta ^{2}}}\beta ^{k}\beta _{i}{\delta ^{\ell }}_{j}+{\frac {\gamma -1}{\beta ^{2}}}{\frac {\gamma -1}{\beta ^{2}}}\beta ^{\ell }\beta _{j}\beta ^{k}\beta _{i}\end{aligned}}}givescN′k=(ΛkiΛ00−Λk0Λ0i)cNi+ΛkiΛ0jεijnLn=[γδki−(γ−1β2)βkβi]cNi+−γβj[δki+γ−1β2βkβi]εijnLn=γcNk−(γ−1β2)βk(βicNi)−γβjδkiεijnLn−γγ−1β2βjβkβiεijnLn=γcNk−(γ−1β2)βk(βicNi)−γβjεkjnLn{\displaystyle {\begin{aligned}cN'^{k}&=\left({\Lambda ^{k}}_{i}{\Lambda ^{0}}_{0}-{\Lambda ^{k}}_{0}{\Lambda ^{0}}_{i}\right)cN^{i}+{\Lambda ^{k}}_{i}{\Lambda ^{0}}_{j}\varepsilon ^{ijn}L_{n}\\&=\left[\gamma {\delta ^{k}}_{i}-\left({\frac {\gamma -1}{\beta ^{2}}}\right)\beta ^{k}\beta _{i}\right]cN^{i}+-\gamma \beta ^{j}\left[{\delta ^{k}}_{i}+{\frac {\gamma -1}{\beta ^{2}}}\beta ^{k}\beta _{i}\right]\varepsilon ^{ijn}L_{n}\\&=\gamma cN^{k}-\left({\frac {\gamma -1}{\beta ^{2}}}\right)\beta ^{k}\left(\beta _{i}cN^{i}\right)-\gamma \beta ^{j}{\delta ^{k}}_{i}\varepsilon ^{ijn}L_{n}-\gamma {\frac {\gamma -1}{\beta ^{2}}}\beta ^{j}\beta ^{k}\beta _{i}\varepsilon ^{ijn}L_{n}\\&=\gamma cN^{k}-\left({\frac {\gamma -1}{\beta ^{2}}}\right)\beta ^{k}\left(\beta _{i}cN^{i}\right)-\gamma \beta ^{j}\varepsilon ^{kjn}L_{n}\\\end{aligned}}}or in vector form, dividing bycN′=γN−(γ−1β2)β(β⋅N)−1cγβ×L{\displaystyle \mathbf {N} '=\gamma \mathbf {N} -\left({\frac {\gamma -1}{\beta ^{2}}}\right){\boldsymbol {\beta }}\left({\boldsymbol {\beta }}\cdot \mathbf {N} \right)-{\frac {1}{c}}\gamma {\boldsymbol {\beta }}\times \mathbf {L} }or reinstatingβ=v/c,N′=γN−(γ−1v2)v(v⋅N)−γv×L{\displaystyle \mathbf {N} '=\gamma \mathbf {N} -\left({\frac {\gamma -1}{v^{2}}}\right)\mathbf {v} \left(\mathbf {v} \cdot \mathbf {N} \right)-\gamma \mathbf {v} \times \mathbf {L} }andL′kℓ=(ΛkiΛℓ0−Λk0Λℓi)cNi+ΛkiΛℓjLij=γc(βkδℓi−βℓδki)Ni+[δkiδℓj+γ−1β2δkiβℓβj+γ−1β2βkβiδℓj+γ−1β2γ−1β2βℓβjβkβi]Lij=γc(βkNℓ−βℓNk)+Lkℓ+γ−1β2βℓβjLkj+γ−1β2βkβiLiℓ{\displaystyle {\begin{aligned}L'^{k\ell }&=\left({\Lambda ^{k}}_{i}{\Lambda ^{\ell }}_{0}-{\Lambda ^{k}}_{0}{\Lambda ^{\ell }}_{i}\right)cN^{i}+{\Lambda ^{k}}_{i}{\Lambda ^{\ell }}_{j}L^{ij}\\&=\gamma c\left(\beta ^{k}{\delta ^{\ell }}_{i}-\beta ^{\ell }{\delta ^{k}}_{i}\right)N^{i}+\left[{\delta ^{k}}_{i}{\delta ^{\ell }}_{j}+{\frac {\gamma -1}{\beta ^{2}}}{\delta ^{k}}_{i}\beta ^{\ell }\beta _{j}+{\frac {\gamma -1}{\beta ^{2}}}\beta ^{k}\beta _{i}{\delta ^{\ell }}_{j}+{\frac {\gamma -1}{\beta ^{2}}}{\frac {\gamma -1}{\beta ^{2}}}\beta ^{\ell }\beta _{j}\beta ^{k}\beta _{i}\right]L^{ij}\\&=\gamma c\left(\beta ^{k}N^{\ell }-\beta ^{\ell }N^{k}\right)+L^{k\ell }+{\frac {\gamma -1}{\beta ^{2}}}\beta ^{\ell }\beta _{j}L^{kj}+{\frac {\gamma -1}{\beta ^{2}}}\beta 
^{k}\beta _{i}L^{i\ell }\\\end{aligned}}} or converting to pseudovector form {\displaystyle {\begin{aligned}\varepsilon ^{k\ell n}L'_{n}&=\gamma c\left(\beta ^{k}N^{\ell }-\beta ^{\ell }N^{k}\right)+\varepsilon ^{k\ell n}L_{n}+{\frac {\gamma -1}{\beta ^{2}}}\left(\beta ^{\ell }\beta _{j}\varepsilon ^{kjn}L_{n}-\beta ^{k}\beta _{i}\varepsilon ^{\ell in}L_{n}\right)\\\end{aligned}}} in vector notation {\displaystyle \mathbf {L} '=\gamma c{\boldsymbol {\beta }}\times \mathbf {N} +\mathbf {L} -{\frac {\gamma -1}{\beta ^{2}}}{\boldsymbol {\beta }}\times ({\boldsymbol {\beta }}\times \mathbf {L} )} or reinstating β = v/c, {\displaystyle \mathbf {L} '=\gamma \mathbf {v} \times \mathbf {N} +\mathbf {L} -{\frac {\gamma -1}{v^{2}}}\mathbf {v} \times \left(\mathbf {v} \times \mathbf {L} \right)} which agrees with the transformation of L found above. For a particle moving in a curve, the cross product of its angular velocity ω (a pseudovector) and position x gives its tangential velocity {\displaystyle \mathbf {u} ={\boldsymbol {\omega }}\times \mathbf {x} } which cannot exceed a magnitude of c, since in SR the translational velocity of any massive object cannot exceed the speed of light c. Mathematically this constraint is 0 ≤ |u| < c, where the vertical bars denote the magnitude of the vector. If the angle between ω and x is θ (assumed to be nonzero, otherwise u would be zero, corresponding to no motion at all), then |u| = |ω| |x| sin θ and the angular velocity is restricted by {\displaystyle 0\leq |{\boldsymbol {\omega }}|<{\frac {c}{|\mathbf {x} |\sin \theta }}} The maximum angular velocity of any massive object therefore depends on the size of the object. For a given |x|, the minimum upper limit occurs when ω and x are perpendicular, so that θ = π/2 and sin θ = 1. For a rigid body rotating with an angular velocity ω, u is the tangential velocity at a point x inside the object; for every point in the object there is a maximum angular velocity. The angular velocity (pseudovector) is related to the angular momentum (pseudovector) through the moment of inertia tensor I {\displaystyle \mathbf {L} =\mathbf {I} \cdot {\boldsymbol {\omega }}\quad \rightleftharpoons \quad L_{i}=I_{ij}\omega _{j}} (the dot · denotes tensor contraction on one index). The relativistic angular momentum is also limited by the size of the object. A particle may have a "built-in" angular momentum independent of its motion, called spin and denoted s. It is a 3d pseudovector like orbital angular momentum L. The spin has a corresponding spin magnetic moment, so if the particle is subject to interactions (like electromagnetic fields or spin-orbit coupling), the direction of the particle's spin vector will change, but its magnitude will be constant. The extension to special relativity is straightforward.[6] For some lab frame F, let F′ be the rest frame of the particle and suppose the particle moves with constant 3-velocity u. Then F′ is boosted with the same velocity and the Lorentz transformations apply as usual; it is more convenient to use β = u/c. 
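Before turning to spin, the tensor route can be cross-checked numerically against the vector-form boost formulas obtained above, this time for a boost in a general direction; a sketch in Python/NumPy with units c = 1 and arbitrary sample values:

import numpy as np

c = 1.0
t, r = 0.4, np.array([0.9, -1.3, 0.6])
E, p = 2.2, np.array([-0.5, 0.7, 0.2])

X = np.concatenate(([c * t], r))
P = np.concatenate(([E / c], p))
M = np.outer(X, P) - np.outer(P, X)

L = np.cross(r, p)
N = (E / c**2) * r - t * p

beta = np.array([0.2, 0.5, -0.3])          # general boost direction, |beta| < 1
b2 = beta @ beta
g = 1.0 / np.sqrt(1.0 - b2)

# explicit boost matrix with the elements quoted in the text
Lam = np.eye(4)
Lam[0, 0] = g
Lam[0, 1:] = Lam[1:, 0] = -g * beta
Lam[1:, 1:] += (g - 1.0) / b2 * np.outer(beta, beta)

Mp = Lam @ M @ Lam.T
N_mat = -Mp[0, 1:] / c                              # N' read off M'^{0i} = -c N'^i
L_mat = np.array([Mp[2, 3], Mp[3, 1], Mp[1, 2]])    # L' read off the spatial block

# vector-form boost formulas (beta form) quoted above
N_vec = g * N - (g - 1.0) / b2 * beta * (beta @ N) - g / c * np.cross(beta, L)
L_vec = g * c * np.cross(beta, N) + L - (g - 1.0) / b2 * np.cross(beta, np.cross(beta, L))

print(np.allclose(N_mat, N_vec), np.allclose(L_mat, L_vec))   # True True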
As a four-vector in special relativity, the four-spinSgenerally takes the usual form of a four-vector with a timelike componentstand spatial componentss, in the lab frameS≡(S0,S1,S2,S3)=(st,sx,sy,sz){\displaystyle \mathbf {S} \equiv \left(S^{0},S^{1},S^{2},S^{3}\right)=(s_{t},s_{x},s_{y},s_{z})}although in the rest frame of the particle, it is defined so the timelike component is zero and the spatial components are those of particle's actual spin vector, in the notation heres′, so in the particle's frameS′≡(S′0,S′1,S′2,S′3)=(0,sx′,sy′,sz′){\displaystyle \mathbf {S} '\equiv \left({S'}^{0},{S'}^{1},{S'}^{2},{S'}^{3}\right)=\left(0,s_{x}',s_{y}',s_{z}'\right)} Equating norms leads to the invariant relationst2−s⋅s=−s′⋅s′{\displaystyle s_{t}^{2}-\mathbf {s} \cdot \mathbf {s} =-\mathbf {s} '\cdot \mathbf {s} '}so if the magnitude of spin is given in the rest frame of the particle and lab frame of an observer, the magnitude of the timelike componentstis given in the lab frame also. The boosted components of the four spin relative to the lab frame areS′0=Λ0αSα=Λ00S0+Λ0iSi=γ(S0−βiSi)=γ(ccS0−uicSi)=1cU0S0−1cUiSiS′i=ΛiαSα=Λi0S0+ΛijSj=−γβiS0+[δij+γ−1β2βiβj]Sj=Si+γ2γ+1βiβjSj−γβiS0{\displaystyle {\begin{aligned}{S'}^{0}&={\Lambda ^{0}}_{\alpha }S^{\alpha }={\Lambda ^{0}}_{0}S^{0}+{\Lambda ^{0}}_{i}S^{i}=\gamma \left(S^{0}-\beta _{i}S^{i}\right)\\&=\gamma \left({\frac {c}{c}}S^{0}-{\frac {u_{i}}{c}}S^{i}\right)={\frac {1}{c}}U_{0}S^{0}-{\frac {1}{c}}U_{i}S^{i}\\[3pt]{S'}^{i}&={\Lambda ^{i}}_{\alpha }S^{\alpha }={\Lambda ^{i}}_{0}S^{0}+{\Lambda ^{i}}_{j}S^{j}\\&=-\gamma \beta ^{i}S^{0}+\left[\delta _{ij}+{\frac {\gamma -1}{\beta ^{2}}}\beta _{i}\beta _{j}\right]S^{j}\\&=S^{i}+{\frac {\gamma ^{2}}{\gamma +1}}\beta _{i}\beta _{j}S^{j}-\gamma \beta ^{i}S^{0}\end{aligned}}} Hereγ=γ(u).S′ is in the rest frame of the particle, so its timelike component is zero,S′0= 0, notS0. Also, the first is equivalent to the inner product of the four-velocity (divided byc) and the four-spin. Combining these facts leads toS′0=1cUαSα=0{\displaystyle {S'}^{0}={\frac {1}{c}}U_{\alpha }S^{\alpha }=0}which is an invariant. Then this combined with the transformation on the timelike component leads to the perceived component in the lab frame;S0=βiSi{\displaystyle S^{0}=\beta _{i}S^{i}} The inverse relations areS0=γ(S′0+βiS′i)Si=S′i+γ2γ+1βiβjS′j+γβiS′0{\displaystyle {\begin{aligned}S^{0}&=\gamma \left({S'}^{0}+\beta _{i}{S'}^{i}\right)\\S^{i}&={S'}^{i}+{\frac {\gamma ^{2}}{\gamma +1}}\beta _{i}\beta _{j}{S'}^{j}+\gamma \beta ^{i}{S'}^{0}\end{aligned}}} The covariant constraint on the spin is orthogonality to the velocity vector,UαSα=0{\displaystyle U_{\alpha }S^{\alpha }=0} In 3-vector notation for explicitness, the transformations arest=β⋅ss′=s+γ2γ+1β(β⋅s)−γβst{\displaystyle {\begin{aligned}s_{t}&={\boldsymbol {\beta }}\cdot \mathbf {s} \\\mathbf {s} '&=\mathbf {s} +{\frac {\gamma ^{2}}{\gamma +1}}{\boldsymbol {\beta }}\left({\boldsymbol {\beta }}\cdot \mathbf {s} \right)-\gamma {\boldsymbol {\beta }}s_{t}\end{aligned}}} The inverse relationsst=γβ⋅s′s=s′+γ2γ+1β(β⋅s′){\displaystyle {\begin{aligned}s_{t}&=\gamma {\boldsymbol {\beta }}\cdot \mathbf {s} '\\\mathbf {s} &=\mathbf {s} '+{\frac {\gamma ^{2}}{\gamma +1}}{\boldsymbol {\beta }}\left({\boldsymbol {\beta }}\cdot \mathbf {s} '\right)\end{aligned}}}are the components of spin the lab frame, calculated from those in the particle's rest frame. Although the spin of the particle is constant for a given particle, it appears to be different in the lab frame. 
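These relations are easy to verify numerically. The sketch below (Python/NumPy, units with c = 1, arbitrary sample spin and velocity, metric signature (+,−,−,−) as implied by the norm relation above) builds the lab-frame four-spin from a given rest-frame spin via the inverse relations and confirms the orthogonality, s_t = β·s, and norm conditions.

import numpy as np

c = 1.0
u = np.array([0.3, -0.1, 0.5])            # particle 3-velocity in the lab, |u| < c
beta = u / c
g = 1.0 / np.sqrt(1.0 - beta @ beta)

s_rest = np.array([0.5, 0.2, -0.4])       # spin 3-vector in the particle's rest frame

# lab-frame components from the inverse relations (with S'^0 = 0)
s_t = g * (beta @ s_rest)
s_lab = s_rest + g**2 / (g + 1.0) * beta * (beta @ s_rest)

U = g * np.concatenate(([c], u))          # four-velocity
S = np.concatenate(([s_t], s_lab))        # four-spin in the lab frame

eta = np.diag([1.0, -1.0, -1.0, -1.0])    # metric, signature (+,-,-,-)
print(np.isclose(U @ eta @ S, 0.0))                            # U_a S^a = 0
print(np.isclose(s_t, beta @ s_lab))                           # s_t = beta . s
print(np.isclose(s_t**2 - s_lab @ s_lab, -(s_rest @ s_rest)))  # invariant norm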
ThePauli–Lubanski pseudovectorSμ=12εμνρσJνρPσ,{\displaystyle S_{\mu }={\frac {1}{2}}\varepsilon _{\mu \nu \rho \sigma }J^{\nu \rho }P^{\sigma },}applies to both massive andmassless particles. In general, the total angular momentum tensor splits into an orbital component and aspin component,Jμν=Mμν+Sμν.{\displaystyle J^{\mu \nu }=M^{\mu \nu }+S^{\mu \nu }~.}This applies to a particle, a mass–energy–momentum distribution, or field. The following is a summary fromMTW.[7]Throughout for simplicity, Cartesian coordinates are assumed. In special and general relativity, a distribution of mass–energy–momentum, e.g. a fluid, or a star, is described by the stress–energy tensorTβγ(a second ordertensor fielddepending on space and time). SinceT00is the energy density,Tj0forj= 1, 2, 3 is thejth component of the object's 3d momentum per unit volume, andTijform components of thestress tensorincluding shear and normal stresses, theorbital angular momentum densityabout the position 4-vectorXβis given by a 3rd order tensorMαβγ=(Xα−X¯α)Tβγ−(Xβ−X¯β)Tαγ{\displaystyle {\mathcal {M}}^{\alpha \beta \gamma }=\left(X^{\alpha }-{\bar {X}}^{\alpha }\right)T^{\beta \gamma }-\left(X^{\beta }-{\bar {X}}^{\beta }\right)T^{\alpha \gamma }} This is antisymmetric inαandβ. In special and general relativity,Tis a symmetric tensor, but in other contexts (e.g., quantum field theory), it may not be. Let Ω be a region of 4d spacetime. Theboundaryis a 3d spacetime hypersurface ("spacetime surface volume" as opposed to "spatial surface area"), denoted ∂Ω where "∂" means "boundary". Integrating the angular momentum density over a 3d spacetime hypersurface yields the angular momentum tensor aboutX,Mαβ(X¯)=∮∂ΩMαβγdΣγ{\displaystyle M^{\alpha \beta }\left({\bar {X}}\right)=\oint _{\partial \Omega }{\mathcal {M}}^{\alpha \beta \gamma }d\Sigma _{\gamma }}where dΣγis the volume1-formplaying the role of aunit vectornormal to a 2d surface in ordinary 3d Euclidean space. The integral is taken over the coordinatesX, notX(i.e. Y). The integral within a spacelike surface of constant time isMij=∮∂ΩMij0dΣ0=∮∂Ω[(Xi−Yi)Tj0−(Xj−Yj)Ti0]dxdydz{\displaystyle M^{ij}=\oint _{\partial \Omega }{\mathcal {M}}^{ij0}d\Sigma _{0}=\oint _{\partial \Omega }\left[\left(X^{i}-Y^{i}\right)T^{j0}-\left(X^{j}-Y^{j}\right)T^{i0}\right]dx\,dy\,dz}which collectively form the angular momentum tensor. There is an intrinsic angular momentum in the centre-of-mass frame, in other words, the angular momentum about any eventXCOM=(XCOM0,XCOM1,XCOM2,XCOM3){\displaystyle \mathbf {X} _{\text{COM}}=\left(X_{\text{COM}}^{0},X_{\text{COM}}^{1},X_{\text{COM}}^{2},X_{\text{COM}}^{3}\right)}onthe wordline of the object's center of mass. SinceT00is the energy density of the object, the spatial coordinates of thecenter of massare given byXCOMi=1m0∫∂ΩXiT00dxdydz{\displaystyle X_{\text{COM}}^{i}={\frac {1}{m_{0}}}\int _{\partial \Omega }X^{i}T^{00}dxdydz} SettingY=XCOMobtains the orbital angular momentum density about the centre-of-mass of the object. Theconservationof energy–momentum is given in differential form by thecontinuity equation∂γTβγ=0{\displaystyle \partial _{\gamma }T^{\beta \gamma }=0}where ∂γis thefour-gradient. (In non-Cartesian coordinates and general relativity this would be replaced by thecovariant derivative). 
The total angular momentum conservation is given by another continuity equation∂γJαβγ=0{\displaystyle \partial _{\gamma }{\mathcal {J}}^{\alpha \beta \gamma }=0} The integral equations useGauss' theoremin spacetime∫V∂γTβγcdtdxdydz=∮∂VTβγd3Σγ=0∫V∂γJαβγcdtdxdydz=∮∂VJαβγd3Σγ=0{\displaystyle {\begin{aligned}\int _{\mathcal {V}}\partial _{\gamma }T^{\beta \gamma }\,cdt\,dx\,dy\,dz&=\oint _{\partial {\mathcal {V}}}T^{\beta \gamma }d^{3}\Sigma _{\gamma }=0\\\int _{\mathcal {V}}\partial _{\gamma }{\mathcal {J}}^{\alpha \beta \gamma }\,cdt\,dx\,dy\,dz&=\oint _{\partial {\mathcal {V}}}{\mathcal {J}}^{\alpha \beta \gamma }d^{3}\Sigma _{\gamma }=0\end{aligned}}} The torque acting on a point-like particle is defined as the derivative of the angular momentum tensor given above with respect to proper time:[8][9]Γ=dMdτ=X∧F{\displaystyle {\boldsymbol {\Gamma }}={\frac {d\mathbf {M} }{d\tau }}=\mathbf {X} \wedge \mathbf {F} }or in tensor components:Γαβ=XαFβ−XβFα{\displaystyle \Gamma _{\alpha \beta }=X_{\alpha }F_{\beta }-X_{\beta }F_{\alpha }}whereFis the 4d force acting on the particle at the eventX. As with angular momentum, torque is additive, so for an extended object one sums or integrates over the distribution of mass. The angular momentum tensor is the generator of boosts and rotations for theLorentz group.[10][11]Lorentz boostscan be parametrized byrapidity, and a 3d unit vectornpointing in the direction of the boost, which combine into the "rapidity vector"ζ=ζn=ntanh−1⁡β{\displaystyle {\boldsymbol {\zeta }}=\zeta \mathbf {n} =\mathbf {n} \tanh ^{-1}\beta }whereβ=v/cis the speed of the relative motion divided by the speed of light. Spatial rotations can be parametrized by theaxis–angle representation, the angleθand a unit vectorapointing in the direction of the axis, which combine into an "axis-angle vector"θ=θa{\displaystyle {\boldsymbol {\theta }}=\theta \mathbf {a} } Each unit vector only has two independent components, the third is determined from the unit magnitude. Altogether there are six parameters of the Lorentz group; three for rotations and three for boosts. The (homogeneous) Lorentz group is 6-dimensional. The boost generatorsKand rotation generatorsJcan be combined into one generator for Lorentz transformations;Mthe antisymmetric angular momentum tensor, with componentsM0i=−Mi0=Ki,Mij=εijkJk.{\displaystyle M^{0i}=-M^{i0}=K_{i}\,,\quad M^{ij}=\varepsilon _{ijk}J_{k}\,.}and correspondingly, the boost and rotation parameters are collected into another antisymmetric four-dimensional matrixω, with entries:ω0i=−ωi0=ζi,ωij=εijkθk,{\displaystyle \omega _{0i}=-\omega _{i0}=\zeta _{i}\,,\quad \omega _{ij}=\varepsilon _{ijk}\theta _{k}\,,}where thesummation conventionover the repeated indicesi, j, khas been used to prevent clumsy summation signs. The generalLorentz transformationis then given by thematrix exponentialΛ(ζ,θ)=exp⁡(12ωαβMαβ)=exp⁡(ζ⋅K+θ⋅J){\displaystyle \Lambda ({\boldsymbol {\zeta }},{\boldsymbol {\theta }})=\exp \left({\frac {1}{2}}\omega _{\alpha \beta }M^{\alpha \beta }\right)=\exp \left({\boldsymbol {\zeta }}\cdot \mathbf {K} +{\boldsymbol {\theta }}\cdot \mathbf {J} \right)}and the summation convention has been applied to the repeated matrix indicesαandβ. 
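A small numerical illustration of the exponential map (Python with NumPy and SciPy): the boost generators K_i are written out explicitly below, with a sign convention chosen for this sketch so that exp(ζ n·K) acting on (ct, x, y, z) reproduces the explicit boost matrix quoted earlier; the convention itself is an assumption of the sketch, not fixed by the text.

import numpy as np
from scipy.linalg import expm

# boost generators K_i as 4x4 matrices acting on (ct, x, y, z)
K = np.zeros((3, 4, 4))
for i in range(3):
    K[i, 0, i + 1] = K[i, i + 1, 0] = -1.0

beta = np.array([0.3, -0.2, 0.4])          # arbitrary boost velocity / c
b = np.linalg.norm(beta)
n = beta / b
zeta = np.arctanh(b)                       # rapidity
gamma = 1.0 / np.sqrt(1.0 - b**2)

Lambda_exp = expm(zeta * np.tensordot(n, K, axes=1))   # exp(zeta n . K)

# closed-form boost matrix with the elements quoted earlier
Lambda = np.eye(4)
Lambda[0, 0] = gamma
Lambda[0, 1:] = Lambda[1:, 0] = -gamma * beta
Lambda[1:, 1:] += (gamma - 1.0) / b**2 * np.outer(beta, beta)

print(np.allclose(Lambda_exp, Lambda))     # True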
The general Lorentz transformation Λ is the transformation law for anyfour vectorA= (A0,A1,A2,A3), giving the components of this same 4-vector in another inertial frame of referenceA′=Λ(ζ,θ)A{\displaystyle \mathbf {A} '=\Lambda ({\boldsymbol {\zeta }},{\boldsymbol {\theta }})\mathbf {A} } The angular momentum tensor forms 6 of the 10 generators of thePoincaré group, the other four are the components of the four-momentum for spacetime translations. The angular momentum of test particles in a gently curved background is more complicated in GR but can be generalized in a straightforward manner. If theLagrangianis expressed with respect to angular variables as thegeneralized coordinates, then the angular momenta are thefunctional derivativesof the Lagrangian with respect to theangular velocities. Referred to Cartesian coordinates, these are typically given by the off-diagonal shear terms of the spacelike part of thestress–energy tensor. If the spacetime supports aKilling vector fieldtangent to a circle, then the angular momentum about the axis is conserved. One also wishes to study the effect of a compact, rotating mass on its surrounding spacetime. The prototype solution is of theKerr metric, which describes the spacetime around an axially symmetricblack hole. It is obviously impossible to draw a point on the event horizon of a Kerr black hole and watch it circle around. However, the solution does support a constant of the system that acts mathematically similarly to an angular momentum. Ingeneral relativitywheregravitational wavesexist, the asymptoticsymmetry groupin asymptotically flat spacetimes is not the expected ten-dimensionalPoincaré groupofspecial relativity, but the infinite-dimensional group formulated in 1962 byBondi, van der Burg, Metzner, and Sachs, the so-called BMS group, which contains an infinite superset of the four spacetime translations, namedsupertranslations. Despite half a century of research, difficulties with “supertranslation ambiguity” persisted in fundamental notions like the angular momentum carried away by gravitational waves. In 2020, novel supertranslation-invariant definitions of angular momentum began to be formulated by different researchers. Supertranslation invariance of angular momentum and other Lorentz charges in general relativity continues to be an active area of research.[12]
https://en.wikipedia.org/wiki/Four-spin
TheAI effectis the discounting of the behavior of anartificial intelligenceprogram as not "real" intelligence.[1] The authorPamela McCorduckwrites: "It's part of thehistory of the field of artificial intelligencethat every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'."[2] ResearcherRodney Brookscomplains: "Every time we figure out a piece of it, it stops being magical; we say, 'Oh, that's just a computation.'"[3] "The AI effect" refers to a phenomenon where either the definition of AI or the concept of intelligence is adjusted to exclude capabilities that AI systems have mastered. This often manifests as tasks that AI can now perform successfully no longer being considered part of AI, or as the notion of intelligence itself being redefined to exclude AI achievements.[4][2][1]Edward Geist creditsJohn McCarthyfor coining the term "AI effect" to describe this phenomenon.[4] McCorduck calls it an "odd paradox" that "practical AI successes, computational programs that actually achieved intelligent behavior were soon assimilated into whatever application domain they were found to be useful in, and became silent partners alongside other problem-solving approaches, which left AI researchers to deal only with the 'failures', the tough nuts that couldn't yet be cracked."[5]It is an example ofmoving the goalposts.[6] Tesler's Theoremis: AI is whatever hasn't been done yet. Douglas Hofstadterquotes this[7]as do many other commentators.[8] When problems have not yet been formalised, they can still be characterised by amodel of computationthat includeshuman computation. The computational burden of a problem is split between a computer and a human: one part is solved by computer and the other part solved by a human. This formalisation is referred to as ahuman-assisted Turing machine.[9] Software and algorithms developed by AI researchers are now integrated into many applications throughout the world, without really being called AI. This underappreciation is known from such diverse fields ascomputer chess,[10]marketing,[11]agricultural automation,[8]hospitality[12]andoptical character recognition.[13] Michael Swainereports "AI advances are not trumpeted as artificial intelligence so much these days, but are often seen as advances in some other field". "AI has become more important as it has become less conspicuous",Patrick Winstonsays. "These days, it is hard to find a big system that does not work, in part, because of ideas developed or matured in the AI world."[14] According to Stottler Henke, "The great practical benefits of AI applications and even the existence of AI in many software products go largely unnoticed by many despite the already widespread use of AI techniques in software. This is the AI effect. Many marketing people don't use the term 'artificial intelligence' even when their company's products rely on some AI techniques. Why not?"[11] Marvin Minskywrites "This paradox resulted from the fact that whenever an AI research project made a useful new discovery, that product usually quickly spun off to form a new scientific or commercial specialty with its own distinctive name. 
These changes in name led outsiders to ask, Why do we see so little progress in the central field of artificial intelligence?"[15] Nick Bostromobserves that "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labelled AI anymore."[16] The AI effect on decision-making insupply chain risk managementis a severely understudied area.[17] To avoid the AI effect problem, the editors of a special issue ofIEEE Softwareon AI andsoftware engineeringrecommend not overselling – nothyping– the real achievable results to start with.[18] TheBulletin of the Atomic Scientistsorganization views the AI effect as a worldwide strategic military threat.[4]They point out that it obscures the fact thatapplications of AIhad already found their way into both US andSoviet militariesduring theCold War.[4]AI tools to advise humans regarding weapons deployment were developed by both sides and received very limited usage during that time.[4]They believe this constantly shifting failure to recognise AI continues to undermine human recognition of security threats in the present day.[4] Some experts think that the AI effect will continue, with advances in AI continually producing objections and redefinitions of public expectations.[19][20][21]Some also believe that the AI effect will expand to include the dismissal of specialised artificial intelligences.[21] In the early 1990s, during the second "AI winter" many AI researchers found that they could get more funding and sell more software if they avoided the bad name of "artificial intelligence" and instead pretended their work had nothing to do with intelligence.[citation needed] Patty Tascarella wrote in 2006: "Some believe the word 'robotics' actually carries a stigma that hurts a company's chances at funding."[22] Michael Kearnssuggests that "people subconsciously are trying to preserve for themselves some special role in the universe".[23]By discounting artificial intelligence people can continue to feel unique and special. Kearns argues that the change in perception known as the AI effect can be traced to themysterybeing removed from the system. In being able to trace the cause of events implies that it's a form of automation rather than intelligence.[citation needed] A related effect has been noted in the history ofanimal cognitionand inconsciousnessstudies, where every time a capacity formerly thought of as uniquely human is discovered in animals (e.g. theability to make tools, or passing themirror test), the overall importance of that capacity is deprecated.[citation needed] Herbert A. Simon, when asked about the lack of AI's press coverage at the time, said, "What made AI different was that the very idea of it arouses a real fear and hostility in some human breasts. So you are getting very strong emotional reactions. But that's okay. 
We'll live with that."[24] Mueller (1987) proposed comparing AI to human intelligence, coining the standard of Human-Level Machine Intelligence.[25] This standard nonetheless suffers from the AI effect when different humans are used as the benchmark.[25] When IBM's chess-playing computer Deep Blue succeeded in defeating Garry Kasparov in 1997, public perception of chess playing shifted from a difficult mental task to a routine operation.[26] The public complained that Deep Blue had only used "brute force methods" and that it wasn't real intelligence.[10] Notably, John McCarthy, an AI pioneer who coined the term "artificial intelligence", was disappointed by Deep Blue. He described it as a mere brute-force machine that did not have any deep understanding of the game. McCarthy would also criticize how widespread the AI effect is ("As soon as it works, no one calls it AI anymore"[27][28]: 12), but in this case did not think that Deep Blue was a good example.[27] On the other side, Fred A. Reed writes:[29] A problem that proponents of AI regularly face is this: When we know how a machine does something "intelligent", it ceases to be regarded as intelligent. If I beat the world's chess champion, I'd be regarded as highly bright.
https://en.wikipedia.org/wiki/AI_effect
In propositional logic, conjunction elimination (also called and elimination, ∧ elimination,[1] or simplification)[2][3][4] is a valid immediate inference, argument form and rule of inference which makes the inference that, if the conjunction A and B is true, then A is true, and B is true. The rule makes it possible to shorten longer proofs by deriving one of the conjuncts of a conjunction on a line by itself. An example in English: It's raining and it's pouring. Therefore, it's raining. The rule consists of two separate sub-rules, which can be expressed in formal language as: from P ∧ Q, infer P; and from P ∧ Q, infer Q. The two sub-rules together mean that, whenever an instance of "P ∧ Q" appears on a line of a proof, either "P" or "Q" can be placed on a subsequent line by itself. The above example in English is an application of the first sub-rule. The conjunction elimination sub-rules may be written in sequent notation: P ∧ Q ⊢ P and P ∧ Q ⊢ Q, where ⊢ is a metalogical symbol meaning that P is a syntactic consequence of P ∧ Q, and Q is also a syntactic consequence of P ∧ Q, in some logical system; and expressed as truth-functional tautologies or theorems of propositional logic: (P ∧ Q) → P and (P ∧ Q) → Q, where P and Q are propositions expressed in some formal system. This logic-related article is a stub. You can help Wikipedia by expanding it.
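For illustration, both sub-rules can be checked mechanically in a proof assistant; a minimal sketch in Lean 4 (the use of Lean and the names P, Q are incidental, not part of the article):

-- Conjunction elimination: from a proof of P ∧ Q, either conjunct can be derived on its own.
example (P Q : Prop) (h : P ∧ Q) : P := h.left    -- first sub-rule
example (P Q : Prop) (h : P ∧ Q) : Q := h.right   -- second sub-rule

-- The corresponding tautologies (P ∧ Q) → P and (P ∧ Q) → Q.
example (P Q : Prop) : P ∧ Q → P := fun h => h.left
example (P Q : Prop) : P ∧ Q → Q := fun h => h.right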
https://en.wikipedia.org/wiki/Conjunction_elimination
Model-driven architecture(MDA) is a software design approach for the development of software systems. It provides a set of guidelines for the structuring of specifications, which are expressed as models. Model Driven Architecture is a kind of domain engineering, and supportsmodel-driven engineeringof software systems. It was launched by theObject Management Group(OMG) in 2001.[1] Model Driven Architecture® (MDA®) "provides an approach for deriving value from models and architecture in support of the full life cycle of physical, organizational and I.T. systems". A model is a (representation of) an abstraction of a system. MDA® provides value by producing models at varying levels of abstraction, from a conceptual view down to the smallest implementation detail. OMG literature speaks of three such levels of abstraction, or architectural viewpoints: the Computation-independent Model (CIM), the Platform-independent model (PIM), and thePlatform-specific model(PSM). The CIM describes a system conceptually, the PIM describes the computational aspects of a system without reference to the technologies that may be used to implement it, and the PSM provides the technical details necessary to implement the system. The OMG Guide notes, though, that these three architectural viewpoints are useful, but are just three of many possible viewpoints.[2] The OMG organization provides specifications rather than implementations, often as answers toRequests for Proposals(RFPs). Implementations come from private companies or open source groups. The MDA model is related to multiple standards, including theUnified Modeling Language(UML), theMeta-Object Facility(MOF),XML Metadata Interchange(XMI),Enterprise Distributed Object Computing(EDOC), theSoftware Process Engineering Metamodel(SPEM), and theCommon Warehouse Metamodel(CWM). Note that the term “architecture” in Model Driven Architecture does not refer to the architecture of the system being modeled, but rather to the architecture of the various standards and model forms that serve as the technology basis for MDA.[citation needed] Executable UMLwas the UML profile used when MDA was born. Now, the OMG is promotingfUML, instead. (The action language for fUML is ALF.) TheObject Management Groupholds registered trademarks on the term Model Driven Architecture and its acronym MDA, as well as trademarks for terms such as: Model Based Application Development, Model Driven Application Development, Model Based Application Development, Model Based Programming, Model Driven Systems, and others.[3] OMG focuses Model Driven Architecture® on forward engineering, i.e. producing code from abstract, human-elaborated modeling diagrams (e.g. class diagrams)[citation needed]. OMG's ADTF (Analysis and Design Task Force) group leads this effort. With some humour, the group chose ADM (MDA backwards) to name the study of reverse engineering. ADM decodes to Architecture-Driven Modernization. The objective of ADM is to produce standards for model-based reverse engineering of legacy systems.[4]Knowledge Discovery Metamodel(KDM) is the furthest along of these efforts, and describes information systems in terms of various assets (programs, specifications, data, test files, database schemas, etc.). As the concepts and technologies used to realize designs and the concepts and technologies used to realize architectures have changed at their own pace, decoupling them allows system developers to choose from the best and most fitting in both domains. 
The design addresses the functional (use case) requirements while architecture provides the infrastructure through which non-functional requirements like scalability, reliability and performance are realized. MDA envisages that the platform independent model (PIM), which represents a conceptual design realizing the functional requirements, will survive changes in realization technologies andsoftware architectures. Of particular importance to Model Driven Architecture is the notion ofmodel transformation. A specific standard language for model transformation has been defined byOMGcalledQVT. The OMG organization provides rough specifications rather than implementations, often as answers toRequests for Proposals(RFPs). The OMG documents the overall process in a document called the MDA Guide. Basically, an MDA tool is a tool used to develop, interpret, compare, align, measure, verify, transform, etc. models or metamodels.[5]In the following section "model" is interpreted as meaning any kind of model (e.g. a UML model) or metamodel (e.g. the CWM metamodel). In any MDA approach we have essentially two kinds of models:initial modelsare created manually by human agents whilederived modelsare created automatically by programs. For example, an analyst may create a UML initial model from its observation of some loose business situation while a Java model may be automatically derived from this UML model by aModel transformationoperation. An MDA tool may be a tool used to check models for completeness, inconsistencies, or error and warning conditions. Some tools perform more than one of the functions listed above. For example, some creation tools may also have transformation and test capabilities. There are other tools that are solely for creation, solely for graphical presentation, solely for transformation, etc. Implementations of the OMG specifications come from private companies oropen sourcegroups. One important source of implementations for OMG specifications is theEclipse Foundation(EF). Many implementations of OMG modeling standards may be found in theEclipse Modeling Framework(EMF) orGraphical Modeling Framework(GMF), the Eclipse foundation is also developing other tools of various profiles as GMT. Eclipse's compliance to OMG specifications is often not strict. This is true for example for OMG's EMOF standard, which EMF approximates with its Ecore implementation. More examples may be found in the M2M project implementing the QVT standard or in the M2T project implementing the MOF2Text standard. One should be careful not to confuse theList of MDA Toolsand theList of UML tools, the former being much broader. This distinction can be made more general by distinguishing 'variable metamodel tools' and 'fixed metamodel tools'. A UML CASE tool is typically a 'fixed metamodel tool' since it has been hard-wired to work only with a given version of the UML metamodel (e.g. UML 2.1). On the contrary, other tools have internal generic capabilities allowing them to adapt to arbitrary metamodels or to a particular kind of metamodels. Usually MDA tools focus rudimentary architecture specification, although in some cases the tools are architecture-independent (or platform independent). Simple examples of architecture specifications include: Some key concepts that underpin the MDA approach (launched in 2001) were first elucidated by theShlaer–Mellor methodduring the late 1980s. 
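As a purely illustrative toy, not OMG-standard tooling such as QVT or EMF, and with invented names throughout, the following Python sketch conveys the flavour of the PIM-to-PSM idea: a hand-written, platform-independent class description (an initial model) is transformed automatically into platform-specific source text (a derived artefact).

# hypothetical platform-independent model, created manually
pim = {
    "class": "Customer",
    "attributes": [("name", "String"), ("creditLimit", "Decimal")],
}

# hypothetical platform mapping for a Java-flavoured target
type_map = {"String": "String", "Decimal": "java.math.BigDecimal"}

def to_java(model):
    """Derive platform-specific source text from the platform-independent model."""
    fields = "\n".join(
        f"    private {type_map[t]} {n};" for n, t in model["attributes"]
    )
    return f"public class {model['class']} {{\n{fields}\n}}\n"

print(to_java(pim))  # the derived, platform-specific artefact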
Indeed, a key absent technical standard of the MDA approach (that of an action language syntax for Executable UML) has been bridged by some vendors by adapting the original Shlaer–Mellor Action Language (modified for UML)[citation needed]. However, during this period the MDA approach has not gained mainstream industry acceptance: the Gartner Group still identified MDA as an "on the rise" technology in its 2006 "Hype Cycle",[6] and Forrester Research declared MDA to be "D.O.A." in 2006.[7] Potential concerns that have been raised with the OMG MDA approach include:
https://en.wikipedia.org/wiki/Model-driven_architecture
Theversineorversed sineis atrigonometric functionfound in some of the earliest (SanskritAryabhatia,[1]Section I)trigonometric tables. The versine of an angle is 1 minus itscosine. There are several related functions, most notably thecoversineandhaversine. The latter, half a versine, is of particular importance in thehaversine formulaof navigation. Theversine[3][4][5][6][7]orversed sine[8][9][10][11][12]is atrigonometric functionalready appearing in some of the earliest trigonometric tables. It is symbolized in formulas using the abbreviationsversin,sinver,[13][14]vers, orsiv.[15][16]InLatin, it is known as thesinus versus(flipped sine),versinus,versus, orsagitta(arrow).[17] Expressed in terms of commontrigonometric functionssine, cosine, and tangent, the versine is equal toversin⁡θ=1−cos⁡θ=2sin2⁡θ2=sin⁡θtan⁡θ2{\displaystyle \operatorname {versin} \theta =1-\cos \theta =2\sin ^{2}{\frac {\theta }{2}}=\sin \theta \,\tan {\frac {\theta }{2}}} There are several related functions corresponding to the versine: Special tables were also made of half of the versed sine, because of its particular use in thehaversine formulaused historically innavigation. havθ=sin2⁡(θ2)=1−cos⁡θ2{\displaystyle {\text{hav}}\ \theta =\sin ^{2}\left({\frac {\theta }{2}}\right)={\frac {1-\cos \theta }{2}}} The ordinarysinefunction (see note on etymology) was sometimes historically called thesinus rectus("straight sine"), to contrast it with the versed sine (sinus versus).[31]The meaning of these terms is apparent if one looks at the functions in the original context for their definition, aunit circle: For a verticalchordABof the unit circle, the sine of the angleθ(representing half of the subtended angleΔ) is the distanceAC(half of the chord). On the other hand, the versed sine ofθis the distanceCDfrom the center of the chord to the center of the arc. Thus, the sum of cos(θ) (equal to the length of lineOC) and versin(θ) (equal to the length of lineCD) is the radiusOD(with length 1). Illustrated this way, the sine is vertical (rectus, literally "straight") while the versine is horizontal (versus, literally "turned against, out-of-place"); both are distances fromCto the circle. This figure also illustrates the reason why the versine was sometimes called thesagitta, Latin forarrow.[17][30]If the arcADBof the double-angleΔ= 2θis viewed as a "bow" and the chordABas its "string", then the versineCDis clearly the "arrow shaft". In further keeping with the interpretation of the sine as "vertical" and the versed sine as "horizontal",sagittais also an obsolete synonym for theabscissa(the horizontal axis of a graph).[30] In 1821,Cauchyused the termssinus versus(siv) for the versine andcosinus versus(cosiv) for the coversine.[15][16][nb 1] Asθgoes to zero, versin(θ) is the difference between two nearly equal quantities, so a user of atrigonometric tablefor the cosine alone would need a very high accuracy to obtain the versine in order to avoidcatastrophic cancellation, making separate tables for the latter convenient.[12]Even with a calculator or computer,round-off errorsmake it advisable to use the sin2formula for smallθ. Another historical advantage of the versine is that it is always non-negative, so itslogarithmis defined everywhere except for the single angle (θ= 0, 2π, …) where it is zero—thus, one could uselogarithmic tablesfor multiplications in formulas involving versines. 
In fact, the earliest surviving table of sine (half-chord) values (as opposed to the chords tabulated by Ptolemy and other Greek authors), calculated from the Surya Siddhanta of India, dated back to the 3rd century BC, was a table of values for the sine and versed sine (in 3.75° increments from 0 to 90°).[31] The versine appears as an intermediate step in the application of the half-angle formula sin²(θ/2) = (1/2) versin(θ), derived by Ptolemy, that was used to construct such tables. The haversine, in particular, was important in navigation because it appears in the haversine formula, which is used to reasonably accurately compute distances on an astronomic spheroid (see issues with the Earth's radius vs. sphere) given angular positions (e.g., longitude and latitude). One could also use sin²(θ/2) directly, but having a table of the haversine removed the need to compute squares and square roots.[12] An early utilization by José de Mendoza y Ríos of what later would be called haversines is documented in 1801.[14][32] The first known English equivalent to a table of haversines was published by James Andrew in 1805, under the name "Squares of Natural Semi-Chords".[33][34][17] In 1835, the term haversine (notated naturally as hav. or base-10 logarithmically as log. haversine or log. havers.) was coined[35] by James Inman[14][36][37] in the third edition of his work Navigation and Nautical Astronomy: For the Use of British Seamen to simplify the calculation of distances between two points on the surface of the Earth using spherical trigonometry for applications in navigation.[3][35] Inman also used the terms nat. versine and nat. vers. for versines.[3] Other highly regarded tables of haversines were those of Richard Farley in 1856[33][38] and John Caulfield Hannyngton in 1876.[33][39] The haversine continues to be used in navigation and has found new applications in recent decades, as in Bruce D. Stark's method for clearing lunar distances utilizing Gaussian logarithms since 1995[40][41] or in a more compact method for sight reduction since 2014.[29] While the usage of the versine, coversine and haversine as well as their inverse functions can be traced back centuries, the names for the other five cofunctions appear to be of much younger origin. One period (0 < θ < 2π) of a versine or, more commonly, a haversine waveform is also commonly used in signal processing and control theory as the shape of a pulse or a window function (including Hann, Hann–Poisson and Tukey windows), because it smoothly (continuous in value and slope) "turns on" from zero to one (for haversine) and back to zero.[nb 2] In these applications, it is named the Hann function or raised-cosine filter. The functions are circular rotations of each other. 
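A minimal Python sketch of the haversine formula as used for such distance calculations; the spherical-Earth radius of 6371 km and the London and Paris coordinates are illustrative assumptions, and a sphere is only an approximation to the spheroid mentioned above.

from math import asin, cos, radians, sin, sqrt

def hav(theta):
    """Haversine: hav(theta) = sin^2(theta/2)."""
    return sin(theta / 2.0) ** 2

def haversine_distance(lat1, lon1, lat2, lon2, r=6371.0):
    """Great-circle distance on a sphere of radius r (km) via the haversine formula."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    h = hav(dphi) + cos(phi1) * cos(phi2) * hav(dlam)
    return 2.0 * r * asin(sqrt(h))

# example: London (51.5074 N, 0.1278 W) to Paris (48.8566 N, 2.3522 E)
print(round(haversine_distance(51.5074, -0.1278, 48.8566, 2.3522)))  # roughly 344 km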
Inverse functions likearcversine(arcversin, arcvers,[8]avers,[43][44]aver),arcvercosine(arcvercosin, arcvercos, avercos, avcs),arccoversine(arccoversin, arccovers,[8]acovers,[43][44]acvs),arccovercosine(arccovercosin, arccovercos, acovercos, acvc),archaversine(archaversin, archav, haversin−1,[45]invhav,[46][47][48]ahav,[43][44]ahvs, ahv, hav−1[49][50]),archavercosine(archavercosin, archavercos, ahvc),archacoversine(archacoversin, ahcv) orarchacovercosine(archacovercosin, archacovercos, ahcc) exist as well: These functions can be extended into thecomplex plane.[42][19][24] Maclaurin series:[24] When the versinevis small in comparison to the radiusr, it may be approximated from the half-chord lengthL(the distanceACshown above) by the formula[51]v≈L22r.{\displaystyle v\approx {\frac {L^{2}}{2r}}.} Alternatively, if the versine is small and the versine, radius, and half-chord length are known, they may be used to estimate the arc lengths(ADin the figure above) by the formulas≈L+v2r{\displaystyle s\approx L+{\frac {v^{2}}{r}}}This formula was known to the Chinese mathematicianShen Kuo, and a more accurate formula also involving the sagitta was developed two centuries later byGuo Shoujing.[52] A more accurate approximation used in engineering[53]isv≈s32L128r{\displaystyle v\approx {\frac {s^{\frac {3}{2}}L^{\frac {1}{2}}}{8r}}} The termversineis also sometimes used to describe deviations from straightness in an arbitrary planar curve, of which the above circle is a special case. Given a chord between two points in a curve, the perpendicular distancevfrom the chord to the curve (usually at the chord midpoint) is called aversinemeasurement. For a straight line, the versine of any chord is zero, so this measurement characterizes the straightness of the curve. In thelimitas the chord lengthLgoes to zero, the ratio⁠8v/L2⁠goes to the instantaneouscurvature. This usage is especially common inrail transport, where it describes measurements of the straightness of therail tracks[54]and it is the basis of theHallade methodforrail surveying. The termsagitta(often abbreviatedsag) is used similarly inoptics, for describing the surfaces oflensesandmirrors.
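The small-sagitta approximation and the curvature limit quoted above are easy to check numerically; a short Python/NumPy sketch with an arbitrary radius and half-chord:

import numpy as np

r = 500.0                          # circle radius (arbitrary units)
half_chord = 20.0                  # half-chord length, small compared with r

theta = np.arcsin(half_chord / r)  # half of the subtended angle
v = r * (1.0 - np.cos(theta))      # exact versine (sagitta) of the chord

print(v, half_chord**2 / (2.0 * r))              # exact vs the approximation L^2/(2r)
print(8.0 * v / (2.0 * half_chord)**2, 1.0 / r)  # 8v / chord^2 approaches the curvature 1/r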
https://en.wikipedia.org/wiki/Versine
Video feedbackis the process that starts and continues when avideo camerais pointed at its own playbackvideo monitor. The loop delay from camera to display back to camera is at least onevideoframe time, due to the input and output scanning processes; it can be more if there is more processing in the loop. First discovered shortly afterCharlie Ginsburginvented the first video recorder forAmpexin 1956, video feedback was considered a nuisance and unwanted noise.[citation needed]Technicians and studio camera operators were chastised for allowing a video camera to see its own monitor as the overload of self-amplified video signal caused significant problems with the 1950s video pickup, often ruining the pickup.[citation needed]It could also causescreen burn-inon television screens and monitors of the time as well, by generating static brightly illuminated display patterns. In the 1960s early examples of video feedback art became introduced into thepsychedelic artscene inNew York City.Nam June Paikis often cited as the firstvideo artist; he had clips of video feedback on display in New York City at theGreenwich Cafein the mid 1960s. Early video feedback works were produced by media artist experimenters on the East and West Coasts of the United States in the late 1960s and early 1970s. Video feedback artistsSteina and Woody Vasulka, withRichard Lowenbergand others, formed The Kitchen, which was located in the kitchen of a broken-down hotel in lowerManhattan; whileSkip Sweeneyand others founded Video Free America inSan Francisco, to nurture their video art and feedback experiments. David Sohnmentions video feedback in his 1970 bookFilm, the Creative Eye. This book was part of the base curriculum forRichard LedererofSt. Paul's SchoolinConcord, New Hampshire, when he made video feedback part of an English curriculum in his 1970s course Creative Eye in Film. Several students in this class participated regularly in the making and recording of video feedback.Sonyhad released the VuMax series of recording video cameras and manually "hand-looped" video tape decks by this time which did two things: it increased the resolution of the video image, which improved picture quality, and it made video tape recording technology available to the general public for the first time and allowed for such video experimentation by anyone. During the 1980s and into the 1990s video technology became enhanced and evolved into high quality, high definition video recording.Michael C. Andersengenerated the first known mathematical formula of the video feedback process,[1]and he has also generated aMendeleev's square to show the gradual progressive formulaic change of the video image as certain parameters are adjusted.[2] In the 1990s the rave scene and a social return to art of a more psychedelic nature brought back displays of video feedback on large disco dance floor video screens around the world. There are filters for non-linear video editors that often have video feedback as the filter description, or as a setting on a filter. These filter types either mimic or directly utilize video feedback for its result effect and can be recognized by its vortex, phantasmagoric manipulation of the original recorded image. Many artists have used optical feedback. A famous example isQueen'smusic video for "Bohemian Rhapsody" (1975). The effect (in this simple case) can be compared to looking at oneself between two mirrors. 
Other videos also make use of variations of video feedback. This technique—under the name "howl-around"—was employed for the opening titles sequence of the British science fiction series Doctor Who,[3] which used it from 1963 to 1973. Initially this was in black and white; it was redone in 1967 to showcase the show's new 625-line broadcast resolution and to feature the Doctor's face (Patrick Troughton at that time). It was redone again, in colour this time, in 1970. The next title sequence for the show, which debuted in 1973, abandoned this technique in favour of slit-scan photography. An example of optical feedback in science is the optical cavity found in almost every laser, which typically consists of two mirrors between which light is amplified. In the late 1990s it was found that so-called unstable-cavity lasers produce light beams whose cross-section presents a fractal pattern.[4] Optical feedback in science is often closely related to video feedback, so an understanding of video feedback can be useful for other applications of optical feedback. Video feedback has been used to explain the essence of the fractal structure of unstable-cavity laser beams.[5] Video feedback is also useful as an experimental-mathematics tool. Examples of its use include the making of fractal patterns using multiple monitors, and multiple images produced using mirrors. Optical feedback is also found in the image intensifier tube and its variants. Here the feedback is usually an undesirable phenomenon, where the light generated by the phosphor screen "feeds back" to the photocathode, causing the tube to oscillate and ruining the image. This is typically suppressed by an aluminium reflective screen deposited on the back of the phosphor screen, or by incorporating a microchannel plate detector. Optical feedback has been used experimentally in these tubes to amplify an image, in the manner of the cavity laser, but this technique has had limited use. Optical feedback has also been experimented with as an electron source, since a photocathode–phosphor cell will 'latch' when triggered, providing a steady stream of electrons. Douglas Hofstadter discusses video feedback in his book I Am a Strange Loop about the human mind and consciousness. He devotes a chapter to describing his experiments with video feedback, writing: At some point during the session, I accidentally stuck my hand momentarily in front of the camera's lens. Of course the screen went all dark, but when I removed my hand, the previous pattern did not just pop right back onto the screen, as expected. Instead I saw a different pattern on the screen, but this pattern, unlike anything I'd seen before, was not stationary.[6]
https://en.wikipedia.org/wiki/Video_feedback
Multi-factor authentication(MFA;two-factor authentication, or2FA) is anelectronic authenticationmethod in which a user is granted access to awebsiteorapplicationonly after successfully presenting two or more distinct types of evidence (orfactors) to anauthenticationmechanism. MFA protectspersonal data—which may include personal identification orfinancial assets—from being accessed by an unauthorized third party that may have been able to discover, for example, a singlepassword. Usage of MFA has increased in recent years. Security issues which can cause the bypass of MFA arefatigue attacks,phishingandSIM swapping.[1] Accounts with MFA enabled are significantly less likely to be compromised.[2] Authentication takes place when someone tries tolog intoa computer resource (such as acomputer network, device, or application). The resource requires the user to supply theidentityby which the user is known to the resource, along with evidence of the authenticity of the user's claim to that identity. Simple authentication requires only one such piece of evidence (factor), typically a password, or occasionally multiple pieces of evidence all of the same type, as with a credit card number and a card verification code (CVC). For additional security, the resource may require more than one factor—multi-factor authentication, or two-factor authentication in cases where exactly two types of evidence are to be supplied.[3] The use of multiple authentication factors to prove one's identity is based on the premise that an unauthorized actor is unlikely to be able to supply all of the factors required for access. If, in an authentication attempt, at least one of the components is missing or supplied incorrectly, the user's identity is not established with sufficient certainty and access to the asset (e.g., a building, or data) being protected by multi-factor authentication then remains blocked. The authentication factors of a multi-factor authentication scheme may include:[4] An example of two-factor authentication is the withdrawing of money from anATM; only the correct combination of a physically presentbank card(something the user possesses) and a PIN (something the user knows) allows the transaction to be carried out. Two other examples are to supplement a user-controlled password with aone-time password(OTP) or code generated or received by anauthenticator(e.g. asecurity tokenorsmartphone) that only the user possesses.[5] Anauthenticatorapp enables two-factor authentication in a different way, by showing a randomly generated and constantly refreshing code, rather than sending anSMSor using another method.[6]This code is aTime-based one-time password(aTOTP)), and the authenticator app contains the key material that allows the generation of these codes. Knowledge factors ("something only the user knows") are a form of authentication. In this form, the user is required to prove knowledge of a secret in order to authenticate. A password is a secret word or string of characters that is used for user authentication. This is the most commonly used mechanism of authentication.[4]Many multi-factor authentication techniques rely on passwords as one factor of authentication. Variations include both longer ones formed from multiple words (apassphrase) and the shorter, purely numeric, PIN commonly used forATMaccess. Traditionally, passwords are expected to bememorized, but can also be written down on a hidden paper or text file. 
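The time-based one-time password (TOTP) mentioned above can be sketched in a few lines. The following Python example is a minimal, illustrative implementation of the standard TOTP scheme (RFC 6238: HMAC-SHA1 over a 30-second counter, truncated as in RFC 4226); the secret value and function name are hypothetical, and real authenticator apps provision a base32-encoded secret via a QR code.

```python
# Minimal sketch of how an authenticator app derives a time-based one-time password.
import hmac
import hashlib
import struct
import time

def totp(secret, step=30, digits=6, now=None):
    """Return the current time-based one-time password for a shared secret."""
    counter = int((time.time() if now is None else now) // step)
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()  # HMAC-SHA1, as in RFC 4226
    offset = digest[-1] & 0x0F                             # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Both the server and the user's phone hold the same secret, so both can derive
# the same 6-digit code for the current 30-second window.
shared_secret = b"hypothetical-shared-secret"
print(totp(shared_secret))
```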
Possession factors ("something only the user has") have been used for authentication for centuries, in the form of a key to a lock. The basic principle is that the key embodies a secret that is shared between the lock and the key, and the same principle underlies possession factor authentication in computer systems. Asecurity tokenis an example of a possession factor. Disconnected tokenshave no connections to the client computer. They typically use a built-in screen to display the generated authentication data, which is manually typed in by the user. This type of token mostly uses aOTPthat can only be used for that specific session.[7] Connected tokensaredevicesthat arephysicallyconnected to the computer to be used. Those devices transmit data automatically.[8]There are a number of different types, including USB tokens,smart cardsandwireless tags.[8]Increasingly,FIDO2capable tokens, supported by theFIDO Allianceand theWorld Wide Web Consortium(W3C), have become popular with mainstream browser support beginning in 2015. Asoftware token(a.k.a.soft token) is a type of two-factor authentication security device that may be used to authorize the use of computer services. Software tokens are stored on a general-purpose electronic device such as adesktop computer,laptop,PDA, ormobile phoneand can be duplicated. (Contrasthardware tokens, where the credentials are stored on a dedicated hardware device and therefore cannot be duplicated, absent physical invasion of the device). A soft token may not be a device the user interacts with. Typically an X.509v3 certificate is loaded onto the device and stored securely to serve this purpose.[citation needed] Multi-factor authenticationcan also be applied in physical security systems. These physical security systems are known and commonly referred to as access control. Multi-factor authentication is typically deployed in access control systems through the use, firstly, of a physical possession (such as a fob,keycard, orQR-codedisplayed on a device) which acts as the identification credential, and secondly, a validation of one's identity such as facial biometrics or retinal scan. This form of multi-factor authentication is commonly referred to as facial verification or facial authentication. Inherent factors ("something the user is"), are factors associated with the user, and are usuallybiometricmethods, includingfingerprint,face,[9]voice, oririsrecognition. Behavioral biometrics such askeystroke dynamicscan also be used. Increasingly, a fourth factor is coming into play involving the physical location of the user. While hard wired to the corporate network, a user could be allowed to login using only a pin code. Whereas if the user was off the network or working remotely, a more secure MFA method such as entering a code from a soft token as well could be required. Adapting the type of MFA method and frequency to a users' location will enable you to avoid risks common to remote working.[10] Systems for network admission control work in similar ways where the level of network access can be contingent on the specific network a device is connected to, such asWi-Fivs wired connectivity. 
This also allows a user to move between offices and dynamically receivethe same level of network access[clarification needed]in each.[citation needed] Two-factor authentication over text message was developed as early as 1996, when AT&T described a system for authorizing transactions based on an exchange of codes over two-way pagers.[11][12] Many multi-factor authentication vendors offer mobile phone-based authentication. Some methods include push-based authentication,QR code-based authentication, one-time password authentication (event-based and time-based), and SMS-based verification. SMS-based verification suffers from some security concerns. Phones can be cloned, apps can run on several phones and cell-phone maintenance personnel can read SMS texts. Not least, cell phones can be compromised in general, meaning the phone is no longer something only the user has. The major drawback of authentication including something the user possesses is that the user must carry around the physical token (the USB stick, the bank card, the key or similar), practically at all times. Loss and theft are risks. Many organizations forbid carrying USB and electronic devices in or out of premises owing tomalwareand data theft risks, and most important machines do not have USB ports for the same reason. Physical tokens usually do not scale, typically requiring a new token for each new account and system. Procuring and subsequently replacing tokens of this kind involves costs. In addition, there are inherent conflicts and unavoidable trade-offs between usability and security.[13] Two-step authentication involvingmobile phonesandsmartphonesprovides an alternative to dedicated physical devices. To authenticate, people can use their personal access codes to the device (i.e. something that only the individual user knows) plus a one-time-valid, dynamic passcode, typically consisting of 4 to 6 digits. The passcode can be sent to their mobile device[3]bySMSor can be generated by a one-time passcode-generator app. In both cases, the advantage of using a mobile phone is that there is no need for an additional dedicated token, as users tend to carry theirmobile devicesaround at all times. Notwithstanding the popularity of SMS verification, security advocates have publicly criticized SMS verification,[14]and in July 2016, a United StatesNISTdraft guideline proposed deprecating it as a form of authentication.[15]A year later NIST reinstated SMS verification as a valid authentication channel in the finalized guideline.[16] As early as 2011, Duo Security was offeringpush notificationsfor MFA via a mobile app.[17]In 2016 and 2017 respectively, both Google and Apple started offering user two-step authentication with push notifications[4]as an alternative method.[18][19] Security of mobile-delivered security tokens fully depends on the mobile operator's operational security and can be easily breached by wiretapping orSIM cloningby national security agencies.[20] Advantages: Disadvantages: ThePayment Card Industry (PCI)Data Security Standard, requirement 8.3, requires the use of MFA for all remote network access that originates from outside the network to a Card Data Environment (CDE).[24]Beginning with PCI-DSS version 3.2, the use of MFA is required for all administrative access to the CDE, even if the user is within a trusted network. 
The secondPayment Services Directiverequires "strong customer authentication" on most electronic payments in theEuropean Economic Areasince September 14, 2019.[25] In India, theReserve Bank of Indiamandated two-factor authentication for all online transactions made using a debit or credit card using either a password or a one-time password sent overSMS. This requirement was removed in 2016 for transactions up to ₹2,000 after opting-in with the issuing bank.[26]Vendors such asUberhave been mandated by the bank to amend their payment processing systems in compliance with this two-factor authentication rollout.[27][28][29] Details for authentication for federal employees and contractors in the U.S. are defined in Homeland Security Presidential Directive 12 (HSPD-12).[30] IT regulatory standards for access to federal government systems require the use of multi-factor authentication to access sensitive IT resources, for example when logging on to network devices to perform administrative tasks[31]and when accessing any computer using a privileged login.[32] NISTSpecial Publication 800-63-3 discusses various forms of two-factor authentication and provides guidance on using them in business processes requiring different levels of assurance.[33] In 2005, the United States'Federal Financial Institutions Examination Councilissued guidance for financial institutions recommending financial institutions conduct risk-based assessments, evaluate customer awareness programs, and develop security measures to reliably authenticate customers remotely accessingonline financial services, officially recommending the use of authentication methods that depend on more than one factor (specifically, what a user knows, has, and is) to determine the user's identity.[34]In response to the publication, numerous authentication vendors began improperly promoting challenge-questions, secret images, and other knowledge-based methods as "multi-factor" authentication. Due to the resulting confusion and widespread adoption of such methods, on August 15, 2006, the FFIEC published supplemental guidelines—which state that by definition, a "true" multi-factor authentication system must use distinct instances of the three factors of authentication it had defined, and not just use multiple instances of a single factor.[35] According to proponents, multi-factor authentication could drastically reduce the incidence of onlineidentity theftand other onlinefraud, because the victim's password would no longer be enough to give a thief permanent access to their information. However, many multi-factor authentication approaches remain vulnerable tophishing,[36]man-in-the-browser, andman-in-the-middle attacks.[37]Two-factor authentication in web applications are especially susceptible to phishing attacks, particularly in SMS and e-mails, and, as a response, many experts advise users not to share their verification codes with anyone,[38]and many web application providers will place an advisory in an e-mail or SMS containing a code.[39] Multi-factor authentication may be ineffective[40]against modern threats, like ATM skimming, phishing, and malware.[41][vague][needs update?] In May 2017,O2 Telefónica, a German mobile service provider, confirmed that cybercriminals had exploitedSS7vulnerabilities to bypass SMS based two-step authentication to do unauthorized withdrawals from users' bank accounts. The criminals firstinfectedthe account holder's computers in an attempt to steal their bank account credentials and phone numbers. 
Then the attackers purchased access to a fake telecom provider and set up a redirect for the victim's phone number to a handset controlled by them. Finally, the attackers logged into victims' online bank accounts and requested that the money in the accounts be withdrawn to accounts owned by the criminals. SMS passcodes were routed to phone numbers controlled by the attackers and the criminals transferred the money out.[42] An increasingly common approach to defeating MFA is to bombard the user with many requests to accept a log-in, until the user eventually succumbs to the volume of requests and accepts one.[43] This is called a multi-factor authentication fatigue attack (also MFA fatigue attack or MFA bombing) and makes use of social engineering.[44][45][46] When MFA applications are configured to send push notifications to end users, an attacker can send a flood of login attempts in the hope that a user will click on accept at least once.[44] In 2022, Microsoft deployed a mitigation against MFA fatigue attacks with their authenticator app.[47] In September 2022, Uber's security was breached by a member of Lapsus$ using a multi-factor fatigue attack.[48][49] On March 24, 2023, YouTuber Linus Sebastian declared on the Linus Tech Tips channel on the YouTube platform that he had suffered a multi-factor authentication fatigue attack.[50] In early 2024, a small percentage of Apple consumers experienced an MFA fatigue attack caused by a hacker who bypassed the rate limit and Captcha on Apple's "Forgot Password" page. Many multi-factor authentication products require users to deploy client software to make multi-factor authentication systems work. Some vendors have created separate installation packages for network login, Web access credentials, and VPN connection credentials. For such products, there may be four or five different software packages to push down to the client PC in order to make use of the token or smart card. This translates to four or five packages on which version control has to be performed, and four or five packages to check for conflicts with business applications. If access can be operated using web pages, it is possible to limit the overheads outlined above to a single application. With other multi-factor authentication technology such as hardware token products, no software needs to be installed by end users.[citation needed] Some studies have shown that poorly implemented MFA recovery procedures can introduce new vulnerabilities that attackers may exploit.[51] There are drawbacks to multi-factor authentication that are keeping many approaches from becoming widespread. Some users have difficulty keeping track of a hardware token or USB plug. Many users do not have the technical skills needed to install a client-side software certificate by themselves. Generally, multi-factor solutions require additional investment for implementation and costs for maintenance. Most hardware token-based systems are proprietary, and some vendors charge an annual fee per user. Deployment of hardware tokens is logistically challenging. Hardware tokens may get damaged or lost, and issuance of tokens in large industries such as banking or even within large enterprises needs to be managed. In addition to deployment costs, multi-factor authentication often carries significant additional support costs.[citation needed] A 2008 survey[52] of over 120 U.S. credit unions by the Credit Union Journal reported on the support costs associated with two-factor authentication.
In their report,software certificates and software toolbar approaches[clarification needed]were reported to have the highest support costs. Research into deployments of multi-factor authentication schemes[53]has shown that one of the elements that tend to impact the adoption of such systems is the line of business of the organization that deploys the multi-factor authentication system. Examples cited include the U.S. government, which employs an elaborate system of physical tokens (which themselves are backed by robustPublic Key Infrastructure), as well as private banks, which tend to prefer multi-factor authentication schemes for their customers that involve more accessible, less expensive means of identity verification, such as an app installed onto a customer-owned smartphone. Despite the variations that exist among available systems that organizations may have to choose from, once a multi-factor authentication system is deployed within an organization, it tends to remain in place, as users invariably acclimate to the presence and use of the system and embrace it over time as a normalized element of their daily process of interaction with their relevant information system. While the perception is that multi-factor authentication is within the realm of perfect security, Roger Grimes writes[54]that if not properly implemented and configured, multi-factor authentication can in fact be easily defeated. In 2013,Kim Dotcomclaimed to have invented two-factor authentication in a 2000 patent,[55]and briefly threatened to sue all the major web services. However, the European Patent Office revoked his patent[56]in light of an earlier 1998 U.S. patent held by AT&T.[57]
https://en.wikipedia.org/wiki/Multi-factor_authentication
In probability theory and ergodic theory, a Markov operator is an operator on a certain function space that conserves the mass (the so-called Markov property). If the underlying measurable space is topologically sufficiently rich, then the Markov operator admits a kernel representation. Markov operators can be linear or non-linear. Closely related to Markov operators is the Markov semigroup.[1] The definition of Markov operators is not entirely consistent in the literature. Markov operators are named after the Russian mathematician Andrey Markov. Let {\displaystyle (E,{\mathcal {F}})} be a measurable space and {\displaystyle V} a set of real, measurable functions {\displaystyle f:(E,{\mathcal {F}})\to (\mathbb {R} ,{\mathcal {B}}(\mathbb {R} ))}. A linear operator {\displaystyle P} on {\displaystyle V} is a Markov operator if the following is true[1]: 9–12 Some authors define the operators on the Lp spaces as {\displaystyle P:L^{p}(X)\to L^{p}(Y)} and replace the first condition (bounded, measurable functions on such) with the property[2][3] Let {\displaystyle {\mathcal {P}}=\{P_{t}\}_{t\geq 0}} be a family of Markov operators defined on the set of bounded, measurable functions on {\displaystyle (E,{\mathcal {F}})}. Then {\displaystyle {\mathcal {P}}} is a Markov semigroup when the following is true[1]: 12 Each Markov semigroup {\displaystyle {\mathcal {P}}=\{P_{t}\}_{t\geq 0}} induces a dual semigroup {\displaystyle (P_{t}^{*})_{t\geq 0}} through {\displaystyle \int _{E}P_{t}f\,\mathrm {d} \mu =\int _{E}f\,\mathrm {d} \left(P_{t}^{*}\mu \right)} for all bounded, measurable {\displaystyle f}. If {\displaystyle \mu } is invariant under {\displaystyle {\mathcal {P}}} then {\displaystyle P_{t}^{*}\mu =\mu }. Let {\displaystyle \{P_{t}\}_{t\geq 0}} be a family of bounded, linear Markov operators on the Hilbert space {\displaystyle L^{2}(\mu )}, where {\displaystyle \mu } is an invariant measure. The infinitesimal generator {\displaystyle L} of the Markov semigroup {\displaystyle {\mathcal {P}}=\{P_{t}\}_{t\geq 0}} is defined as {\displaystyle Lf=\lim _{t\downarrow 0}{\frac {P_{t}f-f}{t}},} and the domain {\displaystyle D(L)} is the {\displaystyle L^{2}(\mu )}-space of all such functions where this limit exists and is in {\displaystyle L^{2}(\mu )} again.[1]: 18 [4] The carré du champ operator {\displaystyle \Gamma } measures how far {\displaystyle L} is from being a derivation. A Markov operator {\displaystyle P_{t}} has a kernel representation with respect to some probability kernel {\displaystyle p_{t}(x,A)}, if the underlying measurable space {\displaystyle (E,{\mathcal {F}})} has the following sufficient topological properties: If one now defines a σ-finite measure on {\displaystyle (E,{\mathcal {F}})} then it is possible to prove that every Markov operator {\displaystyle P} admits such a kernel representation with respect to {\displaystyle k(x,\mathrm {d} y)}.[1]: 7–13
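On a finite state space the abstract definitions above reduce to familiar linear algebra: a linear Markov operator is a row-stochastic kernel matrix acting on functions, and its dual acts on measures. The following Python/NumPy sketch is purely illustrative (the particular matrix and vectors are assumptions, not from the source) and checks mass conservation, positivity, and the dual action.

```python
# Finite-state sketch: (Pf)(x) = sum_y P[x, y] f(y); the dual acts on measures as mu P.
import numpy as np

P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])       # rows sum to 1 (mass conservation)

f = np.array([1.0, 1.0, 1.0])         # the constant function 1
print(P @ f)                          # -> [1. 1. 1.]  (P preserves constants)

g = np.array([0.0, 2.0, 5.0])         # a non-negative function
print(P @ g)                          # stays non-negative (positivity)

mu = np.array([0.5, 0.3, 0.2])        # a probability measure on the three states
print(mu @ P, (mu @ P).sum())         # dual action; total mass is still 1
```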
https://en.wikipedia.org/wiki/Markov_operator
Instochastic processes,chaos theoryandtime series analysis,detrended fluctuation analysis(DFA) is a method for determining the statisticalself-affinityof a signal. It is useful for analysingtime seriesthat appear to belong-memoryprocesses (divergingcorrelation time, e.g. power-law decayingautocorrelation function) or1/f noise. The obtained exponent is similar to theHurst exponent, except that DFA may also be applied to signals whose underlying statistics (such as mean and variance) or dynamics arenon-stationary(changing with time). It is related to measures based upon spectral techniques such asautocorrelationandFourier transform. Penget al. introduced DFA in 1994 in a paper that has been cited over 3,000 times as of 2022[1]and represents an extension of the (ordinary)fluctuation analysis(FA), which is affected by non-stationarities. Systematic studies of the advantages and limitations of the DFA method were performed by PCh Ivanov et al. in a series of papers focusing on the effects of different types of nonstationarities in real-world signals: (1) types of trends;[2](2) random outliers/spikes, noisy segments, signals composed of parts with different correlation;[3](3) nonlinear filters;[4](4) missing data;[5](5) signal coarse-graining procedures[6]and comparing DFA performance with moving average techniques[7](cumulative citations > 4,000).Datasetsgenerated to test DFA are available on PhysioNet.[8] Given: atime seriesx1,x2,...,xN{\displaystyle x_{1},x_{2},...,x_{N}}. Compute its average value⟨x⟩=1N∑t=1Nxt{\displaystyle \langle x\rangle ={\frac {1}{N}}\sum _{t=1}^{N}x_{t}}. Sum it into a processXt=∑i=1t(xi−⟨x⟩){\displaystyle X_{t}=\sum _{i=1}^{t}(x_{i}-\langle x\rangle )}. This is thecumulative sum, orprofile, of the original time series. For example, the profile of ani.i.d.white noiseis a standardrandom walk. Select a setT={n1,...,nk}{\displaystyle T=\{n_{1},...,n_{k}\}}of integers, such thatn1<n2<⋯<nk{\displaystyle n_{1}<n_{2}<\cdots <n_{k}}, the smallestn1≈4{\displaystyle n_{1}\approx 4}, the largestnk≈N{\displaystyle n_{k}\approx N}, and the sequence is roughly distributed evenly in log-scale:log⁡(n2)−log⁡(n1)≈log⁡(n3)−log⁡(n2)≈⋯{\displaystyle \log(n_{2})-\log(n_{1})\approx \log(n_{3})-\log(n_{2})\approx \cdots }. In other words, it is approximately ageometric progression.[9] For eachn∈T{\displaystyle n\in T}, divide the sequenceXt{\displaystyle X_{t}}into consecutive segments of lengthn{\displaystyle n}. Within each segment, compute theleast squaresstraight-line fit (thelocal trend). LetY1,n,Y2,n,...,YN,n{\displaystyle Y_{1,n},Y_{2,n},...,Y_{N,n}}be the resulting piecewise-linear fit. Compute theroot-mean-square deviationfrom the local trend (local fluctuation):F(n,i)=1n∑t=in+1in+n(Xt−Yt,n)2.{\displaystyle F(n,i)={\sqrt {{\frac {1}{n}}\sum _{t=in+1}^{in+n}\left(X_{t}-Y_{t,n}\right)^{2}}}.}And their root-mean-square is the total fluctuation: (IfN{\displaystyle N}is not divisible byn{\displaystyle n}, then one can either discard the remainder of the sequence, or repeat the procedure on the reversed sequence, then take their root-mean-square.[10]) Make thelog-log plotlog⁡n−log⁡F(n){\displaystyle \log n-\log F(n)}.[11][12] A straight line of slopeα{\displaystyle \alpha }on the log-log plot indicates a statisticalself-affinityof formF(n)∝nα{\displaystyle F(n)\propto n^{\alpha }}. SinceF(n){\displaystyle F(n)}monotonically increases withn{\displaystyle n}, we always haveα>0{\displaystyle \alpha >0}. 
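The procedure just described (profile, geometric window sizes, per-window linear fit, RMS fluctuation, log-log slope) can be written compactly. The following Python sketch is a minimal DFA1 implementation under the assumptions stated in the comments; the window choices and function name are illustrative, and any leftover samples that do not fill a window are simply discarded rather than handled by the reversed-sequence variant.

```python
# Minimal sketch of DFA1 (linear detrending in each non-overlapping window).
import numpy as np

def dfa(x, window_sizes):
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())              # cumulative sum ("profile") X_t
    fluctuations = []
    for n in window_sizes:
        n_seg = len(profile) // n
        segs = profile[:n_seg * n].reshape(n_seg, n)
        t = np.arange(n)
        f2 = []
        for seg in segs:
            a, b = np.polyfit(t, seg, 1)           # local linear trend Y_t
            f2.append(np.mean((seg - (a * t + b)) ** 2))
        fluctuations.append(np.sqrt(np.mean(f2)))  # total fluctuation F(n)
    # slope of log F(n) versus log n is the scaling exponent alpha
    alpha = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)[0]
    return alpha

rng = np.random.default_rng(0)
sizes = np.unique(np.geomspace(4, 1000, 20).astype(int))
print(dfa(rng.standard_normal(10_000), sizes))     # ~0.5 for white noise
```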
The scaling exponent {\displaystyle \alpha } is a generalization of the Hurst exponent, with the precise value giving information about the series self-correlations: Because the expected displacement in an uncorrelated random walk of length N grows like {\displaystyle {\sqrt {N}}}, an exponent of {\displaystyle {\tfrac {1}{2}}} would correspond to uncorrelated white noise. When the exponent is between 0 and 1, the result is fractional Gaussian noise. Though the DFA algorithm always produces a positive number {\displaystyle \alpha } for any time series, this does not necessarily imply that the time series is self-similar. Self-similarity requires the log-log graph to be sufficiently linear over a wide range of {\displaystyle n}. Furthermore, a combination of techniques including maximum likelihood estimation (MLE), rather than least squares, has been shown to better approximate the scaling, or power-law, exponent.[13] Also, there are many scaling exponent-like quantities that can be measured for a self-similar time series, including the divider dimension and Hurst exponent. Therefore, the DFA scaling exponent {\displaystyle \alpha } is not a fractal dimension, and does not have certain desirable properties that the Hausdorff dimension has, though in certain special cases it is related to the box-counting dimension for the graph of a time series. The standard DFA algorithm given above removes a linear trend in each segment. If we remove a degree-n polynomial trend in each segment, it is called DFAn, or higher-order DFA.[14] Since {\displaystyle X_{t}} is a cumulative sum of {\displaystyle x_{t}-\langle x\rangle }, a linear trend in {\displaystyle X_{t}} is a constant trend in {\displaystyle x_{t}-\langle x\rangle }, which is a constant trend in {\displaystyle x_{t}} (visible as short sections of "flat plateaus"). In this regard, DFA1 removes the mean from segments of the time series {\displaystyle x_{t}} before quantifying the fluctuation. Similarly, a degree n trend in {\displaystyle X_{t}} is a degree (n−1) trend in {\displaystyle x_{t}}. For example, DFA2 removes linear trends from segments of the time series {\displaystyle x_{t}} before quantifying the fluctuation, DFA3 removes parabolic trends from {\displaystyle x_{t}}, and so on. The Hurst R/S analysis removes constant trends in the original sequence and thus, in its detrending, it is equivalent to DFA1. DFA can be generalized by computing {\displaystyle F_{q}(n)=\left({\frac {1}{N/n}}\sum _{i=1}^{N/n}F(n,i)^{q}\right)^{1/q}} and then making the log-log plot of {\displaystyle \log n-\log F_{q}(n)}. If there is a strong linearity in the plot of {\displaystyle \log n-\log F_{q}(n)}, then that slope is {\displaystyle \alpha (q)}.[15] DFA is the special case where {\displaystyle q=2}. Multifractal systems scale as a function {\displaystyle F_{q}(n)\propto n^{\alpha (q)}}. Essentially, the scaling exponents need not be independent of the scale of the system. In particular, DFA measures the scaling behavior of the second-moment fluctuations. Kantelhardt et al. intended this scaling exponent as a generalization of the classical Hurst exponent. The classical Hurst exponent corresponds to {\displaystyle H=\alpha (2)} for stationary cases, and {\displaystyle H=\alpha (2)-1} for nonstationary cases.[15][16][17] The DFA method has been applied to many systems, e.g.
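For the q-order generalisation just quoted, a small helper can be bolted onto the DFA sketch above: it takes the per-segment RMS values F(n, i) for a fixed window size n (the square roots of the f2 values computed inside the loop of that sketch) and returns F_q(n). This is an illustrative sketch only; it does not handle the q = 0 case, which requires a separate logarithmic-average formula.

```python
# Sketch of the q-order fluctuation F_q(n) from per-segment RMS values F(n, i);
# q = 2 reproduces the ordinary DFA fluctuation F(n).  q must be nonzero here.
import numpy as np

def fq(segment_fluctuations, q):
    F = np.asarray(segment_fluctuations, dtype=float)
    return np.mean(F ** q) ** (1.0 / q)
```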
DNA sequences;[18][19] heartbeat dynamics in sleep and wake,[20] sleep stages,[21][22] rest and exercise,[23] and across circadian phases;[24][25] locomotor gait and wrist dynamics,[26][27][28][29] neuronal oscillations,[17] speech pathology detection,[30] and animal behavior pattern analysis.[31][32] In the case of power-law decaying auto-correlations, the correlation function decays with an exponent {\displaystyle \gamma }: {\displaystyle C(L)\sim L^{-\gamma }}. In addition, the power spectrum decays as {\displaystyle P(f)\sim f^{-\beta }}. The three exponents are related by:[18] The relations can be derived using the Wiener–Khinchin theorem. The relation of DFA to the power spectrum method has been well studied.[33] Thus, {\displaystyle \alpha } is tied to the slope of the power spectrum {\displaystyle \beta } and is used to describe the color of noise by the relationship {\displaystyle \alpha =(\beta +1)/2}. For fractional Gaussian noise (FGN), we have {\displaystyle \beta \in [-1,1]}, and thus {\displaystyle \alpha \in [0,1]}, and {\displaystyle \beta =2H-1}, where {\displaystyle H} is the Hurst exponent; {\displaystyle \alpha } for FGN is equal to {\displaystyle H}.[34] For fractional Brownian motion (FBM), we have {\displaystyle \beta \in [1,3]}, and thus {\displaystyle \alpha \in [1,2]}, and {\displaystyle \beta =2H+1}, where {\displaystyle H} is the Hurst exponent; {\displaystyle \alpha } for FBM is equal to {\displaystyle H+1}.[16] In this context, FBM is the cumulative sum or the integral of FGN; thus, the exponents of their power spectra differ by 2.
https://en.wikipedia.org/wiki/Detrended_fluctuation_analysis
Incomputeroperating systems, aprocess(ortask) maywaitfor another process to complete its execution. In most systems, aparent processcan create an independently executingchild process. The parent process may then issue awaitsystem call, which suspends the execution of the parent process while the child executes. When the child process terminates, it returns anexit statusto the operating system, which is then returned to the waiting parent process. The parent process then resumes execution.[1] Modern operating systems also provide system calls that allow a process'sthreadto create other threads and wait for them to terminate ("join" them) in a similar fashion. An operating system may provide variations of thewaitcall that allow a process to wait for any of its child processes toexit, or to wait for a single specific child process (identified by itsprocess ID) to exit. Some operating systems issue asignal(SIGCHLD) to the parent process when a child process terminates, notifying the parent process and allowing it to retrieve the child process's exit status. Theexit statusreturned by a child process typically indicates whether the process terminated normally orabnormally. For normal termination, this status also includes the exit code (usually an integer value) that the process returned to the system. During the first 20 years of UNIX, only the low 8 bits of the exit code were available to the waiting parent. In 1989 withSVR4,[citation needed]a new callwaitidwas introduced that returns all bits from theexitcall in a structure calledsiginfo_tin the structure membersi_status.[citation needed]Waitid has been a mandatory part of the POSIX standard since 2001. When a child process terminates, it becomes azombie process,and continues to exist as an entry in the systemprocess tableeven though it is no longer an actively executing program. Under normal operation it will typically be immediately waited on by its parent, and then reaped by the system, reclaiming the resource (the process table entry). If a child is not waited on by its parent, it continues to consume this resource indefinitely, and thus is aresource leak. Such situations are typically handled with a special "reaper" process[citation needed]that locates zombies and retrieves their exit status, allowing the operating system to then deallocate their resources. Conversely, a child process whose parent process terminates before it does becomes anorphan process. Such situations are typically handled with a special "root" (or "init") process, which is assigned as the new parent of a process when its parent process exits. This special process detects when an orphan process terminates and then retrieves its exit status, allowing the system to deallocate the terminated child process. If a child process receives a signal, a waiting parent will then continue execution leaving an orphan process behind.[citation needed]Hence it is sometimes needed to check the argument set by wait, waitpid or waitid and, in the case that WIFSIGNALED is true, wait for the child process again to deallocate resources.[citation needed]
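The parent/child wait pattern described above can be demonstrated with Python's thin wrappers around the underlying POSIX system calls. The sketch below is illustrative and Unix-only (os.fork is not available on Windows); the chosen exit code is arbitrary.

```python
# Minimal sketch of fork + wait: the parent suspends until the child exits,
# then inspects the returned exit status.
import os

pid = os.fork()                      # create a child process (POSIX only)
if pid == 0:
    # child: do some work, then terminate with exit code 7
    os._exit(7)
else:
    # parent: waitpid suspends execution until the child terminates
    child_pid, status = os.waitpid(pid, 0)
    if os.WIFEXITED(status):
        print("child", child_pid, "exited normally with code", os.WEXITSTATUS(status))
    elif os.WIFSIGNALED(status):
        # the child was terminated by a signal; the status reports which one
        print("child was terminated by signal", os.WTERMSIG(status))
```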
https://en.wikipedia.org/wiki/Wait_(system_call)
Anormal modeof adynamical systemis a pattern of motion in which all parts of the system movesinusoidallywith the same frequency and with a fixed phase relation. The free motion described by the normal modes takes place at fixed frequencies. These fixed frequencies of the normal modes of a system are known as itsnatural frequenciesorresonant frequencies. A physical object, such as a building, bridge, or molecule, has a set of normal modes and their natural frequencies that depend on its structure, materials and boundary conditions. The most general motion of a linear system is asuperpositionof its normal modes. The modes are normal in the sense that they can move independently, that is to say that an excitation of one mode will never cause motion of a different mode. In mathematical terms, normal modes areorthogonalto each other. In thewave theoryof physics and engineering, amodein adynamical systemis astanding wavestate of excitation, in which all the components of the system will be affected sinusoidally at a fixed frequency associated with that mode. Because no real system can perfectly fit under the standing wave framework, themodeconcept is taken as a general characterization of specific states of oscillation, thus treating the dynamic system in alinearfashion, in which linearsuperpositionof states can be performed. Typical examples include: The concept of normal modes also finds application in other dynamical systems, such asoptics,quantum mechanics,atmospheric dynamicsandmolecular dynamics. Most dynamical systems can be excited in several modes, possibly simultaneously. Each mode is characterized by one or several frequencies,[dubious–discuss]according to the modal variable field. For example, a vibrating rope in 2D space is defined by a single-frequency (1D axial displacement), but a vibrating rope in 3D space is defined by two frequencies (2D axial displacement). For a given amplitude on the modal variable, each mode will store a specific amount of energy because of the sinusoidal excitation. Thenormalordominantmode of a system with multiple modes will be the mode storing the minimum amount of energy for a given amplitude of the modal variable, or, equivalently, for a given stored amount of energy, the dominant mode will be the mode imposing the maximum amplitude of the modal variable. A mode of vibration is characterized by a modal frequency and a mode shape. It is numbered according to the number of half waves in the vibration. For example, if a vibrating beam with both ends pinned displayed a mode shape of half of a sine wave (one peak on the vibrating beam) it would be vibrating in mode 1. If it had a full sine wave (one peak and one trough) it would be vibrating in mode 2. In a system with two or more dimensions, such as the pictured disk, each dimension is given a mode number. Usingpolar coordinates, we have a radial coordinate and an angular coordinate. If one measured from the center outward along the radial coordinate one would encounter a full wave, so the mode number in the radial direction is 2. The other direction is trickier, because only half of the disk is considered due to the anti-symmetric (also calledskew-symmetry) nature of a disk's vibration in the angular direction. Thus, measuring 180° along the angular direction you would encounter a half wave, so the mode number in the angular direction is 1. 
So the mode number of the system is 2–1 or 1–2, depending on which coordinate is considered the "first" and which is considered the "second" coordinate (so it is important to always indicate which mode number matches with each coordinate direction). In linear systems each mode is entirely independent of all other modes. In general all modes have different frequencies (with lower modes having lower frequencies) and different mode shapes. In a one-dimensional system at a given mode the vibration will have nodes, or places where the displacement is always zero. These nodes correspond to points in the mode shape where the mode shape is zero. Since the vibration of a system is given by the mode shape multiplied by a time function, the displacement of the node points remain zero at all times. When expanded to a two dimensional system, these nodes become lines where the displacement is always zero. If you watch the animation above you will see two circles (one about halfway between the edge and center, and the other on the edge itself) and a straight line bisecting the disk, where the displacement is close to zero. In an idealized system these lines equal zero exactly, as shown to the right. In the analysis ofconservative systemswith small displacements from equilibrium, important inacoustics,molecular spectra, andelectrical circuits, the system can be transformed to new coordinates callednormal coordinates.Each normal coordinate corresponds to a single vibrational frequency of the system and the corresponding motion of the system is called the normal mode of vibration.[1]: 332 Consider two equal bodies (not affected by gravity), each ofmassm, attached to three springs, each withspring constantk. They are attached in the following manner, forming a system that is physically symmetric: where the edge points are fixed and cannot move. Letx1(t)denote the horizontaldisplacementof the left mass, andx2(t)denote the displacement of the right mass. Denoting acceleration (the secondderivativeofx(t)with respect to time) asx¨{\textstyle {\ddot {x}}},theequations of motionare: mx¨1=−kx1+k(x2−x1)=−2kx1+kx2mx¨2=−kx2+k(x1−x2)=−2kx2+kx1{\displaystyle {\begin{aligned}m{\ddot {x}}_{1}&=-kx_{1}+k(x_{2}-x_{1})=-2kx_{1}+kx_{2}\\m{\ddot {x}}_{2}&=-kx_{2}+k(x_{1}-x_{2})=-2kx_{2}+kx_{1}\end{aligned}}} Since we expect oscillatory motion of a normal mode (whereωis the same for both masses), we try: x1(t)=A1eiωtx2(t)=A2eiωt{\displaystyle {\begin{aligned}x_{1}(t)&=A_{1}e^{i\omega t}\\x_{2}(t)&=A_{2}e^{i\omega t}\end{aligned}}} Substituting these into the equations of motion gives us: −ω2mA1eiωt=−2kA1eiωt+kA2eiωt−ω2mA2eiωt=kA1eiωt−2kA2eiωt{\displaystyle {\begin{aligned}-\omega ^{2}mA_{1}e^{i\omega t}&=-2kA_{1}e^{i\omega t}+kA_{2}e^{i\omega t}\\-\omega ^{2}mA_{2}e^{i\omega t}&=kA_{1}e^{i\omega t}-2kA_{2}e^{i\omega t}\end{aligned}}} Omitting the exponential factor (because it is common to all terms) and simplifying yields: (ω2m−2k)A1+kA2=0kA1+(ω2m−2k)A2=0{\displaystyle {\begin{aligned}(\omega ^{2}m-2k)A_{1}+kA_{2}&=0\\kA_{1}+(\omega ^{2}m-2k)A_{2}&=0\end{aligned}}} And inmatrixrepresentation: [ω2m−2kkkω2m−2k](A1A2)=0{\displaystyle {\begin{bmatrix}\omega ^{2}m-2k&k\\k&\omega ^{2}m-2k\end{bmatrix}}{\begin{pmatrix}A_{1}\\A_{2}\end{pmatrix}}=0} If the matrix on the left is invertible, the unique solution is the trivial solution(A1,A2) = (x1,x2) = (0, 0). The non trivial solutions are to be found for those values ofωwhereby the matrix on the left issingular; i.e. is not invertible. 
It follows that thedeterminantof the matrix must be equal to 0, so: (ω2m−2k)2−k2=0{\displaystyle (\omega ^{2}m-2k)^{2}-k^{2}=0} Solving forω, the two positive solutions are: ω1=kmω2=3km{\displaystyle {\begin{aligned}\omega _{1}&={\sqrt {\frac {k}{m}}}\\\omega _{2}&={\sqrt {\frac {3k}{m}}}\end{aligned}}} Substitutingω1into the matrix and solving for(A1,A2), yields(1, 1). Substitutingω2results in(1, −1). (These vectors areeigenvectors, and the frequencies areeigenvalues.) The first normal mode is:η→1=(x11(t)x21(t))=c1(11)cos⁡(ω1t+φ1){\displaystyle {\vec {\eta }}_{1}={\begin{pmatrix}x_{1}^{1}(t)\\x_{2}^{1}(t)\end{pmatrix}}=c_{1}{\begin{pmatrix}1\\1\end{pmatrix}}\cos {(\omega _{1}t+\varphi _{1})}} Which corresponds to both masses moving in the same direction at the same time. This mode is called antisymmetric. The second normal mode is: η→2=(x12(t)x22(t))=c2(1−1)cos⁡(ω2t+φ2){\displaystyle {\vec {\eta }}_{2}={\begin{pmatrix}x_{1}^{2}(t)\\x_{2}^{2}(t)\end{pmatrix}}=c_{2}{\begin{pmatrix}1\\-1\end{pmatrix}}\cos {(\omega _{2}t+\varphi _{2})}} This corresponds to the masses moving in the opposite directions, while the center of mass remains stationary. This mode is called symmetric. The general solution is asuperpositionof thenormal modeswherec1,c2,φ1, andφ2are determined by theinitial conditionsof the problem. The process demonstrated here can be generalized and formulated using the formalism ofLagrangian mechanicsorHamiltonian mechanics. Astanding waveis a continuous form of normal mode. In a standing wave, all the space elements (i.e.(x,y,z)coordinates) are oscillating in the samefrequencyand inphase(reaching theequilibriumpoint together), but each has a different amplitude. The general form of a standing wave is: Ψ(t)=f(x,y,z)(Acos⁡(ωt)+Bsin⁡(ωt)){\displaystyle \Psi (t)=f(x,y,z)(A\cos(\omega t)+B\sin(\omega t))} wheref(x,y,z)represents the dependence of amplitude on location and the cosine/sine are the oscillations in time. Physically, standing waves are formed by theinterference(superposition) of waves and their reflections (although one may also say the opposite; that a moving wave is asuperpositionof standing waves). The geometric shape of the medium determines what would be the interference pattern, thus determines thef(x,y,z)form of the standing wave. This space-dependence is called anormal mode. Usually, for problems with continuous dependence on(x,y,z)there is no single or finite number of normal modes, but there are infinitely many normal modes. If the problem is bounded (i.e. it is defined on a finite section of space) there arecountably manynormal modes (usually numberedn= 1, 2, 3, ...). If the problem is not bounded, there is a continuous spectrum of normal modes. In any solid at any temperature, the primary particles (e.g. atoms or molecules) are not stationary, but rather vibrate about mean positions. In insulators the capacity of the solid to store thermal energy is due almost entirely to these vibrations. Many physical properties of the solid (e.g. modulus of elasticity) can be predicted given knowledge of the frequencies with which the particles vibrate. The simplest assumption (by Einstein) is that all the particles oscillate about their mean positions with the same natural frequencyν. This is equivalent to the assumption that all atoms vibrate independently with a frequencyν. Einstein also assumed that the allowed energy states of these oscillations are harmonics, or integral multiples ofhν. 
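Returning to the two equal masses coupled by three springs worked through above, the eigenvalue problem can also be solved numerically, which is how normal modes are usually found for larger systems. The NumPy sketch below is a quick check under arbitrary illustrative values of m and k; the reported mode shapes are defined only up to sign and normalisation.

```python
# Numerical check of the two-mass, three-spring example: solve K a = omega^2 m a.
import numpy as np

m, k = 1.0, 1.0
K = np.array([[2 * k, -k],
              [-k, 2 * k]])              # stiffness matrix from the equations of motion

eigvals, eigvecs = np.linalg.eigh(K / m) # omega^2 values (ascending) and mode shapes
omegas = np.sqrt(eigvals)
print(omegas)        # [1.  1.732...]  i.e. sqrt(k/m) and sqrt(3k/m)
print(eigvecs.T)     # rows proportional to (1, 1) and (1, -1), up to sign/normalisation
```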
The spectrum of waveforms can be described mathematically using a Fourier series of sinusoidal density fluctuations (or thermalphonons). Debye subsequently recognized that each oscillator is intimately coupled to its neighboring oscillators at all times. Thus, by replacing Einstein's identical uncoupled oscillators with the same number of coupled oscillators, Debye correlated the elastic vibrations of a one-dimensional solid with the number of mathematically special modes of vibration of a stretched string (see figure). The pure tone of lowest pitch or frequency is referred to as the fundamental and the multiples of that frequency are called its harmonic overtones. He assigned to one of the oscillators the frequency of the fundamental vibration of the whole block of solid. He assigned to the remaining oscillators the frequencies of the harmonics of that fundamental, with the highest of all these frequencies being limited by the motion of the smallest primary unit. The normal modes of vibration of a crystal are in general superpositions of many overtones, each with an appropriate amplitude and phase. Longer wavelength (low frequency)phononsare exactly those acoustical vibrations which are considered in the theory of sound. Both longitudinal and transverse waves can be propagated through a solid, while, in general, only longitudinal waves are supported by fluids. In thelongitudinal mode, the displacement of particles from their positions of equilibrium coincides with the propagation direction of the wave. Mechanical longitudinal waves have been also referred to ascompression waves. Fortransverse modes, individual particles move perpendicular to the propagation of the wave. According to quantum theory, the mean energy of a normal vibrational mode of a crystalline solid with characteristic frequencyνis: E(ν)=12hν+hνehν/kT−1{\displaystyle E(\nu )={\frac {1}{2}}h\nu +{\frac {h\nu }{e^{h\nu /kT}-1}}} The term(1/2)hνrepresents the "zero-point energy", or the energy which an oscillator will have at absolute zero.E(ν)tends to the classic valuekTat high temperatures E(ν)=kT[1+112(hνkT)2+O(hνkT)4+⋯]{\displaystyle E(\nu )=kT\left[1+{\frac {1}{12}}\left({\frac {h\nu }{kT}}\right)^{2}+O\left({\frac {h\nu }{kT}}\right)^{4}+\cdots \right]} By knowing the thermodynamic formula, (∂S∂E)N,V=1T{\displaystyle \left({\frac {\partial S}{\partial E}}\right)_{N,V}={\frac {1}{T}}} the entropy per normal mode is: S(ν)=∫0TddTE(ν)dTT=E(ν)T−klog⁡(1−e−hνkT){\displaystyle {\begin{aligned}S\left(\nu \right)&=\int _{0}^{T}{\frac {d}{dT}}E\left(\nu \right){\frac {dT}{T}}\\[10pt]&={\frac {E\left(\nu \right)}{T}}-k\log \left(1-e^{-{\frac {h\nu }{kT}}}\right)\end{aligned}}} The free energy is: F(ν)=E−TS=kTlog⁡(1−e−hνkT){\displaystyle F(\nu )=E-TS=kT\log \left(1-e^{-{\frac {h\nu }{kT}}}\right)} which, forkT≫hν, tends to: F(ν)=kTlog⁡(hνkT){\displaystyle F(\nu )=kT\log \left({\frac {h\nu }{kT}}\right)} In order to calculate the internal energy and the specific heat, we must know the number of normal vibrational modes a frequency between the valuesνandν+dν. Allow this number to bef(ν)dν. Since the total number of normal modes is3N, the functionf(ν)is given by: ∫f(ν)dν=3N{\displaystyle \int f(\nu )\,d\nu =3N} The integration is performed over all frequencies of the crystal. Then the internal energyUwill be given by: U=∫f(ν)E(ν)dν{\displaystyle U=\int f(\nu )E(\nu )\,d\nu } Bound states inquantum mechanicsare analogous to modes. The waves in quantum systems are oscillations in probability amplitude rather than material displacement. 
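As a quick numerical illustration of the quoted mean-energy formula for a single normal mode and its approach to the classical value kT, the following Python sketch evaluates E(ν) at a few temperatures; the 1 THz mode frequency is an arbitrary illustrative choice.

```python
# E(nu) = h*nu/2 + h*nu / (exp(h*nu/(k*T)) - 1); the ratio E/(kT) tends to 1 at high T.
import numpy as np

h = 6.62607015e-34       # Planck constant, J*s
kB = 1.380649e-23        # Boltzmann constant, J/K

def mode_energy(nu, T):
    """Mean energy of a normal mode of frequency nu at temperature T."""
    x = h * nu / (kB * T)
    return 0.5 * h * nu + h * nu / np.expm1(x)   # zero-point term + thermal term

nu = 1.0e12              # an illustrative 1 THz lattice mode
for T in (10.0, 100.0, 1000.0):
    print(T, mode_energy(nu, T) / (kB * T))      # ratio approaches 1 as T grows
```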
The frequency of oscillation,f, relates to the mode energy byE=hfwherehis thePlanck constant. Thus a system like an atom consists of a linear combination of modes of definite energy. These energies are characteristic of the particular atom. The (complex) square of the probability amplitude at a point in space gives the probability of measuring an electron at that location. The spatial distribution of this probability is characteristic of the atom.[2]: I49–S5 Normal modes are generated in the Earth from long wavelengthseismic wavesfrom large earthquakes interfering to form standing waves. For an elastic, isotropic, homogeneous sphere, spheroidal, toroidal and radial (or breathing) modes arise. Spheroidal modes only involve P and SV waves (likeRayleigh waves) and depend on overtone numbernand angular orderlbut have degeneracy of azimuthal orderm. Increasinglconcentrates fundamental branch closer to surface and at largelthis tends to Rayleigh waves. Toroidal modes only involve SH waves (likeLove waves) and do not exist in fluid outer core. Radial modes are just a subset of spheroidal modes withl= 0. The degeneracy does not exist on Earth as it is broken by rotation, ellipticity and 3D heterogeneous velocity and density structure. It may be assumed that each mode can be isolated, the self-coupling approximation, or that many modes close in frequencyresonate, the cross-coupling approximation. Self-coupling will solely change the phase velocity and not the number of waves around a great circle, resulting in a stretching or shrinking of standing wave pattern. Modal cross-coupling occurs due to the rotation of the Earth, from aspherical elastic structure, or due to Earth's ellipticity and leads to a mixing of fundamental spheroidal and toroidal modes.
https://en.wikipedia.org/wiki/Normal_mode
TheHeaviside step function, or theunit step function, usually denoted byHorθ(but sometimesu,1or𝟙), is astep functionnamed afterOliver Heaviside, the value of which iszerofor negative arguments andonefor positive arguments. Different conventions concerning the valueH(0)are in use. It is an example of the general class of step functions, all of which can be represented aslinear combinationsof translations of this one. The function was originally developed inoperational calculusfor the solution ofdifferential equations, where it represents a signal that switches on at a specified time and stays switched on indefinitely. Heaviside developed the operational calculus as a tool in the analysis of telegraphic communications and represented the function as1. Taking the convention thatH(0) = 1, the Heaviside function may be defined as: For the alternative convention thatH(0) =⁠1/2⁠, it may be expressed as: Other definitions which are undefined atH(0)include: H(x)=x+|x|2x{\displaystyle H(x)={\frac {x+|x|}{2x}}} TheDirac delta functionis theweak derivativeof the Heaviside function:δ(x)=ddxH(x).{\displaystyle \delta (x)={\frac {d}{dx}}H(x).}Hence the Heaviside function can be considered to be theintegralof the Dirac delta function. This is sometimes written asH(x):=∫−∞xδ(s)ds{\displaystyle H(x):=\int _{-\infty }^{x}\delta (s)\,ds}although this expansion may not hold (or even make sense) forx= 0, depending on which formalism one uses to give meaning to integrals involvingδ. In this context, the Heaviside function is thecumulative distribution functionof arandom variablewhich isalmost surely0. (SeeConstant random variable.) Approximations to the Heaviside step function are of use inbiochemistryandneuroscience, wherelogisticapproximations of step functions (such as theHilland theMichaelis–Menten equations) may be used to approximate binary cellular switches in response to chemical signals. For asmoothapproximation to the step function, one can use thelogistic functionH(x)≈12+12tanh⁡kx=11+e−2kx,{\displaystyle H(x)\approx {\tfrac {1}{2}}+{\tfrac {1}{2}}\tanh kx={\frac {1}{1+e^{-2kx}}},} where a largerkcorresponds to a sharper transition atx= 0. If we takeH(0) =⁠1/2⁠, equality holds in the limit:H(x)=limk→∞12(1+tanh⁡kx)=limk→∞11+e−2kx.{\displaystyle H(x)=\lim _{k\to \infty }{\tfrac {1}{2}}(1+\tanh kx)=\lim _{k\to \infty }{\frac {1}{1+e^{-2kx}}}.} There aremany other smooth, analytic approximationsto the step function.[1]Among the possibilities are:H(x)=limk→∞(12+1πarctan⁡kx)H(x)=limk→∞(12+12erf⁡kx){\displaystyle {\begin{aligned}H(x)&=\lim _{k\to \infty }\left({\tfrac {1}{2}}+{\tfrac {1}{\pi }}\arctan kx\right)\\H(x)&=\lim _{k\to \infty }\left({\tfrac {1}{2}}+{\tfrac {1}{2}}\operatorname {erf} kx\right)\end{aligned}}} These limits holdpointwiseand in the sense ofdistributions. In general, however, pointwise convergence need not imply distributional convergence, and vice versa distributional convergence need not imply pointwise convergence. (However, if all members of a pointwise convergent sequence of functions are uniformly bounded by some "nice" function, thenconvergence holds in the sense of distributions too.) In general, anycumulative distribution functionof acontinuousprobability distributionthat is peaked around zero and has a parameter that controls forvariancecan serve as an approximation, in the limit as the variance approaches zero. 
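The logistic approximation quoted above is easy to see numerically: as k grows, the smooth curve approaches the step. The Python sketch below compares it with NumPy's exact step under the half-maximum convention H(0) = 1/2; the sample points and values of k are illustrative.

```python
# Logistic approximation to the Heaviside step versus the exact step.
import numpy as np

def smooth_step(x, k):
    """Logistic approximation 1 / (1 + exp(-2*k*x)); sharper as k grows."""
    return 1.0 / (1.0 + np.exp(-2.0 * k * x))

x = np.linspace(-1.0, 1.0, 5)            # [-1, -0.5, 0, 0.5, 1]
print(np.heaviside(x, 0.5))              # exact step, half-maximum convention H(0) = 1/2
for k in (1, 10, 100):
    print(k, np.round(smooth_step(x, k), 4))   # converges to the step as k increases
```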
For example, all three of the above approximations are cumulative distribution functions of common probability distributions: the logistic, Cauchy and normal distributions, respectively. Approximations to the Heaviside step function can also be made through a smooth transition function, with \( 1 \leq m \to \infty \):
\[ f(x) = \begin{cases} \dfrac{1}{2}\left(1 + \tanh\left(m \dfrac{2x}{1 - x^{2}}\right)\right), & |x| < 1, \\ 1, & x \geq 1, \\ 0, & x \leq -1. \end{cases} \]

Often an integral representation of the Heaviside step function is useful:
\[ H(x) = \lim_{\varepsilon \to 0^{+}} -\frac{1}{2\pi i} \int_{-\infty}^{\infty} \frac{1}{\tau + i\varepsilon} e^{-ix\tau}\,d\tau = \lim_{\varepsilon \to 0^{+}} \frac{1}{2\pi i} \int_{-\infty}^{\infty} \frac{1}{\tau - i\varepsilon} e^{ix\tau}\,d\tau, \]
where the second representation is easy to deduce from the first, given that the step function is real and thus is its own complex conjugate.

Since H is usually used in integration, and the value of a function at a single point does not affect its integral, it rarely matters what particular value is chosen for H(0). Indeed, when H is considered as a distribution or an element of \( L^{\infty} \) (see Lp space) it does not even make sense to talk of a value at zero, since such objects are only defined almost everywhere. If using some analytic approximation (as in the examples above), then often whatever happens to be the relevant limit at zero is used. There exist various reasons for choosing a particular value; for example, the choice H(0) = 1/2 gives H(x) + H(−x) = 1 for all x.

An alternative form of the unit step, defined instead as a function \( H : \mathbb{Z} \to \mathbb{R} \) (that is, taking in a discrete variable n), is:
\[ H[n] = \begin{cases} 0, & n < 0, \\ 1, & n \geq 0, \end{cases} \]
or, using the half-maximum convention:[2]
\[ H[n] = \begin{cases} 0, & n < 0, \\ \tfrac{1}{2}, & n = 0, \\ 1, & n > 0, \end{cases} \]
where n is an integer. If n is an integer, then n < 0 must imply that n ≤ −1, while n > 0 must imply that the function attains unity at n = 1. Therefore the "step function" exhibits ramp-like behaviour over the domain of [−1, 1], and cannot authentically be a step function, using the half-maximum convention. Unlike the continuous case, the definition of H[0] is significant.

The discrete-time unit impulse is the first difference of the discrete-time step:
\[ \delta[n] = H[n] - H[n-1]. \]
This function is the cumulative summation of the Kronecker delta:
\[ H[n] = \sum_{k=-\infty}^{n} \delta[k], \]
where \( \delta[k] = \delta_{k,0} \) is the discrete unit impulse function.

The ramp function is an antiderivative of the Heaviside step function:
\[ \int_{-\infty}^{x} H(\xi)\,d\xi = x H(x) = \max\{0, x\}. \]
The distributional derivative of the Heaviside step function is the Dirac delta function:
\[ \frac{dH(x)}{dx} = \delta(x). \]
The Fourier transform of the Heaviside step function is a distribution.
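A minimal numerical sketch of the discrete relations above, assuming NumPy (the helper names are mine, not from the article): the first difference of the discrete step recovers the Kronecker delta, and the cumulative sum of the delta recovers the step.

```python
import numpy as np

n = np.arange(-5, 6)                      # sample indices -5..5

H = np.where(n >= 0, 1, 0)                # discrete unit step, H[0] = 1 convention
delta = np.where(n == 0, 1, 0)            # Kronecker delta, delta[k] = delta_{k,0}

# delta[n] = H[n] - H[n-1]  (first difference of the step)
first_diff = np.diff(H, prepend=0)        # prepend 0 since H[n] = 0 for n < -5
assert np.array_equal(first_diff, delta)

# H[n] = sum_{k <= n} delta[k]  (cumulative sum of the delta)
assert np.array_equal(np.cumsum(delta), H)

# The ramp function max{0, n}: on this window, sum_{k < n} H[k] equals max{0, n}.
ramp = np.maximum(n, 0)
assert np.array_equal(np.cumsum(H) - H, ramp)

print("discrete step / delta / ramp identities verified on", n)
```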
Using one choice of constants for the definition of the Fourier transform we have
\[ \hat{H}(s) = \lim_{N \to \infty} \int_{-N}^{N} e^{-2\pi i x s} H(x)\,dx = \frac{1}{2}\left( \delta(s) - \frac{i}{\pi} \operatorname{p.v.} \frac{1}{s} \right). \]
Here \( \operatorname{p.v.} \frac{1}{s} \) is the distribution that takes a test function φ to the Cauchy principal value of \( \int_{-\infty}^{\infty} \frac{\varphi(s)}{s}\,ds \). The limit appearing in the integral is also taken in the sense of (tempered) distributions.

The Laplace transform of the Heaviside step function is a meromorphic function. Using the unilateral Laplace transform we have:
\[ \hat{H}(s) = \lim_{N \to \infty} \int_{0}^{N} e^{-sx} H(x)\,dx = \lim_{N \to \infty} \int_{0}^{N} e^{-sx}\,dx = \frac{1}{s}. \]
When the bilateral transform is used, the integral can be split in two parts and the result will be the same.
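As a quick symbolic check of the unilateral Laplace transform above, a sketch with SymPy (assuming it is installed; recent SymPy versions use the half-maximum convention H(0) = 1/2 by default, which does not affect the transform):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Unilateral Laplace transform of the Heaviside step: expect 1/s.
F = sp.laplace_transform(sp.Heaviside(t), t, s, noconds=True)
print(F)                               # 1/s

# The distributional derivative relation d/dx H(x) = delta(x):
print(sp.diff(sp.Heaviside(t), t))     # DiracDelta(t)
```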
https://en.wikipedia.org/wiki/Heaviside_step_function
In linear algebra, a matrix unit is a matrix with only one nonzero entry, which has value 1.[1][2] The matrix unit with a 1 in the ith row and jth column is denoted as \( E_{ij} \). For example, the 3 by 3 matrix unit with i = 1 and j = 2 is
\[ E_{12} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}. \]
A vector unit is a standard unit vector. A single-entry matrix generalizes the matrix unit for matrices with only one nonzero entry of any value, not necessarily of value 1.

The set of m by n matrix units is a basis of the space of m by n matrices.[2] The product of two matrix units of the same square shape \( n \times n \) satisfies the relation
\[ E_{ij} E_{kl} = \delta_{jk} E_{il}, \]
where \( \delta_{jk} \) is the Kronecker delta.[2] The group of scalar n-by-n matrices over a ring R is the centralizer of the subset of n-by-n matrix units in the set of n-by-n matrices over R.[2]

The matrix norm (induced by the same two vector norms) of a matrix unit is equal to 1. When multiplied by another matrix, a matrix unit isolates a specific row or column and places it in an arbitrary position. For example, for any 3-by-3 matrix A, the product \( E_{12} A \) has the second row of A as its first row and zeros elsewhere, while \( A E_{12} \) has the first column of A as its second column and zeros elsewhere.[3]
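A short NumPy sketch of these facts (the helper name matrix_unit is my own, purely illustrative): it builds \( E_{ij} \), checks the product rule \( E_{ij}E_{kl} = \delta_{jk}E_{il} \), and shows how multiplication by a matrix unit isolates a row or a column.

```python
import numpy as np

def matrix_unit(i, j, n):
    """n-by-n matrix unit E_ij (1-based indices): a single 1 at row i, column j."""
    E = np.zeros((n, n))
    E[i - 1, j - 1] = 1.0
    return E

n = 3
E12, E23, E13 = matrix_unit(1, 2, n), matrix_unit(2, 3, n), matrix_unit(1, 3, n)

# Product rule: E_12 E_23 = E_13 (since j = k = 2), and E_12 E_13 = 0 (since 2 != 1).
assert np.array_equal(E12 @ E23, E13)
assert np.array_equal(E12 @ E13, np.zeros((n, n)))

# Row/column isolation on an arbitrary matrix A.
A = np.arange(1, 10).reshape(3, 3)
print(E12 @ A)   # second row of A placed in the first row, zeros elsewhere
print(A @ E12)   # first column of A placed in the second column, zeros elsewhere
```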
https://en.wikipedia.org/wiki/Single-entry_vector
Transient execution CPU vulnerabilitiesarevulnerabilitiesin which instructions, most often optimized usingspeculative execution, are executed temporarily by amicroprocessor, without committing their results due to a misprediction or error, resulting in leaking secret data to an unauthorized party. The archetype isSpectre, and transient execution attacks like Spectre belong to the cache-attack category, one of several categories ofside-channel attacks. Since January 2018 many different cache-attack vulnerabilities have been identified. Modern computers are highly parallel devices, composed of components with very different performance characteristics. If an operation (such as a branch) cannot yet be performed because some earlier slow operation (such as a memory read) has not yet completed, a microprocessor may attempt topredictthe result of the earlier operation and execute the later operationspeculatively, acting as if the prediction were correct. The prediction may be based on recent behavior of the system. When the earlier, slower operation completes, the microprocessor determines whether the prediction was correct or incorrect. If it was correct then execution proceeds uninterrupted; if it was incorrect then the microprocessor rolls back the speculatively executed operations and repeats the original instruction with the real result of the slow operation. Specifically, atransient instruction[1]refers to an instruction processed by error by the processor (incriminating the branch predictor in the case ofSpectre) which can affect the micro-architectural state of the processor, leaving the architectural state without any trace of its execution. In terms of the directly visible behavior of the computer it is as if the speculatively executed code "never happened". However, this speculative execution may affect the state of certain components of the microprocessor, such as thecache, and this effect may be discovered by careful monitoring of the timing of subsequent operations. If an attacker can arrange that the speculatively executed code (which may be directly written by the attacker, or may be a suitablegadgetthat they have found in the targeted system) operates on secret data that they are unauthorized to access, and has a different effect on the cache for different values of the secret data, they may be able to discover the value of the secret data. In early January 2018, it was reported that allIntel processorsmade since 1995[2][3](besidesIntel Itaniumand pre-2013Intel Atom) have been subject to two security flaws dubbedMeltdownandSpectre.[4][5] The impact on performance resulting from software patches is "workload-dependent". 
Several procedures to help protect home computers and related devices from the Spectre and Meltdown security vulnerabilities have been published.[6][7][8][9]Spectre patches have been reported to significantly slow down performance, especially on older computers; on the newer 8th-generation Core platforms, benchmark performance drops of 2–14% have been measured.[10]Meltdown patches may also produce performance loss.[11][12][13]It is believed that "hundreds of millions" of systems could be affected by these flaws.[3][14]More security flaws were disclosed on May 3, 2018,[15]on August 14, 2018, on January 18, 2019, and on March 5, 2020.[16][17][18][19] At the time, Intel was not commenting on this issue.[20][21] On March 15, 2018, Intel reported that it will redesign itsCPUs(performance losses to be determined) to protect against theSpectre security vulnerability, and expects to release the newly redesigned processors later in 2018.[22][23] On May 3, 2018, eight additional Spectre-class flaws were reported. Intel reported that they are preparing new patches to mitigate these flaws.[24] On August 14, 2018, Intel disclosed three additional chip flaws referred to as L1 Terminal Fault (L1TF). They reported that previously released microcode updates, along with new, pre-release microcode updates can be used to mitigate these flaws.[25][26] On January 18, 2019, Intel disclosed three new vulnerabilities affecting all Intel CPUs, named "Fallout", "RIDL", and "ZombieLoad", allowing a program to read information recently written, read data in the line-fill buffers and load ports, and leak information from other processes and virtual machines.[27][28][29]Coffee Lake-series CPUs are even more vulnerable, due to hardware mitigations forSpectre.[citation needed][30] On March 5, 2020, computer security experts reported another Intel chip security flaw, besides theMeltdownandSpectreflaws, with the systematic nameCVE-2019-0090(or "Intel CSME Bug").[16]This newly found flaw is not fixable with a firmware update, and affects nearly "all Intel chips released in the past five years".[17][18][19] In March 2021 AMD security researchers discovered that the Predictive Store Forwarding algorithm inZen 3CPUs could be used by malicious applications to access data it shouldn't be accessing.[31]According to Phoronix there's little performance impact in disabling the feature.[32] In June 2021, two new vulnerabilities,Speculative Code Store Bypass(SCSB,CVE-2021-0086) andFloating Point Value Injection(FPVI,CVE-2021-0089), affectingallmodern x86-64 CPUs both from Intel and AMD were discovered.[33]In order to mitigate them software has to be rewritten and recompiled. ARM CPUs are not affected by SCSB but some certain ARM architectures are affected by FPVI.[34] Also in June 2021,MITresearchers revealed thePACMANattack on Pointer Authentication Codes (PAC) inARMv8.3A.[35][36][37] In August 2021 a vulnerability called "Transient Execution of Non-canonical Accesses" affecting certain AMD CPUs was disclosed.[38][39][40]It requires the same mitigations as the MDS vulnerability affecting certain Intel CPUs.[41]It was assignedCVE-2020-12965. Since most x86 software is already patched against MDS and this vulnerability has the exact same mitigations, software vendors don't have to address this vulnerability. 
In October 2021 for the first time ever a vulnerability similar to Meltdown was disclosed[42][43]to be affecting all AMD CPUs however the company doesn't think any new mitigations have to be applied and the existing ones are already sufficient.[44] In March 2022, a new variant of the Spectre vulnerability calledBranch History Injectionwas disclosed.[45][46]It affects certain ARM64 CPUs[47]and the following Intel CPU families:Cascade Lake,Ice Lake,Tiger LakeandAlder Lake. According to Linux kernel developers AMD CPUs are also affected.[48] In March 2022, a vulnerability affecting a wide range of AMD CPUs was disclosed underCVE-2021-26341.[49][50] In June 2022, multipleMMIOIntel CPUs vulnerabilities related to execution invirtual environmentswere announced.[51]The following CVEs were designated:CVE-2022-21123,CVE-2022-21125,CVE-2022-21166. In July 2022, theRetbleedvulnerability was disclosed affecting Intel Core 6 to 8th generation CPUs and AMD Zen 1, 1+ and 2 generation CPUs. Newer Intel microarchitectures as well as AMD starting with Zen 3 are not affected. The mitigations for the vulnerability decrease the performance of the affected Intel CPUs by up to 39%, while AMD CPUs lose up to 14%. In August 2022, theSQUIPvulnerability was disclosed affecting Ryzen 2000–5000 series CPUs.[52]According to AMD the existing mitigations are enough to protect from it.[53] According to a Phoronix review released in October, 2022Zen 4/Ryzen 7000CPUs are not slowed down by mitigations, in fact disabling them leads to a performance loss.[54][55] In February 2023 a vulnerability affecting a wide range of AMD CPU architectures called "Cross-Thread Return Address Predictions" was disclosed.[56][57][58] In July 2023 a critical vulnerability in theZen 2AMD microarchitecture calledZenbleedwas made public.[59][1]AMD released a microcode update to fix it.[60] In August 2023 a vulnerability in AMD'sZen 1,Zen 2,Zen 3, andZen 4microarchitectures calledInception[61][62]was revealed and assignedCVE-2023-20569. According to AMD it is not practical but the company will release a microcode update for the affected products. Also in August 2023 a new vulnerability calledDownfallorGather Data Samplingwas disclosed,[63][64][65]affecting Intel CPU Skylake, Cascade Lake, Cooper Lake, Ice Lake, Tiger Lake, Amber Lake, Kaby Lake, Coffee Lake, Whiskey Lake, Comet Lake & Rocket Lake CPU families. Intel will release a microcode update for affected products. TheSLAM[66][67][68][69]vulnerability (Spectre based on Linear Address Masking) reported in 2023 neither has received a corresponding CVE, nor has been confirmed or mitigated against. In March 2024, a variant of Spectre-V1 attack calledGhostRacewas published.[70]It was claimed it affected all the major microarchitectures and vendors, including Intel, AMD and ARM. It was assignedCVE-2024-2193. AMD dismissed the vulnerability (calling it "Speculative Race Conditions (SRCs)") claiming that existing mitigations were enough.[71]Linux kernel developers chose not to add mitigations citing performance concerns.[72]TheXen hypervisorproject released patches to mitigate the vulnerability but they are not enabled by default.[73] Also in March 2024, a vulnerability inIntel Atomprocessors calledRegister File Data Sampling(RFDS) was revealed.[74]It was assignedCVE-2023-28746. 
Its mitigations incur a slight performance degradation.[75] In April 2024, it was revealed that the BHI vulnerability in certain Intel CPU families could be still exploited in Linux entirely inuser spacewithout using any kernel features or root access despite existing mitigations.[76][77][78]Intel recommended "additional software hardening".[79]The attack was assignedCVE-2024-2201. In June 2024,SamsungResearch andSeoul National Universityresearchers revealed theTikTagattack against the Memory Tagging Extension inARMv8.5A CPUs. The researchers created PoCs forGoogle Chromeand theLinux kernel.[80][81][82][83]Researchers from VUSec previously revealed ARM's Memory Tagging Extension is vulnerable to speculative probing.[84][85] In July 2024,UC San Diegoresearchers revealed theIndirectorattack againstIntelAlder LakeandRaptor LakeCPUs leveraging high-precision Branch Target Injection (BTI).[86][87][88]Intel downplayed the severity of the vulnerability and claimed the existing mitigations are enough to tackle the issue.[89]No CVE was assigned. In January 2025, Georgia Institute of Technology researchers published two whitepapers on Data Speculation Attacks via Load Address Prediction on Apple Silicon (SLAP) and Breaking the Apple M3 CPU via False Load Output Predictions (FLOP).[90][91][92] Also in January 2025,Armdisclosed a vulnerability (CVE-2024-7881) in which an unprivileged context can trigger a data memory-dependentprefetchengine to fetch data from a privileged location, potentially leading to unauthorized access. To mitigate the issue, Arm recommends disabling the affected prefetcher by setting CPUACTLR6_EL1[41].[93][94] In May 2025, VUSec released three vulnerabilities extending on Spectre-v2 in various Intel and ARM architectures under the moniker Training Solo.[95][96][97]Mitigations require a microcode update for Intel CPUs and changes in the Linux kernel. Also in May 2025, ETH Zurich Computer Security Group "COMSEC" disclosed the Branch Privilege Injection vulnerability affecting all Intel x86 architectures starting from the 9th generation (Coffee Lake Refresh) under CVE-2024-45332.[98][99][100]A microcode update is required to mitigate it. It comes with a performance cost up to 8%. Spectre class vulnerabilities will remain unfixed because otherwise CPU designers will have to disablespeculative executionwhich will entail a massive performance loss.[citation needed]Despite this, AMD has managed to designZen 4such a way its performance isnotaffected by mitigations.[54][55] *Various CPU microarchitectures not included above are also affected, among them areARM,IBM Power,MIPSand others.[149][150][151][152] **The 8th generation Coffee Lake architecture in this tablealsoapplies to a wide range of previously released Intel CPUs, not limited to the architectures based onIntel Core,Pentium 4andIntel Atomstarting withSilvermont.[153][154]
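Because the list of affected microarchitectures and mitigations changes frequently, it is often more practical to ask the running kernel what it knows. Recent Linux kernels expose their view of each transient-execution vulnerability and the active mitigation as files under /sys/devices/system/cpu/vulnerabilities/. A small sketch using only the standard library (output obviously varies by CPU, kernel version and configured mitigations):

```python
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def report_cpu_vulnerabilities():
    """Print the kernel's reported status for each known transient-execution issue."""
    if not VULN_DIR.is_dir():
        print("No vulnerability reporting available (non-Linux or very old kernel).")
        return
    for entry in sorted(VULN_DIR.iterdir()):
        # Typical values: "Mitigation: ...", "Not affected", "Vulnerable".
        status = entry.read_text().strip()
        print(f"{entry.name:30s} {status}")

if __name__ == "__main__":
    report_cpu_vulnerabilities()
```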
https://en.wikipedia.org/wiki/Transient_execution_CPU_vulnerability
A scale-free network is a network whose degree distribution follows a power law, at least asymptotically. That is, the fraction P(k) of nodes in the network having k connections to other nodes goes for large values of k as
\[ P(k) \sim k^{-\gamma}, \]
where γ is a parameter whose value is typically in the range 2 < γ < 3 (wherein the second moment (scale parameter) of \( k^{-\gamma} \) is infinite but the first moment is finite), although occasionally it may lie outside these bounds.[1][2] The name "scale-free" could be explained by the fact that some moments of the degree distribution are not defined, so that the network does not have a characteristic scale or "size". Preferential attachment and the fitness model have been proposed as mechanisms to explain the power-law degree distributions in real networks. Alternative models such as super-linear preferential attachment and second-neighbour preferential attachment may appear to generate transient scale-free networks, but the degree distribution deviates from a power law as networks become very large.[3][4]

In studies of citations between scientific papers, Derek de Solla Price showed in 1965 that the number of citations a paper receives had a heavy-tailed distribution following a Pareto distribution or power law. In a later paper in 1976, Price also proposed a mechanism to explain the occurrence of power laws in citation networks, which he called "cumulative advantage". However, both papers treated citations as scalar quantities, rather than as a fundamental feature of a new class of networks. The interest in scale-free networks started in 1999 with work by Albert-László Barabási and Réka Albert at the University of Notre Dame, who mapped the topology of a portion of the World Wide Web,[5] finding that some nodes, which they called "hubs", had many more connections than others and that the network as a whole had a power-law distribution of the number of links connecting to a node. In a subsequent paper,[6] Barabási and Albert showed that the power laws are not a unique property of the WWW, but that the feature is present in a few real networks, prompting them to coin the term "scale-free network" to describe the class of networks that exhibit a power-law degree distribution. Barabási and Réka Albert proposed a generative mechanism[6] to explain the appearance of power-law distributions, which they called "preferential attachment". Analytic solutions for this mechanism were presented in 2000 by Dorogovtsev, Mendes and Samukhin[7] and independently by Krapivsky, Redner, and Leyvraz, and later rigorously proved by mathematician Béla Bollobás.[8]

When the concept of "scale-free" was initially introduced in the context of networks,[6] it primarily referred to a specific trait: a power-law distribution for a given variable k, expressed as \( f(k) \propto k^{-\gamma} \). This property maintains its form when subjected to a continuous scale transformation \( k \to k + \epsilon k \), evoking parallels with the renormalization group techniques in statistical field theory.[9][10] However, there is a key difference. In statistical field theory, the term "scale" often pertains to system size. In the realm of networks, "scale" k is a measure of connectivity, generally quantified by a node's degree, that is, the number of links attached to it. Networks featuring a higher number of high-degree nodes are deemed to have greater connectivity.
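The remark that the second moment diverges for 2 < γ < 3 while the first moment stays finite can be seen numerically. A sketch (assuming NumPy; drawing from a continuous Pareto-type density ∝ k^(−γ) for k ≥ 1 as a stand-in for the degree distribution):

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 2.5          # exponent in the scale-free range 2 < gamma < 3

def sample_power_law(size, gamma, k_min=1.0):
    """Inverse-transform sampling from p(k) ~ k^-gamma for k >= k_min (continuous)."""
    u = rng.random(size)
    return k_min * (1.0 - u) ** (-1.0 / (gamma - 1.0))

for n in (10**3, 10**5, 10**7):
    k = sample_power_law(n, gamma)
    # The sample mean settles near (gamma-1)/(gamma-2) = 3, but the sample
    # second moment keeps growing with n because <k^2> is infinite.
    print(f"n={n:>9d}  mean={k.mean():8.3f}  second moment={np.mean(k**2):14.1f}")
```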
The power-law degree distribution enables us to make "scale-free" assertions about the prevalence of high-degree nodes.[11] For instance, we can say that "nodes with triple the average connectivity occur half as frequently as nodes with average connectivity". The specific numerical value of what constitutes "average connectivity" becomes irrelevant, whether it is a hundred or a million.[12]

The most notable characteristic of a scale-free network is the relative commonness of vertices with a degree that greatly exceeds the average. The highest-degree nodes are often called "hubs", and are thought to serve specific purposes in their networks, although this depends greatly on the domain. In a random network the maximum degree, or the expected largest hub, scales as \( k_{\max} \sim \log N \), where N is the network size, a very slow dependence. In contrast, in scale-free networks the largest hub scales as \( k_{\max} \sim N^{1/(\gamma-1)} \), indicating that the hubs grow polynomially with the size of the network. A key feature of scale-free networks is their high degree heterogeneity, \( \kappa = \langle k^{2} \rangle / \langle k \rangle \), which governs multiple network-based processes, from network robustness to epidemic spreading and network synchronization. While for a random network \( \kappa = \langle k \rangle + 1 \), i.e. the ratio is independent of the network size N, for a scale-free network we have \( \kappa \sim N^{(3-\gamma)/(\gamma-1)} \), increasing with the network size, indicating that for these networks the degree heterogeneity increases.

Another important characteristic of scale-free networks is the clustering coefficient distribution, which decreases as the node degree increases. This distribution also follows a power law. This implies that the low-degree nodes belong to very dense sub-graphs and those sub-graphs are connected to each other through hubs. Consider a social network in which nodes are people and links are acquaintance relationships between people. It is easy to see that people tend to form communities, i.e., small groups in which everyone knows everyone (one can think of such a community as a complete graph). In addition, the members of a community also have a few acquaintance relationships to people outside that community. Some people, however, are connected to a large number of communities (e.g., celebrities, politicians). Those people may be considered the hubs responsible for the small-world phenomenon.

At present, the more specific characteristics of scale-free networks vary with the generative mechanism used to create them. For instance, networks generated by preferential attachment typically place the high-degree vertices in the middle of the network, connecting them together to form a core, with progressively lower-degree nodes making up the regions between the core and the periphery. The random removal of even a large fraction of vertices impacts the overall connectedness of the network very little, suggesting that such topologies could be useful for security, while targeted attacks destroy the connectedness very quickly. Other scale-free networks, which place the high-degree vertices at the periphery, do not exhibit these properties. Similarly, the clustering coefficient of scale-free networks can vary significantly depending on other topological details. The question of how to efficiently immunize scale-free networks that represent realistic networks such as the Internet and social networks has been studied extensively.
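A sketch contrasting the hub scaling of random and scale-free graphs, using NetworkX's Erdős–Rényi and Barabási–Albert generators (the parameter choices here are illustrative, not taken from the article): the maximum degree grows slowly with N in the random case and much faster under preferential attachment, and the heterogeneity κ stays near ⟨k⟩ + 1 only in the random case.

```python
import networkx as nx

def kappa(G):
    """Degree heterogeneity <k^2>/<k>."""
    degs = [d for _, d in G.degree()]
    return sum(d * d for d in degs) / sum(degs)

m = 4                             # edges added per new node in the BA model
for N in (1_000, 10_000, 100_000):
    er = nx.fast_gnp_random_graph(N, 2 * m / N, seed=1)   # random graph, matched mean degree
    ba = nx.barabasi_albert_graph(N, m, seed=1)           # preferential attachment
    kmax_er = max(d for _, d in er.degree())
    kmax_ba = max(d for _, d in ba.degree())
    print(f"N={N:>7d}  kmax ER={kmax_er:4d}  BA={kmax_ba:5d}  "
          f"kappa ER={kappa(er):6.2f}  BA={kappa(ba):7.2f}")
```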
One such strategy is to immunize the largest-degree nodes, i.e., to use targeted (intentional) attacks, since for this case \( p_c \) is relatively high and fewer nodes need to be immunized. However, in many realistic cases the global structure is not available and the largest-degree nodes are not known.

Properties of random graphs may change or remain invariant under graph transformations. A. Mashaghi et al., for example, demonstrated that a transformation which converts random graphs to their edge-dual graphs (or line graphs) produces an ensemble of graphs with nearly the same degree distribution, but with degree correlations and a significantly higher clustering coefficient. Scale-free graphs, as such, remain scale-free under such transformations.[13]

Examples of networks found to be scale-free include:

Scale-free topology has also been found in high-temperature superconductors.[17] The qualities of a high-temperature superconductor, a compound in which electrons obey the laws of quantum physics and flow in perfect synchrony without friction, appear linked to the fractal arrangements of seemingly random oxygen atoms and lattice distortion.[18]

Scale-free networks do not arise by chance alone. Erdős and Rényi (1960) studied a model of growth for graphs in which, at each step, two nodes are chosen uniformly at random and a link is inserted between them. The properties of these random graphs are different from the properties found in scale-free networks, and therefore a model for this growth process is needed. The most widely known generative model for a subset of scale-free networks is Barabási and Albert's (1999) rich-get-richer generative model, in which each new Web page creates links to existing Web pages with a probability distribution which is not uniform, but proportional to the current in-degree of Web pages. According to this process, a page with many in-links will attract more in-links than a regular page. This generates a power law, but the resulting graph differs from the actual Web graph in other properties such as the presence of small, tightly connected communities. More general models and network characteristics have been proposed and studied. For example, Pachon et al. (2018) proposed a variant of the rich-get-richer generative model which takes into account two different attachment rules: a preferential attachment mechanism and a uniform choice only for the most recent nodes.[19] For a review see the book by Dorogovtsev and Mendes.[citation needed] Some mechanisms such as super-linear preferential attachment and second-neighbour attachment generate networks which are transiently scale-free, but deviate from a power law as networks grow large.[3][4]

A somewhat different generative model for Web links has been suggested by Pennock et al. (2002). They examined communities with interests in a specific topic such as the home pages of universities, public companies, newspapers or scientists, and discarded the major hubs of the Web. In this case, the distribution of links was no longer a power law but resembled a normal distribution. Based on these observations, the authors proposed a generative model that mixes preferential attachment with a baseline probability of gaining a link. Another generative model is the copy model studied by Kumar et al.[20] (2000), in which new nodes choose an existent node at random and copy a fraction of the links of the existent node. This also generates a power law.
There are two major components that explain the emergence of the power-law distribution in theBarabási–Albert model: the growth and the preferential attachment.[21]By "growth" is meant a growth process where, over an extended period of time, new nodes join an already existing system, a network (like the World Wide Web which has grown by billions of web pages over 10 years). Finally, by "preferential attachment" is meant that new nodes prefer to connect to nodes that already have a high number of links with others. Thus, there is a higher probability that more and more nodes will link themselves to that one which has already many links, leading this node to a hubin-fine.[6]Depending on the network, the hubs might either be assortative or disassortative. Assortativity would be found in social networks in which well-connected/famous people would tend to know better each other. Disassortativity would be found in technological (Internet, World Wide Web) and biological (protein interaction, metabolism) networks.[21] However, thegrowthof the networks (adding new nodes) is not a necessary condition for creating a scale-free network (see Dangalchev[22]). One possibility (Caldarelli et al. 2002) is to consider the structure as static and draw a link between vertices according to a particular property of the two vertices involved. Once specified the statistical distribution for these vertex properties (fitnesses), it turns out that in some circumstances also static networks develop scale-free properties. There has been a burst of activity in the modeling ofscale-free complex networks. The recipe of Barabási and Albert[23]has been followed by several variations and generalizations[24][25][26][27][19]and the revamping of previous mathematical works.[28] In today's terms, if a complex network has a power-law distribution of any of its metrics, it's generally considered a scale-free network. Similarly, any model with this feature is called a scale-free model.[11] Many real networks are (approximately) scale-free and hence require scale-free models to describe them. In Price's scheme, there are two ingredients needed to build up a scale-free model: 1. Adding or removingnodes. Usually we concentrate on growing the network, i.e. adding nodes. 2.Preferential attachment: The probabilityΠ{\displaystyle \Pi }that new nodes will be connected to the "old" node. Note that some models (see Dangalchev[22]and Fitness model below) can work also statically, without changing the number of nodes. It should also be kept in mind that the fact that "preferential attachment" models give rise to scale-free networks does not prove that this is the mechanism underlying the evolution of real-world scale-free networks, as there might exist different mechanisms at work in real-world systems that nevertheless give rise to scaling. There have been several attempts to generate scale-free network properties. Here are some examples: TheBarabási–Albert model, an undirected version ofPrice's modelhas a linear preferential attachmentΠ(ki)=ki∑jkj{\displaystyle \Pi (k_{i})={\frac {k_{i}}{\sum _{j}k_{j}}}}and adds one new node at every time step. (Note, another general feature ofΠ(k){\displaystyle \Pi (k)}in real networks is thatΠ(0)≠0{\displaystyle \Pi (0)\neq 0}, i.e. there is a nonzero probability that a new node attaches to an isolated node. Thus in generalΠ(k){\displaystyle \Pi (k)}has the formΠ(k)=A+kα{\displaystyle \Pi (k)=A+k^{\alpha }}, whereA{\displaystyle A}is the initial attractiveness of the node.) 
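The growth-plus-preferential-attachment recipe can be written down in a few lines. The sketch below is my own minimal implementation, not the exact algorithm as published (it starts from a small seed clique and ignores multi-edge subtleties): each new node attaches to m existing nodes chosen with probability proportional to their current degree.

```python
import random
from collections import Counter

def grow_preferential_attachment(n, m, seed=0):
    """Grow a graph to n nodes; each new node links to m distinct targets chosen
    with probability proportional to their current degree."""
    rng = random.Random(seed)
    edges = [(i, j) for i in range(m + 1) for j in range(i)]   # small seed clique
    # 'stubs' repeats every node once per unit of degree, so uniform sampling
    # from it realises Pi(k_i) = k_i / sum_j k_j.
    stubs = [v for e in edges for v in e]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(stubs))
        for t in targets:
            edges.append((new, t))
            stubs.extend((new, t))
    return edges

edges = grow_preferential_attachment(n=20_000, m=3)
degree = Counter(v for e in edges for v in e)
print("max degree:", max(degree.values()))   # a hub far above the mean degree of about 2m
```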
Dangalchev (see[22]) builds a 2-L model by considering the importance of each of the neighbours of a target node in preferential attachment. The attractiveness of a node in the 2-L model depends not only on the number of nodes linked to it but also on the number of links in each of these nodes. whereCis a coefficient between 0 and 1. A variant of the 2-L model, the k2 model, where first and second neighbour nodes contribute equally to a target node's attractiveness, demonstrates the emergence of transient scale-free networks.[4]In the k2 model, the degree distribution appears approximately scale-free as long as the network is relatively small, but significant deviations from the scale-free regime emerge as the network grows larger. This results in the relative attractiveness of nodes with different degrees changing over time, a feature also observed in real networks. In themediation-driven attachment (MDA) model, a new node coming withm{\displaystyle m}edges picks an existing connected node at random and then connects itself, not with that one, but withm{\displaystyle m}of its neighbors, also chosen at random. The probabilityΠ(i){\displaystyle \Pi (i)}that the nodei{\displaystyle i}of the existing node picked is The factor∑j=1ki1kjki{\displaystyle {\frac {\sum _{j=1}^{k_{i}}{\frac {1}{k_{j}}}}{k_{i}}}}is the inverse of the harmonic mean (IHM) of degrees of theki{\displaystyle k_{i}}neighbors of a nodei{\displaystyle i}. Extensive numerical investigation suggest that for approximatelym>14{\displaystyle m>14}the mean IHM value in the largeN{\displaystyle N}limit becomes a constant which meansΠ(i)∝ki{\displaystyle \Pi (i)\propto k_{i}}. It implies that the higher the links (degree) a node has, the higher its chance of gaining more links since they can be reached in a larger number of ways through mediators which essentially embodies the intuitive idea of rich get richer mechanism (or the preferential attachment rule of the Barabasi–Albert model). Therefore, the MDA network can be seen to follow the PA rule but in disguise.[29] However, form=1{\displaystyle m=1}it describes the winner takes it all mechanism as we find that almost99%{\displaystyle 99\%}of the total nodes has degree one and one is super-rich in degree. Asm{\displaystyle m}value increases the disparity between the super rich and poor decreases and asm>14{\displaystyle m>14}we find a transition from rich get super richer to rich get richer mechanism. The Barabási–Albert model assumes that the probabilityΠ(k){\displaystyle \Pi (k)}that a node attaches to nodei{\displaystyle i}is proportional to thedegreek{\displaystyle k}of nodei{\displaystyle i}. This assumption involves two hypotheses: first, thatΠ(k){\displaystyle \Pi (k)}depends onk{\displaystyle k}, in contrast to random graphs in whichΠ(k)=p{\displaystyle \Pi (k)=p}, and second, that the functional form ofΠ(k){\displaystyle \Pi (k)}is linear ink{\displaystyle k}. In non-linear preferential attachment, the form ofΠ(k){\displaystyle \Pi (k)}is not linear, and recent studies have demonstrated that the degree distribution depends strongly on the shape of the functionΠ(k){\displaystyle \Pi (k)} Krapivsky, Redner, and Leyvraz[26]demonstrate that the scale-free nature of the network is destroyed for nonlinear preferential attachment. The only case in which the topology of the network is scale free is that in which the preferential attachment isasymptoticallylinear, i.e.Π(ki)∼a∞ki{\displaystyle \Pi (k_{i})\sim a_{\infty }k_{i}}aski→∞{\displaystyle k_{i}\to \infty }. 
In this case the rate equation leads to This way the exponent of the degree distribution can be tuned to any value between 2 and∞{\displaystyle \infty }.[clarification needed] Hierarchical network modelsare, by design, scale free and have high clustering of nodes.[30] Theiterativeconstruction leads to a hierarchical network. Starting from a fully connected cluster of five nodes, we create four identical replicas connecting the peripheral nodes of each cluster to the central node of the original cluster. From this, we get a network of 25 nodes (N= 25). Repeating the same process, we can create four more replicas of the original cluster – the four peripheral nodes of each one connect to the central node of the nodes created in the first step. This givesN= 125, and the process can continue indefinitely. The idea is that the link between two vertices is assigned not randomly with a probabilitypequal for all the couple of vertices. Rather, for every vertexjthere is an intrinsicfitnessxjand a link between vertexiandjis created with a probabilityp(xi,xj){\displaystyle p(x_{i},x_{j})}.[31]In the case of World Trade Web it is possible to reconstruct all the properties by using as fitnesses of the country their GDP, and taking Assuming that a network has an underlying hyperbolic geometry, one can use the framework ofspatial networksto generate scale-free degree distributions. This heterogeneous degree distribution then simply reflects the negative curvature and metric properties of the underlying hyperbolic geometry.[33] Starting with scale free graphs with low degree correlation and clustering coefficient, one can generate new graphs with much higher degree correlations and clustering coefficients by applying edge-dual transformation.[13] UPA modelis a variant of the preferential attachment model (proposed by Pachon et al.) which takes into account two different attachment rules: a preferential attachment mechanism (with probability 1−p) that stresses the rich get richer system, and a uniform choice (with probability p) for the most recent nodes. This modification is interesting to study the robustness of the scale-free behavior of the degree distribution. It is proved analytically that the asymptotically power-law degree distribution is preserved.[19] In the context ofnetwork theoryascale-free ideal networkis arandom networkwith adegree distributionfollowing thescale-free ideal gasdensity distribution. These networks are able to reproduce city-size distributions and electoral results by unraveling the size distribution of social groups with information theory on complex networks when a competitive cluster growth process is applied to the network.[34][35]In models of scale-free ideal networks it is possible to demonstrate thatDunbar's numberis the cause of the phenomenon known as the 'six degrees of separation'. For a scale-free network withn{\displaystyle n}nodes and power-law exponentγ>3{\displaystyle \gamma >3}, the induced subgraph constructed by vertices with degrees larger thanlog⁡n×log∗⁡n{\displaystyle \log {n}\times \log ^{*}{n}}is a scale-free network withγ′=2{\displaystyle \gamma '=2},almost surely.[36] On a theoretical level, refinements to the abstract definition of scale-free have been proposed. For example, Li et al. (2005) offered a potentially more precise "scale-free metric". Briefly, letGbe a graph with edge setE, and denote the degree of a vertexv{\displaystyle v}(that is, the number of edges incident tov{\displaystyle v}) bydeg⁡(v){\displaystyle \deg(v)}. 
Define
\[ s(G) = \sum_{(u,v) \in E} \deg(u)\,\deg(v). \]
This is maximized when high-degree nodes are connected to other high-degree nodes. Now define
\[ S(G) = \frac{s(G)}{s_{\max}}, \]
where \( s_{\max} \) is the maximum value of s(H) for H in the set of all graphs with a degree distribution identical to that of G. This gives a metric between 0 and 1, where a graph G with small S(G) is "scale-rich", and a graph G with S(G) close to 1 is "scale-free". This definition captures the notion of self-similarity implied in the name "scale-free".

Estimating the power-law exponent γ of a scale-free network is typically done using maximum likelihood estimation with the degrees of a few uniformly sampled nodes.[37] However, since uniform sampling does not obtain enough samples from the important heavy tail of the power-law degree distribution, this method can yield a large bias and a large variance. It has recently been proposed to sample random friends (i.e., random ends of random links), who are more likely to come from the tail of the degree distribution as a result of the friendship paradox.[38][39] Theoretically, maximum likelihood estimation with random friends leads to a smaller bias and a smaller variance compared to the classical approach based on uniform sampling.[39]
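As a concrete illustration of the estimation step, here is a sketch of the standard continuous-approximation maximum likelihood estimator \( \hat{\gamma} = 1 + n\left[\sum_i \ln(k_i / k_{\min})\right]^{-1} \) applied to degrees from a Barabási–Albert graph (NetworkX assumed; this is the usual Hill-type MLE, not the friend-sampling variant from the cited work):

```python
import math
import random
import networkx as nx

def mle_gamma(degrees, k_min=10):
    """Continuous-approximation MLE: gamma_hat = 1 + n / sum(ln(k_i / k_min)),
    using only degrees k_i >= k_min."""
    tail = [k for k in degrees if k >= k_min]
    return 1.0 + len(tail) / sum(math.log(k / k_min) for k in tail)

G = nx.barabasi_albert_graph(200_000, 2, seed=42)      # theoretical exponent gamma = 3
all_degrees = [d for _, d in G.degree()]
sample = random.Random(0).sample(all_degrees, 20_000)  # uniformly sampled nodes

# Both estimates should land in the vicinity of 3; the uniform subsample is noisier
# because only a small fraction of the sampled nodes reaches the tail (k >= k_min).
print("gamma from 20,000 uniform nodes:", round(mle_gamma(sample), 2))
print("gamma from all nodes:           ", round(mle_gamma(all_degrees), 2))
```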
https://en.wikipedia.org/wiki/Scale-free_network
Infunctional analysisand related areas ofmathematics, ametrizable(resp.pseudometrizable)topological vector space(TVS) is a TVS whose topology is induced by a metric (resp.pseudometric). AnLM-spaceis aninductive limitof a sequence oflocally convexmetrizable TVS. Apseudometricon a setX{\displaystyle X}is a mapd:X×X→R{\displaystyle d:X\times X\rightarrow \mathbb {R} }satisfying the following properties: A pseudometric is called ametricif it satisfies: Ultrapseudometric A pseudometricd{\displaystyle d}onX{\displaystyle X}is called aultrapseudometricor astrong pseudometricif it satisfies: Pseudometric space Apseudometric spaceis a pair(X,d){\displaystyle (X,d)}consisting of a setX{\displaystyle X}and a pseudometricd{\displaystyle d}onX{\displaystyle X}such thatX{\displaystyle X}'s topology is identical to the topology onX{\displaystyle X}induced byd.{\displaystyle d.}We call a pseudometric space(X,d){\displaystyle (X,d)}ametric space(resp.ultrapseudometric space) whend{\displaystyle d}is a metric (resp. ultrapseudometric). Ifd{\displaystyle d}is a pseudometric on a setX{\displaystyle X}then collection ofopen balls:Br(z):={x∈X:d(x,z)<r}{\displaystyle B_{r}(z):=\{x\in X:d(x,z)<r\}}asz{\displaystyle z}ranges overX{\displaystyle X}andr>0{\displaystyle r>0}ranges over the positive real numbers, forms a basis for a topology onX{\displaystyle X}that is called thed{\displaystyle d}-topologyor thepseudometric topologyonX{\displaystyle X}induced byd.{\displaystyle d.} Pseudometrizable space A topological space(X,τ){\displaystyle (X,\tau )}is calledpseudometrizable(resp.metrizable,ultrapseudometrizable) if there exists a pseudometric (resp. metric, ultrapseudometric)d{\displaystyle d}onX{\displaystyle X}such thatτ{\displaystyle \tau }is equal to the topology induced byd.{\displaystyle d.}[1] An additivetopological groupis an additive group endowed with a topology, called agroup topology, under which addition and negation become continuous operators. A topologyτ{\displaystyle \tau }on a real or complex vector spaceX{\displaystyle X}is called avector topologyor aTVS topologyif it makes the operations of vector addition and scalar multiplication continuous (that is, if it makesX{\displaystyle X}into atopological vector space). Everytopological vector space(TVS)X{\displaystyle X}is an additive commutative topological group but not all group topologies onX{\displaystyle X}are vector topologies. This is because despite it making addition and negation continuous, a group topology on a vector spaceX{\displaystyle X}may fail to make scalar multiplication continuous. For instance, thediscrete topologyon any non-trivial vector space makes addition and negation continuous but do not make scalar multiplication continuous. IfX{\displaystyle X}is an additive group then we say that a pseudometricd{\displaystyle d}onX{\displaystyle X}istranslation invariantor justinvariantif it satisfies any of the following equivalent conditions: IfX{\displaystyle X}is atopological groupthe avalueorG-seminormonX{\displaystyle X}(theGstands for Group) is a real-valued mapp:X→R{\displaystyle p:X\rightarrow \mathbb {R} }with the following properties:[2] where we call a G-seminorm aG-normif it satisfies the additional condition: Ifp{\displaystyle p}is a value on a vector spaceX{\displaystyle X}then: Theorem[2]—Suppose thatX{\displaystyle X}is an additive commutative group. 
Ifd{\displaystyle d}is a translation invariant pseudometric onX{\displaystyle X}then the mapp(x):=d(x,0){\displaystyle p(x):=d(x,0)}is a value onX{\displaystyle X}calledthe value associated withd{\displaystyle d}, and moreover,d{\displaystyle d}generates a group topology onX{\displaystyle X}(i.e. thed{\displaystyle d}-topology onX{\displaystyle X}makesX{\displaystyle X}into a topological group). Conversely, ifp{\displaystyle p}is a value onX{\displaystyle X}then the mapd(x,y):=p(x−y){\displaystyle d(x,y):=p(x-y)}is a translation-invariant pseudometric onX{\displaystyle X}and the value associated withd{\displaystyle d}is justp.{\displaystyle p.} Theorem[2]—If(X,τ){\displaystyle (X,\tau )}is an additive commutativetopological groupthen the following are equivalent: If(X,τ){\displaystyle (X,\tau )}is Hausdorff then the word "pseudometric" in the above statement may be replaced by the word "metric." A commutative topological group is metrizable if and only if it is Hausdorff and pseudometrizable. LetX{\displaystyle X}be a non-trivial (i.e.X≠{0}{\displaystyle X\neq \{0\}}) real or complex vector space and letd{\displaystyle d}be the translation-invarianttrivial metriconX{\displaystyle X}defined byd(x,x)=0{\displaystyle d(x,x)=0}andd(x,y)=1for allx,y∈X{\displaystyle d(x,y)=1{\text{ for all }}x,y\in X}such thatx≠y.{\displaystyle x\neq y.}The topologyτ{\displaystyle \tau }thatd{\displaystyle d}induces onX{\displaystyle X}is thediscrete topology, which makes(X,τ){\displaystyle (X,\tau )}into a commutative topological group under addition but doesnotform a vector topology onX{\displaystyle X}because(X,τ){\displaystyle (X,\tau )}isdisconnectedbut every vector topology is connected. What fails is that scalar multiplication isn't continuous on(X,τ).{\displaystyle (X,\tau ).} This example shows that a translation-invariant (pseudo)metric isnotenough to guarantee a vector topology, which leads us to define paranorms andF-seminorms. A collectionN{\displaystyle {\mathcal {N}}}of subsets of a vector space is calledadditive[5]if for everyN∈N,{\displaystyle N\in {\mathcal {N}},}there exists someU∈N{\displaystyle U\in {\mathcal {N}}}such thatU+U⊆N.{\displaystyle U+U\subseteq N.} Continuity of addition at 0—If(X,+){\displaystyle (X,+)}is agroup(as all vector spaces are),τ{\displaystyle \tau }is a topology onX,{\displaystyle X,}andX×X{\displaystyle X\times X}is endowed with theproduct topology, then the addition mapX×X→X{\displaystyle X\times X\to X}(i.e. the map(x,y)↦x+y{\displaystyle (x,y)\mapsto x+y}) is continuous at the origin ofX×X{\displaystyle X\times X}if and only if the set ofneighborhoodsof the origin in(X,τ){\displaystyle (X,\tau )}is additive. This statement remains true if the word "neighborhood" is replaced by "open neighborhood."[5] All of the above conditions are consequently a necessary for a topology to form a vector topology. Additive sequences of sets have the particularly nice property that they define non-negative continuous real-valuedsubadditivefunctions. These functions can then be used to prove many of the basic properties of topological vector spaces and also show that a Hausdorff TVS with a countable basis of neighborhoods is metrizable. The following theorem is true more generally for commutative additivetopological groups. 
Theorem—LetU∙=(Ui)i=0∞{\displaystyle U_{\bullet }=\left(U_{i}\right)_{i=0}^{\infty }}be a collection of subsets of a vector space such that0∈Ui{\displaystyle 0\in U_{i}}andUi+1+Ui+1⊆Ui{\displaystyle U_{i+1}+U_{i+1}\subseteq U_{i}}for alli≥0.{\displaystyle i\geq 0.}For allu∈U0,{\displaystyle u\in U_{0},}letS(u):={n∙=(n1,…,nk):k≥1,ni≥0for alli,andu∈Un1+⋯+Unk}.{\displaystyle \mathbb {S} (u):=\left\{n_{\bullet }=\left(n_{1},\ldots ,n_{k}\right)~:~k\geq 1,n_{i}\geq 0{\text{ for all }}i,{\text{ and }}u\in U_{n_{1}}+\cdots +U_{n_{k}}\right\}.} Definef:X→[0,1]{\displaystyle f:X\to [0,1]}byf(x)=1{\displaystyle f(x)=1}ifx∉U0{\displaystyle x\not \in U_{0}}and otherwise letf(x):=inf{2−n1+⋯2−nk:n∙=(n1,…,nk)∈S(x)}.{\displaystyle f(x):=\inf _{}\left\{2^{-n_{1}}+\cdots 2^{-n_{k}}~:~n_{\bullet }=\left(n_{1},\ldots ,n_{k}\right)\in \mathbb {S} (x)\right\}.} Thenf{\displaystyle f}issubadditive(meaningf(x+y)≤f(x)+f(y)for allx,y∈X{\displaystyle f(x+y)\leq f(x)+f(y){\text{ for all }}x,y\in X}) andf=0{\displaystyle f=0}on⋂i≥0Ui,{\displaystyle \bigcap _{i\geq 0}U_{i},}so in particularf(0)=0.{\displaystyle f(0)=0.}If allUi{\displaystyle U_{i}}aresymmetric setsthenf(−x)=f(x){\displaystyle f(-x)=f(x)}and if allUi{\displaystyle U_{i}}are balanced thenf(sx)≤f(x){\displaystyle f(sx)\leq f(x)}for all scalarss{\displaystyle s}such that|s|≤1{\displaystyle |s|\leq 1}and allx∈X.{\displaystyle x\in X.}IfX{\displaystyle X}is a topological vector space and if allUi{\displaystyle U_{i}}are neighborhoods of the origin thenf{\displaystyle f}is continuous, where if in additionX{\displaystyle X}is Hausdorff andU∙{\displaystyle U_{\bullet }}forms a basis of balanced neighborhoods of the origin inX{\displaystyle X}thend(x,y):=f(x−y){\displaystyle d(x,y):=f(x-y)}is a metric defining the vector topology onX.{\displaystyle X.} Assume thatn∙=(n1,…,nk){\displaystyle n_{\bullet }=\left(n_{1},\ldots ,n_{k}\right)}always denotes a finite sequence of non-negative integers and use the notation:∑2−n∙:=2−n1+⋯+2−nkand∑Un∙:=Un1+⋯+Unk.{\displaystyle \sum 2^{-n_{\bullet }}:=2^{-n_{1}}+\cdots +2^{-n_{k}}\quad {\text{ and }}\quad \sum U_{n_{\bullet }}:=U_{n_{1}}+\cdots +U_{n_{k}}.} For any integersn≥0{\displaystyle n\geq 0}andd>2,{\displaystyle d>2,}Un⊇Un+1+Un+1⊇Un+1+Un+2+Un+2⊇Un+1+Un+2+⋯+Un+d+Un+d+1+Un+d+1.{\displaystyle U_{n}\supseteq U_{n+1}+U_{n+1}\supseteq U_{n+1}+U_{n+2}+U_{n+2}\supseteq U_{n+1}+U_{n+2}+\cdots +U_{n+d}+U_{n+d+1}+U_{n+d+1}.} From this it follows that ifn∙=(n1,…,nk){\displaystyle n_{\bullet }=\left(n_{1},\ldots ,n_{k}\right)}consists of distinct positive integers then∑Un∙⊆U−1+min(n∙).{\displaystyle \sum U_{n_{\bullet }}\subseteq U_{-1+\min \left(n_{\bullet }\right)}.} It will now be shown by induction onk{\displaystyle k}that ifn∙=(n1,…,nk){\displaystyle n_{\bullet }=\left(n_{1},\ldots ,n_{k}\right)}consists of non-negative integers such that∑2−n∙≤2−M{\displaystyle \sum 2^{-n_{\bullet }}\leq 2^{-M}}for some integerM≥0{\displaystyle M\geq 0}then∑Un∙⊆UM.{\displaystyle \sum U_{n_{\bullet }}\subseteq U_{M}.}This is clearly true fork=1{\displaystyle k=1}andk=2{\displaystyle k=2}so assume thatk>2,{\displaystyle k>2,}which implies that allni{\displaystyle n_{i}}are positive. 
If allni{\displaystyle n_{i}}are distinct then this step is done, and otherwise pick distinct indicesi<j{\displaystyle i<j}such thatni=nj{\displaystyle n_{i}=n_{j}}and constructm∙=(m1,…,mk−1){\displaystyle m_{\bullet }=\left(m_{1},\ldots ,m_{k-1}\right)}fromn∙{\displaystyle n_{\bullet }}by replacing eachni{\displaystyle n_{i}}withni−1{\displaystyle n_{i}-1}and deleting thejth{\displaystyle j^{\text{th}}}element ofn∙{\displaystyle n_{\bullet }}(all other elements ofn∙{\displaystyle n_{\bullet }}are transferred tom∙{\displaystyle m_{\bullet }}unchanged). Observe that∑2−n∙=∑2−m∙{\displaystyle \sum 2^{-n_{\bullet }}=\sum 2^{-m_{\bullet }}}and∑Un∙⊆∑Um∙{\displaystyle \sum U_{n_{\bullet }}\subseteq \sum U_{m_{\bullet }}}(becauseUni+Unj⊆Uni−1{\displaystyle U_{n_{i}}+U_{n_{j}}\subseteq U_{n_{i}-1}}) so by appealing to the inductive hypothesis we conclude that∑Un∙⊆∑Um∙⊆UM,{\displaystyle \sum U_{n_{\bullet }}\subseteq \sum U_{m_{\bullet }}\subseteq U_{M},}as desired. It is clear thatf(0)=0{\displaystyle f(0)=0}and that0≤f≤1{\displaystyle 0\leq f\leq 1}so to prove thatf{\displaystyle f}is subadditive, it suffices to prove thatf(x+y)≤f(x)+f(y){\displaystyle f(x+y)\leq f(x)+f(y)}whenx,y∈X{\displaystyle x,y\in X}are such thatf(x)+f(y)<1,{\displaystyle f(x)+f(y)<1,}which implies thatx,y∈U0.{\displaystyle x,y\in U_{0}.}This is an exercise. If allUi{\displaystyle U_{i}}are symmetric thenx∈∑Un∙{\displaystyle x\in \sum U_{n_{\bullet }}}if and only if−x∈∑Un∙{\displaystyle -x\in \sum U_{n_{\bullet }}}from which it follows thatf(−x)≤f(x){\displaystyle f(-x)\leq f(x)}andf(−x)≥f(x).{\displaystyle f(-x)\geq f(x).}If allUi{\displaystyle U_{i}}are balanced then the inequalityf(sx)≤f(x){\displaystyle f(sx)\leq f(x)}for all unit scalarss{\displaystyle s}such that|s|≤1{\displaystyle |s|\leq 1}is proved similarly. Becausef{\displaystyle f}is a nonnegative subadditive function satisfyingf(0)=0,{\displaystyle f(0)=0,}as described in the article onsublinear functionals,f{\displaystyle f}is uniformly continuous onX{\displaystyle X}if and only iff{\displaystyle f}is continuous at the origin. 
If allUi{\displaystyle U_{i}}are neighborhoods of the origin then for any realr>0,{\displaystyle r>0,}pick an integerM>1{\displaystyle M>1}such that2−M<r{\displaystyle 2^{-M}<r}so thatx∈UM{\displaystyle x\in U_{M}}impliesf(x)≤2−M<r.{\displaystyle f(x)\leq 2^{-M}<r.}If the set of allUi{\displaystyle U_{i}}form basis of balanced neighborhoods of the origin then it may be shown that for anyn>1,{\displaystyle n>1,}there exists some0<r≤2−n{\displaystyle 0<r\leq 2^{-n}}such thatf(x)<r{\displaystyle f(x)<r}impliesx∈Un.{\displaystyle x\in U_{n}.}◼{\displaystyle \blacksquare } IfX{\displaystyle X}is a vector space over the real or complex numbers then aparanormonX{\displaystyle X}is a G-seminorm (defined above)p:X→R{\displaystyle p:X\rightarrow \mathbb {R} }onX{\displaystyle X}that satisfies any of the following additional conditions, each of which begins with "for all sequencesx∙=(xi)i=1∞{\displaystyle x_{\bullet }=\left(x_{i}\right)_{i=1}^{\infty }}inX{\displaystyle X}and all convergent sequences of scalarss∙=(si)i=1∞{\displaystyle s_{\bullet }=\left(s_{i}\right)_{i=1}^{\infty }}":[6] A paranorm is calledtotalif in addition it satisfies: Ifp{\displaystyle p}is a paranorm on a vector spaceX{\displaystyle X}then the mapd:X×X→R{\displaystyle d:X\times X\rightarrow \mathbb {R} }defined byd(x,y):=p(x−y){\displaystyle d(x,y):=p(x-y)}is a translation-invariant pseudometric onX{\displaystyle X}that defines avector topologyonX.{\displaystyle X.}[8] Ifp{\displaystyle p}is a paranorm on a vector spaceX{\displaystyle X}then: IfX{\displaystyle X}is a vector space over the real or complex numbers then anF-seminormonX{\displaystyle X}(theF{\displaystyle F}stands forFréchet) is a real-valued mapp:X→R{\displaystyle p:X\to \mathbb {R} }with the following four properties:[11] AnF-seminorm is called anF-normif in addition it satisfies: AnF-seminorm is calledmonotoneif it satisfies: AnF-seminormed space(resp.F-normed space)[12]is a pair(X,p){\displaystyle (X,p)}consisting of a vector spaceX{\displaystyle X}and anF-seminorm (resp.F-norm)p{\displaystyle p}onX.{\displaystyle X.} If(X,p){\displaystyle (X,p)}and(Z,q){\displaystyle (Z,q)}areF-seminormed spaces then a mapf:X→Z{\displaystyle f:X\to Z}is called anisometric embedding[12]ifq(f(x)−f(y))=p(x,y)for allx,y∈X.{\displaystyle q(f(x)-f(y))=p(x,y){\text{ for all }}x,y\in X.} Every isometric embedding of oneF-seminormed space into another is atopological embedding, but the converse is not true in general.[12] EveryF-seminorm is a paranorm and every paranorm is equivalent to someF-seminorm.[7]EveryF-seminorm on a vector spaceX{\displaystyle X}is a value onX.{\displaystyle X.}In particular,p(x)=0,{\displaystyle p(x)=0,}andp(x)=p(−x){\displaystyle p(x)=p(-x)}for allx∈X.{\displaystyle x\in X.} Theorem[11]—Letp{\displaystyle p}be anF-seminorm on a vector spaceX.{\displaystyle X.}Then the mapd:X×X→R{\displaystyle d:X\times X\to \mathbb {R} }defined byd(x,y):=p(x−y){\displaystyle d(x,y):=p(x-y)}is a translation invariant pseudometric onX{\displaystyle X}that defines a vector topologyτ{\displaystyle \tau }onX.{\displaystyle X.}Ifp{\displaystyle p}is anF-norm thend{\displaystyle d}is a metric. WhenX{\displaystyle X}is endowed with this topology thenp{\displaystyle p}is a continuous map onX.{\displaystyle X.} The balanced sets{x∈X:p(x)≤r},{\displaystyle \{x\in X~:~p(x)\leq r\},}asr{\displaystyle r}ranges over the positive reals, form a neighborhood basis at the origin for this topology consisting of closed set. 
Similarly, the balanced sets{x∈X:p(x)<r},{\displaystyle \{x\in X~:~p(x)<r\},}asr{\displaystyle r}ranges over the positive reals, form a neighborhood basis at the origin for this topology consisting of open sets. Suppose thatL{\displaystyle {\mathcal {L}}}is a non-empty collection ofF-seminorms on a vector spaceX{\displaystyle X}and for any finite subsetF⊆L{\displaystyle {\mathcal {F}}\subseteq {\mathcal {L}}}and anyr>0,{\displaystyle r>0,}letUF,r:=⋂p∈F{x∈X:p(x)<r}.{\displaystyle U_{{\mathcal {F}},r}:=\bigcap _{p\in {\mathcal {F}}}\{x\in X:p(x)<r\}.} The set{UF,r:r>0,F⊆L,Ffinite}{\displaystyle \left\{U_{{\mathcal {F}},r}~:~r>0,{\mathcal {F}}\subseteq {\mathcal {L}},{\mathcal {F}}{\text{ finite }}\right\}}forms a filter base onX{\displaystyle X}that also forms a neighborhood basis at the origin for a vector topology onX{\displaystyle X}denoted byτL.{\displaystyle \tau _{\mathcal {L}}.}[12]EachUF,r{\displaystyle U_{{\mathcal {F}},r}}is abalancedandabsorbingsubset ofX.{\displaystyle X.}[12]These sets satisfy[12]UF,r/2+UF,r/2⊆UF,r.{\displaystyle U_{{\mathcal {F}},r/2}+U_{{\mathcal {F}},r/2}\subseteq U_{{\mathcal {F}},r}.} Suppose thatp∙=(pi)i=1∞{\displaystyle p_{\bullet }=\left(p_{i}\right)_{i=1}^{\infty }}is a family of non-negative subadditive functions on a vector spaceX.{\displaystyle X.} TheFréchet combination[8]ofp∙{\displaystyle p_{\bullet }}is defined to be the real-valued mapp(x):=∑i=1∞pi(x)2i[1+pi(x)].{\displaystyle p(x):=\sum _{i=1}^{\infty }{\frac {p_{i}(x)}{2^{i}\left[1+p_{i}(x)\right]}}.} Assume thatp∙=(pi)i=1∞{\displaystyle p_{\bullet }=\left(p_{i}\right)_{i=1}^{\infty }}is an increasing sequence of seminorms onX{\displaystyle X}and letp{\displaystyle p}be the Fréchet combination ofp∙.{\displaystyle p_{\bullet }.}Thenp{\displaystyle p}is anF-seminorm onX{\displaystyle X}that induces the same locally convex topology as the familyp∙{\displaystyle p_{\bullet }}of seminorms.[13] Sincep∙=(pi)i=1∞{\displaystyle p_{\bullet }=\left(p_{i}\right)_{i=1}^{\infty }}is increasing, a basis of open neighborhoods of the origin consists of all sets of the form{x∈X:pi(x)<r}{\displaystyle \left\{x\in X~:~p_{i}(x)<r\right\}}asi{\displaystyle i}ranges over all positive integers andr>0{\displaystyle r>0}ranges over all positive real numbers. Thetranslation invariantpseudometriconX{\displaystyle X}induced by thisF-seminormp{\displaystyle p}isd(x,y)=∑i=1∞12ipi(x−y)1+pi(x−y).{\displaystyle d(x,y)=\sum _{i=1}^{\infty }{\frac {1}{2^{i}}}{\frac {p_{i}(x-y)}{1+p_{i}(x-y)}}.} This metric was discovered byFréchetin his 1906 thesis for the spaces of real and complex sequences with pointwise operations.[14] If eachpi{\displaystyle p_{i}}is a paranorm then so isp{\displaystyle p}and moreover,p{\displaystyle p}induces the same topology onX{\displaystyle X}as the familyp∙{\displaystyle p_{\bullet }}of paranorms.[8]This is also true of the following paranorms onX{\displaystyle X}: The Fréchet combination can be generalized by use of a bounded remetrization function. 
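A small numerical sketch of the Fréchet combination for the classical example alluded to above, the space of real sequences with the seminorms \( p_i(x) = |x_i| \) (truncated to finitely many coordinates here; the code and its helper name are purely illustrative):

```python
def frechet_metric(x, y):
    """Frechet combination d(x, y) = sum_i 2^{-i} |x_i - y_i| / (1 + |x_i - y_i|)
    for the seminorms p_i(x) = |x_i| on (finitely truncated) real sequences."""
    return sum(
        abs(a - b) / (2 ** i * (1 + abs(a - b)))
        for i, (a, b) in enumerate(zip(x, y), start=1)
    )

x = [1.0, 2.0, 3.0, 4.0]
y = [0.0, 0.0, 0.0, 0.0]
z = [1.0, 1.0, 1.0, 1.0]

# Bounded by sum_i 2^{-i} < 1, translation invariant, and satisfies the triangle inequality.
print(frechet_metric(x, y))
print(frechet_metric([a + 5 for a in x], [b + 5 for b in y]) == frechet_metric(x, y))
print(frechet_metric(x, y) <= frechet_metric(x, z) + frechet_metric(z, y))
```

Note that this d is not absolutely homogeneous, which is consistent with the statement below that a metric comes from a norm only when it is both translation invariant and absolutely homogeneous.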
Abounded remetrization function[15]is a continuous non-negative non-decreasing mapR:[0,∞)→[0,∞){\displaystyle R:[0,\infty )\to [0,\infty )}that has a bounded range, issubadditive(meaning thatR(s+t)≤R(s)+R(t){\displaystyle R(s+t)\leq R(s)+R(t)}for alls,t≥0{\displaystyle s,t\geq 0}), and satisfiesR(s)=0{\displaystyle R(s)=0}if and only ifs=0.{\displaystyle s=0.} Examples of bounded remetrization functions includearctan⁡t,{\displaystyle \arctan t,}tanh⁡t,{\displaystyle \tanh t,}t↦min{t,1},{\displaystyle t\mapsto \min\{t,1\},}andt↦t1+t.{\displaystyle t\mapsto {\frac {t}{1+t}}.}[15]Ifd{\displaystyle d}is a pseudometric (respectively, metric) onX{\displaystyle X}andR{\displaystyle R}is a bounded remetrization function thenR∘d{\displaystyle R\circ d}is a bounded pseudometric (respectively, bounded metric) onX{\displaystyle X}that is uniformly equivalent tod.{\displaystyle d.}[15] Suppose thatp∙=(pi)i=1∞{\displaystyle p_{\bullet }=\left(p_{i}\right)_{i=1}^{\infty }}is a family of non-negativeF-seminorm on a vector spaceX,{\displaystyle X,}R{\displaystyle R}is a bounded remetrization function, andr∙=(ri)i=1∞{\displaystyle r_{\bullet }=\left(r_{i}\right)_{i=1}^{\infty }}is a sequence of positive real numbers whose sum is finite. Thenp(x):=∑i=1∞riR(pi(x)){\displaystyle p(x):=\sum _{i=1}^{\infty }r_{i}R\left(p_{i}(x)\right)}defines a boundedF-seminorm that is uniformly equivalent to thep∙.{\displaystyle p_{\bullet }.}[16]It has the property that for any netx∙=(xa)a∈A{\displaystyle x_{\bullet }=\left(x_{a}\right)_{a\in A}}inX,{\displaystyle X,}p(x∙)→0{\displaystyle p\left(x_{\bullet }\right)\to 0}if and only ifpi(x∙)→0{\displaystyle p_{i}\left(x_{\bullet }\right)\to 0}for alli.{\displaystyle i.}[16]p{\displaystyle p}is anF-norm if and only if thep∙{\displaystyle p_{\bullet }}separate points onX.{\displaystyle X.}[16] A pseudometric (resp. metric)d{\displaystyle d}is induced by a seminorm (resp. norm) on a vector spaceX{\displaystyle X}if and only ifd{\displaystyle d}is translation invariant andabsolutely homogeneous, which means that for all scalarss{\displaystyle s}and allx,y∈X,{\displaystyle x,y\in X,}in which case the function defined byp(x):=d(x,0){\displaystyle p(x):=d(x,0)}is a seminorm (resp. norm) and the pseudometric (resp. metric) induced byp{\displaystyle p}is equal tod.{\displaystyle d.} If(X,τ){\displaystyle (X,\tau )}is atopological vector space(TVS) (where note in particular thatτ{\displaystyle \tau }is assumed to be a vector topology) then the following are equivalent:[11] If(X,τ){\displaystyle (X,\tau )}is a TVS then the following are equivalent: Birkhoff–Kakutani theorem—If(X,τ){\displaystyle (X,\tau )}is a topological vector space then the following three conditions are equivalent:[17][note 1] By the Birkhoff–Kakutani theorem, it follows that there is anequivalent metricthat is translation-invariant. 
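A small sketch of the generalized combination p(x) = Σ r_i R(p_i(x)); the choices R(t) = t/(1 + t), r_i = 2^{-i} and the particular seminorms are illustrative assumptions. With exactly these choices the construction reduces to the Fréchet combination shown earlier:

    # Sketch: composing distances with a bounded remetrization function, and the
    # generalized combination p(x) = sum_i r_i * R(p_i(x)). Here R(t) = t/(1+t);
    # arctan or tanh would work equally well.
    def R(t):                        # bounded remetrization function
        return t / (1.0 + t)

    def d_euclid(x, y):
        return abs(x - y)

    def d_bounded(x, y):             # R o d is a bounded metric,
        return R(d_euclid(x, y))     # uniformly equivalent to d

    def combined(x, seminorms, weights):
        # bounded F-seminorm when the weights have a finite sum
        return sum(r * R(p(x)) for p, r in zip(seminorms, weights))

    print(d_bounded(0.0, 10.0 ** 6))   # close to, but never exceeding, 1
    seminorms = [lambda x, i=i: max(abs(t) for t in x[:i]) for i in range(1, 6)]
    weights = [2.0 ** -i for i in range(1, 6)]
    print(combined([3.0, 1.0, 4.0, 1.0, 5.0], seminorms, weights))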
If(X,τ){\displaystyle (X,\tau )}is TVS then the following are equivalent:[13] LetM{\displaystyle M}be a vector subspace of a topological vector space(X,τ).{\displaystyle (X,\tau ).} IfX{\displaystyle X}is Hausdorff locally convex TVS thenX{\displaystyle X}with thestrong topology,(X,b(X,X′)),{\displaystyle \left(X,b\left(X,X^{\prime }\right)\right),}is metrizable if and only if there exists a countable setB{\displaystyle {\mathcal {B}}}of bounded subsets ofX{\displaystyle X}such that every bounded subset ofX{\displaystyle X}is contained in some element ofB.{\displaystyle {\mathcal {B}}.}[22] Thestrong dual spaceXb′{\displaystyle X_{b}^{\prime }}of a metrizable locally convex space (such as aFréchet space[23])X{\displaystyle X}is aDF-space.[24]The strong dual of a DF-space is aFréchet space.[25]The strong dual of areflexiveFréchet space is abornological space.[24]The strong bidual (that is, thestrong dual spaceof the strong dual space) of a metrizable locally convex space is a Fréchet space.[26]IfX{\displaystyle X}is a metrizable locally convex space then its strong dualXb′{\displaystyle X_{b}^{\prime }}has one of the following properties, if and only if it has all of these properties: (1)bornological, (2)infrabarreled, (3)barreled.[26] A topological vector space isseminormableif and only if it has aconvexbounded neighborhood of the origin. Moreover, a TVS isnormableif and only if it isHausdorffand seminormable.[14]Every metrizable TVS on a finite-dimensionalvector space is a normablelocally convexcomplete TVS, beingTVS-isomorphictoEuclidean space. Consequently, any metrizable TVS that isnotnormable must be infinite dimensional. IfM{\displaystyle M}is a metrizablelocally convex TVSthat possess acountablefundamental system of bounded sets, thenM{\displaystyle M}is normable.[27] IfX{\displaystyle X}is a Hausdorfflocally convex spacethen the following are equivalent: and if this locally convex spaceX{\displaystyle X}is also metrizable, then the following may be appended to this list: In particular, if a metrizable locally convex spaceX{\displaystyle X}(such as aFréchet space) isnotnormable then itsstrong dual spaceXb′{\displaystyle X_{b}^{\prime }}is not aFréchet–Urysohn spaceand consequently, thiscompleteHausdorff locally convex spaceXb′{\displaystyle X_{b}^{\prime }}is also neither metrizable nor normable. Another consequence of this is that ifX{\displaystyle X}is areflexivelocally convexTVS whose strong dualXb′{\displaystyle X_{b}^{\prime }}is metrizable thenXb′{\displaystyle X_{b}^{\prime }}is necessarily a reflexive Fréchet space,X{\displaystyle X}is aDF-space, bothX{\displaystyle X}andXb′{\displaystyle X_{b}^{\prime }}are necessarilycompleteHausdorffultrabornologicaldistinguishedwebbed spaces, and moreover,Xb′{\displaystyle X_{b}^{\prime }}is normable if and only ifX{\displaystyle X}is normable if and only ifX{\displaystyle X}is Fréchet–Urysohn if and only ifX{\displaystyle X}is metrizable. In particular, such a spaceX{\displaystyle X}is either aBanach spaceor else it is not even a Fréchet–Urysohn space. 
Suppose that(X,d){\displaystyle (X,d)}is a pseudometric space andB⊆X.{\displaystyle B\subseteq X.}The setB{\displaystyle B}ismetrically boundedord{\displaystyle d}-boundedif there exists a real numberR>0{\displaystyle R>0}such thatd(x,y)≤R{\displaystyle d(x,y)\leq R}for allx,y∈B{\displaystyle x,y\in B}; the smallest suchR{\displaystyle R}is then called thediameterord{\displaystyle d}-diameterofB.{\displaystyle B.}[14]IfB{\displaystyle B}isboundedin a pseudometrizable TVSX{\displaystyle X}then it is metrically bounded; the converse is in general false but it is true forlocally convexmetrizable TVSs.[14] Theorem[29]—All infinite-dimensionalseparablecomplete metrizable TVS arehomeomorphic. Everytopological vector space(and more generally, atopological group) has a canonicaluniform structure, induced by its topology, which allows the notions of completeness and uniform continuity to be applied to it. IfX{\displaystyle X}is a metrizable TVS andd{\displaystyle d}is a metric that definesX{\displaystyle X}'s topology, then its possible thatX{\displaystyle X}is complete as a TVS (i.e. relative to its uniformity) but the metricd{\displaystyle d}isnotacomplete metric(such metrics exist even forX=R{\displaystyle X=\mathbb {R} }). Thus, ifX{\displaystyle X}is a TVS whose topology is induced by a pseudometricd,{\displaystyle d,}then the notion of completeness ofX{\displaystyle X}(as a TVS) and the notion of completeness of the pseudometric space(X,d){\displaystyle (X,d)}are not always equivalent. The next theorem gives a condition for when they are equivalent: Theorem—IfX{\displaystyle X}is a pseudometrizable TVS whose topology is induced by atranslation invariantpseudometricd,{\displaystyle d,}thend{\displaystyle d}is a complete pseudometric onX{\displaystyle X}if and only ifX{\displaystyle X}is complete as a TVS.[36] Theorem[37][38](Klee)—Letd{\displaystyle d}beany[note 2]metric on a vector spaceX{\displaystyle X}such that the topologyτ{\displaystyle \tau }induced byd{\displaystyle d}onX{\displaystyle X}makes(X,τ){\displaystyle (X,\tau )}into a topological vector space. If(X,d){\displaystyle (X,d)}is a complete metric space then(X,τ){\displaystyle (X,\tau )}is a complete-TVS. 
Theorem—IfX{\displaystyle X}is a TVS whose topology is induced by a paranormp,{\displaystyle p,}thenX{\displaystyle X}is complete if and only if for every sequence(xi)i=1∞{\displaystyle \left(x_{i}\right)_{i=1}^{\infty }}inX,{\displaystyle X,}if∑i=1∞p(xi)<∞{\displaystyle \sum _{i=1}^{\infty }p\left(x_{i}\right)<\infty }then∑i=1∞xi{\displaystyle \sum _{i=1}^{\infty }x_{i}}converges inX.{\displaystyle X.}[39] IfM{\displaystyle M}is a closed vector subspace of a complete pseudometrizable TVSX,{\displaystyle X,}then the quotient spaceX/M{\displaystyle X/M}is complete.[40]IfM{\displaystyle M}is acompletevector subspace of a metrizable TVSX{\displaystyle X}and if the quotient spaceX/M{\displaystyle X/M}is complete then so isX.{\displaystyle X.}[40]IfX{\displaystyle X}is not complete thenM:=X{\displaystyle M:=X}is a closed, but not complete, vector subspace ofX.{\displaystyle X.} ABaireseparabletopological groupis metrizable if and only if it is cosmic.[23] Banach-Saks theorem[45]—If(xn)n=1∞{\displaystyle \left(x_{n}\right)_{n=1}^{\infty }}is a sequence in alocally convexmetrizable TVS(X,τ){\displaystyle (X,\tau )}that convergesweaklyto somex∈X,{\displaystyle x\in X,}then there exists a sequencey∙=(yi)i=1∞{\displaystyle y_{\bullet }=\left(y_{i}\right)_{i=1}^{\infty }}inX{\displaystyle X}such thaty∙→x{\displaystyle y_{\bullet }\to x}in(X,τ){\displaystyle (X,\tau )}and eachyi{\displaystyle y_{i}}is a convex combination of finitely manyxn.{\displaystyle x_{n}.} Mackey's countability condition[14]—Suppose thatX{\displaystyle X}is a locally convex metrizable TVS and that(Bi)i=1∞{\displaystyle \left(B_{i}\right)_{i=1}^{\infty }}is a countable sequence of bounded subsets ofX.{\displaystyle X.}Then there exists a bounded subsetB{\displaystyle B}ofX{\displaystyle X}and a sequence(ri)i=1∞{\displaystyle \left(r_{i}\right)_{i=1}^{\infty }}of positive real numbers such thatBi⊆riB{\displaystyle B_{i}\subseteq r_{i}B}for alli.{\displaystyle i.} Generalized series As describedin this article's section on generalized series, for anyI{\displaystyle I}-indexed family(ri)i∈I{\displaystyle \left(r_{i}\right)_{i\in I}}of vectors from a TVSX,{\displaystyle X,}it is possible to define their sum∑i∈Iri{\displaystyle \textstyle \sum \limits _{i\in I}r_{i}}as the limit of thenetof finite partial sumsF∈FiniteSubsets⁡(I)↦∑i∈Fri{\displaystyle F\in \operatorname {FiniteSubsets} (I)\mapsto \textstyle \sum \limits _{i\in F}r_{i}}where the domainFiniteSubsets⁡(I){\displaystyle \operatorname {FiniteSubsets} (I)}isdirectedby⊆.{\displaystyle \,\subseteq .\,}IfI=N{\displaystyle I=\mathbb {N} }andX=R,{\displaystyle X=\mathbb {R} ,}for instance, then the generalized series∑i∈Nri{\displaystyle \textstyle \sum \limits _{i\in \mathbb {N} }r_{i}}converges if and only if∑i=1∞ri{\displaystyle \textstyle \sum \limits _{i=1}^{\infty }r_{i}}converges unconditionallyin the usual sense (which for real numbers,is equivalenttoabsolute convergence). If a generalized series∑i∈Iri{\displaystyle \textstyle \sum \limits _{i\in I}r_{i}}converges in a metrizable TVS, then the set{i∈I:ri≠0}{\displaystyle \left\{i\in I:r_{i}\neq 0\right\}}is necessarilycountable(that is, either finite orcountably infinite);[proof 1]in other words, all but at most countably manyri{\displaystyle r_{i}}will be zero and so this generalized series∑i∈Iri=∑ri≠0i∈Iri{\displaystyle \textstyle \sum \limits _{i\in I}r_{i}~=~\textstyle \sum \limits _{\stackrel {i\in I}{r_{i}\neq 0}}r_{i}}is actually a sum of at most countably many non-zero terms.
IfX{\displaystyle X}is a pseudometrizable TVS andA{\displaystyle A}is a linear map that maps bounded subsets ofX{\displaystyle X}to bounded subsets ofY,{\displaystyle Y,}thenA{\displaystyle A}is continuous.[14]Discontinuous linear functionals exist on any infinite-dimensional pseudometrizable TVS.[46]Thus, a pseudometrizable TVS is finite-dimensional if and only if its continuous dual space is equal to itsalgebraic dual space.[46] IfF:X→Y{\displaystyle F:X\to Y}is a linear map between TVSs andX{\displaystyle X}is metrizable then the following are equivalent: Open and almost open maps A vector subspaceM{\displaystyle M}of a TVSX{\displaystyle X}hasthe extension propertyif any continuous linear functional onM{\displaystyle M}can be extended to a continuous linear functional onX.{\displaystyle X.}[22]Say that a TVSX{\displaystyle X}has theHahn-Banachextension property(HBEP) if every vector subspace ofX{\displaystyle X}has the extension property.[22] TheHahn-Banach theoremguarantees that every Hausdorff locally convex space has the HBEP. For complete metrizable TVSs there is a converse: Theorem(Kalton)—Every complete metrizable TVS with the Hahn-Banach extension property is locally convex.[22] If a vector spaceX{\displaystyle X}has uncountable dimension and if we endow it with thefinest vector topologythen this is a TVS with the HBEP that is neither locally convex nor metrizable.[22]
https://en.wikipedia.org/wiki/Paranorm
Shipping portalsare websites which allowshippers, consignees and forwarders access to multiple carriers through a single site. Portals provide bookings, track and trace, and documentation, and allow users to communicate with their carriers. In many respects, ashippingportal is to the maritime industry what aglobal distribution system(GDS) is to theairlineindustry. Shipping portals first emerged in 2000-2001 whenCargoSmart,GT NexusandINTTRA Inc.all launched their trial phases.[citation needed] Membership across the three main shipping portals comprises 30 carriers of varying sizes, but the majority are amongst the world's largest, so most of the industry'sTEUcapacity is represented.[1] No portals can access allshipping lines. With around 250container shippinglines world-wide, many carriers are left out. Sailing schedule search engines have emerged to allow users to find an appropriate service.[citation needed]
https://en.wikipedia.org/wiki/Shipping_portal
The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations, published in 2004, is a book written byJames Surowieckiabout the aggregation of information in groups, resulting in decisions that, he argues, are often better than could have been made by any single member of the group. The book presents numerous case studies andanecdotesto illustrate its argument, and touches on several fields, primarilyeconomicsandpsychology. The opening anecdote relatesFrancis Galton's surprise that the crowd at a county fair accurately guessed the weight of anoxwhen the median of their individual guesses was taken (the median was closer to the ox's true butchered weight than the estimates of most crowd members).[1][2] The book relates to diverse collections of independently deciding individuals, rather thancrowd psychologyas traditionally understood. Its central thesis, that a diverse collection of independently deciding individuals is likely to make certain types of decisions and predictions better than individuals or even experts, draws many parallels with statisticalsampling; however, there is little overt discussion of statistics in the book. Its title is an allusion toCharles Mackay'sExtraordinary Popular Delusions and the Madness of Crowds,published in 1841.[3] Surowiecki breaks down the advantages he sees in disorganized decisions into three main types, which he classifies as Not all crowds (groups) are wise. Consider, for example, mobs or crazed investors in astock market bubble. According to Surowiecki, these key criteria separate wise crowds from irrational ones: Based on Surowiecki's book, Oinas-Kukkonen[4]captures the wisdom of crowds approach with the following eight conjectures: Surowiecki studies situations (such asrational bubbles) in which the crowd produces very bad judgment, and argues that in these types of situations their cognition or cooperation failed because (in one way or another) the members of the crowd were too conscious of the opinions of others and began to emulate each other and conform rather than think differently. Although he gives experimental details of crowds collectively swayed by a persuasive speaker, he says that the main reason that groups of people intellectually conform is that the system for making decisions has a systemic flaw. Causes and detailed case histories of such failures include: TheOffice of the Director of National Intelligenceand theCIAhave created aWikipedia-style information sharing network calledIntellipediathat will help the free flow of information to prevent such failures again. At the 2005O'ReillyEmerging TechnologyConference Surowiecki presented a session entitledIndependent Individuals and Wise Crowds, orIs It Possible to Be Too Connected?[6] The question for all of us is, how can you have interaction withoutinformation cascades, without losing the independence that's such a key factor in group intelligence? He recommends: Tim O'Reilly[7]and others also discuss the success ofGoogle,wikis,blogging, andWeb 2.0in the context of the wisdom of crowds. Surowiecki is a strong advocate of the benefits of decision markets and regrets the failure ofDARPA's controversialPolicy Analysis Marketto get off the ground. He points to the success of public and internal corporate markets as evidence that a collection of people with varying points of view but the same motivation (to make a good guess) can produce an accurate aggregate prediction. 
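The ox-weighing anecdote can be mimicked with a short simulation; the noise model and numbers below are synthetic and only illustrate why the median of many independent, individually noisy guesses tends to beat most of the individual guesses:

    # Illustrative simulation (synthetic data, not Galton's): many independent,
    # individually noisy guesses of an ox's weight; the crowd's median lands
    # closer to the true value than most individual guesses do.
    import random
    import statistics

    random.seed(42)
    true_weight = 1198                       # pounds, the figure usually quoted for Galton's ox
    guesses = [true_weight + random.gauss(0, 80) + random.uniform(-40, 40)
               for _ in range(800)]

    crowd_estimate = statistics.median(guesses)
    crowd_error = abs(crowd_estimate - true_weight)
    individual_errors = [abs(g - true_weight) for g in guesses]
    beaten = sum(err > crowd_error for err in individual_errors)

    print(f"median of crowd: {crowd_estimate:.0f} lb (error {crowd_error:.1f} lb)")
    print(f"crowd median beats {beaten} of {len(guesses)} individual guesses")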
According to Surowiecki, the aggregate predictions have been shown to be more reliable than the output of anythink tank. He advocates extensions of the existing futures markets even into areas such asterroristactivity and prediction markets within companies. To illustrate this thesis, he says that his publisher can publish a more compelling output by relying on individual authors under one-off contracts bringing book ideas to them. In this way, they are able to tap into the wisdom of a much larger crowd than would be possible with an in-house writing team. Will Huttonhas argued that Surowiecki's analysis applies to value judgments as well as factual issues, with crowd decisions that "emerge of our own aggregated free will [being] astonishingly... decent". He concludes that "There's no better case for pluralism, diversity and democracy, along with a genuinely independent press."[8] Applications of the wisdom-of-crowds effect exist in three general categories:Prediction markets,Delphi methods, and extensions of thetraditional opinion poll. The most common application is the prediction market, a speculative or betting market created to make verifiable predictions. Surowiecki discusses the success of prediction markets. Similar toDelphi methodsbut unlikeopinion polls, prediction (information) markets ask questions like, "Who do you think will win the election?" and predict outcomes rather well. Answers to the question, "Who will you vote for?" are not as predictive.[9] Assets are cash values tied to specific outcomes (e.g., Candidate X will win the election) or parameters (e.g., Next quarter's revenue). The current market prices are interpreted as predictions of the probability of the event or the expected value of the parameter.Betfairis the world's biggest prediction exchange, with around $28 billion traded in 2007.NewsFuturesis an international prediction market that generates consensus probabilities for news events.Intrade.com, which operated a person to person prediction market based in Dublin Ireland achieved very high media attention in 2012 related to the US Presidential Elections, with more than 1.5 million search references to Intrade and Intrade data. Several companies now offer enterprise class prediction marketplaces to predict project completion dates, sales, or the market potential for new ideas.[citation needed]A number of Web-based quasi-prediction marketplace companies have sprung up to offer predictions primarily on sporting events and stock markets but also on other topics. The principle of the prediction market is also used inproject management softwareto let team members predict a project's "real" deadline and budget. The Delphi method is a systematic, interactiveforecastingmethod which relies on a panel of independent experts. The carefully selected experts answer questionnaires in two or more rounds. After each round, a facilitator provides an anonymous summary of the experts' forecasts from the previous round as well as the reasons they provided for their judgments. Thus, participants are encouraged to revise their earlier answers in light of the replies of other members of the group. It is believed that during this process the range of the answers will decrease and the group will converge towards the "correct" answer. Many of the consensus forecasts have proven to be more accurate than forecasts made by individuals. 
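Reading a market price as a probability is simple arithmetic; in this hedged sketch the contract price and the trader's own estimate are invented:

    # Hedged arithmetic sketch (numbers invented): a binary prediction-market
    # contract pays 1.00 if the event happens and 0 otherwise, so its trading
    # price is read directly as the crowd's implied probability of the event.
    def implied_probability(price, payout=1.0):
        return price / payout

    def expected_profit(price, your_probability, payout=1.0):
        # expected profit per contract if your own probability estimate is right
        return your_probability * payout - price

    price = 0.62                        # contract trading at 62 cents
    print(implied_probability(price))   # crowd consensus: about a 62% chance
    print(expected_profit(price, your_probability=0.70))   # 0.08 expected edge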
Designed as an optimized method for unleashing the wisdom of crowds, this approach implements real-time feedback loops around synchronous groups of users with the goal of achieving more accurate insights from fewer numbers of users. Human Swarming (sometimes referred to as Social Swarming) is modeled after biological processes in birds, fish, and insects, and is enabled among networked users by using mediating software such as theUNUcollective intelligence platform. As published by Rosenberg (2015), such real-time control systems enable groups of human participants to behave as a unifiedcollective intelligence.[10]When logged into the UNU platform, for example, groups of distributed users can collectively answer questions, generate ideas, and make predictions as a singular emergent entity.[11][12]Early testing shows that human swarms can out-predict individuals across a variety of real-world projections.[13][14] Hugo-winningwriterJohn Brunner's 1975science fictionnovelThe Shockwave Riderincludes an elaborate planet-wide information futures andbetting poolcalled "Delphi" based on the Delphi method. IllusionistDerren Brownclaimed to use the 'Wisdom of Crowds' concept to explain how he correctly predicted theUK National Lotteryresults in September 2009. His explanation was met with criticism on-line, by people who argued that the concept was misapplied.[15]The methodology employed was too flawed; the sample of people could not have been totally objective and free in thought, because they were gathered multiple times and socialised with each other too much; a condition Surowiecki tells us is corrosive to pure independence and the diversity of mind required (Surowiecki 2004:38). Groups thus fall intogroupthinkwhere they increasingly make decisions based on influence of each other and are thuslessaccurate. However, other commentators have suggested that, given the entertainment nature of the show, Brown's misapplication of the theory may have been a deliberate smokescreen to conceal his true method.[16][17] This was also shown in the television series East of Eden where a social network of roughly 10,000 individuals came up with ideas to stop missiles in a very short span of time.[citation needed] Wisdom of Crowdswould have a significant influence on the naming of the crowdsourcing creative companyTongal, which is an anagram for Galton, the last name of the social-scientist highlighted in the introduction to Surowiecki's book.Francis Galtonrecognized the ability of a crowd's median weight-guesses for oxen to exceed the accuracy of experts.[18] In his bookEmbracing the Wide Sky,Daniel Tammetfinds fault with this notion. Tammet points out the potential for problems in systems which have poorly defined means of pooling knowledge: Subject matter experts can be overruled and even wrongly punished by less knowledgeable persons in crowd sourced systems, citing a case of this on Wikipedia. Furthermore, Tammet mentions the assessment of theaccuracy of Wikipediaas described in a study mentioned inNaturein 2005, outlining several flaws in the study's methodology which included that the study made no distinction between minor errors and large errors. Tammet also cites theKasparov versus the World, an online competition that pitted the brainpower of tens of thousands of online chess players choosing moves in a match againstGarry Kasparov, which was won by Kasparov, not the "crowd". Although Kasparov did say, "It is the greatest game in the history of chess. 
The sheer number of ideas, the complexity, and the contribution it has made to chess make it the most important game ever played." In his bookYou Are Not a Gadget,Jaron Lanierargues that crowd wisdom is best suited for problems that involve optimization, but ill-suited for problems that require creativity or innovation. In the online articleDigital Maoism, Lanier argues that the collective is more likely to be smart only when Lanier argues that only under those circumstances can a collective be smarter than a person. If any of these conditions are broken, the collective becomes unreliable or worse. Iain Couzin, a professor in Princeton's Department of Ecology and Evolutionary Biology, and Albert Kao, his student, in a 2014article, in the journal Proceedings of the Royal Society, argue that "the conventional view of the wisdom of crowds may not be informative in complex and realistic environments, and that being in small groups can maximize decision accuracy across many contexts." By "small groups," Couzin and Kao mean fewer than a dozen people. They conclude and say that “the decisions of very large groups may be highly accurate when the information used is independently sampled, but they are particularly susceptible to the negative effects of correlated information, even when only a minority of the group uses such information.”
https://en.wikipedia.org/wiki/The_Wisdom_of_Crowds
Age of candidacyis the minimum age at which a person canlegallyhold certain elected government offices. In many cases, it also determines the age at which a person may beeligible to standfor an election or be grantedballot access. International electoral standards which are defined in the International Public Human Rights Law, allow restricting candidacy on the basis of age. The interpretation of the International Covenant for Civil and Political Rights offered by the United Nations Human Rights Committee in the General Comment 25 states "Any conditions which apply to the exercise of the rights protected by article 25 (of the ICCPR) should be based on objective and reasonable criteria. For example, it may be reasonable to require a higher age for election or appointment to particular offices than for exercising the right to vote, which should be available to every adult citizen."[1] The first known example of a law enforcing age of candidacy was theLex Villia Annalis, a Roman law enacted in 180 BCE which set the minimum ages for senatorialmagistrates.[206] InAustraliaa person must be aged 18 or over to stand for election to public office at federal, state or local government level. Prior to 1973, the age of candidacy for the federal parliament was 21.[207] The youngest ever member of theHouse of Representativeswas 20-year-oldWyatt Roy, elected in the 2010 federal election. InAustria, a person must be 18 years of age or older to stand in elections to theEuropean ParliamentorNational Council.[16]The Diets of regionalLänderare able to set a minimum age lower than 18 for to be in the polls in elections to the Diet itself as well as to municipal councils in the Land.[208]In presidential elections the candidacy age is 35. Any Belgian who has reached the age of 18 years can stand for election for theChamber of Representatives, can become a member of theSenate, or can be elected in one of the regional parliaments.[209]This is regulated in theConstitution(Art. 64) and in the Special Law on the Reform of the Institutions. According to theConstitution of Belize, a person must be at least 18 years old to be elected as a member of theHouse of Representativesand must be at least 30 to be Speaker of the House. A person must be at least 18 years old to be appointed to theSenateand must be at least 30 to be president or Vice-President of the Senate. As only members of the House of Representatives are eligible to be appointed prime minister, thePrime Ministermust be at least 18 years old. A person must also be at least 18 years old to be elected to a village council.[28] TheBrazilianConstitution (Article 14, Section 3 (VI)) defines 35 years as the minimum age for someone to be elected president, Vice-President or Senator; 30 years for state Governor or Vice-Governor; 21 for Federal or State Deputy, Mayor or Vice-Mayor; and 18 for city Council member.[33] InCanada,the constitutiondoes not outline any age requirements to run for elected office, simply stating "Every citizen of Canada has the right to vote in an election of the members of the House of Commons or of a legislative assembly and to be qualified for membership therein."[210]However under the currentElections CanadaAct to be eligible to run for elected office (municipal, provincial, federal) one must be a minimum of 18 years or older on the day of the election.[211]Prior to 1970, the age requirement was 21 along with the voting age. 
To be appointed to theSenate(Upper House), one must be at least 30 years of age, under 75 years of age, must possess land worth at least $4,000 in the province for which they are appointed, and must own real and personal property worth at least $4,000, above their debts and liabilities.[212] In the province of Ontario,Sam Oosterhoff, a member of theProgressive Conservative Party of Ontario, was first elected at the age of 19 in a November 2016 by-election, the youngest Ontario MPP to ever be elected.[213] Pierre-Luc Dusseault(born May 31, 1991) is a Canadian politician who was elected to the House of Commons of Canada in the 2011 federal election at the age of 19, becoming the youngest Member of Parliament in the country's history. He was sworn into office two days after his 20th birthday. He was re-elected in 2015 but lost his seat in the 2019 Canadian federal election.[214] Article 36 of the 2016Constitution of the Central African Republicrequires that candidates forPresidentmust "be aged thirty-five (35) years at least [on] the day of the deposit of the dossier of the candidature".[40] InChilethe minimum age required to be electedPresident of the Republicis 35 years on the day of the election. Before the 2005 reforms the requirement was 40 years, and from 1925 to 1981 it was 30 years. Forsenatorsit is 35 years (between 1981 and 2005 it was 40 years) and fordeputiesit is 21 years (between 1925 and 1970 it was 35 years).[215] InChinathe minimum age to be elected as president or vice-president is 45.[216]All citizens who have reached the age of 18 have the right to vote and stand for election.[217] InCyprusthe minimum age to be elected president is 35 years. The minimum age to run for theHouse of Representativeswas 25 years until theConstitutionwas amended in 2019 to lower the limit to 21.[218] In theCzech Republic, a person must be at least 18-years-oldto be electedinlocal elections. A person must be at least 21 years old to be elected to thelower houseof theCzech Parliamentor to theEuropean Parliamentand 40 years old to be a member of the upper house (Senate) of the Parliament[53]or thePresident of the Czech Republic. InDenmark, any adult 18 years of age or older can become a candidate and be elected in any public election. InEstonia, any citizen 18 years of age or oldercan be electedinlocal elections, and 21 years or older inparliamentary elections. The minimum age for thePresident of Estoniais 40.[65] InFrance, any citizen 18 years of age or older can be elected to thelower house of Parliament, and 24 years or older for theSenate. The minimum age for thePresident of Franceis 18.[citation needed] InGermanya citizen must be 18 or overto be electedat the national level, like theChancellor, and this age to be elected at the regional or local level. A person must be 40 or over to bePresident. InGreece, those aged 25 years old and over who hold Greekcitizenshipare eligible to stand and be elected to theHellenic Parliament.[76]All over 40 years old are eligible to stand for presidency. 
In Hong Kong a person must be at least 21 to be candidate in a district council or Legislative Council election.[219][220]A person must be at least 40 to be candidate in theChief Executiveelection, and also at least 40 to be candidate in the election for thePresident of the Legislative Councilfrom among the members of the Legislative Council.[221] For the office ofPresident, any Icelandic citizen who has reached the age of 35 and fulfills therequirementnecessary to vote in elections to theAlthingis eligible to be elected president.[222] InIndiaa person must be at least: Criticism has been on the rise to decrease the age of candidacy in India. Young India Foundation has been working on a campaign to decrease the age of candidacy in India forMPsandMLAsto better reflect the large young demographic of India.[223] InIndonesiaa person must be at least: InIsraelone must be at least 21 to become a member of theKnesset(Basic Law: The Knessetsection 6(a)) or amunicipality.[citation needed]When thePrime Ministerwas directly elected, one must have been a member of the Knesset who is at least 30 to be a candidate for prime minister.[citation needed]Every Israeli Citizen (including minors) can be appointed as aGovernmentMinister, or elected asPresident of Israel, but the latter role is mostly ceremonial and elected by the Parliament.[citation needed] InItaly, a person must be at least 50 to be President of the Republic, 40 to be aSenator, and 25 to be aDeputy, as specified in the 1947Constitution of Italy. 18 years of age is sufficient, however, to be elected member of the Council of Regions, Provinces, and Municipalities (Communes). InIrana person must be at least 21 years old to run for president.[87] The Iraqi constitution states that a person must be at least 40 years old to run for president[88]and 35 years old to be prime minister.[89]Until 2019, the electoral law set the age limit at 30 years old for candidates to run for the Council of Representatives.[224]However, the new Iraqi Council of Representatives Election Law (passed in 2019, yet to be enacted) lowered the age limit to 28.[225] The 1937Constitution of Irelandrequires thePresidentto be at least 35 and members of theOireachtas(legislature) to be 21.[91][92]Members of the European Parliamentfor Ireland must also be 21.[92][93]Members oflocal authoritiesmust be 18, reduced from 21 in 1973.[92][94]The 1922–1937Constitution of the Irish Free StaterequiredTDs(members of theDáil, lower house) to be 21,[226]whereasSenatorshad to be 35 (reduced to 30 in 1928).[95]At the1987 general election, theHigh Courtruled that a candidate (Hugh Hall) was eligible who reached the minimum age after the date of nomination but before the date of election.[227]TheThirty-fifth Amendment of the Constitution Bill 2015proposed to lower the presidential age limit to 21.[228]However, this proposal was rejected by 73% of the voters. InJapana person must be at least:[99] InLithuaniaa person must be at least: In Luxembourg a person must be at least 18-years-old to stand as a candidate to be a member of theChamber of Deputies, the country'sunicameralnational legislature.[114] In Malaysia a citizen shall be over 18 years of age to become a candidate and be elected to theDewan RakyatandDewan Undangan Negeri, and a person shall be over 30 to be theSenatorby constitution. InMexico, a person must be at least 35 to be president, 25 to be a senator, or 21 to be a Congressional Deputy, as specified in the1917 Constitution of Mexico. 
In theNetherlands, any adult 18 years of age or older can become elected in any public election. To be a candidate the person has to reach this age during the time for which the elections are held. InNew Zealandthe minimum age to bePrime Minister of New Zealandis 18 years old. Citizens and permanent residents who are enrolled as an elector are eligible to be a candidate for election as aMember of Parliament.[citation needed] InNigeria, a person must be at least 35 years of age to be electedPresidentorVice President, 35 to be a senator, 30 to be a State Governor, and 25 to be a Representative in parliament or Member of the States' House of Assembly.[229] InNorth Korea, any person eligible to vote in elections to theSupreme People's Assemblyis also eligible to stand for candidacy. The age for both voting and candidacy is 17.[230] InNorway, any adult, aged 18 or over within the calendar year, can become a candidate and be elected in any public election. Palestinian parliamentary candidates must be at least 28 years old, while the presidential candidates must be at least 40 years old.[231] InPakistan, a person must be at least 45 years old to bePresident. A person must be at least 25 years old to be a member of the provincial assembly or national assembly.[232] In Russia a person must be at least 35 to run for president.[158] InSingaporea person must be at least 45 years old to run for president.[238]21 year-olds can stand in parliamentary elections. Section 47, Clause 1 of the 1996 Constitution of South Africa states that "Every citizen who is qualified to vote for the National Assembly is eligible to be a member of the Assembly", defaulting to Section 46 which "provides for a minimum voting age of 18 years" in National Assembly elections; Sections 106 and 105 provide the same for provincial legislatures. [173] Spainhas two legislative chambers of Parliament, a lower house and an upper house. These are theCongress of Deputies(lower house) and theSenate of Spain(upper house) respectively. The minimum age requirement to stand and to be elected to either house is 18 years of age.[174] InSweden, any citizen at least 18 years old, who resides, or who has resided in the realm can be elected to parliament.[240]Citizens of Sweden, the European Union, Norway or Iceland aged 18 and over may be elected to county or municipal council. Citizens of other countries may also be elected to council, provided they have resided in the realm for at least three years.[241] InSwitzerland, any citizen aged 18 or over can become a candidate and be elected in any federal election. In theRepublic of China(commonly known as Taiwan), the minimum age of candidacy is 23, unless otherwise specified in the Constitution or any relevant laws.[242]The Civil Servants Election and Recall Act specifies that candidates for township, city, and indigenous district chiefs must be at least 26, and candidates for municipality, county, and city governors must be at least 30.[243]The minimum age to be elected as president or vice-president is 40.[244] The14th Dalai Lamawas enthroned at the age of 4, and none ofhis predecessorshave been enthroned before age 4. The coming of age for the Dalai Lama is 18, when responsibilities are assumed. The1876 constitutionset the age for parliamentary elections as 30. This remained unchanged until 13 October 2006, when it was lowered to 25 through a constitutional amendment. In 2017, it was further lowered to 18, the same as thevoting age.[245]In presidential elections the candidacy age is 40. 
In theUnited Kingdom, a person must be aged 18 or over to stand inelectionsto all parliaments, assemblies, and councils within the UK,devolved, or local level. This age requirement also applies in elections to any individual elective public office; the main example is that of anelected mayor, whether ofLondonor alocal authority. There are no higher age requirements for particular positions in public office. Candidates are required to be aged 18 on both the day of nomination and the day of the poll.[citation needed] Previously, the requirement was that candidates be 21 years old. During the early 2000s, theBritish Youth Counciland other groups successfully campaigned to lower age of candidacy requirements in the United Kingdom.[246]The age of candidacy was reduced from 21 to 18 inEngland,WalesandScotlandon 1 January 2007,[247]when section 17 of theElectoral Administration Act 2006entered into force.[248] In theUnited States, a person must be aged 35 or over to serve as president. To be a senator, a person must be aged 30 or over. To be a Representative, a person must be aged 25 or older. This is specified in theU.S. Constitution. Most states in the U.S. also have age requirements for the offices of Governor, State Senator, and State Representative.[249]Some states have a minimum age requirement to hold any elected office (usually 21 or 18). Manyyouth rightsgroups view current age of candidacy requirements as unjustifiedage discrimination.[250]Occasionally people who are younger than the minimum age will run for an office in protest of the requirement or because they do not know that the requirement exists. On extremely rare occasions, young people have been elected to offices they do not qualify for and have been deemed ineligible to assume the office. In 1872,Victoria Woodhullran for President of the United States, although according to the Constitution she would have been too young to be President if elected.[251] In 1934,Rush Holtof West Virginia was elected to theSenate of the United Statesat the age of 29. Since theU.S. Constitutionrequires senators to be at least 30, Holt was forced to wait until his 30th birthday, six months after the start of the session, before being sworn in.[252] In 1954,Richard Fultonwon election to theTennessee Senate. Shortly after being sworn in, Fulton was ousted from office because he was 27 years old at the time. TheTennessee State Constitutionrequired that senators be at least 30.[253]Rather than hold a new election, the previous incumbent,Clifford Allen, was allowed to resume his office for another term. Fulton went on to win the next State Senate election in 1956 and was later elected to theU.S. House of Representativeswhere he served for 10 years. In 1964,Congressman Jed Johnson Jr.of Oklahoma was elected to the89th Congressin the 1964 election while still aged 24 years. However, he becameeligiblefor the House after turning 25 on his birthday, 27 December 1964, seven days before his swearing in, making him the youngestlegallyelected and seated member of the United States Congress ever.[254] In South Carolina, two Senators aged 24 were elected, but were too young according to the State Constitution: Mike Laughlin in 1969 andBryan Dorn(later a U.S. congressman) in 1941. They were seated anyway.[255] On several occasions, theSocialist Workers Party (USA)has nominated candidates too young to qualify for the offices they were running for. In 1972,Linda Jennessran as the SWP presidential candidate, although she was 31 at the time. Since the U.S. 
Constitution requires that the President and Vice President be at least 35 years old, Jenness was not able to receiveballot accessin several states in which she otherwise qualified.[256]Despite this handicap, Jenness still received 83,380 votes.[257]In 2004, the SWP nominatedArrin Hawkinsas the party's vice-presidential candidate, although she was 28 at the time. Hawkins was also unable to receive ballot access in several states due to her age.[258] In the United States, many groups have attempted to lower age of candidacy requirements in various states. In 1994,South Dakotavoters rejected a ballot measure that would have lowered the age requirements to serve as a State Senator or State Representative from 25 to 18. In 1998, however, they approved a similarballotmeasure that reduced the age requirements for those offices from 25 to 21.[259]In 2002,Oregonvoters rejected a ballot measure that would have reduced the age requirement to serve as a State Representative from 21 to 18. InVenezuela, a person must be at least 30 to bePresidentorVice President,[260]21 to be a deputy for theNational Assembly[261]and 25 to be the Governor of astate.[262]
https://en.wikipedia.org/wiki/Age_of_candidacy
Astructure gauge, also called theminimum structure outline, is a diagram or physical structure that sets limits to the extent that bridges, tunnels and other infrastructure can encroach on rail vehicles. It specifies the height and width of station platforms,tunnelsandbridges, and the width of the doors that allow access to awarehousefrom arail siding. Specifications may include the minimum distance from rail vehicles torailway platforms, buildings, lineside electrical equipment cabinets,signallingequipment,third railsor supports foroverhead lines.[1] A related but separate gauge is theloading gauge: a diagram or physical structure that defines the maximum height and width dimensions inrailwayvehicles and their loads. The difference between these two gauges is called theclearance. The specified amount of clearance makes allowance forwobblingof rail vehicles at speed or the shifting of vehicles on curves; consequently, in some circumstances a train may be permitted to go past a restricted clearance at very slow speed. The term can also be applied to the minimum size of roadtunnels, the space beneathoverpassesand the space within thesuperstructureofbridges, as well asdoorsintoautomobile repair shops,bus garages,filling stations,residential garages,multi-storey car parks,overhangsatdrive-throughsandwarehouses.[citation needed] Eurocode 1: Actions on structureshas a definition of "physical clearance" between roadway surface and the underside of bridge element. The code also defines the clearance that is shorter than the physical clearance to account forsag curves, bridgedeflectionand expected settlements with a recommendation of minimum clearance of 5 metres (16 ft 5 in).[2]In UK, the "standard minimum clearance" for structures over public highways is 16 feet 6 inches (5.03 m).[3]In United States, the "minimum vertical clearance" of overpasses onInterstate Highway Systemis 16 feet (4.9 m).[4] This rail-transport related article is astub. You can help Wikipedia byexpanding it.
https://en.wikipedia.org/wiki/Structure_gauge
Word playorwordplay[1](also:play-on-words) is aliterary techniqueand a form ofwitin which words used become the main subject of the work, primarily for the purpose of intended effect oramusement. Examples of word play includepuns, phonetic mix-ups such asspoonerisms, obscure words and meanings, cleverrhetoricalexcursions, oddly formed sentences,double entendres, and telling character names (such as in the playThe Importance of Being Earnest,Ernestbeing agiven namethat sounds exactly like the adjectiveearnest). Word play is quite common inoral culturesas a method of reinforcing meaning. Examples of text-based (orthographic) word play are found in languages with or without alphabet-based scripts, such ashomophonic puns in Mandarin Chinese. Most writers engage in word play to some extent, but certain writers are particularly committed to, or adept at, word play as a major feature of their work .Shakespeare's "quibbles" have made him a noted punster. Similarly,P.G. Wodehousewas hailed byThe Timesas a "comic genius recognized in his lifetime as a classic and an old master of farce" for his own acclaimed wordplay.[6]James Joyce, author ofUlysses, is another noted word-player. For example, in hisFinnegans WakeJoyce's phrase "they were yung and easily freudened" clearly implies the more conventional "they were young and easily frightened"; however, the former also makes an apt pun on the names of two famouspsychoanalysts,JungandFreud. Anepitaph, probably unassigned to anygrave, demonstrates use in rhyme. Crossword puzzlesoften employ wordplay to challenge solvers.Cryptic crosswordsespecially are based on elaborate systems of wordplay. An example of modern word play can be found on line 103 ofChildish Gambino's "III. Life: The Biggest Troll". H2O plus my D, that's my hood, I'm living in it RapperMilouses a play on words in his verse on "True Nen".[7] A farmer says, "I got soaked for nothing, stood out there in the rain bang in the middle of my land, a complete waste of time. I'll like to kill the swine who said you can win theNobel Prizefor being out standing in your field!". TheMario Partyseries is known for its mini-game titles that usually are puns and various plays on words; for example: "Shock, Drop, and Roll", "Gimme a Brake", and "Right Oar Left". These mini-game titles are also different depending onregional differencesand take into account that specific region's culture. Many of the books the characterGromitin theWallace & Gromit seriesreads or the music Grommit listens to are plays on words, such as "Pup Fiction" (Pulp Fiction), "Where Beagles Dare" (Where Eagles Dare), "Red Hot Chili Puppies" (Red Hot Chili Peppers) and "The Hound of Music" (The Sound of Music). Word play can enter common usage asneologisms. Word play is closely related toword games; that is, games in which the point is manipulating words. See alsolanguage gamefor a linguist's variation. Word play can cause problems for translators: e.g., in the bookWinnie-the-Pooha character mistakes the word "issue" for the noise of asneeze, a resemblance which disappears when the word "issue" is translated into another language.
https://en.wikipedia.org/wiki/Word_play
Acase report form(orCRF) is a paper or electronic questionnaire specifically used in clinical trial research.[1]The case report form is the tool used by the sponsor of theclinical trialto collect data from each participating patient. All data on each patient participating in a clinical trial are held and/or documented in the CRF, includingadverse events. The sponsor of the clinical trial develops the CRF to collect the specific data they need in order totesttheir hypotheses or answer their research questions. The size of a CRF can range from a handwritten one-time 'snapshot' of a patient's physical condition to hundreds of pages of electronically captured data obtained over a period of weeks or months. (It can also include required check-up visits months after the patient's treatment has stopped.) The sponsor is responsible for designing a CRF that accurately represents the protocol of the clinical trial, as well as managing its production, monitoring the data collection and auditing the content of the filled-in CRFs. Case report forms contain data obtained during the patient's participation in the clinical trial. Before being sent to the sponsor, this data is usually de-identified (not traceable to the patient) by removing the patient's name, medical record number, etc., and giving the patient a unique study number. The supervisingInstitutional Review Board(IRB) oversees the release of any personally identifiable data to the sponsor. From the sponsor's point of view, the main logistic goal of a clinical trial is to obtain accurate CRFs. However, because of human and machine error, the data entered in CRFs is rarely completely accurate or entirely readable. To combat these errors, monitors are usually hired by the sponsor to audit the CRF to make sure the CRF contains the correct data. When the study administrators or automated mechanisms process the CRFs that were sent to the sponsor by local researchers, they make a note of queries. Queries are non-sensible or questionable data that must be explained. Examples of data that would lead to a query: a male patient being on female birth control medication or having had an abortion, or a 15-year-old participant having had hip replacement surgery. Each query has to be resolved by the individual attention of a member of each local research team, as well as an individual in the study administration. To ensure quality control, these queries are usually addressed and resolved before the CRF data is included by the sponsor in the finalclinical study report. Depending on variables relating to the nature of the study (e.g., the health of the study population), the effectiveness of the study administrators in resolving these queries can significantly impact the cost of studies. Originally all case report forms were made on paper, but recently there has been a trend toward performing clinical studies using an electronic case report form (eCRF). This way of working has many advantages:
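The query examples given above lend themselves to simple automated edit checks. The following sketch is illustrative only; the field names and rules are hypothetical, not part of any standard CRF:

    # Minimal sketch of automated edit checks that raise queries on CRF records;
    # the field names and rules are hypothetical, but the two checks mirror the
    # examples given above.
    def edit_checks(record):
        queries = []
        if record.get("sex") == "M" and "oral contraceptive" in record.get("medications", []):
            queries.append("male patient recorded on female birth control medication")
        if record.get("age", 0) < 18 and "hip replacement" in record.get("procedures", []):
            queries.append("implausible procedure for participant's age")
        return queries

    record = {
        "subject_id": "SITE01-0042",   # de-identified study number, not a name
        "sex": "M",
        "age": 15,
        "medications": ["oral contraceptive"],
        "procedures": ["hip replacement"],
    }
    for q in edit_checks(record):
        print("QUERY:", q)   # each query must be resolved before the final clinical study report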
https://en.wikipedia.org/wiki/Case_report_form
Depth-first search(DFS) is analgorithmfor traversing or searchingtreeorgraphdata structures. The algorithm starts at theroot node(selecting some arbitrary node as the root node in the case of a graph) and explores as far as possible along each branch before backtracking. Extra memory, usually astack, is needed to keep track of the nodes discovered so far along a specified branch which helps in backtracking of the graph. A version of depth-first search was investigated in the 19th century by French mathematicianCharles Pierre Trémaux[1]as a strategy forsolving mazes.[2][3] Thetimeandspaceanalysis of DFS differs according to its application area. In theoretical computer science, DFS is typically used to traverse an entire graph, and takes timeO(|V|+|E|){\displaystyle O(|V|+|E|)},[4]where|V|{\displaystyle |V|}is the number ofverticesand|E|{\displaystyle |E|}the number ofedges. This is linear in the size of the graph. In these applications it also uses spaceO(|V|){\displaystyle O(|V|)}in the worst case to store thestackof vertices on the current search path as well as the set of already-visited vertices. Thus, in this setting, the time and space bounds are the same as forbreadth-first searchand the choice of which of these two algorithms to use depends less on their complexity and more on the different properties of the vertex orderings the two algorithms produce. For applications of DFS in relation to specific domains, such as searching for solutions inartificial intelligenceor web-crawling, the graph to be traversed is often either too large to visit in its entirety or infinite (DFS may suffer fromnon-termination). In such cases, search is only performed to alimited depth; due to limited resources, such as memory or disk space, one typically does not use data structures to keep track of the set of all previously visited vertices. When search is performed to a limited depth, the time is still linear in terms of the number of expanded vertices and edges (although this number is not the same as the size of the entire graph because some vertices may be searched more than once and others not at all) but the space complexity of this variant of DFS is only proportional to the depth limit, and as a result, is much smaller than the space needed for searching to the same depth using breadth-first search. For such applications, DFS also lends itself much better toheuristicmethods for choosing a likely-looking branch. When an appropriate depth limit is not known a priori,iterative deepening depth-first searchapplies DFS repeatedly with a sequence of increasing limits. In the artificial intelligence mode of analysis, with abranching factorgreater than one, iterative deepening increases the running time by only a constant factor over the case in which the correct depth limit is known due to the geometric growth of the number of nodes per level. DFS may also be used to collect asampleof graph nodes. However, incomplete DFS, similarly to incompleteBFS, isbiasedtowards nodes of highdegree. For the following graph: a depth-first search starting at the node A, assuming that the left edges in the shown graph are chosen before right edges, and assuming the search remembers previously visited nodes and will not repeat them (since this is a small graph), will visit the nodes in the following order: A, B, D, F, E, C, G. The edges traversed in this search form aTrémaux tree, a structure with important applications ingraph theory. 
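The example graph itself is a figure that is not reproduced in this extract. The adjacency lists below are a plausible reconstruction chosen so that a recursive depth-first search from A, taking neighbors left to right, produces exactly the order stated above:

    # The example graph is a figure not reproduced here; these adjacency lists
    # are a plausible reconstruction chosen so that a recursive DFS from A,
    # taking neighbors left to right, yields the stated order A, B, D, F, E, C, G.
    graph = {
        "A": ["B", "C", "E"],
        "B": ["A", "D", "F"],
        "C": ["A", "G"],
        "D": ["B"],
        "E": ["A", "F"],
        "F": ["B", "E"],
        "G": ["C"],
    }

    def dfs_recursive(graph, node, visited=None, order=None):
        if visited is None:
            visited, order = set(), []
        visited.add(node)
        order.append(node)
        for neighbor in graph[node]:           # explore as deep as possible first
            if neighbor not in visited:
                dfs_recursive(graph, neighbor, visited, order)
        return order

    print(dfs_recursive(graph, "A"))           # ['A', 'B', 'D', 'F', 'E', 'C', 'G']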
Performing the same search without remembering previously visited nodes results in visiting the nodes in the order A, B, D, F, E, A, B, D, F, E, etc. forever, caught in the A, B, D, F, E cycle and never reaching C or G. Iterative deepeningis one technique to avoid this infinite loop and would reach all nodes. The result of a depth-first search of a graph can be conveniently described in terms of aspanning treeof the vertices reached during the search. Based on this spanning tree, the edges of the original graph can be divided into three classes:forward edges, which point from a node of the tree to one of its descendants,back edges, which point from a node to one of its ancestors, andcross edges, which do neither. Sometimestree edges, edges which belong to the spanning tree itself, are classified separately from forward edges. If the original graph is undirected then all of its edges are tree edges or back edges. It is also possible to use depth-first search to linearly order the vertices of a graph or tree. There are four possible ways of doing this: Forbinary treesthere is additionallyin-orderingandreverse in-ordering. For example, when searching the directed graph below beginning at node A, the sequence of traversals is either A B D B A C A or A C D C A B A (choosing to first visit B or C from A is up to the algorithm). Note that repeat visits in the form of backtracking to a node, to check if it still has unvisited neighbors, are included here (even if it is found to have none). Thus the possible preorderings are A B D C and A C D B, while the possible postorderings are D B C A and D C B A, and the possible reverse postorderings are A C B D and A B C D. Reverse postordering produces atopological sortingof anydirected acyclic graph. This ordering is also useful incontrol-flow analysisas it often represents a natural linearization of the control flows. The graph above might represent the flow of control in the code fragment below, and it is natural to consider this code in the order A B C D or A C B D but not natural to use the order A B D C or A C D B. A recursive implementation of DFS:[5] A non-recursive implementation of DFS with worst-case space complexityO(|E|){\displaystyle O(|E|)}, with the possibility of duplicate vertices on the stack:[6] These two variations of DFS visit the neighbors of each vertex in the opposite order from each other: the first neighbor ofvvisited by the recursive variation is the first one in the list of adjacent edges, while in the iterative variation the first visited neighbor is the last one in the list of adjacent edges. The recursive implementation will visit the nodes from the example graph in the following order: A, B, D, F, E, C, G. The non-recursive implementation will visit the nodes as: A, E, F, B, D, C, G. The non-recursive implementation is similar tobreadth-first searchbut differs from it in two ways: IfGis atree, replacing the queue of the breadth-first search algorithm with a stack will yield a depth-first search algorithm. For general graphs, replacing the stack of the iterative depth-first search implementation with a queue would also produce a breadth-first search algorithm, although a somewhat nonstandard one.[7] Another possible implementation of iterative depth-first search uses a stack ofiteratorsof the list of neighbors of a node, instead of a stack of nodes.
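The recursive and non-recursive implementations referred to above are likewise not reproduced in this extract. The following minimal iterative sketch reuses the reconstructed adjacency lists from the previous sketch; because neighbors are pushed in list order and popped last-first, it visits the vertices in the reversed-neighbor order noted above:

    # Minimal iterative sketch; `graph` is the reconstructed adjacency dict from
    # the previous sketch. Neighbors are pushed in list order and popped
    # last-first, so the visit order is A, E, F, B, D, C, G.
    def dfs_iterative(graph, start):
        visited, order, stack = set(), [], [start]
        while stack:
            node = stack.pop()                 # duplicates may sit on the stack
            if node not in visited:
                visited.add(node)
                order.append(node)
                stack.extend(graph[node])      # worst-case O(|E|) extra space
        return order

    print(dfs_iterative(graph, "A"))           # ['A', 'E', 'F', 'B', 'D', 'C', 'G']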
This yields the same traversal as recursive DFS.[8] Algorithms that use depth-first search as a building block include topological sorting and finding connected and strongly connected components. The computational complexity of DFS was investigated by John Reif. More precisely, given a graph G{\displaystyle G}, let O=(v1,…,vn){\displaystyle O=(v_{1},\dots ,v_{n})} be the ordering computed by the standard recursive DFS algorithm. This ordering is called the lexicographic depth-first search ordering. John Reif considered the complexity of computing the lexicographic depth-first search ordering, given a graph and a source. A decision version of the problem (testing whether some vertex u occurs before some vertex v in this order) is P-complete,[12] meaning that it is "a nightmare for parallel processing".[13]: 189 A depth-first search ordering (not necessarily the lexicographic one) can be computed by a randomized parallel algorithm in the complexity class RNC.[14] As of 1997, it remained unknown whether a depth-first traversal could be constructed by a deterministic parallel algorithm in the complexity class NC.[15]
https://en.wikipedia.org/wiki/Depth-first_search
In data analysis involving geographical locations, geo-imputation or geographical imputation methods are steps taken to replace missing values for exact locations with approximate locations derived from associated data. They assign a reasonable location or geography-based attribute (e.g., census tract) to a person by using both the demographic characteristics of that person and the population characteristics of a larger geographic aggregate area in which the person was geocoded (e.g., postal delivery area or county). For example, if a person's census tract was known and no other address information was available, then geo-imputation methods could be used to probabilistically assign that person to a smaller geographic area, such as a census block group.[1]
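A deliberately simplified Python sketch of the probabilistic-assignment idea; the block-group names and counts are hypothetical, and a real geo-imputation procedure would draw its weights from census demographic tables rather than the toy dictionary used here:

```python
import random

# Hypothetical counts of people sharing the person's demographic profile
# (e.g. age group and sex) in each census block group of the known tract.
block_group_counts = {"BG-1": 420, "BG-2": 130, "BG-3": 250}

def geo_impute(counts, rng=random):
    """Pick a block group at random, weighted by how many similar people live in each."""
    block_groups = list(counts)
    weights = [counts[bg] for bg in block_groups]
    return rng.choices(block_groups, weights=weights, k=1)[0]

print(geo_impute(block_group_counts))  # e.g. 'BG-1', with probability 420/800
```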
https://en.wikipedia.org/wiki/Geo-imputation
Alanguage modelis amodelof natural language.[1]Language models are useful for a variety of tasks, includingspeech recognition,[2]machine translation,[3]natural language generation(generating more human-like text),optical character recognition,route optimization,[4]handwriting recognition,[5]grammar induction,[6]andinformation retrieval.[7][8] Large language models(LLMs), currently their most advanced form, are predominantly based ontransformerstrained on larger datasets (frequently using wordsscrapedfrom the publicinternet). They have supersededrecurrent neural network-based models, which had previously superseded the purely statistical models, such aswordn-gram language model. Noam Chomskydid pioneering work on language models in the 1950s by developing a theory offormal grammars.[9] In 1980, statistical approaches were explored and found to be more useful for many purposes than rule-based formal grammars. Discrete representations likewordn-gram language models, with probabilities for discrete combinations of words, made significant advances. In the 2000s, continuous representations for words, such asword embeddings, began to replace discrete representations.[10]Typically, the representation is areal-valuedvector that encodes the meaning of the word in such a way that the words that are closer in the vector space are expected to be similar in meaning, and common relationships between pairs of words like plurality or gender. In 1980, the first significant statistical language model was proposed, and during the decade IBM performed ‘Shannon-style’ experiments, in which potential sources for language modeling improvement were identified by observing and analyzing the performance of human subjects in predicting or correcting text.[11] Awordn-gram language modelis a purely statistical model of language. It has been superseded byrecurrent neural network–based models, which have been superseded bylarge language models.[12]It is based on an assumption that the probability of the next word in a sequence depends only on a fixed size window of previous words. If only one previous word is considered, it is called a bigram model; if two words, a trigram model; ifn− 1 words, ann-gram model.[13]Special tokens are introduced to denote the start and end of a sentence⟨s⟩{\displaystyle \langle s\rangle }and⟨/s⟩{\displaystyle \langle /s\rangle }. Maximum entropylanguage models encode the relationship between a word and then-gram history using feature functions. The equation is P(wm∣w1,…,wm−1)=1Z(w1,…,wm−1)exp⁡(aTf(w1,…,wm)){\displaystyle P(w_{m}\mid w_{1},\ldots ,w_{m-1})={\frac {1}{Z(w_{1},\ldots ,w_{m-1})}}\exp(a^{T}f(w_{1},\ldots ,w_{m}))} whereZ(w1,…,wm−1){\displaystyle Z(w_{1},\ldots ,w_{m-1})}is thepartition function,a{\displaystyle a}is the parameter vector, andf(w1,…,wm){\displaystyle f(w_{1},\ldots ,w_{m})}is the feature function. In the simplest case, the feature function is just an indicator of the presence of a certainn-gram. It is helpful to use a prior ona{\displaystyle a}or some form ofregularization. The log-bilinear model is another example of an exponential language model. Skip-gram language model is an attempt at overcoming the data sparsity problem that the preceding model (i.e. wordn-gram language model) faced. Words represented in an embedding vector were not necessarily consecutive anymore, but could leave gaps that areskippedover (thus the name "skip-gram").[14] Formally, ak-skip-n-gram is a length-nsubsequence where the components occur at distance at mostkfrom each other. 
For example, in the input text: the set of 1-skip-2-grams includes all the bigrams (2-grams), and in addition the subsequences In skip-gram model, semantic relations between words are represented bylinear combinations, capturing a form ofcompositionality. For example, in some such models, ifvis the function that maps a wordwto itsn-d vector representation, then v(king)−v(male)+v(female)≈v(queen){\displaystyle v(\mathrm {king} )-v(\mathrm {male} )+v(\mathrm {female} )\approx v(\mathrm {queen} )} Continuous representations orembeddings of wordsare produced inrecurrent neural network-based language models (known also ascontinuous space language models).[17]Such continuous space embeddings help to alleviate thecurse of dimensionality, which is the consequence of the number of possible sequences of words increasingexponentiallywith the size of the vocabulary, further causing a data sparsity problem. Neural networks avoid this problem by representing words as non-linear combinations of weights in a neural net.[18] Alarge language model(LLM) is a type ofmachine learningmodeldesigned fornatural language processingtasks such as languagegeneration. LLMs are language models with many parameters, and are trained withself-supervised learningon a vast amount of text. Although sometimes matching human performance, it is not clear whether they are plausiblecognitive models. At least for recurrent neural networks, it has been shown that they sometimes learn patterns that humans do not, but fail to learn patterns that humans typically do.[22] Evaluation of the quality of language models is mostly done by comparison to human created sample benchmarks created from typical language-oriented tasks. Other, less established, quality tests examine the intrinsic character of a language model or compare two such models. Since language models are typically intended to be dynamic and to learn from data they see, some proposed models investigate the rate of learning, e.g., through inspection of learning curves.[23] Various data sets have been developed for use in evaluating language processing systems.[24]These include:
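A short Python sketch of k-skip-n-gram extraction; the sentence used below is an arbitrary illustration rather than the example referred to above, and the brute-force enumeration is meant only to make the definition concrete:

```python
from itertools import combinations

def skipgrams(tokens, n, k):
    """All k-skip-n-grams: length-n subsequences in which at most k tokens are
    skipped between consecutive components (k = 0 gives ordinary n-grams)."""
    grams = set()
    for indices in combinations(range(len(tokens)), n):
        if all(j - i <= k + 1 for i, j in zip(indices, indices[1:])):
            grams.add(tuple(tokens[i] for i in indices))
    return grams

sentence = "the rain in spain falls mainly on the plain".split()  # arbitrary example
print(sorted(skipgrams(sentence, n=2, k=1)))
# every ordinary bigram, plus gapped pairs such as ('the', 'in') and ('rain', 'spain')
```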
https://en.wikipedia.org/wiki/Language_model
Transmit or Transmission may refer to:
https://en.wikipedia.org/wiki/Transmission_(disambiguation)
"AI slop", often simply "slop", is a derogatory term for low-quality media, including writing and images, made usinggenerative artificial intelligencetechnology, characterized by an inherent lack of effort, logic, or purpose.[1][4][5]Coined in the 2020s, the term has a pejorative connotation akin to "spam".[4] It has been variously defined as "digital clutter", "filler content produced by AI tools that prioritize speed and quantity over substance and quality",[6]and "shoddy or unwanted AI content insocial media, art, books and, increasingly, in search results".[7] Jonathan Gilmore, a philosophy professor at theCity University of New York, describes the "incredibly banal, realistic style" of AI slop as being "very easy to process".[8] As earlylarge language models(LLMs) andimage diffusion modelsaccelerated the creation of high-volume but low-quality written content and images, discussion commenced among journalists and on social platforms for the appropriate term for the influx of material. Terms proposed included "AI garbage", "AI pollution", and "AI-generated dross".[5]Early uses of the term "slop" as a descriptor for low-grade AI material apparently came in reaction to the release of AI image generators in 2022. Its early use has been noted among4chan,Hacker News, andYouTubecommentators as a form of in-groupslang.[7] The British computer programmerSimon Willisonis credited with being an early champion of the term "slop" in the mainstream,[1][7]having used it on his personal blog in May 2024.[9]However, he has said it was in use long before he began pushing for the term.[7] The term gained increased popularity in the second quarter of 2024 in part because ofGoogle's use of itsGeminiAI model to generate responses to search queries,[7]and was widely criticized in media headlines during the fourth quarter of 2024.[1][4] Research found that training LLMs on slop causesmodel collapse: a consistent decrease in the lexical, syntactic, and semantic diversity of the model outputs through successive iterations, notably remarkable for tasks demanding high levels of creativity.[10]AI slop is similarly produced when the same content is continuously refined, paraphrased, or reprocessed through LLMs, with each output becoming the input for the next iteration. Research has shown that this process causes information to gradually distort as it passes through a chain of LLMs, a phenomenon reminiscent of a classic communication exercise known as thetelephone game.[11] AI image and video slop proliferated on social media in part because it was revenue-generating for its creators onFacebookandTikTok, with the issue affecting Facebook most notably. 
This incentivizes individuals fromdeveloping countriesto create images that appeal to audiences in the United States which attract higher advertising rates.[12][13][14] The journalist Jason Koebler speculated that the bizarre nature of some of the content may be due to the creators using Hindi, Urdu, and Vietnamese prompts (languages which are underrepresented in the model'straining data), or using erraticspeech-to-textmethods to translate their intentions into English.[12] Speaking toNew Yorkmagazine, a Kenyan creator of slop images described givingChatGPTprompts such as "WRITE ME 10 PROMPT picture OF JESUS WHICH WILLING BRING HIGH ENGAGEMENT ON FACEBOOK [sic]", and then feeding those created prompts into atext-to-imageAI service such asMidjourney.[4] In August 2024,The Atlanticnoted that AI slop was becoming associated with the political right in the United States, who were using it forshitpostingandengagement farmingon social media, with the technology offering "cheap, fast, on-demand fodder for content".[15] AI slop is frequently used in political campaigns in an attempt at gaining attention throughcontent farming.[16]In August 2024, the American politicianDonald Trumpposted a series of AI-generated images on his social media platform,Truth Social, that portrayed fans of the singerTaylor Swiftin "Swifties for Trump" T-shirts, as well as a photo of the singer herself appearing to endorseTrump's 2024 presidential campaign. The images originated from the conservativeTwitteraccount@amuse, which posted numerous AI slop images leading up to the2024 United States electionsthat were shared by other high-profile figures within the AmericanRepublican Party, such asElon Musk, who has publicly endorsed the utilization of generative AI, furthering this association.[17] In the aftermath ofHurricane Helenein the United States, members of the Republican Party circulated an AI-generated image of a young girl holding a puppy in a flood, and used it as evidence of the failure of PresidentJoe Bidento respond to the disaster.[18][3]Some, likeAmy Kremer, shared the image on social media even while acknowledging that it was not genuine.[19][20] In November 2024,Coca-Colaused artificial intelligence to create three commercials as part of their annualholiday campaign. These videos were immediately met with negative reception from both casual viewers and artists;[21]the animatorAlex Hirsch, the creator ofGravity Falls, criticized the company's decision not to employ human artists to create the commercial.[22]In response to the negative feedback, the company defended their decision to use generative artificial intelligence stating that "Coca-Cola will always remain dedicated to creating the highest level of work at the intersection of human creativity and technology".[23] In March 2025,Paramount Pictureswas criticized for using AI scripting and narration in anInstagramvideo promoting the filmNovocaine.[24]The ad uses a robotic AI voice in a style similar to low-quality AI spam videos produced by content farms.A24received similar backlash for releasing a series of AI-generated posters for the 2024 filmCivil War. 
One poster appears to depict a group of soldiers in a tank-like raft preparing to fire on a large swan, an image which does not resemble the events of the film.[25][26] In the same month,Activisionposted various advertisements and posters for fake video games such as "Guitar HeroMobile", "Crash Bandicoot: Brawl", and "Call of Duty: Zombie Defender" that were all made using generative AI on platforms such asFacebookand Instagram, which many labelled as AI slop.[27]The intention of the posts was later stated to act as a survey for interest in possible titles by the company.[28]TheItalian brainrotAI trend was widely adopted by advertisers to adjust well to younger audiences.[29] Fantastical promotional graphics for the 2024Willy's Chocolate Experienceevent, characterized as "AI-generated slop",[30]misled audiences into attending an event that was held in a lightly decorated warehouse. Tickets were marketed throughFacebookadvertisements showing AI-generated imagery, with no genuine photographs of the venue.[31] In October 2024, thousands of people were reported to have assembled for a non-existent Halloween parade inDublinas a result of a listing on an aggregation listings website, MySpiritHalloween.com, which used AI-generated content.[32][33]The listing went viral on TikTok andInstagram.[34]While a similar parade had been held inGalway, and Dublin had hosted parades in prior years, there was no parade in Dublin in 2024.[33]One analyst characterized the website, which appeared to use AI-generated staff pictures, as likely using artificial intelligence "to create content quickly and cheaply where opportunities are found".[35]The site's owner said that "We asked ChatGPT to write the article for us, but it wasn't ChatGPT by itself." In the past the site had removed non-existent events when contacted by their venues, but in the case of the Dublin parade the site owner said that "no one reported that this one wasn't going to happen". MySpiritHalloween.com updated their page to say that the parade had been "canceled" when they became aware of the issue.[36] Online booksellers and library vendors now have many titles that are written by AI and are not curated into collections by librarians. The digital media providerHoopla, which supplies libraries withebooksand downloadable content, has generative AI books with fictional authors and dubious quality, which cost libraries money when checked out by unsuspecting patrons.[37] The 2024 video gameCall of Duty: Black Ops 6includes assets generated by artificial intelligence. Since the game's initial release, many players had accusedTreyarchandRaven Softwareof using AI to create in-game assets, including loading screens, emblems, and calling cards. A particular example was a loading screen for the zombies game mode that depicted "Necroclaus", a zombifiedSanta Clauswith six fingers on one hand, an image which also had other irregularities.[38]Theprevious entryin theCall of Dutyfranchise was also accused of selling AI-generatedcosmetics.[39] In February 2025, Activision disclosedBlack Ops 6's usage of generative artificial intelligence to comply withValve's policies on AI-generated or assisted products onSteam. Activision states on the game's product page on Steam that "Our team uses generative AI tools to help develop some in game assets."[40] Foamstars, amultiplayerthird-person shooterreleased bySquare Enixin 2024, features in-game music withcover artthat was generated usingMidjourney. 
Square Enix confirmed the use of AI, but defended the decision, saying that they wanted to "experiment" with artificial intelligence technologies and claiming that the generated assets make up "about 0.01% or even less" of game content.[41][42][43]Previously, on January 1, 2024, Square Enix president Takashi Kiryu stated in a new year letter that the company will be "aggressive in applying AI and other cutting-edge technologies to both [their] content development and [their] publishing functions".[44][45] In 2024,Rovio Entertainmentreleased a demo of a mobile game called Angry Birds: Block Quest onAndroid. The game featured AI-generated images for loading screens and backgrounds.[46]It was heavily criticized by players, who called itshovelwareand disapproved of Rovio's use of AI images.[47][48]It was eventually discontinued and removed from thePlay Store. Some films have received backlash for including AI-generated content. The filmLate Night with the Devilwas notable for its use of AI, which some criticized as being AI slop.[49][50]Several low-quality AI-generated images were used as interstitial title cards, with one image featuring a skeleton with inaccurate bone structure and poorly-generated fingers that appear disconnected from its hands.[51] Some streaming services such asAmazon Prime Videohave used AI to generate posters and thumbnail images in a manner that can be described as slop. A low-quality AI poster was used for the 1922 filmNosferatu, depictingCount Orlokin a way that does not resemble his look in the film.[52]A thumbnail image for12 Angry MenonAmazon Freeveeused AI to depict 19 men with smudged faces, none of whom appeared to bear any similarities to the characters in the film.[53][54]Additionally, some viewers have noticed that many plot descriptions appear to be generated by AI, which some people have characterized as slop. One synopsis briefly listed on the site for the filmDog Day Afternoonread: "A man takes hostages at a bank in Brooklyn. Unfortunately I do not have enough information to summarize further within the provided guidelines."[55] In one case Deutsche Telekom removed a series from their media offer after viewers complained about the bad quality and monotonous German voice dubbing (translated from original Polish) and it was found out that it was done via AI.[56]
https://en.wikipedia.org/wiki/Slop_(artificial_intelligence)
Inlinguistics, thesyntax–semantics interfaceis the interaction betweensyntaxandsemantics. Its study encompasses phenomena that pertain to both syntax and semantics, with the goal of explaining correlations between form and meaning.[1]Specific topics includescope,[2][3]binding,[2]andlexical semanticproperties such asverbal aspectandnominal individuation,[4][5][6][7][8]semantic macroroles,[8]andunaccusativity.[4] The interface is conceived of very differently informalistandfunctionalistapproaches. While functionalists tend to look into semantics and pragmatics for explanations of syntactic phenomena, formalists try to limit such explanations within syntax itself.[9]Aside from syntax, other aspects of grammar have been studied in terms of how they interact with semantics; which can be observed by the existence of terms such asmorphosyntax–semantics interface.[3] Withinfunctionalistapproaches, research on the syntax–semantics interface has been aimed at disproving the formalist argument of theautonomy of syntax, by finding instances of semantically determined syntactic structures.[4][10] Levinand Rappaport Hovav, in their 1995 monograph, reiterated that there are some aspects of verb meaning that are relevant to syntax, and others that are not, as previously noted bySteven Pinker.[11][12]Levin and Rappaport Hovav isolated such aspects focusing on the phenomenon ofunaccusativitythat is "semantically determined and syntactically encoded".[13] Van ValinandLaPolla, in their 1997 monographic study, found that the more semantically motivated or driven a syntactic phenomenon is, the more it tends to be typologically universal, that is, to show less cross-linguistic variation.[14] Informal semantics,semantic interpretationis viewed as amappingfrom syntactic structures todenotations. There are several formal views of the syntax–semantics interface which differ in what they take to be the inputs and outputs of this mapping. In theHeim and Kratzermodel commonly adopted withingenerative linguistics, the input is taken to be a special level of syntactic representation calledlogical form. At logical form, semantic relationships such asscopeandbindingare represented unambiguously, having been determined by syntactic operations such asquantifier raising. Other formal frameworks take the opposite approach, assuming that such relationships are established by the rules of semantic interpretation themselves. In such systems, the rules include mechanisms such astype shiftinganddynamic binding.[1][15][16][2] Before the 1950s, there was no discussion of a syntax–semantics interface inAmerican linguistics, since neither syntax nor semantics was an active area of research.[17]This neglect was due in part to the influence oflogical positivismandbehaviorismin psychology, that viewed hypotheses about linguistic meaning as untestable.[17][18] By the 1960s, syntax had become a major area of study, and some researchers began examining semantics as well. In this period, the most prominent view of the interface was theKatz–PostalHypothesisaccording to whichdeep structurewas the level of syntactic representation which underwent semantic interpretation. This assumption was upended by data involving quantifiers, which showed thatsyntactic transformationscan affect meaning. During thelinguistics wars, a variety of competing notions of the interface were developed, many of which live on in present-day work.[17][2]
https://en.wikipedia.org/wiki/Syntax%E2%80%90semantics_interface
Inmachine learning,support vector machines(SVMs, alsosupport vector networks[1]) aresupervisedmax-marginmodels with associated learningalgorithmsthat analyze data forclassificationandregression analysis. Developed atAT&T Bell Laboratories,[1][2]SVMs are one of the most studied models, being based on statistical learning frameworks ofVC theoryproposed byVapnik(1982, 1995) andChervonenkis(1974). In addition to performinglinear classification, SVMs can efficiently perform non-linear classification using thekernel trick, representing the data only through a set of pairwise similarity comparisons between the original data points using a kernel function, which transforms them into coordinates in a higher-dimensionalfeature space. Thus, SVMs use the kernel trick to implicitly map their inputs into high-dimensional feature spaces, where linear classification can be performed.[3]Being max-margin models, SVMs are resilient to noisy data (e.g., misclassified examples). SVMs can also be used forregressiontasks, where the objective becomesϵ{\displaystyle \epsilon }-sensitive. The support vector clustering[4]algorithm, created byHava SiegelmannandVladimir Vapnik, applies the statistics of support vectors, developed in the support vector machines algorithm, to categorize unlabeled data.[citation needed]These data sets requireunsupervised learningapproaches, which attempt to find naturalclustering of the datainto groups, and then to map new data according to these clusters. The popularity of SVMs is likely due to their amenability to theoretical analysis, and their flexibility in being applied to a wide variety of tasks, includingstructured predictionproblems. It is not clear that SVMs have better predictive performance than other linear models, such aslogistic regressionandlinear regression.[5] Classifying datais a common task inmachine learning. Suppose some given data points each belong to one of two classes, and the goal is to decide which class anewdata pointwill be in. In the case of support vector machines, a data point is viewed as ap{\displaystyle p}-dimensional vector (a list ofp{\displaystyle p}numbers), and we want to know whether we can separate such points with a(p−1){\displaystyle (p-1)}-dimensionalhyperplane. This is called alinear classifier. There are many hyperplanes that might classify the data. One reasonable choice as the best hyperplane is the one that represents the largest separation, ormargin, between the two classes. So we choose the hyperplane so that the distance from it to the nearest data point on each side is maximized. If such a hyperplane exists, it is known as themaximum-margin hyperplaneand the linear classifier it defines is known as amaximum-margin classifier; or equivalently, theperceptron of optimal stability.[6] More formally, a support vector machine constructs ahyperplaneor set of hyperplanes in a high or infinite-dimensional space, which can be used forclassification,regression, or other tasks like outliers detection.[7]Intuitively, a good separation is achieved by the hyperplane that has the largest distance to the nearest training-data point of any class (so-called functional margin), since in general the larger the margin, the lower thegeneralization errorof the classifier.[8]A lowergeneralization errormeans that the implementer is less likely to experienceoverfitting. Whereas the original problem may be stated in a finite-dimensional space, it often happens that the sets to discriminate are notlinearly separablein that space. 
For this reason, it was proposed[9]that the original finite-dimensional space be mapped into a much higher-dimensional space, presumably making the separation easier in that space. To keep the computational load reasonable, the mappings used by SVM schemes are designed to ensure thatdot productsof pairs of input data vectors may be computed easily in terms of the variables in the original space, by defining them in terms of akernel functionk(x,y){\displaystyle k(x,y)}selected to suit the problem.[10]The hyperplanes in the higher-dimensional space are defined as the set of points whose dot product with a vector in that space is constant, where such a set of vectors is an orthogonal (and thus minimal) set of vectors that defines a hyperplane. The vectors defining the hyperplanes can be chosen to be linear combinations with parametersαi{\displaystyle \alpha _{i}}of images offeature vectorsxi{\displaystyle x_{i}}that occur in the data base. With this choice of a hyperplane, the pointsx{\displaystyle x}in thefeature spacethat are mapped into the hyperplane are defined by the relation∑iαik(xi,x)=constant.{\displaystyle \textstyle \sum _{i}\alpha _{i}k(x_{i},x)={\text{constant}}.}Note that ifk(x,y){\displaystyle k(x,y)}becomes small asy{\displaystyle y}grows further away fromx{\displaystyle x}, each term in the sum measures the degree of closeness of the test pointx{\displaystyle x}to the corresponding data base pointxi{\displaystyle x_{i}}. In this way, the sum of kernels above can be used to measure the relative nearness of each test point to the data points originating in one or the other of the sets to be discriminated. Note the fact that the set of pointsx{\displaystyle x}mapped into any hyperplane can be quite convoluted as a result, allowing much more complex discrimination between sets that are not convex at all in the original space. SVMs can be used to solve various real-world problems: The original SVM algorithm was invented byVladimir N. VapnikandAlexey Ya. Chervonenkisin 1964.[citation needed]In 1992, Bernhard Boser,Isabelle GuyonandVladimir Vapniksuggested a way to create nonlinear classifiers by applying thekernel trickto maximum-margin hyperplanes.[9]The "soft margin" incarnation, as is commonly used in software packages, was proposed byCorinna Cortesand Vapnik in 1993 and published in 1995.[1] We are given a training dataset ofn{\displaystyle n}points of the form(x1,y1),…,(xn,yn),{\displaystyle (\mathbf {x} _{1},y_{1}),\ldots ,(\mathbf {x} _{n},y_{n}),}where theyi{\displaystyle y_{i}}are either 1 or −1, each indicating the class to which the pointxi{\displaystyle \mathbf {x} _{i}}belongs. Eachxi{\displaystyle \mathbf {x} _{i}}is ap{\displaystyle p}-dimensionalrealvector. We want to find the "maximum-margin hyperplane" that divides the group of pointsxi{\displaystyle \mathbf {x} _{i}}for whichyi=1{\displaystyle y_{i}=1}from the group of points for whichyi=−1{\displaystyle y_{i}=-1}, which is defined so that the distance between the hyperplane and the nearest pointxi{\displaystyle \mathbf {x} _{i}}from either group is maximized. Anyhyperplanecan be written as the set of pointsx{\displaystyle \mathbf {x} }satisfyingwTx−b=0,{\displaystyle \mathbf {w} ^{\mathsf {T}}\mathbf {x} -b=0,}wherew{\displaystyle \mathbf {w} }is the (not necessarily normalized)normal vectorto the hyperplane. This is much likeHesse normal form, except thatw{\displaystyle \mathbf {w} }is not necessarily a unit vector. 
The parameterb‖w‖{\displaystyle {\tfrac {b}{\|\mathbf {w} \|}}}determines the offset of the hyperplane from the origin along the normal vectorw{\displaystyle \mathbf {w} }. Warning: most of the literature on the subject defines the bias so thatwTx+b=0.{\displaystyle \mathbf {w} ^{\mathsf {T}}\mathbf {x} +b=0.} If the training data islinearly separable, we can select two parallel hyperplanes that separate the two classes of data, so that the distance between them is as large as possible. The region bounded by these two hyperplanes is called the "margin", and the maximum-margin hyperplane is the hyperplane that lies halfway between them. With a normalized or standardized dataset, these hyperplanes can be described by the equations and Geometrically, the distance between these two hyperplanes is2‖w‖{\displaystyle {\tfrac {2}{\|\mathbf {w} \|}}},[21]so to maximize the distance between the planes we want to minimize‖w‖{\displaystyle \|\mathbf {w} \|}. The distance is computed using thedistance from a point to a planeequation. We also have to prevent data points from falling into the margin, we add the following constraint: for eachi{\displaystyle i}eitherwTxi−b≥1,ifyi=1,{\displaystyle \mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b\geq 1\,,{\text{ if }}y_{i}=1,}orwTxi−b≤−1,ifyi=−1.{\displaystyle \mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b\leq -1\,,{\text{ if }}y_{i}=-1.}These constraints state that each data point must lie on the correct side of the margin. This can be rewritten as We can put this together to get the optimization problem: minimizew,b12‖w‖2subject toyi(w⊤xi−b)≥1∀i∈{1,…,n}{\displaystyle {\begin{aligned}&{\underset {\mathbf {w} ,\;b}{\operatorname {minimize} }}&&{\frac {1}{2}}\|\mathbf {w} \|^{2}\\&{\text{subject to}}&&y_{i}(\mathbf {w} ^{\top }\mathbf {x} _{i}-b)\geq 1\quad \forall i\in \{1,\dots ,n\}\end{aligned}}} Thew{\displaystyle \mathbf {w} }andb{\displaystyle b}that solve this problem determine the final classifier,x↦sgn⁡(wTx−b){\displaystyle \mathbf {x} \mapsto \operatorname {sgn}(\mathbf {w} ^{\mathsf {T}}\mathbf {x} -b)}, wheresgn⁡(⋅){\displaystyle \operatorname {sgn}(\cdot )}is thesign function. An important consequence of this geometric description is that the max-margin hyperplane is completely determined by thosexi{\displaystyle \mathbf {x} _{i}}that lie nearest to it (explained below). Thesexi{\displaystyle \mathbf {x} _{i}}are calledsupport vectors. To extend SVM to cases in which the data are not linearly separable, thehinge lossfunction is helpfulmax(0,1−yi(wTxi−b)).{\displaystyle \max \left(0,1-y_{i}(\mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b)\right).} Note thatyi{\displaystyle y_{i}}is thei-th target (i.e., in this case, 1 or −1), andwTxi−b{\displaystyle \mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b}is thei-th output. This function is zero if the constraint in(1)is satisfied, in other words, ifxi{\displaystyle \mathbf {x} _{i}}lies on the correct side of the margin. For data on the wrong side of the margin, the function's value is proportional to the distance from the margin. 
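A minimal NumPy sketch of the hinge loss for a candidate hyperplane (w, b); the data points and parameters are arbitrary toy values:

```python
import numpy as np

def hinge_loss(w, b, X, y):
    """Per-sample hinge loss max(0, 1 - y_i (w.x_i - b)): zero for points on the
    correct side of the margin, growing linearly with the size of the violation."""
    margins = y * (X @ w - b)
    return np.maximum(0.0, 1.0 - margins)

# Toy data: two 2-D points labelled +1 and -1, and an arbitrary candidate hyperplane.
X = np.array([[2.0, 1.0], [-1.5, -0.5]])
y = np.array([1.0, -1.0])
w, b = np.array([1.0, 0.0]), 0.5
print(hinge_loss(w, b, X, y))  # [0. 0.] because both points lie beyond the margin
```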
The goal of the optimization then is to minimize: ‖w‖2+C[1n∑i=1nmax(0,1−yi(wTxi−b))],{\displaystyle \lVert \mathbf {w} \rVert ^{2}+C\left[{\frac {1}{n}}\sum _{i=1}^{n}\max \left(0,1-y_{i}(\mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b)\right)\right],} where the parameterC>0{\displaystyle C>0}determines the trade-off between increasing the margin size and ensuring that thexi{\displaystyle \mathbf {x} _{i}}lie on the correct side of the margin (Note we can add a weight to either term in the equation above). By deconstructing the hinge loss, this optimization problem can be formulated into the following: minimizew,b,ζ‖w‖22+C∑i=1nζisubject toyi(w⊤xi−b)≥1−ζi,ζi≥0∀i∈{1,…,n}{\displaystyle {\begin{aligned}&{\underset {\mathbf {w} ,\;b,\;\mathbf {\zeta } }{\operatorname {minimize} }}&&\|\mathbf {w} \|_{2}^{2}+C\sum _{i=1}^{n}\zeta _{i}\\&{\text{subject to}}&&y_{i}(\mathbf {w} ^{\top }\mathbf {x} _{i}-b)\geq 1-\zeta _{i},\quad \zeta _{i}\geq 0\quad \forall i\in \{1,\dots ,n\}\end{aligned}}} Thus, for large values ofC{\displaystyle C}, it will behave similar to the hard-margin SVM, if the input data are linearly classifiable, but will still learn if a classification rule is viable or not. The original maximum-margin hyperplane algorithm proposed by Vapnik in 1963 constructed alinear classifier. However, in 1992,Bernhard Boser,Isabelle GuyonandVladimir Vapniksuggested a way to create nonlinear classifiers by applying thekernel trick(originally proposed by Aizerman et al.[22]) to maximum-margin hyperplanes.[9]The kernel trick, wheredot productsare replaced by kernels, is easily derived in the dual representation of the SVM problem. This allows the algorithm to fit the maximum-margin hyperplane in a transformedfeature space. The transformation may be nonlinear and the transformed space high-dimensional; although the classifier is a hyperplane in the transformed feature space, it may be nonlinear in the original input space. It is noteworthy that working in a higher-dimensional feature space increases thegeneralization errorof support vector machines, although given enough samples the algorithm still performs well.[23] Some common kernels include: The kernel is related to the transformφ(xi){\displaystyle \varphi (\mathbf {x} _{i})}by the equationk(xi,xj)=φ(xi)⋅φ(xj){\displaystyle k(\mathbf {x} _{i},\mathbf {x} _{j})=\varphi (\mathbf {x} _{i})\cdot \varphi (\mathbf {x} _{j})}. The valuewis also in the transformed space, withw=∑iαiyiφ(xi){\textstyle \mathbf {w} =\sum _{i}\alpha _{i}y_{i}\varphi (\mathbf {x} _{i})}. Dot products withwfor classification can again be computed by the kernel trick, i.e.w⋅φ(x)=∑iαiyik(xi,x){\textstyle \mathbf {w} \cdot \varphi (\mathbf {x} )=\sum _{i}\alpha _{i}y_{i}k(\mathbf {x} _{i},\mathbf {x} )}. Computing the (soft-margin) SVM classifier amounts to minimizing an expression of the form We focus on the soft-margin classifier since, as noted above, choosing a sufficiently small value forλ{\displaystyle \lambda }yields the hard-margin classifier for linearly classifiable input data. The classical approach, which involves reducing(2)to aquadratic programmingproblem, is detailed below. Then, more recent approaches such as sub-gradient descent and coordinate descent will be discussed. Minimizing(2)can be rewritten as a constrained optimization problem with a differentiable objective function in the following way. 
For eachi∈{1,…,n}{\displaystyle i\in \{1,\,\ldots ,\,n\}}we introduce a variableζi=max(0,1−yi(wTxi−b)){\displaystyle \zeta _{i}=\max \left(0,1-y_{i}(\mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b)\right)}. Note thatζi{\displaystyle \zeta _{i}}is the smallest nonnegative number satisfyingyi(wTxi−b)≥1−ζi.{\displaystyle y_{i}(\mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b)\geq 1-\zeta _{i}.} Thus we can rewrite the optimization problem as follows minimize1n∑i=1nζi+λ‖w‖2subject toyi(wTxi−b)≥1−ζiandζi≥0,for alli.{\displaystyle {\begin{aligned}&{\text{minimize }}{\frac {1}{n}}\sum _{i=1}^{n}\zeta _{i}+\lambda \|\mathbf {w} \|^{2}\\[0.5ex]&{\text{subject to }}y_{i}\left(\mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b\right)\geq 1-\zeta _{i}\,{\text{ and }}\,\zeta _{i}\geq 0,\,{\text{for all }}i.\end{aligned}}} This is called theprimalproblem. By solving for theLagrangian dualof the above problem, one obtains the simplified problem maximizef(c1…cn)=∑i=1nci−12∑i=1n∑j=1nyici(xiTxj)yjcj,subject to∑i=1nciyi=0,and0≤ci≤12nλfor alli.{\displaystyle {\begin{aligned}&{\text{maximize}}\,\,f(c_{1}\ldots c_{n})=\sum _{i=1}^{n}c_{i}-{\frac {1}{2}}\sum _{i=1}^{n}\sum _{j=1}^{n}y_{i}c_{i}(\mathbf {x} _{i}^{\mathsf {T}}\mathbf {x} _{j})y_{j}c_{j},\\&{\text{subject to }}\sum _{i=1}^{n}c_{i}y_{i}=0,\,{\text{and }}0\leq c_{i}\leq {\frac {1}{2n\lambda }}\;{\text{for all }}i.\end{aligned}}} This is called thedualproblem. Since the dual maximization problem is a quadratic function of theci{\displaystyle c_{i}}subject to linear constraints, it is efficiently solvable byquadratic programmingalgorithms. Here, the variablesci{\displaystyle c_{i}}are defined such that w=∑i=1nciyixi.{\displaystyle \mathbf {w} =\sum _{i=1}^{n}c_{i}y_{i}\mathbf {x} _{i}.} Moreover,ci=0{\displaystyle c_{i}=0}exactly whenxi{\displaystyle \mathbf {x} _{i}}lies on the correct side of the margin, and0<ci<(2nλ)−1{\displaystyle 0<c_{i}<(2n\lambda )^{-1}}whenxi{\displaystyle \mathbf {x} _{i}}lies on the margin's boundary. It follows thatw{\displaystyle \mathbf {w} }can be written as a linear combination of the support vectors. The offset,b{\displaystyle b}, can be recovered by finding anxi{\displaystyle \mathbf {x} _{i}}on the margin's boundary and solvingyi(wTxi−b)=1⟺b=wTxi−yi.{\displaystyle y_{i}(\mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b)=1\iff b=\mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-y_{i}.} (Note thatyi−1=yi{\displaystyle y_{i}^{-1}=y_{i}}sinceyi=±1{\displaystyle y_{i}=\pm 1}.) Suppose now that we would like to learn a nonlinear classification rule which corresponds to a linear classification rule for the transformed data pointsφ(xi).{\displaystyle \varphi (\mathbf {x} _{i}).}Moreover, we are given a kernel functionk{\displaystyle k}which satisfiesk(xi,xj)=φ(xi)⋅φ(xj){\displaystyle k(\mathbf {x} _{i},\mathbf {x} _{j})=\varphi (\mathbf {x} _{i})\cdot \varphi (\mathbf {x} _{j})}. 
We know the classification vectorw{\displaystyle \mathbf {w} }in the transformed space satisfies w=∑i=1nciyiφ(xi),{\displaystyle \mathbf {w} =\sum _{i=1}^{n}c_{i}y_{i}\varphi (\mathbf {x} _{i}),} where, theci{\displaystyle c_{i}}are obtained by solving the optimization problem maximizef(c1…cn)=∑i=1nci−12∑i=1n∑j=1nyici(φ(xi)⋅φ(xj))yjcj=∑i=1nci−12∑i=1n∑j=1nyicik(xi,xj)yjcjsubject to∑i=1nciyi=0,and0≤ci≤12nλfor alli.{\displaystyle {\begin{aligned}{\text{maximize}}\,\,f(c_{1}\ldots c_{n})&=\sum _{i=1}^{n}c_{i}-{\frac {1}{2}}\sum _{i=1}^{n}\sum _{j=1}^{n}y_{i}c_{i}(\varphi (\mathbf {x} _{i})\cdot \varphi (\mathbf {x} _{j}))y_{j}c_{j}\\&=\sum _{i=1}^{n}c_{i}-{\frac {1}{2}}\sum _{i=1}^{n}\sum _{j=1}^{n}y_{i}c_{i}k(\mathbf {x} _{i},\mathbf {x} _{j})y_{j}c_{j}\\{\text{subject to }}\sum _{i=1}^{n}c_{i}y_{i}&=0,\,{\text{and }}0\leq c_{i}\leq {\frac {1}{2n\lambda }}\;{\text{for all }}i.\end{aligned}}} The coefficientsci{\displaystyle c_{i}}can be solved for using quadratic programming, as before. Again, we can find some indexi{\displaystyle i}such that0<ci<(2nλ)−1{\displaystyle 0<c_{i}<(2n\lambda )^{-1}}, so thatφ(xi){\displaystyle \varphi (\mathbf {x} _{i})}lies on the boundary of the margin in the transformed space, and then solve b=wTφ(xi)−yi=[∑j=1ncjyjφ(xj)⋅φ(xi)]−yi=[∑j=1ncjyjk(xj,xi)]−yi.{\displaystyle {\begin{aligned}b=\mathbf {w} ^{\mathsf {T}}\varphi (\mathbf {x} _{i})-y_{i}&=\left[\sum _{j=1}^{n}c_{j}y_{j}\varphi (\mathbf {x} _{j})\cdot \varphi (\mathbf {x} _{i})\right]-y_{i}\\&=\left[\sum _{j=1}^{n}c_{j}y_{j}k(\mathbf {x} _{j},\mathbf {x} _{i})\right]-y_{i}.\end{aligned}}} Finally, z↦sgn⁡(wTφ(z)−b)=sgn⁡([∑i=1nciyik(xi,z)]−b).{\displaystyle \mathbf {z} \mapsto \operatorname {sgn}(\mathbf {w} ^{\mathsf {T}}\varphi (\mathbf {z} )-b)=\operatorname {sgn} \left(\left[\sum _{i=1}^{n}c_{i}y_{i}k(\mathbf {x} _{i},\mathbf {z} )\right]-b\right).} Recent algorithms for finding the SVM classifier include sub-gradient descent and coordinate descent. Both techniques have proven to offer significant advantages over the traditional approach when dealing with large, sparse datasets—sub-gradient methods are especially efficient when there are many training examples, and coordinate descent when the dimension of the feature space is high. Sub-gradient descentalgorithms for the SVM work directly with the expression f(w,b)=[1n∑i=1nmax(0,1−yi(wTxi−b))]+λ‖w‖2.{\displaystyle f(\mathbf {w} ,b)=\left[{\frac {1}{n}}\sum _{i=1}^{n}\max \left(0,1-y_{i}(\mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b)\right)\right]+\lambda \|\mathbf {w} \|^{2}.} Note thatf{\displaystyle f}is aconvex functionofw{\displaystyle \mathbf {w} }andb{\displaystyle b}. As such, traditionalgradient descent(orSGD) methods can be adapted, where instead of taking a step in the direction of the function's gradient, a step is taken in the direction of a vector selected from the function'ssub-gradient. 
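A minimal NumPy sketch of full-batch sub-gradient descent on the expression f(w, b) above; stochastic variants instead sample a single term of the sum at each step. The toy data, step size, and iteration count are illustrative choices, not a tuned implementation:

```python
import numpy as np

def svm_subgradient(X, y, lam=0.01, eta=0.1, iters=1000):
    """Full-batch sub-gradient descent on
    f(w, b) = (1/n) * sum_i max(0, 1 - y_i (w.x_i - b)) + lam * ||w||^2."""
    n, p = X.shape
    w, b = np.zeros(p), 0.0
    for _ in range(iters):
        margins = y * (X @ w - b)
        violating = margins < 1                     # points that violate the margin
        grad_w = 2 * lam * w - (y[violating][:, None] * X[violating]).sum(axis=0) / n
        grad_b = y[violating].sum() / n
        w -= eta * grad_w
        b -= eta * grad_b
    return w, b

# Toy linearly separable data.
X = np.array([[2.0, 2.0], [1.5, 2.5], [-2.0, -1.0], [-1.0, -2.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, b = svm_subgradient(X, y)
print(np.sign(X @ w - b))  # [ 1.  1. -1. -1.] on this toy data
```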
This approach has the advantage that, for certain implementations, the number of iterations does not scale withn{\displaystyle n}, the number of data points.[24] Coordinate descentalgorithms for the SVM work from the dual problem maximizef(c1…cn)=∑i=1nci−12∑i=1n∑j=1nyici(xi⋅xj)yjcj,subject to∑i=1nciyi=0,and0≤ci≤12nλfor alli.{\displaystyle {\begin{aligned}&{\text{maximize}}\,\,f(c_{1}\ldots c_{n})=\sum _{i=1}^{n}c_{i}-{\frac {1}{2}}\sum _{i=1}^{n}\sum _{j=1}^{n}y_{i}c_{i}(x_{i}\cdot x_{j})y_{j}c_{j},\\&{\text{subject to }}\sum _{i=1}^{n}c_{i}y_{i}=0,\,{\text{and }}0\leq c_{i}\leq {\frac {1}{2n\lambda }}\;{\text{for all }}i.\end{aligned}}} For eachi∈{1,…,n}{\displaystyle i\in \{1,\,\ldots ,\,n\}}, iteratively, the coefficientci{\displaystyle c_{i}}is adjusted in the direction of∂f/∂ci{\displaystyle \partial f/\partial c_{i}}. Then, the resulting vector of coefficients(c1′,…,cn′){\displaystyle (c_{1}',\,\ldots ,\,c_{n}')}is projected onto the nearest vector of coefficients that satisfies the given constraints. (Typically Euclidean distances are used.) The process is then repeated until a near-optimal vector of coefficients is obtained. The resulting algorithm is extremely fast in practice, although few performance guarantees have been proven.[25] The soft-margin support vector machine described above is an example of anempirical risk minimization(ERM) algorithm for thehinge loss. Seen this way, support vector machines belong to a natural class of algorithms for statistical inference, and many of its unique features are due to the behavior of the hinge loss. This perspective can provide further insight into how and why SVMs work, and allow us to better analyze their statistical properties. In supervised learning, one is given a set of training examplesX1…Xn{\displaystyle X_{1}\ldots X_{n}}with labelsy1…yn{\displaystyle y_{1}\ldots y_{n}}, and wishes to predictyn+1{\displaystyle y_{n+1}}givenXn+1{\displaystyle X_{n+1}}. To do so one forms ahypothesis,f{\displaystyle f}, such thatf(Xn+1){\displaystyle f(X_{n+1})}is a "good" approximation ofyn+1{\displaystyle y_{n+1}}. A "good" approximation is usually defined with the help of aloss function,ℓ(y,z){\displaystyle \ell (y,z)}, which characterizes how badz{\displaystyle z}is as a prediction ofy{\displaystyle y}. We would then like to choose a hypothesis that minimizes theexpected risk: ε(f)=E[ℓ(yn+1,f(Xn+1))].{\displaystyle \varepsilon (f)=\mathbb {E} \left[\ell (y_{n+1},f(X_{n+1}))\right].} In most cases, we don't know the joint distribution ofXn+1,yn+1{\displaystyle X_{n+1},\,y_{n+1}}outright. In these cases, a common strategy is to choose the hypothesis that minimizes theempirical risk: ε^(f)=1n∑k=1nℓ(yk,f(Xk)).{\displaystyle {\hat {\varepsilon }}(f)={\frac {1}{n}}\sum _{k=1}^{n}\ell (y_{k},f(X_{k})).} Under certain assumptions about the sequence of random variablesXk,yk{\displaystyle X_{k},\,y_{k}}(for example, that they are generated by a finite Markov process), if the set of hypotheses being considered is small enough, the minimizer of the empirical risk will closely approximate the minimizer of the expected risk asn{\displaystyle n}grows large. This approach is calledempirical risk minimization,or ERM. In order for the minimization problem to have a well-defined solution, we have to place constraints on the setH{\displaystyle {\mathcal {H}}}of hypotheses being considered. 
IfH{\displaystyle {\mathcal {H}}}is anormed space(as is the case for SVM), a particularly effective technique is to consider only those hypothesesf{\displaystyle f}for which‖f‖H<k{\displaystyle \lVert f\rVert _{\mathcal {H}}<k}. This is equivalent to imposing aregularization penaltyR(f)=λk‖f‖H{\displaystyle {\mathcal {R}}(f)=\lambda _{k}\lVert f\rVert _{\mathcal {H}}}, and solving the new optimization problem f^=argminf∈Hε^(f)+R(f).{\displaystyle {\hat {f}}=\mathrm {arg} \min _{f\in {\mathcal {H}}}{\hat {\varepsilon }}(f)+{\mathcal {R}}(f).} This approach is calledTikhonov regularization. More generally,R(f){\displaystyle {\mathcal {R}}(f)}can be some measure of the complexity of the hypothesisf{\displaystyle f}, so that simpler hypotheses are preferred. Recall that the (soft-margin) SVM classifierw^,b:x↦sgn⁡(w^Tx−b){\displaystyle {\hat {\mathbf {w} }},b:\mathbf {x} \mapsto \operatorname {sgn}({\hat {\mathbf {w} }}^{\mathsf {T}}\mathbf {x} -b)}is chosen to minimize the following expression: [1n∑i=1nmax(0,1−yi(wTx−b))]+λ‖w‖2.{\displaystyle \left[{\frac {1}{n}}\sum _{i=1}^{n}\max \left(0,1-y_{i}(\mathbf {w} ^{\mathsf {T}}\mathbf {x} -b)\right)\right]+\lambda \|\mathbf {w} \|^{2}.} In light of the above discussion, we see that the SVM technique is equivalent to empirical risk minimization with Tikhonov regularization, where in this case the loss function is thehinge loss ℓ(y,z)=max(0,1−yz).{\displaystyle \ell (y,z)=\max \left(0,1-yz\right).} From this perspective, SVM is closely related to other fundamentalclassification algorithmssuch asregularized least-squaresandlogistic regression. The difference between the three lies in the choice of loss function: regularized least-squares amounts to empirical risk minimization with thesquare-loss,ℓsq(y,z)=(y−z)2{\displaystyle \ell _{sq}(y,z)=(y-z)^{2}}; logistic regression employs thelog-loss, ℓlog(y,z)=ln⁡(1+e−yz).{\displaystyle \ell _{\log }(y,z)=\ln(1+e^{-yz}).} The difference between the hinge loss and these other loss functions is best stated in terms oftarget functions -the function that minimizes expected risk for a given pair of random variablesX,y{\displaystyle X,\,y}. In particular, letyx{\displaystyle y_{x}}denotey{\displaystyle y}conditional on the event thatX=x{\displaystyle X=x}. In the classification setting, we have: yx={1with probabilitypx−1with probability1−px{\displaystyle y_{x}={\begin{cases}1&{\text{with probability }}p_{x}\\-1&{\text{with probability }}1-p_{x}\end{cases}}} The optimal classifier is therefore: f∗(x)={1ifpx≥1/2−1otherwise{\displaystyle f^{*}(x)={\begin{cases}1&{\text{if }}p_{x}\geq 1/2\\-1&{\text{otherwise}}\end{cases}}} For the square-loss, the target function is the conditional expectation function,fsq(x)=E[yx]{\displaystyle f_{sq}(x)=\mathbb {E} \left[y_{x}\right]}; For the logistic loss, it's the logit function,flog(x)=ln⁡(px/(1−px)){\displaystyle f_{\log }(x)=\ln \left(p_{x}/({1-p_{x}})\right)}. While both of these target functions yield the correct classifier, assgn⁡(fsq)=sgn⁡(flog)=f∗{\displaystyle \operatorname {sgn}(f_{sq})=\operatorname {sgn}(f_{\log })=f^{*}}, they give us more information than we need. In fact, they give us enough information to completely describe the distribution ofyx{\displaystyle y_{x}}. On the other hand, one can check that the target function for the hinge loss isexactlyf∗{\displaystyle f^{*}}. 
Thus, in a sufficiently rich hypothesis space—or equivalently, for an appropriately chosen kernel—the SVM classifier will converge to the simplest function (in terms ofR{\displaystyle {\mathcal {R}}}) that correctly classifies the data. This extends the geometric interpretation of SVM—for linear classification, the empirical risk is minimized by any function whose margins lie between the support vectors, and the simplest of these is the max-margin classifier.[26] SVMs belong to a family of generalizedlinear classifiersand can be interpreted as an extension of theperceptron.[27]They can also be considered a special case ofTikhonov regularization. A special property is that they simultaneously minimize the empiricalclassification errorand maximize thegeometric margin; hence they are also known asmaximummargin classifiers. A comparison of the SVM to other classifiers has been made by Meyer, Leisch and Hornik.[28] The effectiveness of SVM depends on the selection of kernel, the kernel's parameters, and soft margin parameterλ{\displaystyle \lambda }. A common choice is a Gaussian kernel, which has a single parameterγ{\displaystyle \gamma }. The best combination ofλ{\displaystyle \lambda }andγ{\displaystyle \gamma }is often selected by agrid searchwith exponentially growing sequences ofλ{\displaystyle \lambda }andγ{\displaystyle \gamma }, for example,λ∈{2−5,2−3,…,213,215}{\displaystyle \lambda \in \{2^{-5},2^{-3},\dots ,2^{13},2^{15}\}};γ∈{2−15,2−13,…,21,23}{\displaystyle \gamma \in \{2^{-15},2^{-13},\dots ,2^{1},2^{3}\}}. Typically, each combination of parameter choices is checked usingcross validation, and the parameters with best cross-validation accuracy are picked. Alternatively, recent work inBayesian optimizationcan be used to selectλ{\displaystyle \lambda }andγ{\displaystyle \gamma }, often requiring the evaluation of far fewer parameter combinations than grid search. The final model, which is used for testing and for classifying new data, is then trained on the whole training set using the selected parameters.[29] Potential drawbacks of the SVM include the following aspects: Multiclass SVM aims to assign labels to instances by using support vector machines, where the labels are drawn from a finite set of several elements. The dominant approach for doing so is to reduce the singlemulticlass probleminto multiplebinary classificationproblems.[30]Common methods for such reduction include:[30][31] Crammer and Singer proposed a multiclass SVM method which casts themulticlass classificationproblem into a single optimization problem, rather than decomposing it into multiple binary classification problems.[34]See also Lee, Lin and Wahba[35][36]and Van den Burg and Groenen.[37] Transductive support vector machines extend SVMs in that they could also treat partially labeled data insemi-supervised learningby following the principles oftransduction. Here, in addition to the training setD{\displaystyle {\mathcal {D}}}, the learner is also given a set D⋆={xi⋆∣xi⋆∈Rp}i=1k{\displaystyle {\mathcal {D}}^{\star }=\{\mathbf {x} _{i}^{\star }\mid \mathbf {x} _{i}^{\star }\in \mathbb {R} ^{p}\}_{i=1}^{k}} of test examples to be classified. 
Formally, a transductive support vector machine is defined by the following primal optimization problem:[38] Minimize (inw,b,y⋆{\displaystyle \mathbf {w} ,b,\mathbf {y} ^{\star }}) 12‖w‖2{\displaystyle {\frac {1}{2}}\|\mathbf {w} \|^{2}} subject to (for anyi=1,…,n{\displaystyle i=1,\dots ,n}and anyj=1,…,k{\displaystyle j=1,\dots ,k}) yi(w⋅xi−b)≥1,yj⋆(w⋅xj⋆−b)≥1,{\displaystyle {\begin{aligned}&y_{i}(\mathbf {w} \cdot \mathbf {x} _{i}-b)\geq 1,\\&y_{j}^{\star }(\mathbf {w} \cdot \mathbf {x} _{j}^{\star }-b)\geq 1,\end{aligned}}} and yj⋆∈{−1,1}.{\displaystyle y_{j}^{\star }\in \{-1,1\}.} Transductive support vector machines were introduced by Vladimir N. Vapnik in 1998. Structured support-vector machine is an extension of the traditional SVM model. While the SVM model is primarily designed for binary classification, multiclass classification, and regression tasks, structured SVM broadens its application to handle general structured output labels, for example parse trees, classification with taxonomies, sequence alignment and many more.[39] A version of SVM forregressionwas proposed in 1996 byVladimir N. Vapnik, Harris Drucker, Christopher J. C. Burges, Linda Kaufman and Alexander J. Smola.[40]This method is called support vector regression (SVR). The model produced by support vector classification (as described above) depends only on a subset of the training data, because the cost function for building the model does not care about training points that lie beyond the margin. Analogously, the model produced by SVR depends only on a subset of the training data, because the cost function for building the model ignores any training data close to the model prediction. Another SVM version known asleast-squares support vector machine(LS-SVM) has been proposed by Suykens and Vandewalle.[41] Training the original SVR means solving[42] wherexi{\displaystyle x_{i}}is a training sample with target valueyi{\displaystyle y_{i}}. The inner product plus intercept⟨w,xi⟩+b{\displaystyle \langle w,x_{i}\rangle +b}is the prediction for that sample, andε{\displaystyle \varepsilon }is a free parameter that serves as a threshold: all predictions have to be within anε{\displaystyle \varepsilon }range of the true predictions. Slack variables are usually added into the above to allow for errors and to allow approximation in the case the above problem is infeasible. In 2011 it was shown by Polson and Scott that the SVM admits aBayesianinterpretation through the technique ofdata augmentation.[43]In this approach the SVM is viewed as agraphical model(where the parameters are connected via probability distributions). This extended view allows the application ofBayesiantechniques to SVMs, such as flexible feature modeling, automatichyperparametertuning, andpredictive uncertainty quantification. Recently, a scalable version of the Bayesian SVM was developed byFlorian Wenzel, enabling the application of Bayesian SVMs tobig data.[44]Florian Wenzel developed two different versions, a variational inference (VI) scheme for the Bayesian kernel support vector machine (SVM) and a stochastic version (SVI) for the linear Bayesian SVM.[45] The parameters of the maximum-margin hyperplane are derived by solving the optimization. There exist several specialized algorithms for quickly solving thequadratic programming(QP) problem that arises from SVMs, mostly relying on heuristics for breaking the problem down into smaller, more manageable chunks. 
Another approach is to use aninterior-point methodthat usesNewton-like iterations to find a solution of theKarush–Kuhn–Tucker conditionsof the primal and dual problems.[46]Instead of solving a sequence of broken-down problems, this approach directly solves the problem altogether. To avoid solving a linear system involving the large kernel matrix, a low-rank approximation to the matrix is often used in the kernel trick. Another common method is Platt'ssequential minimal optimization(SMO) algorithm, which breaks the problem down into 2-dimensional sub-problems that are solved analytically, eliminating the need for a numerical optimization algorithm and matrix storage. This algorithm is conceptually simple, easy to implement, generally faster, and has better scaling properties for difficult SVM problems.[47] The special case of linear support vector machines can be solved more efficiently by the same kind of algorithms used to optimize its close cousin,logistic regression; this class of algorithms includessub-gradient descent(e.g., PEGASOS[48]) andcoordinate descent(e.g., LIBLINEAR[49]). LIBLINEAR has some attractive training-time properties. Each convergence iteration takes time linear in the time taken to read the train data, and the iterations also have aQ-linear convergenceproperty, making the algorithm extremely fast. The general kernel SVMs can also be solved more efficiently usingsub-gradient descent(e.g. P-packSVM[50]), especially whenparallelizationis allowed. Kernel SVMs are available in many machine-learning toolkits, includingLIBSVM,MATLAB,SAS, SVMlight,kernlab,scikit-learn,Shogun,Weka,Shark,JKernelMachines,OpenCVand others. Preprocessing of data (standardization) is highly recommended to enhance accuracy of classification.[51]There are a few methods of standardization, such as min-max, normalization by decimal scaling, Z-score.[52]Subtraction of mean and division by variance of each feature is usually used for SVM.[53]
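A short sketch of the practical workflow described above, assuming scikit-learn is available: features are standardized, an RBF-kernel SVM is fitted, and the soft-margin parameter C and kernel parameter gamma are chosen by cross-validated grid search over exponentially spaced candidates (the grid and dataset below are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic data standing in for a real classification problem.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Standardise the features, fit an RBF-kernel SVM, and pick C and gamma by
# cross-validated grid search over exponentially spaced candidate values.
pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
param_grid = {"svc__C": [2.0 ** k for k in range(-5, 16, 4)],
              "svc__gamma": [2.0 ** k for k in range(-15, 4, 4)]}
search = GridSearchCV(pipe, param_grid, cv=5).fit(X, y)
print(search.best_params_, search.best_score_)
```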
https://en.wikipedia.org/wiki/Support_vector_machines
File verification is the process of using an algorithm to verify the integrity of a computer file, usually by checksum. This can be done by comparing two files bit-by-bit, but that requires two copies of the same file and may miss systematic corruptions that occur to both files. A more popular approach is to generate a hash of the copied file and compare it to the hash of the original file.

File integrity can be compromised, usually referred to as the file becoming corrupted. A file can become corrupted in a variety of ways: faulty storage media, errors in transmission, write errors during copying or moving, software bugs, and so on.

Hash-based verification ensures that a file has not been corrupted by comparing the file's hash value to a previously calculated value. If these values match, the file is presumed to be unmodified. Due to the nature of hash functions, hash collisions may result in false positives, but the likelihood of collisions is often negligible with random corruption.

It is often desirable to verify that a file has not been modified in transmission or storage by untrusted parties, for example, to include malicious code such as viruses or backdoors. To verify authenticity, a classical hash function is not enough, as such functions are not designed to be collision resistant; it is computationally trivial for an attacker to cause deliberate hash collisions, meaning that a malicious change in the file is not detected by a hash comparison. In cryptography, this attack is called a preimage attack. For this purpose, cryptographic hash functions are often employed. As long as the hash sums cannot be tampered with (for example, if they are communicated over a secure channel) the files can be presumed to be intact. Alternatively, digital signatures can be employed to assure tamper resistance.

A checksum file is a small file that contains the checksums of other files. There are a few well-known checksum file formats.[1] Several utilities, such as md5deep, can use such checksum files to automatically verify an entire directory of files in one operation. The particular hash algorithm used is often indicated by the file extension of the checksum file. The ".sha1" file extension indicates a checksum file containing 160-bit SHA-1 hashes in sha1sum format. The ".md5" file extension, or a file named "MD5SUMS", indicates a checksum file containing 128-bit MD5 hashes in md5sum format. The ".sfv" file extension indicates a checksum file containing 32-bit CRC32 checksums in simple file verification format. The "crc.list" file indicates a checksum file containing 32-bit CRC checksums in brik format.

As of 2012, the best-practice recommendation is to use SHA-2 or SHA-3 to generate new file integrity digests, and to accept MD5 and SHA-1 digests for backward compatibility if stronger digests are not available. The theoretically weaker SHA-1, the weaker MD5, or the much weaker CRC were previously commonly used for file integrity checks.[2][3][4][5][6][7][8][9][10]

CRC checksums cannot be used to verify the authenticity of files, as CRC32 is not a collision-resistant hash function: even if the hash sum file is not tampered with, it is computationally trivial for an attacker to replace a file with one that has the same CRC digest as the original, meaning that a malicious change in the file is not detected by a CRC comparison.[citation needed]
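As a concrete illustration of hash-based verification, the sketch below computes the SHA-256 digest of a file with Python's standard hashlib module and compares it against an expected value. The file name and expected digest shown are hypothetical; in practice the expected value would come from a trusted checksum file or be communicated over a secure channel, as described above.

```python
# Sketch of hash-based file verification using only the standard library.
import hashlib

def file_digest(path, algorithm="sha256", chunk_size=8192):
    """Return the hex digest of a file, reading it in chunks to bound memory use."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical file name and expected digest, for illustration only.
expected = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
actual = file_digest("example.iso")
print("OK" if actual == expected else "MISMATCH")
```

The same pattern extends to verifying a whole directory by iterating over the entries of a checksum file such as the sha1sum or md5sum formats mentioned above.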
https://en.wikipedia.org/wiki/File_verification
Ambient IoT, from ambient and Internet of things, is a concept originally coined by 3GPP[1] that is used in the technology industry referring to an ecosystem of a large number of objects in which every item is connected into a wireless sensor network using low-cost self-powered sensor nodes.[2][3][4][5] Bluetooth SIG has assessed the total addressable market of Ambient IoT to be more than 10 trillion devices across different verticals.[6]

The applications of Ambient IoT include making supply chains for food and medicine more efficient and sustainable, protecting from counterfeiting, and delivering the data required for advanced transportation and smart city initiatives.[2][7] Ambient IoT has been called "the original vision for the IoT" by U.S. Department of Commerce IoT Advisory Board chair Benson Chan.[2]

Standards for Ambient IoT are being considered by 3GPP,[8] IEEE and Bluetooth SIG.[4]
https://en.wikipedia.org/wiki/Ambient_IoT
Alertness is a state of active attention characterized by high sensory awareness. Someone who is alert is vigilant and promptly meets danger or emergency, or is quick to perceive and act. Alertness is a psychological and physiological state.

Lack of alertness is a symptom of a number of conditions, including narcolepsy, attention deficit hyperactivity disorder, chronic fatigue syndrome, depression, Addison's disease, and sleep deprivation. Pronounced lack of alertness is an altered level of consciousness. States with low levels of alertness include drowsiness.

The word is formed from "alert", which comes from the Italian all'erta (on the watch, literally: on the height; 1618).[citation needed]

Wakefulness refers mainly to differences between the sleep and waking states; vigilance refers to sustained alertness and concentration. Both terms are sometimes used synonymously with alertness.

People who have to be alert during their jobs, such as air traffic controllers or pilots, often face challenges maintaining their alertness. Research shows that for people "...engaged in attention-intensive and monotonous tasks, retaining a constant level of alertness is rare if not impossible." If people employed in safety-related or transportation jobs have lapses in alertness, this "may lead to severe consequences in occupations ranging from air traffic control to monitoring of nuclear power plants."[1]

Neurotransmitters that can initiate, promote, or enhance wakefulness or alertness include serotonin, (nor)epinephrine, dopamine (e.g. blockade of dopamine reuptake), glutamate, histamine, and acetylcholine. Neuromodulators that can do so include the neuropeptide orexin. Similarly, inhibition or reduction of mechanisms causing sleepiness or drowsiness, such as certain cytokines and adenosine (as with caffeine), may also increase perceived wakefulness and thus alertness.[ambiguous][2][3][4]

Wakefulness depends on the coordinated effort of multiple brain areas, which are affected by neurotransmitters and other factors.[3] Many neurotransmitters are involved in wakefulness, including GABA, acetylcholine, adenosine, serotonin, norepinephrine, histamine, and dopamine.[5] No single neurotransmitter is alone responsible for the sensation of wakefulness; rather, many transmitters act together to produce this effect.[5][6] Research to map the wakefulness circuitry is ongoing.[6]

Beta power has been used as an indicator of cortical arousal or alertness by several studies.[further explanation needed][7] A study also measured alertness with EEG data.[further explanation needed][8] Additional information can be found on the neurobiology, neuroscience, brain, behavioral neuroscience, and neurotransmitter pages.

The stimulant and adenosine receptor antagonist caffeine is widely used to increase alertness or wakefulness and improve mood or performance. People typically self-administer it in the form of drinks like green tea (where it is present alongside l-theanine), energy drinks (often containing sugar or sugar substitutes), or coffee (which contains various polyphenols).
The chemicals that accompany caffeine in these preparations can potentially alter the alertness-promoting effects of caffeine.[9]Caffeine is the world's most consumed stimulant drug.[10] Various natural biochemicals and herbs may have similar anti-fatigue effects, such asrhodiola rosea.[11]Variouspsychostimulantslikebromantanehave also been investigated as potential treatments for conditions where fatigue is a primary symptom.[12]Thealkaloidstheacrineandmethylliberineare structurally similar to caffeine and preliminary research supports their pro-alertness effects.[13] During the Second World War, U.S. soldiers and aviators were givenbenzedrine, anamphetaminedrug, to increase their alertness during long periods on duty. While air force pilots[where?]are able to use the drug to remain awake during combat flights, the use of amphetamines by commercial airline pilots is forbidden.[where?][citation needed]British troops used 72 million amphetamine tablets in the second world war[14]and the Royal Air Force used so many that "Methedrinewon the Battle of Britain" according to one report.[15][attribution needed]American bomber pilots used amphetamines ("go pills") to stay awake during long missions. TheTarnak Farm incident, in which an AmericanF-16pilot killed several friendly Canadian soldiers on the ground, was blamed by the pilot on his use of amphetamine. A nonjudicial hearing rejected the pilot's claim. Amphetamine is a common study aid among college and high-school students.[16]Amphetamine increases energy levels, concentration, and motivation, allowing students to study for an extended period of time.[17]These drugs are often acquired through diverted prescriptions of medication used to treatADHD, acquired from fellow students, rather than illicitly produced drugs.[18]Cocaineis also used to increase alertness,[19]and is present incoca tea.[20] Theeugeroicmodafinilhas recently gained popularity with theUS Military[21][vague]andother militaries. Beyond good sleep, physical activity, andhealthy diet, a review suggests odours,music, and extrinsicmotivationmay increase alertness or decrease mental fatigue.[22]Short rest periods and adjustments to lighting (level and type of) may also be useful.[23]Various types ofneurostimulationare being researched,[24][further explanation needed]as is themicrobiomeand related interventions.[2] A study suggests non-genetic determinants of alertness uponwaking up from sleepare:[25][26] The baseline of daily alertness[clarification needed]is related to the quality oftheir[clarification needed]sleep (currently[may be outdated as of July 2023]measured only by self-reported quality), positive emotional state (specifically self-report happiness), and age.[26]There are genes that enable people to be apparently healthy and alert with little sleep. However, twin-pair analyses indicate that the genetic contribution to daytime alertness is small.[26]Other factors such as natural light exposure[26]and synchronicity with thecircadian rhythmmay matter as well. Vigilance is important for animals so that they may watch out for predators. Typically a reduction in alertness is observed in animals that live in larger groups. Studies on vigilance have been conducted on various animals including thescaly-breasted munia.[28]
https://en.wikipedia.org/wiki/Alertness
In cryptography, Square (sometimes written SQUARE) is a block cipher invented by Joan Daemen and Vincent Rijmen. The design, published in 1997, is a forerunner to Rijndael, which has been adopted as the Advanced Encryption Standard. Square was introduced together with a new form of cryptanalysis discovered by Lars Knudsen, called the "Square attack". The structure of Square is a substitution–permutation network with eight rounds, operating on 128-bit blocks and using a 128-bit key. Square is not patented.
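Square itself is not specified here, but the substitution–permutation network structure it shares with ciphers like Rijndael can be illustrated with a deliberately toy example: each round mixes in a round key, passes the state through small S-boxes, and permutes the bits. The 16-bit block size, S-box, permutation, and round keys below are made up purely for illustration and are not those of the actual Square cipher.

```python
# Toy substitution-permutation network round on a 16-bit block.
# This illustrates the general SPN structure only; it is NOT the Square cipher.
SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]                 # toy 4-bit S-box
PERM = [0, 4, 8, 12, 1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15]   # toy bit permutation

def spn_round(block, round_key):
    block ^= round_key                                            # key mixing
    nibbles = [(block >> (4 * i)) & 0xF for i in range(4)]
    block = sum(SBOX[n] << (4 * i) for i, n in enumerate(nibbles))  # substitution
    permuted = 0
    for i in range(16):                                           # permutation
        permuted |= ((block >> i) & 1) << PERM[i]
    return permuted

state = 0x1234
for rk in [0x0F0F, 0x3333, 0x5555]:   # toy round keys
    state = spn_round(state, rk)
print(hex(state))
```

A real SPN cipher such as Square iterates a structurally similar round, but over 128-bit blocks, with carefully chosen S-boxes and diffusion layers and a proper key schedule.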
https://en.wikipedia.org/wiki/Square_(cipher)
The Erdős number (Hungarian: [ˈɛrdøːʃ]) describes the "collaborative distance" between mathematician Paul Erdős and another person, as measured by authorship of mathematical papers. The same principle has been applied in other fields where a particular individual has collaborated with a large and broad number of peers.

Paul Erdős (1913–1996) was an influential Hungarian mathematician who, in the latter part of his life, spent a great deal of time writing papers with a large number of colleagues (more than 500), working on solutions to outstanding mathematical problems.[1] He published more papers during his lifetime (at least 1,525[2]) than any other mathematician in history.[1] (Leonhard Euler published more total pages of mathematics but fewer separate papers: about 800.)[3] Erdős spent most of his career with no permanent home or job. He traveled with everything he owned in two suitcases, and would visit mathematicians with whom he wanted to collaborate, often unexpectedly, and expect to stay with them.[4][5][6]

The idea of the Erdős number was originally created by the mathematician's friends as a tribute to his enormous output. Later it gained prominence as a tool to study how mathematicians cooperate to find answers to unsolved problems. Several projects are devoted to studying connectivity among researchers, using the Erdős number as a proxy.[7] For example, Erdős collaboration graphs can tell us how authors cluster, how the number of co-authors per paper evolves over time, or how new theories propagate.[8]

Several studies have shown that leading mathematicians tend to have particularly low Erdős numbers (i.e., high proximity).[9] The median Erdős number of Fields Medalists is 3. Only 7,097 (about 5% of mathematicians with a collaboration path) have an Erdős number of 2 or lower.[10] As time passes, the lowest Erdős number that can still be achieved will necessarily increase, as mathematicians with low Erdős numbers die and become unavailable for collaboration. Still, historical figures can have low Erdős numbers. For example, renowned Indian mathematician Srinivasa Ramanujan has an Erdős number of only 3 (through G. H. Hardy, Erdős number 2), even though Paul Erdős was only 7 years old when Ramanujan died.[11]

To be assigned an Erdős number, someone must be a coauthor of a research paper with another person who has a finite Erdős number. Paul Erdős himself is assigned an Erdős number of zero. A certain author's Erdős number is one greater than the lowest Erdős number of any of their collaborators; for example, an author who has coauthored a publication with Erdős would have an Erdős number of 1. The American Mathematical Society provides a free online tool to determine the collaboration distance between two mathematical authors listed in the Mathematical Reviews catalogue.[11]

Erdős wrote around 1,500 mathematical articles in his lifetime, mostly co-written. He had 509 direct collaborators;[7] these are the people with Erdős number 1. The people who have collaborated with them (but not with Erdős himself) have an Erdős number of 2 (12,600 people as of 7 August 2020[12]), those who have collaborated with people who have an Erdős number of 2 (but not with Erdős or anyone with an Erdős number of 1) have an Erdős number of 3, and so forth. A person with no such coauthorship chain connecting to Erdős has an Erdős number of infinity (or an undefined one). Since the death of Paul Erdős, the lowest Erdős number that a new researcher can obtain is 2.
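The assignment rule described above, in which Erdős has number 0 and every other author's number is one more than the smallest number among their coauthors, amounts to a breadth-first search over the collaboration graph. The sketch below computes these numbers for a small, entirely hypothetical coauthorship graph.

```python
# Breadth-first search over a coauthorship graph to assign Erdos numbers.
# The graph below is hypothetical and purely illustrative.
from collections import deque

coauthors = {
    "Erdos": ["A", "B"],
    "A": ["Erdos", "C"],
    "B": ["Erdos"],
    "C": ["A", "D"],
    "D": ["C"],
    "E": [],            # no path to Erdos: number is infinite/undefined
}

def erdos_numbers(graph, root="Erdos"):
    numbers = {root: 0}
    queue = deque([root])
    while queue:
        person = queue.popleft()
        for coauthor in graph.get(person, []):
            if coauthor not in numbers:      # first visit = shortest path
                numbers[coauthor] = numbers[person] + 1
                queue.append(coauthor)
    return numbers

print(erdos_numbers(coauthors))   # {'Erdos': 0, 'A': 1, 'B': 1, 'C': 2, 'D': 3}
```

Authors absent from the result, such as "E" here, have no coauthorship chain to Erdős and therefore an infinite (or undefined) Erdős number.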
There is room for ambiguity over what constitutes a link between two authors. The American Mathematical Society collaboration distance calculator uses data fromMathematical Reviews, which includes most mathematics journals but covers other subjects only in a limited way, and which also includes some non-research publications. The Erdős Number Project web site says: ... One drawback of the MR system is that it considers all jointly authored works as providing legitimate links, even articles such as obituaries, which are not really joint research. ...[13] It also says: ... Our criterion for inclusion of an edge between vertices u and v is some research collaboration between them resulting in a published work. Any number of additional co-authors is permitted,... but excludes non-research publications such as elementary textbooks, joint editorships, obituaries, and the like. The "Erdős number of the second kind" restricts assignment of Erdős numbers to papers with only two collaborators.[14] The Erdős number was most likely first defined in print by Casper Goffman, ananalystwhose own Erdős number is 2.[12]Goffman published his observations about Erdős' prolific collaboration in a 1969 article entitled "And what is your Erdős number?"[15]See also some comments in an obituary by Michael Golomb.[16] The median Erdős number among Fields Medalists is as low as 3.[10]Fields Medalists with Erdős number 2 includeAtle Selberg,Kunihiko Kodaira,Klaus Roth,Alan Baker,Enrico Bombieri,David Mumford,Charles Fefferman,William Thurston,Shing-Tung Yau,Jean Bourgain,Richard Borcherds,Manjul Bhargava,Jean-Pierre SerreandTerence Tao. There are no Fields Medalists with Erdős number 1;[17]however,Endre Szemerédiis anAbel PrizeLaureate with Erdős number 1.[9] While Erdős collaborated with hundreds of co-authors, there were some individuals with whom he co-authored dozens of papers. This is a list of the ten persons who most frequently co-authored with Erdős and their number of papers co-authored with Erdős,i.e., their number of collaborations.[18] As of 2022[update], all Fields Medalists have a finite Erdős number, with values that range between 2 and 6, and a median of 3. In contrast, the median Erdős number across all mathematicians (with a finite Erdős number) is 5, with an extreme value of 13.[19]The table below summarizes the Erdős number statistics forNobel prizelaureates in Physics, Chemistry, Medicine, and Economics.[20]The first column counts the number of laureates. The second column counts the number of winners with a finite Erdős number. The third column is the percentage of winners with a finite Erdős number. The remaining columns report the minimum, maximum, average, and median Erdős numbers among those laureates. Among the Nobel Prize laureates in Physics,Albert EinsteinandSheldon Glashowhave an Erdős number of 2. Nobel Laureates with an Erdős number of 3 includeEnrico Fermi,Otto Stern,Wolfgang Pauli,Max Born,Willis E. Lamb,Eugene Wigner,Richard P. Feynman,Hans A. Bethe,Murray Gell-Mann,Abdus Salam,Steven Weinberg,Norman F. Ramsey,Frank Wilczek,David Wineland, andGiorgio Parisi. Fields Medal-winning physicistEd Wittenhas an Erdős number of 3.[10] Computational biologistLior Pachterhas an Erdős number of 2.[21]Evolutionary biologistRichard Lenskihas an Erdős number of 3, having co-authored a publication with Lior Pachter and with mathematicianBernd Sturmfels, each of whom has an Erdős number of 2.[22] There are at least two winners of theNobel Prize in Economicswith an Erdős number of 2:Harry M. 
Markowitz(1990) andLeonid Kantorovich(1975). Other financial mathematicians with Erdős number of 2 includeDavid Donoho,Marc Yor,Henry McKean,Daniel Stroock, andJoseph Keller. Nobel Prize laureates in Economics with an Erdős number of 3 includeKenneth J. Arrow(1972),Milton Friedman(1976),Herbert A. Simon(1978),Gerard Debreu(1983),John Forbes Nash, Jr.(1994),James Mirrlees(1996),Daniel McFadden(2000),Daniel Kahneman(2002),Robert J. Aumann(2005),Leonid Hurwicz(2007),Roger Myerson(2007),Alvin E. Roth(2012), andLloyd S. Shapley(2012) andJean Tirole(2014).[23] Some investment firms have been founded by mathematicians with low Erdős numbers, among themJames B. AxofAxcom Technologies, andJames H. SimonsofRenaissance Technologies, both with an Erdős number of 3.[24][25] Since the more formal versions of philosophy share reasoning with the basics of mathematics, these fields overlap considerably, and Erdős numbers are available for many philosophers.[26]PhilosophersJohn P. BurgessandBrian Skyrmshave an Erdős number of 2.[12]Jon BarwiseandJoel David Hamkins, both with Erdős number 2, have also contributed extensively to philosophy, but are primarily described as mathematicians. JudgeRichard Posner, having coauthored withAlvin E. Roth, has an Erdős number of at most 4.Roberto Mangabeira Unger, a politician, philosopher, and legal theorist who teaches at Harvard Law School, has an Erdős number of at most 4, having coauthored withLee Smolin. Angela Merkel,Chancellor of Germanyfrom 2005 to 2021, has an Erdős number of at most 5.[17] Some fields of engineering, in particularcommunication theoryandcryptography, make direct use of the discrete mathematics championed by Erdős. It is therefore not surprising that practitioners in these fields have low Erdős numbers. For example,Robert McEliece, a professor ofelectrical engineeringatCaltech, had an Erdős number of 1, having collaborated with Erdős himself.[27]CryptographersRon Rivest,Adi Shamir, andLeonard Adleman, inventors of theRSAcryptosystem, all have Erdős number 2.[21] The Romanian mathematician and computational linguistSolomon Marcushad an Erdős number of 1 for a paper inActa Mathematica Hungaricathat he co-authored with Erdős in 1957.[28] Erdős numbers have been a part of thefolkloreof mathematicians throughout the world for many years. Among all working mathematicians at the turn of the millennium who have a finite Erdős number, the numbers range up to 15, the median is 5, and the mean is 4.65;[7]almost everyone with a finite Erdős number has a number less than 8. Due to the very high frequency of interdisciplinary collaboration in science today, very large numbers of non-mathematicians in many other fields of science also have finite Erdős numbers.[29]For example, political scientistSteven Bramshas an Erdős number of 2. In biomedical research, it is common for statisticians to be among the authors of publications, and many statisticians can be linked to Erdős viaPersi DiaconisorPaul Deheuvels, who have Erdős numbers of 1, orJohn Tukey, who has an Erdős number of 2. Similarly, the prominent geneticistEric Landerand the mathematicianDaniel Kleitmanhave collaborated on papers,[30][31]and since Kleitman has an Erdős number of 1,[32]a large fraction of the genetics and genomics community can be linked via Lander and his numerous collaborators. 
Similarly, collaboration withGustavus Simmonsopened the door forErdős numberswithin thecryptographicresearch community, and manylinguistshave finite Erdős numbers, many due to chains of collaboration with such notable scholars asNoam Chomsky(Erdős number 4),[33]William Labov(3),[34]Mark Liberman(3),[35]Geoffrey Pullum(3),[36]orIvan Sag(4).[37]There are also connections withartsfields.[38] According to Alex Lopez-Ortiz, all theFieldsandNevanlinna prizewinners during the three cycles in 1986 to 1994 have Erdős numbers of at most 9. Earlier mathematicians published fewer papers than modern ones, and more rarely published jointly written papers. The earliest person known to have a finite Erdős number is eitherAntoine Lavoisier(born 1743, Erdős number 13),Richard Dedekind(born 1831, Erdős number 7), orFerdinand Georg Frobenius(born 1849, Erdős number 3), depending on the standard of publication eligibility.[39] Martin Tompa[40]proposed adirected graphversion of the Erdős number problem, by orienting edges of the collaboration graph from the alphabetically earlier author to the alphabetically later author and defining themonotone Erdős numberof an author to be the length of alongest pathfrom Erdős to the author in this directed graph. He finds a path of this type of length 12. Also,Michael Barrsuggests "rational Erdős numbers", generalizing the idea that a person who has writtenpjoint papers with Erdős should be assigned Erdős number 1/p.[41]From the collaboration multigraph of the second kind (although he also has a way to deal with the case of the first kind)—with one edge between two mathematicians foreachjoint paper they have produced—form an electrical network with a one-ohm resistor on each edge. The total resistance between two nodes tells how "close" these two nodes are. It has been argued that "for an individual researcher, a measure such as Erdős number captures the structural properties of [the] network whereas theh-indexcaptures the citation impact of the publications," and that "One can be easily convinced that ranking in coauthorship networks should take into account both measures to generate a realistic and acceptable ranking."[42] In 2004 William Tozier, a mathematician with an Erdős number of 4 auctioned off a co-authorship oneBay, hence providing the buyer with an Erdős number of 5. The winning bid of $1031 was posted by a Spanish mathematician, who refused to pay and only placed the bid to stop what he considered a mockery.[43][44] A number of variations on the concept have been proposed to apply to other fields, notably theBacon number(as in the gameSix Degrees of Kevin Bacon), connecting actors to the actorKevin Baconby a chain of joint appearances in films. It was created in 1994, 25 years after Goffman's article on the Erdős number. A small number of people are connected to both Erdős and Bacon and thus have anErdős–Bacon number, which combines the two numbers by taking their sum. One example is the actress-mathematicianDanica McKellar, best known for playing Winnie Cooper on the TV seriesThe Wonder Years. Her Erdős number is 4,[45]and her Bacon number is 2.[46] Further extension is possible. For example, the "Erdős–Bacon–Sabbath number" is the sum of the Erdős–Bacon number and the collaborative distance to the bandBlack Sabbathin terms of singing in public. 
PhysicistStephen Hawkinghad an Erdős–Bacon–Sabbath number of 8,[47]and actressNatalie Portmanhas one of 11 (her Erdős number is 5).[48] Inchess, theMorphy numberdescribes a player's connection toPaul Morphy, widely considered the greatest chess player of his time and an unofficialWorld Chess Champion.[49] Ingo, theShusakunumber describes a player's connection to Honinbo Shusaku, the strongest player of his time.[50][51] Invideo games, theRyunumber describes a video game character's connection to theStreet Fightercharacter Ryu.[52][53]
https://en.wikipedia.org/wiki/Erd%C5%91s_number
TheFederal Information Processing Standard Publication 140-2, (FIPS PUB 140-2),[1][2]is aU.S.governmentcomputer securitystandardused to approvecryptographic modules. The title isSecurity Requirements for Cryptographic Modules. Initial publication was on May 25, 2001, and was last updated December 3, 2002. Its successor,FIPS 140-3, was approved on March 22, 2019, and became effective on September 22, 2019.[3]FIPS 140-3 testing began on September 22, 2020, and the first FIPS 140-3 validation certificates were issued in December 2022.[4]FIPS 140-2 testing was still available until September 21, 2021 (later changed for applications already in progress to April 1, 2022[5]), creating an overlapping transition period of more than one year. FIPS 140-2 test reports that remain in the CMVP queue will still be granted validations after that date, but all FIPS 140-2 validations will be moved to the Historical List on September 21, 2026 regardless of their actual final validation date.[6] TheNational Institute of Standards and Technology(NIST) issued theFIPS 140Publication Series to coordinate the requirements and standards for cryptography modules that include both hardware and software components. Protection of a cryptographic module within a security system is necessary to maintain the confidentiality and integrity of the information protected by the module. This standard specifies the security requirements that will be satisfied by a cryptographic module. The standard provides four increasing qualitative levels of security intended to cover a wide range of potential applications and environments. The security requirements cover areas related to the secure design and implementation of a cryptographic module. These areas include cryptographic module specification; cryptographic module ports and interfaces; roles, services, and authentication; finite state model; physical security; operational environment; cryptographic key management; electromagnetic interference/electromagnetic compatibility (EMI/EMC); self-tests; design assurance; and mitigation of other attacks.[7] Federal agencies and departments can validate that the module in use is covered by an existingFIPS 140-1or FIPS 140-2 certificate that specifies the exact module name, hardware, software, firmware, and/or applet version numbers. The cryptographic modules are produced by theprivate sectororopen sourcecommunities for use by the U.S. government and other regulated industries (such as financial and health-care institutions) that collect, store, transfer, share and disseminatesensitive but unclassified(SBU) information. A commercial cryptographic module is also commonly referred to as ahardware security module(HSM). FIPS 140-2 defines four levels of security, simply named "Level 1" to "Level 4". It does not specify in detail what level of security is required by any particular application. Security Level 1 provides the lowest level of security. Basic security requirements are specified for a cryptographic module (e.g., at least one Approved algorithm or Approved security function shall be used). No specific physical security mechanisms are required in a Security Level 1 cryptographic module beyond the basic requirement for production-grade components. An example of a Security Level 1 cryptographic module is a personal computer (PC) encryption board. 
Security Level 2 improves upon the physical security mechanisms of a Security Level 1 cryptographic module by requiring features that show evidence of tampering, including tamper-evident coatings or seals that must be broken to attain physical access to the plaintext cryptographic keys andcritical security parameters(CSPs) within the module, or pick-resistant locks on covers or doors to protect against unauthorized physical access. In addition to the tamper-evident physical security mechanisms required at Security Level 2, Security Level 3 attempts to prevent the intruder from gaining access to CSPs held within the cryptographic module. Physical security mechanisms required at Security Level 3 are intended to have a high probability of detecting and responding to attempts at physical access, use or modification of the cryptographic module. The physical security mechanisms may include the use of strong enclosures and tamper-detection/response circuitry that zeroes all plaintext CSPs when the removable covers/doors of the cryptographic module are opened. Security Level 4 provides the highest level of security. At this security level, the physical security mechanisms provide a complete envelope of protection around the cryptographic module with the intent of detecting and responding to all unauthorized attempts at physical access. Penetration of the cryptographic module enclosure from any direction has a very high probability of being detected, resulting in the immediate deletion of all plaintext CSPs. Security Level 4 cryptographic modules are useful for operation in physically unprotected environments. Security Level 4 also protects a cryptographic module against a security compromise due to environmental conditions or fluctuations outside of the module's normal operating ranges for voltage and temperature. Intentional excursions beyond the normal operating ranges may be used by an attacker to thwart a cryptographic module's defenses. A cryptographic module is required to either include special environmental protection features designed to detect fluctuations and delete CSPs, or to undergo rigorous environmental failure testing to provide a reasonable assurance that the module will not be affected by fluctuations outside of the normal operating range in a manner that can compromise the security of the module. For Levels 2 and higher, the operating platform upon which the validation is applicable is also listed. Vendors do not always maintain their baseline validations. FIPS 140-2 establishes theCryptographic Module Validation Program(CMVP) as a joint effort by the NIST and theCommunications Security Establishment(CSE) for theGovernment of Canada Security programs overseen by NIST and CSE focus on working with government and industry to establish more secure systems and networks by developing, managing and promoting security assessment tools, techniques, services, and supporting programs for testing, evaluation and validation; and addresses such areas as: development and maintenance of security metrics, security evaluation criteria and evaluation methodologies, tests and test methods; security-specific criteria for laboratory accreditation; guidance on the use of evaluated and tested products; research to address assurance methods and system-wide security and assessment methodologies; security protocol validation activities; and appropriate coordination with assessment-related activities of voluntary industry standards bodies and other assessment regimes. 
The FIPS 140-2 standard is aninformation technologysecurity approval program for cryptographic modules produced by private sector vendors who seek to have their products certified for use in government departments and regulated industries (such as financial and health-care institutions) that collect, store, transfer, share and disseminatesensitive but unclassified(SBU) information. Tamper evident FIPS 140-2 security labels are utilized to deter and detect tampering of modules. All of the tests under the CMVP are handled by third-party laboratories that are accredited as Cryptographic Module Testing laboratories[8]by the National Voluntary Laboratory Accreditation Program (NVLAP).[9]Vendors interested in validation testing may select any of the twenty-one accredited labs. NVLAP accredited Cryptographic Modules Testing laboratories perform validation testing of cryptographic modules.[10][11]Cryptographic modules are tested against requirements found in FIPS PUB 140–2, Security Requirements for Cryptographic Modules. Security requirements cover 11 areas related to the design and implementation of a cryptographic module. Within most areas, a cryptographic module receives a security level rating (1–4, from lowest to highest), depending on what requirements are met. For other areas that do not provide for different levels of security, a cryptographic module receives a rating that reflects fulfillment of all of the requirements for that area. An overall rating is issued for the cryptographic module, which indicates: On a vendor's validation certificate, individual ratings are listed, as well as the overall rating. NIST maintains validation lists[12]for all of its cryptographic standards testing programs (past and present). All of these lists are updated as new modules/implementations receive validation certificates from NIST and CSE. Items on the FIPS 140-1 and FIPS 140-2 validation list reference validated algorithm implementations that appear on the algorithm validation lists. In addition to using a valid cryptographic module, encryption solutions are required to use cipher suites with approved algorithms or security functions established by the FIPS 140-2 Annex A to be considered FIPS 140-2 compliant. FIPS PUB 140-2 Annexes: Steven Marquess has posted a criticism that FIPS 140-2 validation can lead to incentives to keep vulnerabilities and other defects hidden. CMVP can decertify software in which vulnerabilities are found, but it can take a year to re-certify software if defects are found, so companies can be left without a certified product to ship. As an example, Steven Marquess mentions a vulnerability that was found, publicised, and fixed in the FIPS-certified open-source derivative of OpenSSL, with the publication meaning that the OpenSSL derivative was decertified. This decertification hurt companies relying on the OpenSSL-derivative's FIPS certification. By contrast, companies that had renamed and certified a copy of the open-source OpenSSL derivative were not decertified, even though they were basically identical, and did not fix the vulnerability. 
Steven Marquess therefore argues that the FIPS process inadvertently encourages hiding software's origins, to de-associate it from defects since found in the original, while potentially leaving the certified copy vulnerable.[13] In recent years, CMVP has taken steps to avoid the situation described by Marquess, moving validations to the Historical List based on the algorithms and functions contained in the module, rather than based on the provenance.[14]
https://en.wikipedia.org/wiki/FIPS_140-2
Wikibase is a set of software tools for working with versioned semi-structured data in a central repository. It is based upon JSON instead of the unstructured data of wikitext normally used in MediaWiki. It stores and organizes information that can be collaboratively edited and read by humans and by computers, translated into multiple languages, and shared with the rest of the world as part of the Linked Open Data (LOD) web.[3] It is primarily made up of two MediaWiki extensions: the Wikibase Repository, an extension for storing and managing data, and the Wikibase Client, which allows for the retrieval and embedding of structured data from a Wikibase repository. It was developed by Wikimedia Deutschland for, and is used by, Wikidata.[4]

The Wikibase data model consists of "entities", which include individual "items"; labels or identifiers to describe them (potentially in multiple languages); and semantic statements that attribute "properties" to the item. These properties may either be other items within the database, textual information, or other semi-structured information.[5]

Wikibase has a JavaScript-based user interface and a fully featured API, and provides exports of all or subsets of data in many formats. Projects using it include Wikidata, Wikimedia Commons,[6] Europeana's Project, Lingua Libre,[7] FactGrid, the OpenStreetMap wiki,[8] and wikibase.cloud.
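As a small illustration of reading structured data from a Wikibase repository, the sketch below queries the public Wikidata instance's wbgetentities API with Python's standard library and prints an item's English label and the number of properties for which it has statements. The endpoint and the example item ID Q42 refer to the public Wikidata instance; other Wikibase installations expose the same API at their own URLs.

```python
# Sketch: fetch an entity from a Wikibase repository (here, Wikidata's public
# API) and read its label and claims from the JSON data model.
import json
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "action": "wbgetentities",
    "ids": "Q42",            # example item ID on Wikidata
    "format": "json",
    "languages": "en",
})
url = "https://www.wikidata.org/w/api.php?" + params
req = urllib.request.Request(url, headers={"User-Agent": "wikibase-example/0.1"})

with urllib.request.urlopen(req) as response:
    entity = json.load(response)["entities"]["Q42"]

print("label:", entity["labels"]["en"]["value"])
print("properties with statements:", len(entity["claims"]))
```

The returned JSON mirrors the data model described above: labels keyed by language, and "claims" keyed by property identifiers whose values may be other items, text, or other semi-structured data.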
https://en.wikipedia.org/wiki/Wikibase
Astory within a story, also referred to as anembedded narrative, is aliterary devicein which a character within astorybecomes the narrator of a second story (within the first one).[1]Multiple layers of stories within stories are sometimes callednested stories. A play may have a brief play within it, such as in Shakespeare's playHamlet; a film may show the characters watching a short film; or a novel may contain a short story within the novel. A story within a story can be used in all types of narration includingpoems, andsongs. Stories within stories can be used simply to enhance entertainment for the reader or viewer, or can act as examples to teach lessons to other characters.[2]The inner story often has a symbolic and psychological significance for the characters in the outer story. There is often some parallel between the two stories, and the fiction of the inner story is used to reveal the truth in the outer story.[3]Often the stories within a story are used to satirize views, not only in the outer story, but also in the real world. When a story is told within another instead of being told as part of the plot, it allows the author to play on the reader's perceptions of the characters—the motives and thereliability of the storytellerare automatically in question.[2] Stories within a story may disclose the background of characters or events, tell of myths and legends that influence the plot, or even seem to be extraneous diversions from the plot. In some cases, the story within a story is involved in the action of the plot of the outer story. In others, the inner story is independent, and could either be skipped or stand separately, although many subtle connections may be lost. Often there is more than one level of internal stories, leading to deeply-nested fiction.Mise en abymeis theFrenchterm for a similar literary device (also referring to the practice inheraldryof placing the image of a small shield on a larger shield). The literary device of stories within a story dates back to a device known as a "frame story", where a supplemental story is used to help tell the main story. Typically, the outer story or "frame" does not have much matter, and most of the work consists of one or more complete stories told by one or more storytellers. The earliest examples of "frame stories" and "stories within stories" were in ancient Egyptian andIndian literature, such as the Egyptian "Tale of the Shipwrecked Sailor"[4]andIndian epicslike theRamayana,Seven Wise Masters,HitopadeshaandVikrama and Vethala. InVishnu Sarma'sPanchatantra, an inter-woven series of colorful animal tales are told with one narrative opening within another, sometimes three or four layers deep, and then unexpectedly snapping shut in irregular rhythms to sustain attention. In the epicMahabharata, theKurukshetra Waris narrated by a character inVyasa'sJaya, which itself is narrated by a character inVaisampayana'sBharata, which itself is narrated by a character in Ugrasrava'sMahabharata. BothThe Golden AssbyApuleiusandMetamorphosesbyOvidextend the depths of framing to several degrees. Another early example is theOne Thousand and One Nights(Arabian Nights), where the general story is narrated by an unknown narrator, and in this narration the stories are told byScheherazade. In many of Scheherazade's narrations, there are alsostories narrated, and even in some of these, there are some other stories.[5]An example of this is "The Three Apples", amurder mysterynarrated by Scheherazade. 
Within the story, after the murderer reveals himself, he narrates aflashbackof events leading up to the murder. Within this flashback, anunreliable narratortells a story to mislead the would-be murderer, who later discovers that he was misled after another character narrates the truth to him.[6]As the story concludes, the "Tale of Núr al-Dín Alí and his Son" is narrated within it. This perennially popular work can be traced back toArabic,Persian, and Indian storytelling traditions. Mary Shelley'sFrankensteinhas a deeply nested frame story structure, that features the narration of Walton, who records the narration of Victor Frankenstein, who recounts the narration of his creation, who narrates the story of a cabin dwelling family he secretly observes. Another classic novel with a frame story isWuthering Heights, the majority of which is recounted by the central family's housekeeper to a boarder. Similarly,Roald Dahl's storyThe Wonderful Story of Henry Sugaris about a rich bachelor who finds an essay written by someone who learned to "see" playing cards from the reverse side. The full text of this essay is included in the story, and itself includes a lengthy sub-story told as a true experience by one of the essay's protagonists, Imhrat Khan. Lewis Carroll'sAlicebooks,Alice's Adventures in Wonderland(1865) andThrough the Looking-Glass(1871), have several multiple poems that are mostly recited by several characters to the titular character. The most notable examples are "You Are Old, Father William","'Tis the Voice of the Lobster", "Jabberwocky", and "The Walrus and the Carpenter". Chaucer'sThe Canterbury TalesandBoccaccio'sDecameronare also classic frame stories. In Chaucer'sCanterbury Tales, the characters tell tales suited to their personalities and tell them in ways that highlight their personalities. The noble knight tells a noble story, the boring character tells a very dull tale, and the rude miller tells a smutty tale.Homer'sOdysseytoo makes use of this device;Odysseus' adventures at sea are all narrated by Odysseus to the court of kingAlcinousinScheria. Other shorter tales, many of them false, account for much of theOdyssey. Many modern children's story collections are essentiallyanthologyworks connected by this device, such asArnold Lobel'sMouse Tales,Paula Fox'sThe Little Swineherd, and Phillip and Hillary Sherlock'sEars and Tails and Common Sense. A well-known modern example of framing is the fantasy genre workThe Princess Bride(boththe bookandthe film). In the film, a grandfather is reading the story ofThe Princess Brideto his grandson. In the book, a more detailed frame story has a father editing a much longer (but fictive) work for his son, creating his own "Good Parts Version" (as the book called it) by leaving out all the parts that would bore or displease a young boy. Both the book and the film assert that the central story is from a book calledThe Princess Brideby a nonexistent author namedS. Morgenstern. In the Welsh novelAelwyd F'Ewythr Robert(1852) see byGwilym Hiraethog, a visitor to a farm in north Wales tells the story ofUncle Tom's Cabinto those gathered around the hearth. Sometimes a frame story exists in the same setting as the main story. On the television seriesThe Young Indiana Jones Chronicles, each episode was framed as though it were being told byIndywhen he was older (usually acted byGeorge Hall, but once byHarrison Ford). 
The same device of an adult narrator representing the older version of a young protagonist is used in the filmsStand by MeandA Christmas Story, and the television showThe Wonder YearsandHow I Met Your Mother. InThe Amory Wars, a tale told through the music ofCoheed and Cambria, tells a story for the first two albums but reveals that the story is being actively written by a character called the Writer in the third. During the album, the Writer delves into his own story and kills one of the characters, much to the dismay of the main character. The critically acclaimedBeatlesalbumSgt. Pepper's Lonely Hearts Club Bandis presented as a stage show by the fictional eponymous band, and one of its songs, "A Day in the Life", is in the form of a story within a dream. Similarly, theFugeesalbumThe Scoreis presented as the soundtrack to a fictional film, as are several other notableconcept albums, whileWyclef Jean'sThe Carnivalis presented as testimony at a trial. The majority ofAyreon's albums outline a sprawling, loosely interconnected science fiction narrative, as do the albums ofJanelle Monae. OnTom Waits's concept albumAlice(consisting of music he wrote for the musical of the same name), most of the songs are (very) loosely inspired by bothAlice in Wonderland, and the book's real-life author,Lewis Carroll, and inspirationAlice Liddell. The song "Poor Edward", however, is presented as a story told by a narrator aboutEdward Mordrake, and the song "Fish and Bird" is presented as a retold story that the narrator heard from a sailor. In his 1895historical novelPharaoh,Bolesław Prusintroduces a number of stories within the story, ranging in length fromvignettesto full-blown stories, many of them drawn fromancient Egyptiantexts, that further the plot, illuminatecharacters, and even inspire the fashioning of individual characters.Jan Potocki'sThe Manuscript Found in Saragossa(1797–1805) has an interlocking structure with stories-within-stories reaching several levels of depth. Theprovenanceof the story is sometimes explained internally, as inThe Lord of the RingsbyJ. R. R. Tolkien, which depicts theRed Book of Westmarch(a story-internal version of the book itself) as a history compiled by several of the characters. ThesubtitleofThe Hobbit("There and Back Again") is depicted as part of a rejected title of this book within a book, andThe Lord of the Ringsis a part of the final title.[7] An example of an interconnected inner story is "The Mad Trist" inEdgar Allan Poe'sFall of the House of Usher, where through somewhat mystical means the narrator's reading of the story within a story influences the reality of the story he has been telling, so that what happens in "The Mad Trist" begins happening in "The Fall of the House of Usher". Also, inDon QuixotebyMiguel de Cervantes, there are many stories within the story that influence the hero's actions (there are others that even the author himself admits are purely digressive). Most of the first part is presented as a translation of afound manuscriptby (fictional)Cide Hamete Benengeli. A commonly independentlyanthologisedstory is "The Grand Inquisitor" byDostoevskyfrom his longpsychological novelThe Brothers Karamazov, which is told by one brother to another to explain, in part, his view on religion and morality. It also, in a succinct way, dramatizes many of Dostoevsky's interior conflicts. 
An example of a "bonus material" style inner story is the chapter "The Town Ho's Story" inHerman Melville's novelMoby-Dick; that chapter tells a fully formed story of an excitingmutinyand contains many plot ideas that Melville had conceived during the early stages of writingMoby-Dick—ideas originally intended to be used later in the novel—but as the writing progressed, these plot ideas eventually proved impossible to fit around the characters that Melville went on tocreate and develop. Instead of discarding the ideas altogether, Melville wove them into a coherent short story and had the character Ishmael demonstrate his eloquence and intelligence bytelling the storyto his impressed friends. One of the most complicated structures of a story within a story was used byVladimir Nabokovin his novelThe Gift. There, as inner stories, function both poems and short stories by the main character Fyodor Cherdyntsev as well as the whole Chapter IV, a critical biography of NikolayChernyshevsky(also written by Fyodor). This novel is considered one of the first metanovels in literature. With the rise ofliterary modernism, writers experimented with ways in which multiple narratives might nest imperfectly within each other. A particularly ingenious example of nested narratives isJames Merrill's 1974modernist poem"Lost in Translation". InRabih Alameddine's novelThe Hakawati, orThe Storyteller, the protagonist describes coming home to the funeral of his father, one of a long line of traditional Arabic storytellers. Throughout the narrative, the author becomes hakawati (an Arabic word for a teller of traditional tales) himself, weaving the tale of the story of his own life and that of his family with folkloric versions of tales from Qur'an, the Old Testament, Ovid, and One Thousand and One Nights. Both the tales he tells of his family (going back to his grandfather) and the embedded folk tales, themselves embed other tales, often 2 or more layers deep. InSue Townsend'sAdrian Mole: The Wilderness Years,Adrianwrites the bookLo! The Flat Hills of My Homeland, in which the character Jake Westmorland writes a book calledSparg of Kronk, where the character Sparg writes a book with no language. InAnthony Horowitz'sMagpie Murders, a significant proportion of the book features a fictional but authentically formatted mystery novel by Alan Conway, titled 'Magpie Murders'. The secondary novel ends before its conclusion returning the narrative to the original, and primary, story where the protagonist and reviewer of the book attempts to find the final chapter. As this progresses characters and messages within the fictionalMagpie Murdersmanifest themselves within the primary narrative and the final chapter's content reveals the reason for its original absence. Dreams are a common way of including stories inside stories, and can sometimes go several levels deep. Both the bookThe Arabian Nightmareand the curse of "eternal waking" from theNeil GaimanseriesThe Sandmanfeature an endless series of waking from one dream into another dream. InCharles Maturin's novelMelmoth the Wanderer, the use of vast stories-within-stories creates a sense of dream-like quality in the reader. The 2023 Christian fictional novelJust OncebyKaren Kingsburyfeatures a series of three nested stories, all centering around the main characters of Hank and Irvel Myers:[citation needed] This structure is also found in classic religious and philosophical texts. 
The structure ofThe SymposiumandPhaedo, attributed toPlato, is of a story within a story within a story. In the ChristianBible, thegospelsare accounts of the life and ministry ofJesus. However, they also include within them theparablesthat Jesus told. In more modern philosophical works,Jostein Gaarder's books often feature this device. Examples areThe Solitaire Mystery, where the protagonist receives a small book from a baker, in which the baker tells the story of a sailor who tells the story of another sailor, andSophie's World, about a girl who is actually a character in a book that is being read by Hilde, a girl in another dimension. Later on in the book Sophie questions this idea, and realizes that Hilde too could be a character in a story that in turn is being read by another. Mahabharata, an Indian epic that is also the world's longest epic, has a nested structure.[8] The experimental modernist works that incorporate multiple narratives into one story are quite often science fiction or science fiction influenced. These include most of the various novels written by the American authorKurt Vonnegut. Vonnegut includes the recurring characterKilgore Troutin many of his novels. Trout acts as the mysteriousscience fictionwriter who enhances the morals of the novels through plot descriptions of his stories. Books such asBreakfast of ChampionsandGod Bless You, Mr. Rosewaterare sprinkled with these plot descriptions.Stanisław Lem'sTale of the Three Storytelling Machines of King GeniusfromThe Cyberiadhas several levels of storytelling. All levels tell stories of the same person, Trurl. House of Leavesis the tale of a man who finds a manuscript telling the story of a documentary that may or may not have ever existed, contains multiple layers of plot. The book includes footnotes and letters that tell their own stories only vaguely related to the events in the main narrative of the book, and footnotes for fake books. Robert A. Heinlein's later books (The Number of the Beast,The Cat Who Walks Through WallsandTo Sail Beyond the Sunset) propose the idea that every real universe is a fiction in another universe. Thishypothesisenables many writers who are characters in the books to interact with their own creations.Margaret Atwood's novelThe Blind Assassinis interspersed with excerpts from a novel written by one of the main characters; the novel-within-a-novel itself contains ascience fictionstory written by one ofthatnovel's characters. InPhilip K. Dick's novelThe Man in the High Castle, each character comes into interaction with a book calledThe Grasshopper Lies Heavy, which was written by the Man in the High Castle. As Dick's novel details a world in which theAxis Powers of World War IIhadsucceeded in dominating the known world, the novel within the novel details an alternative to this history in which the Allies overcome the Axis and bring stability to the world – a victory which itself is quite different from real history. InRed Orc's RagebyPhilip J. Farmer, a doublyrecursive methodis used to intertwine its fictional layers. This novel is part of a science fiction series, theWorld of Tiers. Farmer collaborated in the writing of this novel with an American psychiatrist, A. James Giannini, who had previously used theWorld of Tiersseries in treating patients in group therapy. During these therapeutic sessions, the content and process of the text and novelist was discussed rather than the lives of the patients. In this way subconscious defenses could be circumvented. 
Farmer took the real life case-studies and melded these with adventures of his characters in the series.[9] TheQuantum LeapnovelKnights of the Morningstaralso features a character who writes a book by that name. InMatthew Stover'sStar WarsnovelShatterpoint, the protagonistMace Windunarrates the story within his journal, while the main story is being told from thethird-person limitedpoint of view. SeveralStar Trektales are stories or events within stories, such asGene Roddenberry'snovelizationofStar Trek: The Motion Picture,J. A. Lawrence'sMudd's Angels,John M. Ford'sThe Final Reflection,Margaret Wander Bonanno'sStrangers from the Sky(which adopts the conceit that it is a book from the future by an author called Gen Jaramet-Sauner), and J. R. Rasmussen's "Research" in the anthologyStar Trek: Strange New WorldsII.Steven Barnes's novelization of theStar Trek: Deep Space Nineepisode "Far Beyond the Stars" partners withGreg Cox'sThe Eugenics Wars: The Rise and Fall of Khan Noonien Singh(Volume Two) to tell us that the fictional story "Far Beyond the Stars" (whose setting and cast closely resembleDeep Space Nine)—and, by extension, all ofStar Trekitself—is the creation of 1950s writer Benny Russell. The bookCloud Atlas(later adapted into a film byThe WachowskisandTom Tykwer) consisted of six interlinked stories nested inside each other in a Russian doll fashion. The first story (that of Adam Ewing in the 1850s befriending an escaped slave) is interrupted halfway through and revealed to be part of a journal being read by composer Robert Frobisher in 1930s Belgium. His own story of working for a more famous composer is told in a series of letters to his lover Rufus Sixsmith, which are interrupted halfway through and revealed to be in the possession of an investigative journalist named Luisa Rey and so on. Each of the first five tales are interrupted in the middle, with the sixth tale being told in full, before the preceding five tales are finished in reverse order. Each layer of the story either challenges the veracity of the previous layer, or is challenged by the succeeding layer. Presuming each layer to be a true telling within the overall story, a chain of events is created linking Adam Ewing's embrace of the abolitionist movement in the 1850s to the religious redemption of a post-apocalyptic tribal man over a century after the fall of modern civilization. The characters in each nested layer take inspiration or lessons from the stories of their predecessors in a manner that validates a belief stated in the sixth tale that "Our lives are not our own. We are bound to others, past and present and by each crime, and every kindness, we birth our future." The Crying of Lot 49byThomas Pynchonhas several characters seeing a play calledThe Courier's Tragedyby the fictitiousJacobeanplaywrightRichard Wharfinger. The events of the play broadly mirror those of the novel and give the character Oedipa Maas a greater context to consider her predicament; the play concerns a feud between two rival mail distribution companies, which appears to be ongoing to the present day, and in which, if this is the case, Oedipa has found herself involved. As inHamlet, the director makes changes to the original script; in this instance, a couplet that was added, possibly by religious zealots intent on giving the play extra moral gravity, are said only on the night that Oedipa sees the play. 
From what Pynchon relates, this is the only mention in the play of Thurn and Taxis' rivals' name—Trystero—and it is the seed for the conspiracy that unfurls. A significant portion ofWalter Moers'Labyrinth of Dreaming Booksis anekphrasison the subject of an epic puppet theater presentation. Another example is found inSamuel Delany'sTrouble on Triton, which features a theater company that produces elaborate staged spectacles for randomly selected single-person audiences. Plays produced by the "Caws of Art" theater company also feature in Russell Hoban's modern fable,The Mouse and His Child.Raina Telgemeier's best-sellingDramais a graphic novel about a middle-school musical production, and the tentative romantic fumblings of its cast members. InManuel Puig'sKiss of the Spider Woman, ekphrases on various old movies, some real, and some fictional, make up a substantial portion of the narrative. InPaul Russell'sBoys of Life, descriptions of movies by director/antihero Carlos (loosely inspired by controversial directorPier Paolo Pasolini) provide a narrative counterpoint and add a touch of surrealism to the main narrative. They additionally raise the question of whether works of artistic genius justify or atone for the sins and crimes of their creators. Auster'sThe Book of Illusions(2002) and Theodore Roszak'sFlicker(1991) also rely heavily on fictional films within their respective narratives. This dramatic device was probably first used byThomas KydinThe Spanish Tragedyaround 1587, where the play is presented before an audience of two of the characters, who comment upon the action.[10][11]From references in other contemporary works, Kyd is also assumed to have been the writer of an early, lost version ofHamlet(the so-calledUr-Hamlet), with a play-within-a-play interlude.[12]William Shakespeare'sHamletretains this device by having Hamlet ask some strolling players to performThe Murder of Gonzago. The action and characters inThe Murdermirror the murder of Hamlet's father in the main action, and Prince Hamlet writes additional material to emphasize this. Hamlet wishes to provoke the murderer, his uncle, and sums this up by saying "the play's the thing wherein I'll catch the conscience of the king." Hamlet calls this new playThe Mouse-trap(a title thatAgatha Christielater took for the long-running playThe Mousetrap). Christie's work was parodied in Tom Stoppard'sThe Real Inspector Hound, in which two theater critics are drawn into the murder mystery they are watching. The audience is similarly absorbed into the action in Woody Allen's playGod, which is about two failed playwrights in Ancient Greece. The phrase "The Conscience of the King" also became the title of aStar Trekepisode featuring a production of Hamlet which leads to the exposure of a murderer (although not a king). The playI Hate Hamletand the movieA Midwinter's Taleare about a production ofHamlet, which in turn includes a production ofThe Murder of Gonzago, as does theHamlet-based filmRosencrantz & Guildenstern Are Dead, which even features a third-level puppet theatre version within their play. Similarly, inAnton Chekhov'sThe Seagullthere are specific allusions toHamlet: in the first act a son stages a play to impress his mother, a professional actress, and her new lover; the mother responds by comparing her son to Hamlet. Later he tries to come between them, as Hamlet had done with his mother and her new husband. 
The tragic developments in the plot follow in part from the scorn the mother shows for her son's play.[13] Shakespeare adopted the play-within-a-play device for many of his other plays as well, includingA Midsummer Night's DreamandLove's Labours Lost. Almost the whole ofThe Taming of the Shrewis a play-within-a-play, presented to convinceChristopher Sly, a drunken tinker, that he is a nobleman watching a private performance, but the device has no relevance to the plot (unless Katharina's subservience to her "lord" in the last scene is intended to strengthen the deception against the tinker[14]) and is often dropped in modern productions. The musicalKiss Me, Kateis about the production of a fictitious musical,The Taming of the Shrew, based on the comedyThe Taming of the ShrewbyWilliam Shakespeare, and features several scenes from it.Pericles, Prince of Tyredraws in part on the 14th-centuryConfessio Amantis(itself a frame story) byJohn Gower, and Shakespeare has the ghost of Gower "assume man's infirmities" to introduce his work to the contemporary audience and comment on the action of the play.[15] InFrancis Beaumont'sKnight of the Burning Pestle(c. 1608) a supposed common citizen from the audience, actually a "planted" actor, condemns the play that has just started and "persuades" the players to present something about a shopkeeper. The citizen's "apprentice" then acts, pretending to extemporise, in the rest of the play. This is a satirical tilt at Beaumont's playwright contemporaries and their current fashion for offering plays about London life.[16] The operaPagliacciis about a troupe of actors who perform a play about marital infidelity that mirrors their own lives,[17]and composerRichard Rodney Bennettandplaywright-librettistBeverley Cross'sThe Mines of Sulphurfeatures a ghostly troupe of actors who perform a play about murder that similarly mirrors the lives of their hosts, from whom they depart, leaving them with the plague as nemesis.[18]John Adams'Nixon in China(1985–1987) features a surreal version ofMadam Mao'sRed Detachment of Women, illuminating the ascendance of human values over the disillusionment of high politics in the meeting.[19] InBertolt Brecht'sThe Caucasian Chalk Circle, a play is staged as aparableto villagers in theSoviet Unionto justify the re-allocation of their farmland: the tale describes how a child is awarded to a servant-girl rather than its natural mother, an aristocrat, as the woman most likely to care for it well. This kind of play-within-a-play, which appears at the beginning of the main play and acts as a "frame" for it, is called an "induction". Brecht's one-act playThe Elephant Calf(1926) is a play-within-a-play performed in the foyer of the theatre during hisMan Equals Man. InJean Giraudoux's playOndine, all of act two is a series of scenes within scenes, sometimes two levels deep. This increases thedramatic tensionand also makes more poignant the inevitable failure of the relationship between themortalHans andwater spriteOndine. The Two-Character PlaybyTennessee Williamshas a concurrent double plot with the convention of a play within a play. Felice and Clare are siblings and are both actor/producers touringThe Two-Character Play. They have supposedly been abandoned by their crew and have been left to put on the play by themselves. The characters in the play are also brother and sister and are also named Clare and Felice. 
The Mysteries, a modern reworking of the medievalmystery plays, remains faithful to its roots by having the modern actors play the sincere, naïve tradesmen and women as they take part in the original performances.[20] Alternatively, a play might be about the production of a play, and include the performance of all or part of the play, as inNoises Off,A Chorus of Disapproval, orLilies. Similarly, the musicalMan of La Manchapresents the story of Don Quixote as an impromptu play staged in prison byQuixote's author,Miguel de Cervantes. In most stagings of the musicalCats, which include the song "Growltiger's Last Stand" – a recollection of an old play by Gus the Theatre Cat – the character of LadyGriddlebonesings "The Ballad of Billy McCaw". (However, many productions of the show omit "Growltiger's Last Stand", and "The Ballad of Billy McCaw" has at times been replaced with a mock aria, so this metastory is not always seen.) Depending on the production, there is another musical scene called "The Awful Battle of the Pekes and the Pollicles" where the Jellicles put on a show for their leader. InLestat: The Musical, there are three plays within the play. The first occurs when Lestat visits his childhood friend Nicolas, who works in a theater, where Lestat discovers his love of theater; the other two occur when the Theater of the Vampires performs. One is used as a plot mechanism to explain the vampire god, Marius, which sparks Lestat's interest in finding him. A play within a play occurs in the musicalThe King and I, where Princess Tuptim and the royal dancers give a performance ofSmall House of Uncle Thomas(orUncle Tom's Cabin) to their English guests. The play mirrors Tuptim's situation, as she wishes to run away from slavery to be with her lover, Lun Tha. In stagings ofDina Rubina's playAlways the Same Dream, the story is about staging a school play based on a poem byPushkin. Joseph Heller's 1967 playWe Bombed in New Havenis about actors engaged in a play about military airmen; the actors themselves become at times unsure whether they are actors or actual airmen. The 1937 musicalBabes in Armsis about a group of kids putting on a musical to raise money. The central plot device was retained for the popular 1939 film version withJudy GarlandandMickey Rooney. A similar plot was recycled for the filmsWhite ChristmasandThe Blues Brothers. The 1946 film noirThe Locketcontains a nestedflashbackstructure, with a screenplay bySheridan Gibneybased on the story "What Nancy Wanted" by Norma Barzman. TheFrançois TruffautfilmDay for Nightis about the making of a fictitious movie calledMeet Pamela(Je vous présente Pamela) and shows the interactions of the actors as they are making this movie about a woman who falls for her husband's father. The story ofPamelainvolves lust, betrayal, death, sorrow, and change, events that are mirrored in the experiences of the actors portrayed inDay for Night. There is a wealth of other movies that revolve around the film industry itself, even if not centering exclusively on one nested film. These include the darkly satirical classicSunset Boulevardabout an aging star and her parasitic victim, and the Coen Brothers' farceHail, Caesar! The script toKarel Reisz's movieThe French Lieutenant's Woman(1981), written byHarold Pinter, is a film-within-a-film adaptation ofJohn Fowles's book. In addition to the Victorian love story of the book, Pinter creates a present-day background story that shows a love affair between the main actors.
The Muppet Moviebegins withthe Muppetssitting down in a theater to watch the eponymous movie, whichKermit the Frogclaims to be a semi-biographical account of how they all met. InBuster Keaton'sSherlock Jr., Keaton's protagonist actually enters into a film while it is playing in a cinema, as does the main character in theArnold SchwarzeneggerfilmThe Last Action Hero. A similar device is used in the music video for the song "Take On Me" byA-ha, which features a woman entering a pencil sketch. Conversely,Woody Allen'sPurple Rose of Cairois about a film character exiting the film to interact with the real world. Allen's earlier filmPlay it Again, Samfeatured liberal use of characters, dialogue and clips from the film classicCasablancaas a central device. The 2002Pedro AlmodóvarfilmTalk to Her(Hable con ella) has the chief character Benigno tell a story calledThe Shrinking Loverto Alicia, a long-term comatose patient whom Benigno, a male nurse, is assigned to care for. The film presentsThe Shrinking Loverin the form of a black-and-white silent melodrama. To prove his love to a scientist girlfriend,The Shrinking Loverprotagonist drinks a potion that makes him progressively smaller. The resulting seven-minute scene, which is readily intelligible and enjoyable as a stand-alone short subject, is considerably more overtly comic than the rest ofTalk to Her—the protagonist climbs giant breasts as if they were rock formations and even ventures his way inside a (compared to him) gigantic vagina. Critics have noted thatThe Shrinking Loveressentially is a sex metaphor. Later inTalk to Her, the comatose Alicia is discovered to be pregnant and Benigno is sentenced to jail for rape.The Shrinking Loverwas named Best Scene of 2002 in theSkandies, an annual survey of online cinephiles and critics invited each year by critic Mike D'Angelo.[21] Tropic Thunder(2008) is acomedy filmrevolving around a group ofprima donnaactors making aVietnam Warfilm (itself also namedTropic Thunder) when their fed-up writer and director decide to abandon them in the middle of the jungle, forcing them to fight their way out. The concept was perhaps[original research?]inspired by the 1986 comedyThree Amigos, where three washed-up silent film stars are expected to live out a real-life version of their old hit movies. The same idea of life being forced to imitate art is also reprised in theStar TrekparodyGalaxy Quest. The first episode of theanimeseriesThe Melancholy of Haruhi Suzumiyaconsists almost entirely of a poorly made film that the protagonists created, complete withKyon's typical, sarcastic commentary. Chuck Jones's 1953cartoonDuck AmuckshowsDaffy Ducktrapped in a cartoon that an unseen animator repeatedly manipulates. At the end, it is revealed that the whole cartoon was being controlled byBugs Bunny. TheDuck Amuckplot was essentially replicated in one of Jones' later cartoons,Rabbit Rampage(1955), in which Bugs Bunny turns out to be the victim of the sadistic animator (Elmer Fudd). A similar plot was also included in an episode ofNew Looney Tunes, in which Bugs is the victim, Daffy is the animator, and it was made on a computer instead of a pencil and paper. In 2007, theDuck Amucksequence was parodied onDrawn Together("Nipple Ring-Ring Goes to Foster Care"). All feature-length films byJörg ButtgereitexceptSchrammfeature a film within the film. InNekromantik, the protagonist goes to the cinema to see the fictional slasher filmVera. 
InDer Todesking, one of the characters watches a video of the fictional Nazi exploitation filmVera – Todesengel der Gestapoand inNekromantik 2, the characters go to see a film calledMon déjeuner avec Vera, which is a parody ofLouis Malle'sMy Dinner with André. Quentin Tarantino'sInglourious Basterdsdepicts aNazi propagandafilm calledNation's Pride, which glorifies a soldier in the German army.Nation's Prideis directed byEli Roth. Joe Dante'sMatineedepictsMant, an early-1960s sci-fi/horror movie about a man who turns into anant. In one scene, the protagonists see aDisney-style family movie calledThe Shook-Up Shopping Cart. The 2002 martial arts epicHeropresented the same narrative several different times, as recounted by different storytellers, but with both factual and aesthetic differences. Similarly, in the whimsical 1988Terry GilliamfilmThe Adventures of Baron Munchausen, and the 2003Tim BurtonfilmBig Fish, the bulk of the film is a series of stories told by an (extremely) unreliable narrator. In the 2006 Tarsem filmThe Fall, an injured silent-movie stuntman tellsheroic fantasystories to a little girl with a broken arm to pass time in the hospital, which the film visualizes and presents with the stuntman's voice becoming voiceover narration. The fantasy tale bleeds back into and comments on the film's "present-tense" story. There are often incongruities based on the fact that the stuntman is an American and the girl Persian—the stuntman's voiceover refers to "Indians", "a squaw" and "a teepee", but the visuals show a Bollywood-style devi and a Taj Mahal-like castle. The same conceit of an unreliable narrator was used to very different effect in the 1995 crime dramaThe Usual Suspects(which garnered an Oscar forKevin Spacey's performance). Walt Disney's 1946 live-action drama filmSong of the Southhas three animated sequences, all based on theBr'er Rabbitstories, told as moral fables byUncle Remus(James Baskett) to seven-year-old Johnny (Bobby Driscoll) and his friends Ginny (Luana Patten) and Toby (Glenn Leedy). The seminal 1950 Japanese filmRashomon, based on the Japanese short story "In a Grove" (1921), utilizes theflashback-within-a-flashback technique. The story unfolds in flashback as the four witnesses in the story—the bandit, the murderedsamurai, his wife, and the nameless woodcutter—recount the events of one afternoon in a grove. But it is also a flashback within a flashback, because the accounts of the witnesses are being retold by a woodcutter and a priest to a ribald commoner as they wait out a rainstorm in a ruined gatehouse. The filmInceptionhas a deeply nested structure that is itself part of the setting, as the characters travel deeper and deeper into layers of dreams within dreams. Similarly, in the beginning of the music video for theMichael Jacksonsong "Thriller", the heroine is terrorized by her monster boyfriend in what turns out to be a film within a dream. The filmThe Grand Budapest Hotelhas four layers of narration: starting with a young girl at the author's memorial reading his book, it cuts to the old author in 1985 telling of an incident in 1968 when he, as a young author, stayed at the hotel and met the owner, old Zero. He was then told the story of young Zero and M. Gustave, from 1932, which makes up most of the narrative. The 2025 filmDog Manis presented as a film within a comic in theDog Manseries. The 2001 filmMoulin Rouge!features a fictitious musical within the film, called "Spectacular Spectacular".
The 1942Ernst LubitschcomedyTo Be or Not to Beconfuses the audience in the opening scenes with a play, "The Naughty Nazis", about Adolf Hitler which appears to be taking place within the actual plot of the film. Thereafter, the acting company players serve as the protagonists of the film and frequently use acting/costumes to deceive various characters in the film.Hamletalso serves as an important throughline in the film, as suggested by the title.Laurence Oliviersets the opening scene of his 1944 film ofHenry Vin thetiring roomof the oldGlobe Theatreas the actors prepare for their roles on stage. The early part of the film follows the actors in these "stage" performances and only later does the action almost imperceptibly expand to the full realism of theBattle of Agincourt. By way of increasingly more artificial sets (based on mediaeval paintings) the film finally returns to The Globe. Mel Brooks' filmThe Producersrevolves around a scheme to make money by producing a disastrously bad Broadway musical,Springtime for Hitler.Ironically the film itself was later made into its own Broadway musical (although a more intentionally successful one). TheOutkastmusic video for the song "Roses" is a short film about a high school musical. InDiary of a Wimpy Kid, the middle-schoolers put on a play ofThe Wizard of Oz, whileHigh School Musicalis a romantic comedy about the eponymous musical itself. A high school production is also featured in the gay teen romantic comedyLove, Simon. A 2012 Italian film,Caesar Must Die, stars real-life Italian prisoners who rehearse Shakespeare'sJulius CaesarinRebibbiaprison playingfictionalItalian prisoners rehearsing the same play in the same prison. In addition, the film itself becomes aJulius Caesaradaption of sorts as the scenes are frequently acted all around the prison, outside of rehearsals, and the prison life becomes indistinguishable from the play.[22] The main plot device inRepo! The Genetic Operais an opera which is going to be held the night of the events of the film. All of the principal characters of the film play a role in the opera, though the audience watching the opera is unaware that some of the events portrayed are more than drama. The 1990 biopicKorczak, about the last days of a Jewish children's orphanage in Nazi occupied Poland, features an amateur production ofRabindranath Tagore'sThe Post Office, which was selected by the orphanage's visionary leader as a way of preparing his charges for their own impending death. That same production is also featured in the stage playKorczak's Children,also inspired by the same historical events. The 1973 filmThe National Health, an adaptation of the 1969 playThe National HealthbyPeter Nichols, features a send-up of a typical American hospitalsoap operabeing shown on a television situated in an underfunded, unmistakably BritishNHShospital. TheJim CarreyfilmThe Truman Showis about a person who grows to adulthood without ever realizing that he is the unwitting hero of the immersive eponymous television show. InToy Story 2, the lead characterWoodylearns that he is based on the lead character of the same name of a 1950sWesternshow known asWoody's Roundup, which was seemingly cancelled due to the rise ofscience fiction, though this is eventually debunked after the final episode of the show can be seen playing. 
The first example of a video game within a video game is almost certainlyTim Stryker's 1980s era[vague]text-only gameFazuul(also the world's first online multiplayer game), in which one of the objects that the player can create is a minigame. Another early use of this trope was inCliff Johnson's 1987 hitThe Fool's Errand, a thematically linked narrative puzzle game, in which several of the puzzles were semi-independent games played against NPCs. Power Factorhas been cited as a rare example of a video game in which the entire concept is a video game within a video game: The player takes on the role of a character who is playing a "Virtual Reality Simulator", in which he in turn takes on the role of the hero Redd Ace.[23]The.hackfranchise also gives the concept a central role. It features a narrative in which internet advancements have created an MMORPG franchise called The World. Protagonists Kite andHaseotry to uncover the mysteries of the events surrounding The World. Characters in.hackare aware that they are video game characters. More commonly, however, the video game within a video game device takes the form of mini-games that are non-plot-oriented, and optional to the completion of the game. For example, in theYakuzaandShenmuefranchises, there are playable arcade machines featuring other Sega games that are scattered throughout the game world. InFinal Fantasy VIIthere are several video games that can be played in an arcade in the Gold Saucer theme park. InAnimal Crossing, the player can acquire individual NES emulations through various means and place them within their house, where they are playable in their entirety. When placed in the house, the games take the form of aNintendo Entertainment System. InFallout 4andFallout 76, the protagonist can find several cartridges throughout the wasteland that can be played on their pip-boy (an electronic device that exists only in the world of the game) or any terminal computer. InCeleste, there is a hidden room in which the protagonist can play the originalPICO-8prototype of the game. In theRemedyvideo game titledMax Payne, players can chance upon a number of ongoing television shows when activating or happening upon various television sets within the game environs, depending on where they are within the unfolding game narrative. Among them areLords & Ladies,Captain Baseball Bat Boy,Dick Justiceand the pinnacle television serialAddress Unknown– heavily inspired byDavid Lynch-style film narrative, particularlyTwin Peaks,Address Unknownsometimes prophesies events or character motives yet to occur in the Max Payne narrative. InGrand Theft Auto IV, the player can watch several TV channels which include many programs: reality shows, cartoons, and even game shows.[24] Terrance & PhillipfromSouth Parkcomments on the levels of violence and acceptable behaviour in the media and allow criticism of the outer cartoon to be addressed in the cartoon itself. Similarly, on the long running animated sitcomThe Simpsons, Bart's favorite cartoon,Itchy and Scratchy(a parody ofTom & Jerry), often echoes the plotlines of the main show.The Simpsonsalso parodied this structure with numerous 'layers' of sub-stories in the Season 17 episode "The Seemingly Never-Ending Story". The animated seriesSpongeBob SquarePantsfeatures numerous fictional shows, most notably,The Adventures of Mermaid Man and Barnacle Boy,which stars the titular elderly superheroesMermaid Man(Ernest Borgnine) andBarnacle Boy(Tim Conway). 
On the showDear White People, theScandalparodyDefamationoffers an ironic commentary on the main show's theme of interracial relationships. Similarly, each season of theHBOshowInsecurehas featured a different fictional show, including the slavery-era soap operaDue North, the rebooted black 1990s sitcomKev'yn,and the investigative documentary seriesLooking for LaToya. TheIrishtelevision seriesFather Tedfeatures a television show,Father Ben, which has characters and storylines almost identical to that ofFather Ted. The television shows30 Rock,Studio 60 on the Sunset Strip,Sonny with a Chance, andKappa Mikeyfeature a sketch show within the TV show. An extended plotline on the semi-autobiographical sitcomSeinfelddealt with the main characters developing a sitcom about their lives. The gag was reprised onCurb Your Enthusiasm, another semi-autobiographical show by and aboutSeinfeldco-creator Larry David, when the long-anticipatedSeinfeldreunion was staged entirely inside the new show. The "USS Callister" episode of theBlack Mirroranthology television series is about a man who is obsessed with aStar Trek-like show and recreates it as part of a virtual reality game. The concept of a film within a television series is employed in theMacrossuniverse.The Super Dimension Fortress Macross: Do You Remember Love?(1984) was originally intended as an alternative theatrical re-telling of the television seriesThe Super Dimension Fortress Macross(1982), but was later "retconned" into the Macrosscanonas a popular film within the television seriesMacross 7(1994). TheStargate SG-1episode "Wormhole X-Treme!" features a fictional TV show with an almost identical premise toStargate SG-1. A later episode, "200", depicts ideas for a possible reboot ofWormhole X-Treme!, including using a "younger and edgier" cast, or evenThunderbirds-style puppets. TheGleeepisode "Extraordinary Merry Christmas" features the members of New Directions starring in a black-and-white Christmas television special that is presented within the episode itself. The special is a homage to bothStar Wars Holiday Specialand the "Judy Garland Christmas Special". The British TV seriesDon't Hug Me I'm Scared, based on theweb seriesDon't Hug Me I'm Scared, is notable for being apuppet showthat includes a fictionalclaymationTV series within the show:Grolton & Hovris, a parody ofWallace and Gromit. Seinfeldhad a number of reoccurring fictional films, including a sci-fi film calledThe Flaming Globes of Sigmundand, most notably,Rochelle, Rochelle, a parody of artsy but exploitative foreign films. The trippy, metaphysically loopy thrillerDeath Castleis a central element of theMaster of Noneepisode "New York, I Love You". Theseries finaleofBarryfeatures a biopic of the titular character which was calledThe Mask Collector,and its production served as the catalyst for the last 4 episodes of Barry's final season. Stories inside stories can allow for genre changes.Arthur Ransomeuses the device to let his young characters in theSwallows and Amazonsseriesof children's books, set in the recognisable everyday world, take part in fantastic adventures of piracy in distant lands: two of the twelve books,Peter DuckandMissee Lee(and some would includeGreat Northern?as a third), are adventures supposedly made up by the characters.[25]Similarly, the film version ofChitty Chitty Bang Banguses a story within a story format to tell a purely fantastic fairy tale within a relatively more realistic frame-story. 
The film version ofThe Wizard of Ozdoes the same thing by making its inner story into a dream. Lewis Carroll's celebratedAlicebooks use the same device of a dream as an excuse for fantasy, while Carroll's less well-knownSylvie and Brunosubverts the trope by allowing the dream figures to enter and interact with the "real" world. In each episode ofMister Rogers' Neighborhood, the main story was realistic fiction, with live action human characters, while an inner story took place in theNeighborhood of Make-Believe, in which most characters were puppets, except Lady Aberlin and occasionally Mr. McFeely, played byBetty AberlinandDavid Newellin both realms. Some stories feature what might be called a literary version of theDroste effect, where an image contains a smaller version of itself (also a common feature in manyfractals). An early version is found in an ancient Chinese proverb, in which an old monk situated in a temple found on a high mountain recursively tells the same story to a younger monk about an old monk who tells a younger monk a story regarding an old monk sitting in a temple located on a high mountain, and so on.[26]The same concept is at the heart ofMichael Ende's classic children's novelThe Neverending Story, which prominently features a book of the same title. This is later revealed to be the same book the audience is reading, when it begins to be retold again from the beginning, thus creating an infinite regression that features as a plot element. Another story that includes versions of itself isNeil Gaiman'sThe Sandman: Worlds' Endwhich contains several instances of multiple storytelling levels, includingCerements(issue #55) where one of the inmost levels corresponds to one of the outer levels, turning the story-within-a-story structure into an infinite regression. Jesse Ball'sThe Way Through Doorsfeatures a deeply nested set of stories within stories, most of which explore alternate versions of the main characters. The frame device is that the main character is telling stories to a woman in a coma (similar to Almodóvar'sTalk to Her, mentioned above). Richard Adams' classic Watership Down includes several memorable tales about the legendary prince of rabbits, El-Ahraira, as told by master storyteller, Dandelion. Samuel Delany's great surrealist sci-fi classicDhalgrenfeatures the main character discovering a diary apparently written by a version of himself, with incidents that usually reflect, but sometimes contrast with the main narrative. The last section of the book is taken up entirely by journal entries, about which readers must choose whether to take as completing the narrator's own story. Similarly, inKiese Laymon'sLong Division, the main character discovers a book, also calledLong Division, featuring what appears to be himself, except as living twenty years earlier. The title book in Charles Yu'sHow to Live Safely in a Science Fictional Universeexists within itself as a stable creation of a closed loop in time. Likewise, in the Will Ferrell comedyStranger than Fictionthe main character discovers he is a character in a book that (along with its author) also exists in the same universe. 
The 1979 bookGödel, Escher, BachbyDouglas Hofstadterincludes a narrative betweenAchilles and the Tortoise(characters borrowed fromLewis Carroll, who in turn borrowed them fromZeno), and within this story they find the book "Provocative Adventures of Achilles and the Tortoise Taking Place in Sundry Spots of the Globe", which they begin to read, the Tortoise taking the part of the Tortoise, and Achilles taking the part of Achilles. Within this self-referential narrative, the two characters find the book "Provocative Adventures of Achilles and the Tortoise Taking Place in Sundry Spots of the Globe", which they begin to read, this time each taking the other's part. The 1979 experimental novelIf on a winter's night a travelerbyItalo Calvinofollows a reader, addressed in the second person, trying to read the very same book, but being interrupted by ten other recursively nested incomplete stories. Robert Altman's satirical Hollywood noirThe Playerends with theantiherobeing pitched a movie version of his own story, complete with an unlikely happy ending. The long-running musicalA Chorus Linedramatizes its own creation, and the life stories of its own original cast members. The famous final number does double duty as the showstopper for both the musical the audience is watching and the one the characters are appearing in.Austin Powers in Goldmemberbegins with an action film opening, which turns out to be a sequence being filmed bySteven Spielberg. Near the ending, the events of the film itself are revealed to be a movie being enjoyed by the characters. Jim Henson'sThe Muppet Movieis framed as a screening of the movie itself, and the screenplay for the movie is present inside the movie, which ends with an abstracted, abbreviated re-staging of its own events. The 1985 Tim Burton filmPee-Wee's Big Adventureends with the main characters watching a film version of their own adventures, but as reimagined as a Hollywood blockbuster action film, withJames Brolinas a more stereotypically manly version of thePaul Reubenstitle character. Episode 14 of theanimeseriesMartian Successor Nadesicois essentially a clip show, but has several newly animated segments based onGekigangar III, an anime that exists within its universe and that many characters are fans of, that involves the characters of that show watching Nadesico. The episode ends with the crew of the Nadesico watching the very same episode of Gekigangar, causing aparadox.Mel Brooks's 1974 comedyBlazing Saddlesleaves its Western setting when the climactic fight scene breaks out, revealing the setting to have been a set in theWarner Bros.studio lot; the fight spills out onto an adjacent musical set, then into the studio canteen, and finally onto the streets. The two protagonists arrive atGrauman's Chinese Theatre, which is showing the "premiere" ofBlazing Saddles; they enter the cinema to watch the conclusion of their own film. Brooks recycled the gag in his 1987Star Warsparody,Spaceballs, where the villains are able to locate the heroes by watching a copy of the movie they are in on VHS video tape (a comic exaggeration of the phenomenon of films being available on video before their theatrical release). Brooks also made the 1976 parodySilent Movieabout a buffoonish team of filmmakers trying to make the first Hollywood silent film in forty years—which is essentially that film itself (another forty years later, life imitated art imitating art, when an actual modern silent movie became a hit, the Oscar winnerThe Artist). 
The film-within-a-film format is used in theScreamhorror series. InScream 2, the opening scene takes place in a movie theater where a screening ofStabis played which depicts the events fromthe first film. In between the events ofScream 2andScream 3, a second film was released calledStab 2.Scream 3is about the actors filming a fictional third installment in the Stab series. The actors playing the trilogy's characters end up getting killed, much in the same way as the characters they are playing on screen and in the same order. In between the events ofScream 3andScream 4, four other Stab films are released. In the opening sequence ofScream 4two characters are watchingStab 7before they get killed. There's also a party in which all seven Stab movies were going to be shown. References are also made toStab 5involvingtime travelas a plot device. In the fifth installment of the series, also namedScream, an eighth Stab film is mentioned having been released before the film takes place. The characters in the film, several of which are fans of the series, heavily criticize the film, similar to howScream 4was criticized. Additionally, late in the film, Mindy watches the first Stab by herself. During the depiction of Ghostface sneaking up behind Randy on the couch from the first film in Stab, Ghostface sneaks up on Mindy and attacks and stabs her. DirectorSpike Jonze'sAdaptationis a fictionalized version of screenwriterCharlie Kaufman's struggles to adapt the non-cinematic bookThe Orchid Thiefinto a Hollywood blockbuster. As his onscreen self succumbs to the temptation to commercialize the narrative, Kaufman incorporates those techniques into the script, including tropes such as an invented romance, a car chase, a drug-running sequence, and an imaginary identical twin for the protagonist. (The movie also features scenes about the making ofBeing John Malkovich, previously written by Kaufman and directed by Jonze.) Similarly, in Kaufman's self-directed 2008 filmSynecdoche, New York, the main character Caden Cotard is a skilled director of plays who receives a grant, and ends up creating a remarkable theater piece intended as a carbon copy of the outside world. The layers of copies of the world ends up several layers deep. The same conceit was previously used by frequent Kaufman collaboratorMichel Gondryin his music video for theBjörksong "Bachelorette", which features a musical that is about, in part, the creation of that musical. A mini-theater and small audience appear on stage to watch the musical-within-a-musical, and at some point, within that second musical a yet-smaller theater and audience appear. Fractal fiction is sometimes utilized invideo gamesto play with the concept of player choice: In the first chapter ofStories Untold, the player is required to play atext adventure, which eventually becomes apparent to be happening in the same environment the player is in; inSuperhotthe narrative itself is constructed around the player playing a game called Superhot. Occasionally, a story within a story becomes such a popular element that the producer(s) decide to develop it autonomously as a separate and distinct work. This is an example of aspin-off. Such spin-offs may be produced as a way of providing additional information on the fictional world for fans. InHomestuckbyAndrew Hussie, there is a comic calledSweet Bro and Hella Jeff, created by one of the characters, Dave Strider. It was later adapted to its own ongoing series. 
In theToy Storyfilm universe,Buzz Lightyearis an animated toy action figure, which was based on a fictitious cartoon series,Buzz Lightyear of Star Command, which did not exist in the real world except for snippets seen withinToy Story. Later,Buzz Lightyear of Star Commandwas produced in the real world and was itself later joined byLightyear, a film described as the source material for the toy and cartoon series. Kujibiki Unbalance, a series in theGenshikenuniverse, has spawned merchandise of its own, and been remade into a series on its own. The popularDog Manseries of children's graphic novels is presented as a creation of the main characters of authorDav Pilkey's earlier series,Captain Underpants. In the animated online franchiseHomestar Runner, many of the best-known features were spun off from each other. The best known was "Strong Bad Emails", which depicted the villain of the original story giving snarky answers to fan emails, but that in turn spawned several other long-running features which started out as figments of Strong Bad's imagination, including the teen-oriented cartoon parody "Teen Girl Squad" and the anime parody "20X6". In theHarry Potterseries, three such supplemental books have been produced:Fantastic Beasts and Where to Find Them, a guidebook used by the characters;Quidditch Through the Ages, a book from the school library; andThe Tales of Beedle the Bard, presenting fairy tales told to children of the wizarding world. In the works ofKurt Vonnegut,Kilgore Trouthas written a novel calledVenus on the Half-Shell. In 1975 real-world authorPhilip José Farmerwrote a science-fiction novel calledVenus on the Half-Shell, published under the name Kilgore Trout. Captain Proton: Defender of the Earth, a story byDean Wesley Smith, was adapted from the holonovelCaptain Protonin theStar Trekuniverse. One unique example is theTyler Perrycomedy/horror hitBoo! A Madea Halloween, which originated as a parody of Tyler Perry films in theChris RockfilmTop 5.
https://en.wikipedia.org/wiki/Story_within_a_story
rsync(remote sync) is a utility fortransferringandsynchronizingfilesbetween a computer and a storage drive and acrossnetworkedcomputersby comparing themodification timesand sizes of files.[8]It is commonly found onUnix-likeoperating systemsand is under theGPL-3.0-or-laterlicense.[4][5][9][10][11][12] rsync is written inCas a single-threadedapplication.[13]The rsync algorithm is a type ofdelta encoding, and is used for minimizing network usage.Zstandard,LZ4, orZlibmay be used for additionaldata compression,[8]andSSHorstunnelcan be used for security. rsync is typically used for synchronizing files and directories between two different systems. For example, if the commandrsync local-file user@remote-host:remote-fileis run, rsync will use SSH to connect asusertoremote-host.[14]Once connected, it will invoke the remote host's rsync and then the two programs will determine what parts of the local file need to be transferred so that the remote file matches the local one. One application of rsync is the synchronization ofsoftware repositoriesonmirror sitesused bypackage management systems.[15][16] rsync can also operate in adaemonmode (rsyncd), serving and receiving files in the native rsync protocol (using thersync://syntax). Andrew TridgellandPaul Mackerraswrote the original rsync, which was first announced on 19 June 1996.[1]It is similar in function and invocation tordist(rdist -c), created byRalph Campbellin 1983 and released as part of4.3BSD.[17]Tridgell discusses the design, implementation, and performance of rsync in chapters 3 through 5 of his 1999Ph.D.thesis.[18]As of 2023[ref], it is maintained byWayne Davison.[2] Because of its flexibility, speed, and scriptability,rsynchas become a standard Linux utility, included in all popular Linux distributions.[citation needed]It has been ported to Windows (viaCygwin,Grsync, orSFU[19]),FreeBSD,[20]NetBSD,[21]OpenBSD,[22]andmacOS. Similar tocp,rcpandscp,rsyncrequires the specification of a source and a destination, of which at least one must be local.[23] Generic syntax: whereSRCis the file or directory (or a list of multiple files and directories) to copy from,DESTis the file or directory to copy to, and square brackets indicate optional parameters. rsynccan synchronize Unix clients to a central Unix server usingrsync/sshand standard Unix accounts. It can be used in desktop environments, for example to efficiently synchronize files with a backup copy on an external hard drive. A scheduling utility such ascroncan carry out tasks such as automated encryptedrsync-based mirroring between multiple hosts and a central server. A command line to mirrorFreeBSDmight look like:[24] TheApache HTTP Serversupports rsync only for updating mirrors.[25] The preferred (and simplest) way to mirror aPuTTYwebsite to the current directory is to use rsync.[26] A way to mimic the capabilities ofTime Machine (macOS);[27] Make a full backup of system root directory:[28] Delete all files and directories, within a directory, extremely fast: An rsync process operates by communicating with another rsync process, a sender and a receiver. At startup, an rsync client connects to a peer process. If the transfer is local (that is, between file systems mounted on the same host) the peer can be created with fork, after setting up suitable pipes for the connection. If a remote host is involved, rsync starts a process to handle the connection, typicallySecure Shell. 
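Sketches of the invocations described above, assuming a GNU/Linux shell: the host names, module names, and destination paths are hypothetical placeholders, and the option sets are illustrative rather than the exact commands from the cited sources, although every flag shown is a standard rsync option.

Generic syntax:
rsync [OPTION]... SRC... [DEST]
rsync [OPTION]... SRC... [USER@]HOST:DEST
rsync [OPTION]... [USER@]HOST:SRC... [DEST]

# Mirror a FreeBSD distribution tree from a public rsync module (hypothetical mirror host)
rsync -avz --delete rsync://mirror.example.org/FreeBSD/ /pub/FreeBSD/

# Mirror a PuTTY web-site module into the current directory (hypothetical module path)
rsync -auv rsync://mirror.example.org/putty-website-mirror/ .

# Time Machine-like snapshots: files unchanged since the previous snapshot
# are hard-linked into the new snapshot rather than copied again
rsync -a --delete --link-dest=/backups/2024-06-01 /home/ /backups/2024-06-02/

# Full backup of the system root, excluding pseudo-filesystems and mount points
rsync -aAXH --delete \
  --exclude='/dev/*' --exclude='/proc/*' --exclude='/sys/*' --exclude='/tmp/*' \
  --exclude='/run/*' --exclude='/mnt/*' --exclude='/media/*' --exclude='/lost+found' \
  / /mnt/backup/

# Empty a directory quickly by syncing an empty directory over it with --delete
mkdir empty && rsync -a --delete empty/ target-directory/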
Upon connection, a command is issued to start an rsync process on the remote host, which uses the connection thus established. As an alternative, if the remote host runs an rsync daemon, rsync clients can connect by opening a socket on TCP port 873, possibly using a proxy.[29] Rsync has numerous command line options and configuration files to specify alternative shells, options, commands, possibly with full path, and port numbers. Besides using remote shells, tunnelling can be used to have remote ports appear as local on the server where an rsync daemon runs. Those possibilities allow adjusting security levels to the state of the art, while a naive rsync daemon can be enough for a local network. One solution is the--dry-runoption, which allows users to validate theircommand-line argumentsand to simulate what would happen when copying the data without actually making any changes or transferring any data. By default, rsync determines which files differ between the sending and receiving systems by checking the modification time and size of each file. If time or size is different between the systems, it transfers the file from the sending to the receiving system. As this only requires reading file directory information, it is quick, but it will miss unusual modifications which change neither.[8] Rsync performs a slower but comprehensive check if invoked with--checksum. This forces a full checksum comparison on every file present on both systems. Barring rarechecksum collisions, this avoids the risk of missing changed files at the cost of reading every file present on both systems. The rsync utility uses analgorithminvented by Australian computer programmerAndrew Tridgellfor efficiently transmitting a structure (such as a file) across a communications link when the receiving computer already has a similar, but not identical, version of the same structure.[30] The recipient splits its copy of the file into chunks and computes twochecksumsfor each chunk: theMD5hash, and a weaker but easier to compute 'rolling checksum'.[31]It sends these checksums to the sender. The sender computes the checksum for each rolling section in its version of the file having the same size as the chunks used by the recipient's. While the recipient calculates the checksum only for chunks starting at full multiples of the chunk size, the sender calculates the checksum for all sections starting at any address. If any such rolling checksum calculated by the sender matches a checksum calculated by the recipient, then this section is a candidate for not transmitting the content of the section, but only the location in the recipient's file instead. In this case, the sender uses the more computationally expensive MD5 hash to verify that the sender's section and recipient's chunk are equal. Note that the section in the sender may not be at the same start address as the chunk at the recipient. This allows efficient transmission of files which differ by insertions and deletions.[32]The sender then sends the recipient those parts of its file that did not match, along with information on where to merge existing blocks into the recipient's version. This makes the copies identical. Therolling checksumused in rsync is based on Mark Adler'sadler-32checksum, which is used inzlib, and is itself based onFletcher's checksum. If the sender's and recipient's versions of the file have many sections in common, the utility needs to transfer relatively little data to synchronize the files. 
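A rough way to observe the quick check and the delta-transfer behaviour from the command line; the host and file names below are hypothetical, but the options are standard rsync flags.

# Initial copy: the whole file is sent
rsync -av data.bin backup@host.example.org:/srv/backup/

# Change a small part of the file, then sync again; --stats reports how much
# literal data was sent versus how much was matched against the remote copy
printf 'small change' >> data.bin
rsync -av --stats data.bin backup@host.example.org:/srv/backup/

# --dry-run (-n) previews the transfer without changing anything, and
# --checksum (-c) forces the slower full-checksum comparison described above
rsync -avnc data.bin backup@host.example.org:/srv/backup/

# --whole-file (-W) disables the delta-transfer algorithm, which can be faster
# on fast local networks where CPU rather than bandwidth is the bottleneck
rsync -avW data.bin backup@host.example.org:/srv/backup/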
If typicaldata compressionalgorithms are used, files that are similar when uncompressed may be very different when compressed, and thus the entire file will need to be transferred. Some compression programs, such asgzip, provide a special "rsyncable" mode which allows these files to be efficiently rsynced, by ensuring that local changes in the uncompressed file yield only local changes in the compressed file. Rsync supports other key features that aid significantly in data transfers or backup. They include compression and decompression of data block by block usingZstandard,LZ4, orzlib, and support for protocols such assshandstunnel. Therdiffutility uses the rsync algorithm to generatedelta fileswith the difference from file A to file B (like the utilitydiff, but in a different delta format). The delta file can then be applied to file A, turning it into file B (similar to thepatchutility). rdiff works well withbinary files. Therdiff-backupscript maintains abackupmirror of a file or directory either locally or remotely over the network on another server. rdiff-backup stores incremental rdiff deltas with the backup, with which it is possible to recreate any backup point.[33] Thelibrsynclibrary used by rdiff is an independent implementation of the rsync algorithm. It does not use the rsync network protocol and does not share any code with the rsync application.[34]It is used byDropbox, rdiff-backup,duplicity, and other utilities.[34] Theacrosynclibrary is an independent, cross-platform implementation of the rsync network protocol.[35]Unlike librsync, it is wire-compatible with rsync (protocol version 29 or 30). It is released under theReciprocal Public Licenseand used by the commercial rsync softwareAcrosync.[36] Theduplicitybackup software written inpythonallows for incremental backups with simple storage backend services like local file system,sftp,Amazon S3and many others. It utilizes librsync to generate delta data against signatures of the previous file versions, encrypting them usinggpg, and storing them on the backend. For performance reasons a local archive-dir is used to cache backup chain signatures, but can be re-downloaded from the backend if needed. As of macOS 10.5 and later, there is a special-Eor--extended-attributesswitch which allows retaining much of theHFS+file metadata when syncing between two machines supporting this feature. This is achieved by transmitting theResource Forkalong with the Data Fork.[37] zsyncis an rsync-like tool optimized for many downloads per file version. zsync is used by Linux distributions such asUbuntu[38]for distributing fast changing betaISO imagefiles. zsync uses the HTTP protocol and .zsync files with pre-calculated rolling hash to minimize server load yet permit diff transfer for network optimization.[39] Rcloneis an open-source tool inspired by rsync that focuses on cloud and other high latency storage. It supports more than 50 different providers and provides an rsync-like interface for cloud storage.[40]However, Rclone does not support rolling checksums for partial file syncing (binary diffs) because cloud storage providers do not usually offer the feature and Rclone avoids storing additional metadata.[41]
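A minimal sketch of the rdiff workflow described above, assuming librsync's rdiff tool is installed; the file names are placeholders, and the gzip line applies only where the installed gzip build supports the --rsyncable option.

# 1. On the machine holding the old copy: summarise it as a signature
rdiff signature old-file old-file.sig
# 2. On the machine holding the new copy: compute a delta against that signature
rdiff delta old-file.sig new-file delta-file
# 3. Back on the first machine: apply the delta to the old copy to reproduce the new one
rdiff patch old-file delta-file new-file.copy

# Keep compressed files delta-friendly, so that small changes in the input
# produce only local changes in the compressed output
gzip --rsyncable large-log-file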
https://en.wikipedia.org/wiki/Rsync
Confidentialityinvolves a set of rules or a promise sometimes executed throughconfidentiality agreementsthat limits the access to or places restrictions on the distribution of certain types ofinformation. By law, lawyers are often required to keep confidential anything on the representation of a client. The duty of confidentiality is much broader than theattorney–client evidentiary privilege, which only coverscommunicationsbetween the attorney and the client.[1] Both the privilege and the duty serve the purpose of encouraging clients to speak frankly about their cases. This way, lawyers can carry out their duty to provide clients with zealous representation. Otherwise, the opposing side may be able to surprise the lawyer in court with something he did not know about his client, which may weaken the client's position. Also, a distrustful client might hide a relevant fact he thinks is incriminating, but that a skilled lawyer could turn to the client's advantage (for example, by raisingaffirmative defenseslike self-defense). However, most jurisdictions have exceptions for situations where the lawyer has reason to believe that the client may kill or seriously injure someone, may cause substantial injury to the financial interest or property of another, or is using (or seeking to use) the lawyer's services to perpetrate a crime or fraud. In such situations the lawyer has the discretion, but not the obligation, to disclose information designed to prevent the planned action. Most states have a version of this discretionary disclosure rule under Rules of Professional Conduct, Rule 1.6 (or its equivalent). A few jurisdictions have made this traditionally discretionary duty mandatory. For example, see the New Jersey and Virginia Rules of Professional Conduct, Rule 1.6. In some jurisdictions, the lawyer must try to convince the client to conform his or her conduct to the boundaries of the law before disclosing any otherwise confidential information. These exceptions generally do not cover crimes that have already occurred, even in extreme cases where murderers have confessed the location of missing bodies to their lawyers but the police are still looking for those bodies. TheU.S. Supreme Courtand manystate supreme courtshave affirmed the right of a lawyer to withhold information in such situations. Otherwise, it would be impossible for any criminal defendant to obtain a zealous defense. California is famous for having one of the strongest duties of confidentiality in the world; its lawyers must protect client confidences at "every peril to himself [or herself]" under former California Business and Professions Code section 6068(e). Until an amendment in 2004 (which turned subsection (e) into subsection (e)(1) and added subsection (e)(2) to section 6068), California lawyers were not even permitted to disclose that a client was about to commit murder or assault. The Supreme Court of California promptly amended the California Rules of Professional Conduct to conform to the new exception in the revised statute. Recent legislation in the UK curtails the confidentiality professionals like lawyers and accountants can maintain at the expense of the state.[2]Accountants, for example, are required to disclose to the state any suspicions of fraudulent accounting and, even, the legitimate use of tax saving schemes if those schemes are not already known to the tax authorities. 
The "three traditional requirements of the cause of action for breach of confidence"[3]: [19]were identified byMegarry JinCoco v A N Clark (Engineers) Ltd(1968) in the following terms:[4] In my judgment, three elements are normally required if, apart from contract, a case of breach of confidence is to succeed. First, the information itself, in the words of Lord Greene, M.R. in theSaltmancase on page 215, must "have the necessary quality of confidence about it." Secondly, that information must have been imparted in circumstances importing an obligation of confidence. Thirdly, there must be an unauthorised use of that information to the detriment of the party communicating it. The 1896 case featuring the royalaccoucheurDrWilliam Smoult Playfairshowed the difference between lay and medical views. Playfair was consulted by Linda Kitson; he ascertained that she had been pregnant while separated from her husband. He informed his wife, a relative of Kitson's, in order that she protect herself and their daughters from moral contagion. Kitson sued, and the case gained public notoriety, with huge damages awarded against the doctor.[5] Confidentiality is commonly applied to conversations between doctors and patients. Legal protections prevent physicians from revealing certain discussions with patients, even under oath in court.[6]Thisphysician-patient privilegeonly applies to secrets shared between physician and patient during the course of providing medical care.[6][7] The rule dates back to at least theHippocratic Oath, which reads in part:Whatever, in connection with my professional service, or not in connection with it, I see or hear, in the life of men, which ought not to be spoken of abroad, I will not divulge, as reckoning that all such should be kept secret. Traditionally, medical ethics has viewed the duty of confidentiality as a relatively non-negotiable tenet of medical practice.[8] Confidentiality is standard in the United States byHIPAAlaws, specifically the Privacy Rule, and various state laws, some more rigorous than HIPAA. However, numerous exceptions to the rules have been carved out over the years. For example, many American states require physicians to report gunshot wounds to the police and impaired drivers to the Department of Motor Vehicles. Confidentiality is also challenged in cases involving the diagnosis of a sexually transmitted disease in a patient who refuses to reveal the diagnosis to a spouse, and in the termination of a pregnancy in an underage patient, without the knowledge of the patient's parents. Many states in the U.S. have laws governing parental notification in underage abortion.[9]Confidentiality can be protected in medical research viacertificates of confidentiality. Due to theEUDirective 2001/20/EC, inspectors appointed by the Member States have to maintain confidentiality whenever they gain access to confidential information as a result of thegood clinical practiceinspections in accordance with applicable national and international requirements.[10] A typical patient declaration might read: I have been informed of the benefit that I gain from the protection and the rights granted by the European Union Data Protection Directive and other national laws on the protection of my personal data. I agree that the representatives of the sponsor or possibly the health authorities can have access to my medical records. My participation in the study will be treated as confidential. I will not be referred to by my name in any report of the study. 
My identity will not be disclosed to any person, except for the purposes described above and in the event of a medical emergency or if required by the law. My data will be processed electronically to determine the outcome of this study, and to provide it to the health authorities. My data may be transferred to other countries (such as the USA). For these purposes the sponsor has to protect my personal information even in countries whose data privacy laws are less strict than those of this country. In the United Kingdom, information about an individual's HIV status is kept confidential within the National Health Service. This is based in law, in the NHS Constitution, and in key NHS rules and procedures. It is also outlined in every NHS employee's contract of employment and in professional standards set by regulatory bodies.[11] The National AIDS Trust's Confidentiality in the NHS: Your Information, Your Rights[12] outlines these rights. All registered healthcare professionals must abide by these standards, and if they are found to have breached confidentiality, they can face disciplinary action. A healthcare worker shares confidential information with someone else who is providing, or is about to provide, the patient directly with healthcare, to make sure they get the best possible treatment. They only share information that is relevant to the patient's care in that instance, and with consent. There are two ways to give consent: explicit consent or implied consent. Explicit consent is when a patient clearly communicates to a healthcare worker, verbally or in writing or in some other way, that relevant confidential information can be shared. Implied consent means that a patient's consent to share personal confidential information is assumed. When personal confidential information is shared between healthcare workers, consent is taken as implied. If a patient does not want a healthcare worker to share confidential health information, they need to make this clear and discuss the matter with healthcare staff. Patients have the right, in most situations, to refuse permission for a health care professional to share their information with another healthcare professional, even one giving them care, but are advised, where appropriate, about the dangers of this course of action, due to possible drug interactions. However, in a few limited instances, a healthcare worker can share personal information without consent if it is in the public interest. These instances are set out in guidance from the General Medical Council,[13] which is the regulatory body for doctors. Sometimes the healthcare worker has to provide the information, if required by law or in response to a court order. The National AIDS Trust has written a guide for people living with HIV to confidentiality in the NHS.[14] The ethical principle of confidentiality requires that information shared by a client with a therapist is not shared without consent, and that any sharing of information be guided by the ETHIC model: examine professional values, think about the ethical standards of the certifying association, hypothesise about different courses of action and their possible consequences, identify how and to whom the outcome will be beneficial in line with professional standards, and consult with a supervisor and colleagues.[15] The confidentiality principle bolsters the therapeutic alliance, as it promotes an environment of trust. There are important exceptions to confidentiality, namely where it conflicts with the clinician's duty to warn or duty to protect.
This includes instances of suicidal behavior or homicidal plans, child abuse, elder abuse and dependent adult abuse. Information shared by a client with a therapist is considered privileged communication; however, in certain cases, and depending on the province or state, this privilege is negated, a determination shaped by considerations of negative and positive freedom.[16] Some legal jurisdictions recognise a category of commercial confidentiality whereby a business may withhold information on the basis of perceived harm to "commercial interests".[17] For example, Coca-Cola's main syrup formula remains a trade secret. Banking secrecy,[18][19] alternatively known as financial privacy, banking discretion, or bank safety,[20][21] is a conditional agreement between a bank and its clients that all foregoing activities remain secure, confidential, and private.[22] Most often associated with banking in Switzerland, banking secrecy is prevalent in Luxembourg, Monaco, Hong Kong, Singapore, Ireland, and Lebanon, among other off-shore banking institutions. Also known as bank–client confidentiality or banker–client privilege,[23][24] the practice was started by Italian merchants during the 1600s near Northern Italy (a region that would become the Italian-speaking region of Switzerland). Confidentiality agreements that "seal" litigation settlements are not uncommon, but this can leave regulators and society ignorant of public hazards. In the U.S. state of Washington, for example, journalists discovered that about two dozen medical malpractice cases had been improperly sealed by judges, leading to improperly weak discipline by the state Department of Health.[25] In the 1990s and early 2000s, the Catholic sexual abuse scandal involved a number of confidentiality agreements with victims.[26] Some states have passed laws that limit confidentiality. For example, in 1990 Florida passed a 'Sunshine in Litigation' law that prevents confidentiality from concealing public hazards.[27] Washington state, Texas, Arkansas, and Louisiana have laws limiting confidentiality as well, although judicial interpretation has weakened the application of these types of laws.[28] In the U.S. Congress, a similar federal Sunshine in Litigation Act has been proposed but not passed in 2009, 2011, 2014, and 2015.[29]
https://en.wikipedia.org/wiki/Confidentiality
There are many different types of software available to produce charts. A number of notable examples (with their own Wikipedia articles) are given below and organized according to the programming language or other context in which they are used.
https://en.wikipedia.org/wiki/List_of_charting_software
In various science and engineering applications, such as independent component analysis,[1] image analysis,[2] genetic analysis,[3] speech recognition,[4] manifold learning,[5] and time delay estimation,[6] it is useful to estimate the differential entropy of a system or process, given some observations. The simplest and most common approach uses histogram-based estimation, but other approaches have been developed and used, each with its own benefits and drawbacks.[7] The main factor in choosing a method is often a trade-off between the bias and the variance of the estimate,[8] although the nature of the (suspected) distribution of the data may also be a factor,[7] as well as the sample size and the size of the alphabet of the probability distribution.[9] The histogram approach uses the idea that the differential entropy of a probability distribution $f(x)$ for a continuous random variable $x$ can be approximated by first approximating $f(x)$ with a histogram of the observations, and then finding the discrete entropy of a quantization of $x$ with bin probabilities given by that histogram. The histogram is itself a maximum-likelihood (ML) estimate of the discretized frequency distribution,[citation needed] and the resulting estimate takes the form $\hat{H} = -\sum_{i} f_{i} \log \frac{f_{i}}{w_{i}}$, where $f_{i}$ is the relative frequency of observations falling in the $i$th bin and $w_{i}$ is the width of the $i$th bin. Histograms can be quick to calculate, and simple, so this approach has some attraction. However, the estimate produced is biased, and although corrections can be made to the estimate, they may not always be satisfactory.[10] A method better suited for multidimensional probability density functions (pdf) is to first make a pdf estimate with some method, and then, from the pdf estimate, compute the entropy. A useful pdf estimation method is, e.g., Gaussian mixture modeling (GMM), where the expectation maximization (EM) algorithm is used to find an ML estimate of a weighted sum of Gaussian pdfs approximating the data pdf. If the data is one-dimensional, we can imagine taking all the observations and putting them in order of their value. The spacing between one value and the next then gives us a rough idea of (the reciprocal of) the probability density in that region: the closer together the values are, the higher the probability density. This is a very rough estimate with high variance, but can be improved, for example by thinking about the space between a given value and the one $m$ away from it, where $m$ is some fixed number.[7] The probability density estimated in this way can then be used to calculate the entropy estimate, in a similar way to that given above for the histogram, but with some slight tweaks. One of the main drawbacks with this approach is going beyond one dimension: the idea of lining the data points up in order falls apart in more than one dimension. However, using analogous methods, some multidimensional entropy estimators have been developed.[11][12] For each point in our dataset, we can find the distance to its nearest neighbour. We can in fact estimate the entropy from the distribution of the nearest-neighbour distances of our datapoints.[7] (In a uniform distribution these distances all tend to be fairly similar, whereas in a strongly nonuniform distribution they may vary a lot more.) In the under-sampled regime, having a prior on the distribution can help the estimation.
One such Bayesian estimator, proposed in the neuroscience context, is known as the NSB (Nemenman–Shafee–Bialek) estimator.[13][14] The NSB estimator uses a mixture of Dirichlet priors, chosen such that the induced prior over the entropy is approximately uniform. A new approach to the problem of entropy evaluation is to compare the expected entropy of a sample of a random sequence with the calculated entropy of the sample. The method gives very accurate results, but it is limited to calculations of random sequences modeled as Markov chains of the first order with small values of bias and correlations. This is the first known method that takes into account the size of the sample sequence and its impact on the accuracy of the calculation of entropy.[15][16] A deep neural network (DNN) can be used to estimate the joint entropy; this approach is called the Neural Joint Entropy Estimator (NJEE).[17] Practically, the DNN is trained as a classifier that maps an input vector or matrix X to an output probability distribution over the possible classes of random variable Y, given input X. For example, in an image classification task, the NJEE maps a vector of pixel values to probabilities over possible image classes. In practice, the probability distribution of Y is obtained by a softmax layer with a number of nodes equal to the alphabet size of Y. NJEE uses continuously differentiable activation functions, such that the conditions for the universal approximation theorem hold. This method has been shown to provide a strongly consistent estimator and to outperform other methods in the case of large alphabet sizes.[17][9]
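To make the histogram approach described above concrete, here is a minimal Python sketch written for this summary (it is not code from any of the cited works); the function name and the choice of 32 bins are arbitrary assumptions.

```python
import numpy as np

def histogram_entropy(samples, bins=32):
    """Histogram-based estimate of differential entropy, in nats."""
    counts, edges = np.histogram(samples, bins=bins)
    widths = np.diff(edges)                # bin widths w_i
    probs = counts / counts.sum()          # bin relative frequencies f_i
    nz = probs > 0                         # skip empty bins (0 log 0 treated as 0)
    # Discrete entropy of the quantisation, corrected by the bin widths so that
    # it approximates the differential entropy: -sum_i f_i * log(f_i / w_i)
    return -np.sum(probs[nz] * np.log(probs[nz] / widths[nz]))

# Sanity check against the known value for a standard Gaussian,
# 0.5 * ln(2 * pi * e) ≈ 1.4189 nats; the estimate is biased but close.
rng = np.random.default_rng(0)
print(histogram_entropy(rng.normal(size=100_000)))
```

As the article notes, the raw estimate is biased; in practice the bin count is tuned to the sample size, or bias corrections are applied on top of this basic formula.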
https://en.wikipedia.org/wiki/Entropy_estimation
The Zero Page (or Base Page) is a data structure used in CP/M systems for programs to communicate with the operating system. In 8-bit CP/M versions it is located in the first 256 bytes of memory, hence its name. The equivalent structure in DOS is the Program Segment Prefix (PSP), a 256-byte (page-sized) structure, which is by default located exactly before offset 0 of the program's load segment, rather than in segment 0; a segment register is initialised to 0x10 less than the code segment in order to address it. In 8-bit CP/M, the zero page holds system entry points and working areas, including the warm-start and BDOS entry jumps, the IOBYTE, the current drive and user number, the default file control block, and the default buffer containing the command tail; CP/M-86 defines a similar base page at the start of the program's data segment.
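For quick reference, the sketch below lists the commonly documented 8-bit CP/M zero-page fields as a Python mapping; the offsets are the conventional ones, but the use of the remaining bytes varies between CP/M versions, so treat this as an illustrative summary rather than a definitive memory map.

```python
# Commonly documented 8-bit CP/M zero-page fields (illustrative, not exhaustive);
# bytes not listed here are reserved or used differently by particular versions.
ZERO_PAGE_FIELDS = {
    (0x0000, 0x0002): "JMP to BIOS warm-start entry",
    (0x0003, 0x0003): "IOBYTE (logical-to-physical device assignment)",
    (0x0004, 0x0004): "current default drive and user number",
    (0x0005, 0x0007): "JMP to BDOS entry point (also marks the top of the TPA)",
    (0x005C, 0x007F): "default file control block (FCB) area",
    (0x0080, 0x00FF): "default DMA buffer, initially holding the command tail",
}

def describe(offset):
    """Return the documented meaning of a zero-page byte, if any."""
    for (lo, hi), meaning in ZERO_PAGE_FIELDS.items():
        if lo <= offset <= hi:
            return meaning
    return "reserved / version-dependent"

print(describe(0x0080))  # default DMA buffer, initially holding the command tail
```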
https://en.wikipedia.org/wiki/Zero_page_(CP/M)
A key in cryptography is a piece of information, usually a string of numbers or letters stored in a file, which, when processed through a cryptographic algorithm, can encode or decode cryptographic data. Depending on the method used, the key can be of different sizes and varieties, but in all cases the strength of the encryption relies on the security of the key being maintained. A key's security strength is dependent on its algorithm, the size of the key, the generation of the key, and the process of key exchange. The key is what is used to encrypt data from plaintext to ciphertext.[1] There are different methods for utilizing keys and encryption. Symmetric cryptography refers to the practice of the same key being used for both encryption and decryption.[2] Asymmetric cryptography has separate keys for encrypting and decrypting.[3][4] These keys are known as the public and private keys, respectively.[5] Since the key protects the confidentiality and integrity of the system, it is important that it be kept secret from unauthorized parties. With public key cryptography, only the private key must be kept secret, but with symmetric cryptography, it is important to maintain the confidentiality of the key. Kerckhoffs's principle states that the entire security of the cryptographic system relies on the secrecy of the key.[6] Key size is the number of bits in the key defined by the algorithm. This size defines the upper bound of the cryptographic algorithm's security.[7] The larger the key size, the longer it will take before the key is compromised by a brute-force attack. Since perfect secrecy is not feasible for key algorithms, research now focuses more on computational security. In the past, keys were required to be a minimum of 40 bits in length; however, as technology advanced, these keys were being broken more and more quickly. As a response, restrictions on symmetric keys were enhanced to be greater in size. Currently, 2048-bit RSA[8] is commonly used, which is sufficient for current systems. However, current RSA key sizes would all be cracked quickly with a powerful quantum computer.[9] "The keys used in public key cryptography have some mathematical structure. For example, public keys used in the RSA system are the product of two prime numbers. Thus public key systems require longer key lengths than symmetric systems for an equivalent level of security. 3072 bits is the suggested key length for systems based on factoring and integer discrete logarithms which aim to have security equivalent to a 128 bit symmetric cipher."[10] To prevent a key from being guessed, keys need to be generated randomly and contain sufficient entropy. The problem of how to safely generate random keys is difficult and has been addressed in many ways by various cryptographic systems. A key can be generated directly by using the output of a random bit generator (RBG), a system that generates a sequence of unpredictable and unbiased bits.[11] An RBG can be used to directly produce either a symmetric key or the random output for an asymmetric key pair generation. Alternatively, a key can also be created indirectly during a key-agreement transaction, from another key, or from a password.[12] Some operating systems include tools for "collecting" entropy from the timing of unpredictable operations such as disk drive head movements. For the production of small amounts of keying material, ordinary dice provide a good source of high-quality randomness. The security of a key is dependent on how the key is exchanged between parties.
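As a minimal illustration of generating a symmetric key directly from an operating system's random bit generator, the following Python sketch uses the standard-library secrets module; the 256-bit length is an assumption chosen for the example, not a requirement stated above.

```python
import secrets

# Draw 32 bytes (256 bits) from the OS random bit generator (RBG); the OS is
# assumed to have gathered sufficient entropy from unpredictable events.
symmetric_key = secrets.token_bytes(32)

# In practice the key would be stored or wrapped securely, not printed.
print(symmetric_key.hex())
```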
Establishing a secure communication channel is necessary so that outsiders cannot obtain the key. A key establishment scheme (or key exchange) is used to transfer an encryption key among entities. Key agreement and key transport are the two types of key exchange scheme used to exchange keys remotely between entities. In a key agreement scheme, a secret key, which is used between the sender and the receiver to encrypt and decrypt information, is set up to be sent indirectly: all parties exchange information (the shared secret) that permits each party to derive the secret key material. In a key transport scheme, encrypted keying material that is chosen by the sender is transported to the receiver. Either symmetric-key or asymmetric-key techniques can be used in both schemes.[12] The Diffie–Hellman key exchange and Rivest–Shamir–Adleman (RSA) are the two most widely used key exchange algorithms.[13] In 1976, Whitfield Diffie and Martin Hellman constructed the Diffie–Hellman algorithm, which was the first public key algorithm. The Diffie–Hellman key exchange protocol allows key exchange over an insecure channel by electronically generating a shared key between two parties. On the other hand, RSA is a form of asymmetric key system which consists of three steps: key generation, encryption, and decryption.[13] Key confirmation delivers an assurance between the key confirmation recipient and provider that the shared keying materials are correct and established. The National Institute of Standards and Technology recommends that key confirmation be integrated into a key establishment scheme to validate its implementations.[12] Key management concerns the generation, establishment, storage, usage and replacement of cryptographic keys. A key management system (KMS) typically covers three steps: establishing, storing and using keys. The basis of security for the generation, storage, distribution, use and destruction of keys depends on successful key management protocols.[14] A password is a memorized series of characters including letters, digits, and other special symbols that is used to verify identity. It is often produced by a human user or password management software to protect personal and sensitive information or to generate cryptographic keys. Passwords are often created to be memorized by users and may contain non-random information such as dictionary words.[12] On the other hand, a key can help strengthen password protection by implementing a cryptographic algorithm which is difficult to guess, or replace the password altogether. A key is generated based on random or pseudo-random data and can often be unreadable to humans.[15] A password is less safe than a cryptographic key due to its low entropy, randomness, and human-readable properties. However, the password may be the only secret data that is accessible to the cryptographic algorithm for information security in some applications, such as securing information in storage devices. Thus, a deterministic algorithm called a key derivation function (KDF) uses a password to generate the secure cryptographic keying material to compensate for the password's weakness. Various methods such as adding a salt or key stretching may be used in the generation.[12]
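The key derivation function idea above can be illustrated with a short Python sketch using the standard library's PBKDF2 implementation; the passphrase, salt length, and iteration count below are illustrative assumptions rather than recommendations from the text.

```python
import hashlib
import secrets

password = b"correct horse battery staple"   # example passphrase only
salt = secrets.token_bytes(16)               # random salt, stored alongside the derived key

# Derive a 256-bit key from the low-entropy password; the high iteration count
# is a form of key stretching that slows down brute-force guessing.
key = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000, dklen=32)
print(key.hex())
```

The salt ensures that identical passwords do not yield identical keys, and the iteration count trades a little legitimate-use latency for a large increase in the attacker's work factor.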
https://en.wikipedia.org/wiki/Key_(cryptography)
In mathematics, multipliers and centralizers are algebraic objects in the study of Banach spaces. They are used, for example, in generalizations of the Banach–Stone theorem. Let (X, ‖·‖) be a Banach space over a field K (either the real or complex numbers), and let Ext(X) be the set of extreme points of the closed unit ball of the continuous dual space X∗. A continuous linear operator T: X → X is said to be a multiplier if every point p in Ext(X) is an eigenvector for the adjoint operator T∗: X∗ → X∗. That is, there exists a function $a_{T}\colon \operatorname{Ext}(X) \to K$ such that $T^{*}p = a_{T}(p)\,p$ for every $p \in \operatorname{Ext}(X)$, making $a_{T}(p)$ the eigenvalue corresponding to p. Given two multipliers S and T on X, S is said to be an adjoint for T if $a_{S}(p) = \overline{a_{T}(p)}$ for all $p \in \operatorname{Ext}(X)$, i.e. $a_{S}$ agrees with $a_{T}$ in the real case, and with the complex conjugate of $a_{T}$ in the complex case. The centralizer (or commutant) of X, denoted Z(X), is the set of all multipliers on X for which an adjoint exists.
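As an illustrative sketch (a standard example added here for clarity, not drawn from the text above): on the space of continuous functions on a compact Hausdorff space, multiplication operators are multipliers in the sense just defined.

```latex
% Illustrative sketch: let X = C(K), the continuous scalar functions on a compact
% Hausdorff space K. The extreme points of the dual unit ball are the functionals
% \lambda\delta_x with x \in K and |\lambda| = 1 (\pm\delta_x in the real case).
% For g \in C(K), define the multiplication operator
\[
  (T_g f)(x) = g(x)\,f(x), \qquad f \in C(K).
\]
% Each extreme point is an eigenvector of the adjoint:
\[
  T_g^{*}(\lambda\delta_x) = \lambda\,g(x)\,\delta_x = g(x)\,(\lambda\delta_x),
  \qquad\text{so } a_{T_g}(\lambda\delta_x) = g(x),
\]
% and T_{\bar g} serves as an adjoint multiplier for T_g, so T_g lies in Z(C(K)).
```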
https://en.wikipedia.org/wiki/Multipliers_and_centralizers_(Banach_spaces)
Word-sense disambiguation is the process of identifying which sense of a word is meant in a sentence or other segment of context. In human language processing and cognition, it is usually subconscious. Because natural language use reflects neurological reality, as shaped by the abilities provided by the brain's neural networks, computer science has faced a long-term challenge in developing the ability of computers to do natural language processing and machine learning. Many techniques have been researched, including dictionary-based methods that use the knowledge encoded in lexical resources, supervised machine learning methods in which a classifier is trained for each distinct word on a corpus of manually sense-annotated examples, and completely unsupervised methods that cluster occurrences of words, thereby inducing word senses. Among these, supervised learning approaches have been the most successful algorithms to date. Accuracy of current algorithms is difficult to state without a host of caveats. In English, accuracy at the coarse-grained (homograph) level is routinely above 90% (as of 2009), with some methods on particular homographs achieving over 96%. On finer-grained sense distinctions, top accuracies from 59.1% to 69.0% have been reported in evaluation exercises (SemEval-2007, Senseval-2), where the baseline accuracy of the simplest possible algorithm, always choosing the most frequent sense, was 51.4% and 57%, respectively. Disambiguation requires two strict inputs: a dictionary to specify the senses which are to be disambiguated and a corpus of language data to be disambiguated (in some methods, a training corpus of language examples is also required). The WSD task has two variants: the "lexical sample" task (disambiguating the occurrences of a small sample of target words which were previously selected) and the "all words" task (disambiguation of all the words in a running text). The "all words" task is generally considered a more realistic form of evaluation, but the corpus is more expensive to produce because human annotators have to read the definitions for each word in the sequence every time they need to make a tagging judgement, rather than once for a block of instances for the same target word. WSD was first formulated as a distinct computational task during the early days of machine translation in the 1940s, making it one of the oldest problems in computational linguistics. Warren Weaver first introduced the problem in a computational context in his 1949 memorandum on translation.[1] Later, Bar-Hillel (1960) argued[2] that WSD could not be solved by "electronic computer" because of the need in general to model all world knowledge. In the 1970s, WSD was a subtask of semantic interpretation systems developed within the field of artificial intelligence, starting with Wilks' preference semantics. However, since WSD systems were at the time largely rule-based and hand-coded, they were prone to a knowledge acquisition bottleneck. By the 1980s large-scale lexical resources, such as the Oxford Advanced Learner's Dictionary of Current English (OALD), became available: hand-coding was replaced with knowledge automatically extracted from these resources, but disambiguation was still knowledge-based or dictionary-based. In the 1990s, the statistical revolution advanced computational linguistics, and WSD became a paradigm problem on which to apply supervised machine learning techniques.
The 2000s saw supervised techniques reach a plateau in accuracy, and so attention has shifted to coarser-grained senses, domain adaptation, semi-supervised and unsupervised corpus-based systems, combinations of different methods, and the return of knowledge-based systems via graph-based methods. Still, supervised systems continue to perform best. One problem with word sense disambiguation is deciding what the senses are, as different dictionaries and thesauruses will provide different divisions of words into senses. Some researchers have suggested choosing a particular dictionary, and using its set of senses to deal with this issue. Generally, however, research results using broad distinctions in senses have been much better than those using narrow ones.[3][4] Most researchers continue to work on fine-grained WSD. Most research in the field of WSD is performed by using WordNet as a reference sense inventory for English. WordNet is a computational lexicon that encodes concepts as synonym sets (e.g. the concept of car is encoded as { car, auto, automobile, machine, motorcar }). Other resources used for disambiguation purposes include Roget's Thesaurus[5] and Wikipedia.[6] More recently, BabelNet, a multilingual encyclopedic dictionary, has been used for multilingual WSD.[7] In any real test, part-of-speech tagging and sense tagging have proven to be very closely related, with each potentially imposing constraints upon the other. The question whether these tasks should be kept together or decoupled is still not unanimously resolved, but recently researchers have tended to test them separately (e.g. in the Senseval/SemEval competitions, parts of speech are provided as input for the text to disambiguate). Both WSD and part-of-speech tagging involve disambiguating or tagging with words. However, algorithms used for one do not tend to work well for the other, mainly because the part of speech of a word is primarily determined by the immediately adjacent one to three words, whereas the sense of a word may be determined by words further away. The success rate for part-of-speech tagging algorithms is at present much higher than that for WSD, with the state of the art at around 96%[8] accuracy or better, as compared to less than 75%[citation needed] accuracy in word sense disambiguation with supervised learning. These figures are typical for English, and may be very different from those for other languages. Another problem is inter-judge variance. WSD systems are normally tested by having their results on a task compared against those of a human. However, while it is relatively easy to assign parts of speech to text, training people to tag senses has proven to be far more difficult.[9] While users can memorize all of the possible parts of speech a word can take, it is often impossible for individuals to memorize all of the senses a word can take. Moreover, humans do not agree on the task at hand: given a list of senses and sentences, humans will not always agree on which word belongs in which sense.[10] As human performance serves as the standard, it is an upper bound for computer performance. Human performance, however, is much better on coarse-grained than fine-grained distinctions, so this again is why research on coarse-grained distinctions[11][12] has been put to the test in recent WSD evaluation exercises.[3][4] A task-independent sense inventory is not a coherent concept:[13] each task requires its own division of word meaning into senses relevant to the task.
Additionally, completely different algorithms might be required by different applications. In machine translation, the problem takes the form of target word selection. The "senses" are words in the target language, which often correspond to significant meaning distinctions in the source language ("bank" could translate to the French banque, that is, 'financial bank', or rive, that is, 'edge of river'). In information retrieval, a sense inventory is not necessarily required, because it is enough to know that a word is used in the same sense in the query and a retrieved document; what sense that is, is unimportant. Finally, the very notion of "word sense" is slippery and controversial. Most people can agree in distinctions at the coarse-grained homograph level (e.g., pen as writing instrument or enclosure), but go down one level to fine-grained polysemy, and disagreements arise. For example, in Senseval-2, which used fine-grained sense distinctions, human annotators agreed in only 85% of word occurrences.[14] Word meaning is in principle infinitely variable and context-sensitive. It does not divide up easily into distinct or discrete sub-meanings.[15] Lexicographers frequently discover in corpora loose and overlapping word meanings, and standard or conventional meanings extended, modulated, and exploited in a bewildering variety of ways. The art of lexicography is to generalize from the corpus to definitions that evoke and explain the full range of meaning of a word, making it seem like words are well-behaved semantically. However, it is not at all clear if these same meaning distinctions are applicable in computational applications, as the decisions of lexicographers are usually driven by other considerations. In 2009, a task named lexical substitution was proposed as a possible solution to the sense discreteness problem.[16] The task consists of providing a substitute for a word in context that preserves the meaning of the original word (potentially, substitutes can be chosen from the full lexicon of the target language, thus overcoming discreteness). There are two main approaches to WSD: deep approaches and shallow approaches. Deep approaches presume access to a comprehensive body of world knowledge. These approaches are generally not considered to be very successful in practice, mainly because such a body of knowledge does not exist in a computer-readable format outside very limited domains.[17] Additionally, owing to the long tradition in computational linguistics of trying such approaches in terms of coded knowledge, it can in some cases be hard to distinguish between linguistic knowledge and world knowledge. The first attempt was that by Margaret Masterman and her colleagues, at the Cambridge Language Research Unit in England, in the 1950s. This attempt used as data a punched-card version of Roget's Thesaurus and its numbered "heads", as indicators of topics, and looked for repetitions in text using a set-intersection algorithm. It was not very successful,[18] but had strong relationships to later work, especially Yarowsky's machine learning optimisation of a thesaurus method in the 1990s. Shallow approaches do not try to understand the text, but instead consider the surrounding words. These rules can be automatically derived by the computer, using a training corpus of words tagged with their word senses. This approach, while theoretically not as powerful as deep approaches, gives superior results in practice, due to the computer's limited world knowledge.
There are four conventional approaches to WSD: dictionary- and knowledge-based methods, supervised methods, semi-supervised (minimally supervised) methods, and unsupervised methods. Almost all these approaches work by defining a window of n content words around each word to be disambiguated in the corpus, and statistically analyzing those n surrounding words. Two shallow approaches used to train and then disambiguate are Naïve Bayes classifiers and decision trees. In recent research, kernel-based methods such as support vector machines have shown superior performance in supervised learning. Graph-based approaches have also gained much attention from the research community, and currently achieve performance close to the state of the art. The Lesk algorithm[19] is the seminal dictionary-based method. It is based on the hypothesis that words used together in text are related to each other and that the relation can be observed in the definitions of the words and their senses. Two (or more) words are disambiguated by finding the pair of dictionary senses with the greatest word overlap in their dictionary definitions. For example, when disambiguating the words in "pine cone", the definitions of the appropriate senses both include the words evergreen and tree (at least in one dictionary). A similar approach[20] searches for the shortest path between two words: the second word is iteratively searched among the definitions of every semantic variant of the first word, then among the definitions of every semantic variant of each word in the previous definitions, and so on. Finally, the first word is disambiguated by selecting the semantic variant which minimizes the distance from the first to the second word. An alternative to the use of the definitions is to consider general word-sense relatedness and to compute the semantic similarity of each pair of word senses based on a given lexical knowledge base such as WordNet. Graph-based methods reminiscent of spreading activation research of the early days of AI research have been applied with some success. More complex graph-based approaches have been shown to perform almost as well as supervised methods,[21] or even to outperform them on specific domains.[3][22] Recently, it has been reported that simple graph connectivity measures, such as degree, perform state-of-the-art WSD in the presence of a sufficiently rich lexical knowledge base.[23] Also, automatically transferring knowledge in the form of semantic relations from Wikipedia to WordNet has been shown to boost simple knowledge-based methods, enabling them to rival the best supervised systems and even outperform them in a domain-specific setting.[24] The use of selectional preferences (or selectional restrictions) is also useful: for example, knowing that one typically cooks food, one can disambiguate the word bass in "I am cooking basses" (i.e., it is not a musical instrument). Supervised methods are based on the assumption that the context can provide enough evidence on its own to disambiguate words (hence, common sense and reasoning are deemed unnecessary). Probably every machine learning algorithm has been applied to WSD, including associated techniques such as feature selection, parameter optimization, and ensemble learning. Support vector machines and memory-based learning have been shown to be the most successful approaches to date, probably because they can cope with the high dimensionality of the feature space. However, these supervised methods are subject to a new knowledge acquisition bottleneck since they rely on substantial amounts of manually sense-tagged corpora for training, which are laborious and expensive to create.
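To illustrate the dictionary-overlap idea behind the Lesk algorithm described earlier in this section, here is a simplified Python sketch; the toy gloss dictionary is invented for illustration, whereas a real system would draw glosses from a lexical resource such as WordNet.

```python
# Simplified Lesk: pick the sense whose gloss shares the most words with the context.
# The glosses below are made up for this example and are not from any real dictionary.
GLOSSES = {
    "bass": {
        "fish": "freshwater fish of the perch family caught for food",
        "music": "the lowest part in music or a low-pitched instrument",
    },
    "pine": {
        "tree": "evergreen tree with needle-shaped leaves and cones",
        "yearn": "feel a lingering sadness or longing for something",
    },
}

def simplified_lesk(word, context_words):
    """Return the sense of `word` whose gloss overlaps most with the context."""
    context = set(w.lower() for w in context_words)
    best_sense, best_overlap = None, -1
    for sense, gloss in GLOSSES[word].items():
        overlap = len(context & set(gloss.split()))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

print(simplified_lesk("bass", "I caught a huge fish while fishing for bass".split()))
# -> 'fish' (the fish gloss shares 'fish', 'caught' and 'for' with the context)
```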
Because of the lack of training data, many word sense disambiguation algorithms use semi-supervised learning, which allows both labeled and unlabeled data. The Yarowsky algorithm was an early example of such an algorithm.[25] It uses the 'one sense per collocation' and the 'one sense per discourse' properties of human languages for word sense disambiguation. From observation, words tend to exhibit only one sense in most given discourse and in a given collocation.[26] The bootstrapping approach starts from a small amount of seed data for each word: either manually tagged training examples or a small number of surefire decision rules (e.g., 'play' in the context of 'bass' almost always indicates the musical instrument). The seeds are used to train an initial classifier, using any supervised method. This classifier is then used on the untagged portion of the corpus to extract a larger training set, in which only the most confident classifications are included. The process repeats, each new classifier being trained on a successively larger training corpus, until the whole corpus is consumed, or until a given maximum number of iterations is reached. Other semi-supervised techniques use large quantities of untagged corpora to provide co-occurrence information that supplements the tagged corpora. These techniques have the potential to help in the adaptation of supervised models to different domains. Also, an ambiguous word in one language is often translated into different words in a second language depending on the sense of the word. Word-aligned bilingual corpora have been used to infer cross-lingual sense distinctions, a kind of semi-supervised system.[citation needed] Unsupervised learning is the greatest challenge for WSD researchers. The underlying assumption is that similar senses occur in similar contexts, and thus senses can be induced from text by clustering word occurrences using some measure of similarity of context,[27] a task referred to as word sense induction or discrimination. Then, new occurrences of the word can be classified into the closest induced clusters/senses. Performance has been lower than for the other methods described above, but comparisons are difficult since the senses induced must be mapped to a known dictionary of word senses. If a mapping to a set of dictionary senses is not desired, cluster-based evaluations (including measures of entropy and purity) can be performed. Alternatively, word sense induction methods can be tested and compared within an application. For instance, it has been shown that word sense induction improves Web search result clustering by increasing the quality of result clusters and the degree of diversification of result lists.[28][29] It is hoped that unsupervised learning will overcome the knowledge acquisition bottleneck because it does not depend on manual effort. Representing words considering their context through fixed-size dense vectors (word embeddings) has become one of the most fundamental blocks in several NLP systems.[30][31][32] Even though most traditional word-embedding techniques conflate words with multiple meanings into a single vector representation, they still can be used to improve WSD.[33] A simple approach to employing pre-computed word embeddings to represent word senses is to compute the centroids of sense clusters.[34][35] In addition to word-embedding techniques, lexical databases (e.g., WordNet, ConceptNet, BabelNet) can also assist unsupervised systems in mapping words and their senses as dictionaries.
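The bootstrapping loop described earlier in this section can be sketched in a few lines of Python; the seed sentences, the centroid "classifier", and all names below are invented for illustration and are much simpler than the classifiers used in practice (this is not Yarowsky's exact algorithm).

```python
# Schematic self-training / bootstrapping loop: seeds train a classifier, the
# classifier labels the untagged contexts it is most confident about, and the
# loop repeats on the enlarged training set.
from collections import Counter

def features(text):
    return Counter(text.lower().split())

def score(vec, centroid):
    # Overlap normalised by centroid size, so larger centroids are not favoured.
    return sum(vec[w] * centroid[w] for w in vec) / sum(centroid.values())

def train(labelled):
    centroids = {}
    for text, sense in labelled:                     # labelled: list of (text, sense)
        centroids.setdefault(sense, Counter()).update(features(text))
    return centroids

def bootstrap(seeds, untagged, rounds=3, per_round=1):
    labelled, pool = list(seeds), list(untagged)
    for _ in range(rounds):
        if not pool:
            break
        centroids = train(labelled)
        scored = []                                  # confidence for every untagged context
        for text in pool:
            sense, conf = max(((s, score(features(text), c)) for s, c in centroids.items()),
                              key=lambda sc: sc[1])
            scored.append((conf, text, sense))
        scored.sort(reverse=True)
        for conf, text, sense in scored[:per_round]: # keep only the most confident labels
            labelled.append((text, sense))
            pool.remove(text)
    return train(labelled)

seeds = [("play the bass guitar on stage", "music"),
         ("caught a bass in the lake", "fish")]
untagged = ["strum the bass guitar softly", "bass fishing in the lake at dawn"]
model = bootstrap(seeds, untagged)
print(sorted(model))   # ['fish', 'music'], each centroid now enriched by new examples
```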
Some techniques that combine lexical databases and word embeddings are presented in AutoExtend[36][37] and Most Suitable Sense Annotation (MSSA).[38] In AutoExtend,[37] the authors present a method that decouples an object's input representation into its properties, such as words and their word senses. AutoExtend uses a graph structure to map word (e.g. text) and non-word (e.g. synsets in WordNet) objects as nodes, and the relationships between nodes as edges. The relations (edges) in AutoExtend can express either the addition or the similarity between its nodes. The former captures the intuition behind the offset calculus,[30] while the latter defines the similarity between two nodes. In MSSA,[38] an unsupervised disambiguation system uses the similarity between word senses in a fixed context window to select the most suitable word sense using a pre-trained word-embedding model and WordNet. For each context window, MSSA calculates the centroid of each word sense definition by averaging the word vectors of its words in WordNet's glosses (i.e., short defining gloss and one or more usage examples) using a pre-trained word-embedding model. These centroids are later used to select the word sense with the highest similarity of a target word to its immediately adjacent neighbors (i.e., predecessor and successor words). After all words are annotated and disambiguated, they can be used as a training corpus in any standard word-embedding technique. In its improved version, MSSA can make use of word sense embeddings to repeat its disambiguation process iteratively. Other approaches vary in their methods. The knowledge acquisition bottleneck is perhaps the major impediment to solving the WSD problem. Unsupervised methods rely on knowledge about word senses, which is only sparsely formulated in dictionaries and lexical databases. Supervised methods depend crucially on the existence of manually annotated examples for every word sense, a requisite that can so far be met only for a handful of words for testing purposes, as is done in the Senseval exercises. One of the most promising trends in WSD research is using the largest corpus ever accessible, the World Wide Web, to acquire lexical information automatically.[50] WSD has been traditionally understood as an intermediate language engineering technology which could improve applications such as information retrieval (IR). In this case, however, the reverse is also true: web search engines implement simple and robust IR techniques that can successfully mine the Web for information to use in WSD. The historic lack of training data has provoked the appearance of some new algorithms and techniques, as described in Automatic acquisition of sense-tagged corpora. Knowledge is a fundamental component of WSD. Knowledge sources provide data which are essential to associate senses with words. They can vary from corpora of texts, either unlabeled or annotated with word senses, to machine-readable dictionaries, thesauri, glossaries, ontologies, etc. They can be classified[51][52] into structured sources (such as machine-readable dictionaries, thesauri, and ontologies) and unstructured sources (such as raw or sense-annotated corpora and collocation resources). Comparing and evaluating different WSD systems is extremely difficult, because of the different test sets, sense inventories, and knowledge resources adopted. Before the organization of specific evaluation campaigns, most systems were assessed on in-house, often small-scale, data sets. In order to test one's algorithm, developers had to spend their time annotating all word occurrences.
Moreover, comparing methods even on the same corpus is not possible if different sense inventories are used. In order to define common evaluation datasets and procedures, public evaluation campaigns have been organized. Senseval (now renamed SemEval) is an international word sense disambiguation competition, held every three years since 1998: Senseval-1 (1998), Senseval-2 (2001), Senseval-3 (2004), and its successor, SemEval (2007). The objective of the competition is to organize different tasks, prepare and hand-annotate corpora for testing systems, and perform a comparative evaluation of WSD systems in several kinds of tasks, including all-words and lexical-sample WSD for different languages and, more recently, new tasks such as semantic role labeling, gloss WSD, lexical substitution, etc. The systems submitted for evaluation to these competitions usually integrate different techniques and often combine supervised and knowledge-based methods (especially to avoid poor performance when training examples are lacking). In the years 2007–2012, the choice of WSD evaluation tasks grew, and the criteria for evaluating WSD changed drastically depending on the variant of the WSD evaluation task. As technology evolves, WSD tasks continue to grow in different flavours, towards various research directions and for more languages.
https://en.wikipedia.org/wiki/Word_sense_disambiguation