Decorrelation is a general term for any process that is used to reduce autocorrelation within a signal, or cross-correlation within a set of signals, while preserving other aspects of the signal.[citation needed] A frequently used method of decorrelation is the use of a matched linear filter to reduce the autocorrelation of a signal as far as possible. Since the minimum possible autocorrelation for a given signal energy is achieved by equalising the power spectrum of the signal to be similar to that of a white noise signal, this is often referred to as signal whitening.
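As a rough illustration of signal whitening (not part of the original article), the sketch below flattens the magnitude spectrum of a signal with NumPy; the AR(1) test signal, the FFT-based approach, and the small regularising constant are all assumptions chosen for the demonstration rather than a prescribed method.

```python
import numpy as np

rng = np.random.default_rng(0)

# A strongly autocorrelated test signal: an AR(1) process.
n = 4096
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.95 * x[t - 1] + rng.standard_normal()

def whiten(signal, eps=1e-12):
    """Flatten the power spectrum: divide each frequency bin by its magnitude."""
    spectrum = np.fft.rfft(signal)
    flat = spectrum / (np.abs(spectrum) + eps)   # unit magnitude, phase preserved
    return np.fft.irfft(flat, n=len(signal))

def autocorr_lag1(s):
    s = s - s.mean()
    return np.dot(s[:-1], s[1:]) / np.dot(s, s)

y = whiten(x)
print("lag-1 autocorrelation before:", round(autocorr_lag1(x), 3))
print("lag-1 autocorrelation after: ", round(autocorr_lag1(y), 3))
```

The lag-1 autocorrelation printed for the whitened signal should be close to zero, in contrast to the strongly correlated input.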
Most decorrelation algorithms are linear, but there are also non-linear decorrelation algorithms.
Many data compression algorithms incorporate a decorrelation stage.[citation needed] For example, many transform coders first apply a fixed linear transformation that would, on average, have the effect of decorrelating a typical signal of the class to be coded, prior to any later processing. This is typically a Karhunen–Loève transform, or a simplified approximation such as the discrete cosine transform.
By comparison, sub-band coders do not generally have an explicit decorrelation step, but instead exploit the already-existing reduced correlation within each of the sub-bands of the signal, due to the relative flatness of each sub-band of the power spectrum in many classes of signals.
Linear predictive coders can be modelled as an attempt to decorrelate signals by subtracting the best possible linear prediction from the input signal, leaving a whitened residual signal.
Decorrelation techniques can also be used for many other purposes, such as reducing crosstalk in a multi-channel signal, or in the design of echo cancellers.
In image processing, decorrelation techniques can be used to enhance or stretch the colour differences found in each pixel of an image. This is generally termed 'decorrelation stretching'.[1]
The concept of decorrelation can be applied in many other fields.
In neuroscience, decorrelation is used in the analysis of the neural networks in the human visual system.
In cryptography, it is used in cipher design (see Decorrelation theory) and in the design of hardware random number generators.
|
https://en.wikipedia.org/wiki/Decorrelation
|
A car, or an automobile, is a motor vehicle with wheels. Most definitions of cars state that they run primarily on roads, seat one to eight people, have four wheels, and mainly transport people rather than cargo.[1][2] There are around one billion cars in use worldwide.[citation needed]
The French inventor Nicolas-Joseph Cugnot built the first steam-powered road vehicle in 1769, while the Swiss inventor François Isaac de Rivaz designed and constructed the first internal combustion-powered automobile in 1808. The modern car—a practical, marketable automobile for everyday use—was invented in 1886, when the German inventor Carl Benz patented his Benz Patent-Motorwagen. Commercial cars became widely available during the 20th century. The 1901 Oldsmobile Curved Dash and the 1908 Ford Model T, both American cars, are widely considered the first mass-produced[3][4] and mass-affordable[5][6][7] cars, respectively. Cars were rapidly adopted in the US, where they replaced horse-drawn carriages.[8] In Europe and other parts of the world, demand for automobiles did not increase until after World War II.[9] In the 21st century, car usage is still increasing rapidly, especially in China, India, and other newly industrialised countries.[10][11]
Cars have controls for driving, parking, passenger comfort, and a variety of lamps. Over the decades, additional features and controls have been added to vehicles, making them progressively more complex. These include rear-reversing cameras, air conditioning, navigation systems, and in-car entertainment. Most cars in use in the early 2020s are propelled by an internal combustion engine, fueled by the combustion of fossil fuels. Electric cars, which were invented early in the history of the car, became commercially available in the 2000s and are predicted to cost less to buy than petrol-driven cars before 2025.[12][13] The transition from fossil fuel-powered cars to electric cars features prominently in most climate change mitigation scenarios,[14] such as Project Drawdown's 100 actionable solutions for climate change.[15]
There are costs and benefits to car use. The costs to the individual include acquiring the vehicle, interest payments (if the car is financed), repairs and maintenance, fuel, depreciation, driving time, parking fees, taxes, and insurance.[16] The costs to society include resources used to produce cars and fuel, maintaining roads, land use, road congestion, air pollution, noise pollution, public health, and disposing of the vehicle at the end of its life. Traffic collisions are the largest cause of injury-related deaths worldwide.[17] Personal benefits include on-demand transportation, mobility, independence, and convenience.[18] Societal benefits include economic benefits, such as job and wealth creation from the automotive industry, transportation provision, societal well-being from leisure and travel opportunities. People's ability to move flexibly from place to place has far-reaching implications for the nature of societies.[19]
The English word car is believed to originate from Latin carrus/carrum "wheeled vehicle" or (via Old North French) Middle English carre "two-wheeled cart", both of which in turn derive from Gaulish karros "chariot".[20][21] It originally referred to any wheeled horse-drawn vehicle, such as a cart, carriage, or wagon.[22] The word also occurs in other Celtic languages.[23]
"Motor car", attested from 1895, is the usual formal term in British English.[2] "Autocar", a variant likewise attested from 1895 and literally meaning "self-propelled car", is now considered archaic.[24] "Horseless carriage" is attested from 1895.[25]
"Automobile", a classical compound derived from Ancient Greek autós (αὐτός) "self" and Latin mobilis "movable", entered English from French and was first adopted by the Automobile Club of Great Britain in 1897.[26] It fell out of favour in Britain and is now used chiefly in North America,[27] where the abbreviated form "auto" commonly appears as an adjective in compound formations like "auto industry" and "auto mechanic".[28][29]
In 1649, Hans Hautsch of Nuremberg built a clockwork-driven carriage.[32][33] The first steam-powered vehicle was designed by Ferdinand Verbiest, a Flemish member of a Jesuit mission in China, around 1672. It was a 65-centimetre-long (26 in) scale-model toy for the Kangxi Emperor that was unable to carry a driver or a passenger.[18][34][35] It is not known with certainty if Verbiest's model was successfully built or run.[35]
Nicolas-Joseph Cugnot is widely credited with building the first full-scale, self-propelled mechanical vehicle in about 1769; he created a steam-powered tricycle.[36] He also constructed two steam tractors for the French Army, one of which is preserved in the French National Conservatory of Arts and Crafts.[36] His inventions were limited by problems with water supply and maintaining steam pressure.[36] In 1801, Richard Trevithick built and demonstrated his Puffing Devil road locomotive, believed by many to be the first demonstration of a steam-powered road vehicle. It was unable to maintain sufficient steam pressure for long periods and was of little practical use.
The development of external combustion (steam) engines is detailed as part of the history of the car but often treated separately from the development of cars in their modern understanding. A variety of steam-powered road vehicles were used during the first part of the 19th century, including steam cars, steam buses, phaetons, and steam rollers. In the United Kingdom, sentiment against them led to the Locomotive Acts of 1865.
In 1807, Nicéphore Niépce and his brother Claude created what was probably the world's first internal combustion engine (which they called a Pyréolophore), but installed it in a boat on the river Saône in France.[37] Coincidentally, in 1807, the Swiss inventor François Isaac de Rivaz designed his own "de Rivaz internal combustion engine", and used it to develop the world's first vehicle to be powered by such an engine. The Niépces' Pyréolophore was fuelled by a mixture of Lycopodium powder (dried spores of the Lycopodium plant), finely crushed coal dust and resin that were mixed with oil, whereas de Rivaz used a mixture of hydrogen and oxygen.[37] Neither design was successful, as was the case with others, such as Samuel Brown, Samuel Morey, and Etienne Lenoir,[38] who each built vehicles (usually adapted carriages or carts) powered by internal combustion engines.[39]
In November 1881, French inventor Gustave Trouvé demonstrated a three-wheeled car powered by electricity at the International Exposition of Electricity.[40] Although several other German engineers (including Gottlieb Daimler, Wilhelm Maybach, and Siegfried Marcus) were working on cars at about the same time, the year 1886 is regarded as the birth year of the modern car—a practical, marketable automobile for everyday use—when the German Carl Benz patented his Benz Patent-Motorwagen; he is generally acknowledged as the inventor of the car.[39][41][42]
In 1879, Benz was granted a patent for his first engine, which had been designed in 1878. Many of his other inventions made the use of the internal combustion engine feasible for powering a vehicle. His first Motorwagen was built in 1885 in Mannheim, Germany. He was awarded the patent for its invention as of his application on 29 January 1886 (under the auspices of his major company, Benz & Cie., which was founded in 1883). Benz began promotion of the vehicle on 3 July 1886, and about 25 Benz vehicles were sold between 1888 and 1893, when his first four-wheeler was introduced along with a cheaper model. They also were powered with four-stroke engines of his own design. Emile Roger of France, already producing Benz engines under license, now added the Benz car to his line of products. Because France was more open to the early cars, initially more were built and sold in France through Roger than Benz sold in Germany. In August 1888, Bertha Benz, the wife and business partner of Carl Benz, undertook the first road trip by car, to prove the road-worthiness of her husband's invention.[43]
In 1896, Benz designed and patented the first internal-combustion flat engine, called the boxermotor. During the last years of the 19th century, Benz was the largest car company in the world, with 572 units produced in 1899, and, because of its size, Benz & Cie. became a joint-stock company. The first motor car in central Europe, and one of the first factory-made cars in the world, was produced by the Czech company Nesselsdorfer Wagenbau (later renamed to Tatra) in 1897, the Präsident automobil.
Daimler and Maybach founded Daimler Motoren Gesellschaft (DMG) in Cannstatt in 1890, and sold their first car in 1892 under the brand name Daimler. It was a horse-drawn stagecoach built by another manufacturer, which they retrofitted with an engine of their design. By 1895, about 30 vehicles had been built by Daimler and Maybach, either at the Daimler works or in the Hotel Hermann, where they set up shop after disputes with their backers. Benz, Maybach, and the Daimler team seem to have been unaware of each other's early work. They never worked together; by the time of the merger of the two companies, Daimler and Maybach were no longer part of DMG. Daimler died in 1900 and later that year, Maybach designed an engine named Daimler-Mercedes that was placed in a specially ordered model built to specifications set by Emil Jellinek. This was a production of a small number of vehicles for Jellinek to race and market in his country. Two years later, in 1902, a new model DMG car was produced and the model was named Mercedes after the Maybach engine, which generated 35 hp. Maybach quit DMG shortly thereafter and opened a business of his own. Rights to the Daimler brand name were sold to other manufacturers.
In 1890, Émile Levassor and Armand Peugeot of France began producing vehicles with Daimler engines, and so laid the foundation of the automotive industry in France. In 1891, Auguste Doriot and his Peugeot colleague Louis Rigoulot completed the longest trip by a petrol-driven vehicle when their self-designed and built Daimler-powered Peugeot Type 3 completed 2,100 kilometres (1,300 mi) from Valentigney to Paris and Brest and back again. They were attached to the first Paris–Brest–Paris bicycle race, but finished six days after the winning cyclist, Charles Terront.
The first design for an American car with a petrol internal combustion engine was made in 1877 by George Selden of Rochester, New York. Selden applied for a patent for a car in 1879, but the patent application expired because the vehicle was never built. After a delay of 16 years and a series of attachments to his application, on 5 November 1895, Selden was granted a US patent (U.S. patent 549,160) for a two-stroke car engine, which hindered, more than encouraged, development of cars in the United States. His patent was challenged by Henry Ford and others, and overturned in 1911.
In 1893, the first running, petrol-driven American car was built and road-tested by the Duryea brothers of Springfield, Massachusetts. The first public run of the Duryea Motor Wagon took place on 21 September 1893, on Taylor Street in Metro Center Springfield.[44][45] Studebaker, a subsidiary of a long-established wagon and coach manufacturer, started to build cars in 1897[46]: 66 and commenced sales of electric vehicles in 1902 and petrol vehicles in 1904.[47]
In Britain, there had been several attempts to build steam cars with varying degrees of success, with Thomas Rickett even attempting a production run in 1860.[48] Santler from Malvern is recognised by the Veteran Car Club of Great Britain as having made the first petrol-driven car in the country in 1894,[49] followed by Frederick William Lanchester in 1895, but these were both one-offs.[49] The first production vehicles in Great Britain came from the Daimler Company, a company founded by Harry J. Lawson in 1896, after purchasing the right to use the name of the engines. Lawson's company made its first car in 1897, and they bore the name Daimler.[49]
In 1892, German engineer Rudolf Diesel was granted a patent for a "New Rational Combustion Engine". In 1897, he built the first diesel engine.[39] Steam-, electric-, and petrol-driven vehicles competed for a few decades, with petrol internal combustion engines achieving dominance in the 1910s. Although various pistonless rotary engine designs have attempted to compete with the conventional piston and crankshaft design, only Mazda's version of the Wankel engine has had more than very limited success. All in all, it is estimated that over 100,000 patents created the modern automobile and motorcycle.[50]
Large-scale, production-line manufacturing of affordable cars was started by Ransom Olds in 1901 at his Oldsmobile factory in Lansing, Michigan, and based upon stationary assembly line techniques pioneered by Marc Isambard Brunel at the Portsmouth Block Mills, England, in 1802. The assembly line style of mass production and interchangeable parts had been pioneered in the US by Thomas Blanchard in 1821, at the Springfield Armory in Springfield, Massachusetts.[51] This concept was greatly expanded by Henry Ford, beginning in 1913 with the world's first moving assembly line for cars at the Highland Park Ford Plant.
As a result, Ford's cars came off the line in 15-minute intervals, much faster than previous methods, increasing productivity eightfold, while using less manpower (from 12.5 man-hours to 1 hour 33 minutes).[52] It was so successful that paint became a bottleneck. Only Japan black would dry fast enough, forcing the company to drop the variety of colours available before 1913, until fast-drying Duco lacquer was developed in 1926. This is the source of Ford's apocryphal remark, "any color as long as it's black".[52] In 1914, an assembly line worker could buy a Model T with four months' pay.[52]
Ford's complex safety procedures—especially assigning each worker to a specific location instead of allowing them to roam about—dramatically reduced the rate of injury.[53] The combination of high wages and high efficiency is called "Fordism" and was copied by most major industries. The efficiency gains from the assembly line also coincided with the economic rise of the US. The assembly line forced workers to work at a certain pace with very repetitive motions, which led to more output per worker while other countries were using less productive methods.
In the automotive industry, its success was dominant, and it quickly spread worldwide, seeing the founding of Ford France and Ford Britain in 1911, Ford Denmark in 1923, and Ford Germany in 1925; in 1921, Citroën was the first native European manufacturer to adopt the production method. Soon, companies had to have assembly lines or risk going bankrupt; by 1930, 250 companies that did not have them had disappeared.[52]
Development of automotive technology was rapid, due in part to the hundreds of small manufacturers competing to gain the world's attention. Key developments included electric ignition and the electric self-starter (both by Charles Kettering, for the Cadillac Motor Company in 1910–1911), independent suspension, and four-wheel brakes.
Since the 1920s, nearly all cars have been mass-produced to meet market needs, so marketing plans often have heavily influenced car design. It was Alfred P. Sloan who established the idea of different makes of cars produced by one company, called the General Motors Companion Make Program, so that buyers could "move up" as their fortunes improved.
Reflecting the rapid pace of change, makes shared parts with one another, so larger production volume resulted in lower costs for each price range. For example, in the 1930s, LaSalles, sold by Cadillac, used cheaper mechanical parts made by Oldsmobile; in the 1950s, Chevrolet shared bonnet, doors, roof, and windows with Pontiac; by the 1990s, corporate powertrains and shared platforms (with interchangeable brakes, suspension, and other parts) were common. Even so, only major makers could afford high costs, and even companies with decades of production, such as Apperson, Cole, Dorris, Haynes, or Premier, could not manage: of some two hundred American car makers in existence in 1920, only 43 survived in 1930, and with the Great Depression, by 1940, only 17 of those were left.[52]
In Europe, much the same would happen. Morris set up its production line at Cowley in 1924, and soon outsold Ford, while beginning in 1923 to follow Ford's practice of vertical integration, buying Hotchkiss' British subsidiary (engines), Wrigley (gearboxes), and Osberton (radiators), for instance, as well as competitors, such as Wolseley: in 1925, Morris had 41 per cent of total British car production. Most British small-car assemblers, from Abbey to Xtra, had gone under. Citroën did the same in France, coming to cars in 1919; between them and other cheap cars in reply, such as Renault's 10CV and Peugeot's 5CV, they produced 550,000 cars in 1925, and Mors, Hurtu, and others could not compete.[52] Germany's first mass-manufactured car, the Opel 4PS Laubfrosch (Tree Frog), came off the line at Rüsselsheim in 1924, soon making Opel the top car builder in Germany, with 37.5 per cent of the market.[52]
In Japan, car production was very limited before World War II. Only a handful of companies were producing vehicles in limited numbers, and these were small, three-wheeled vehicles for commercial uses, like Daihatsu, or were the result of partnering with European companies, like Isuzu building the Wolseley A-9 in 1922. Mitsubishi was also partnered with Fiat and built the Mitsubishi Model A based on a Fiat vehicle. Toyota, Nissan, Suzuki, Mazda, and Honda began as companies producing non-automotive products before the war, switching to car production during the 1950s. Kiichiro Toyoda's decision to take Toyoda Loom Works into automobile manufacturing would create what would eventually become Toyota Motor Corporation, the largest automobile manufacturer in the world. Subaru, meanwhile, was formed from a conglomerate of six companies who banded together as Fuji Heavy Industries, as a result of having been broken up under keiretsu legislation.
Most cars in use in the early 2020s run on petrol burnt in an internal combustion engine (ICE). Some cities ban older, more polluting petrol-driven cars, and some countries plan to ban sales in the future. However, some environmental groups say this phase-out of fossil fuel vehicles must be brought forward to limit climate change. Production of petrol-fuelled cars peaked in 2017.[55][56]
Other hydrocarbon fossil fuels also burnt by deflagration (rather than detonation) in ICE cars include diesel, autogas, and CNG. Removal of fossil fuel subsidies,[57][58] concerns about oil dependence, tightening environmental laws and restrictions on greenhouse gas emissions are propelling work on alternative power systems for cars. This includes hybrid vehicles, plug-in electric vehicles and hydrogen vehicles. Out of all cars sold in 2021, nine per cent were electric, and by the end of that year there were more than 16 million electric cars on the world's roads.[59] Despite rapid growth, less than two per cent of cars on the world's roads were fully electric and plug-in hybrid cars by the end of 2021.[59] Cars for racing or speed records have sometimes employed jet or rocket engines, but these are impractical for common use. Oil consumption has increased rapidly in the 20th and 21st centuries because there are more cars; the 1980s oil glut even fuelled the sales of low-economy vehicles in OECD countries. The BRIC countries are adding to this consumption.[citation needed]
In almost all hybrid (even mild hybrid) and pure electric cars, regenerative braking recovers and returns to a battery some energy which would otherwise be wasted by friction brakes getting hot.[60] Although all cars must have friction brakes (front disc brakes and either disc or drum rear brakes[61]) for emergency stops, regenerative braking improves efficiency, particularly in city driving.[62]
Cars are equipped with controls used for driving, passenger comfort, and safety, normally operated by a combination of the use of feet and hands, and occasionally by voice on 21st-century cars. These controls include a steering wheel, pedals for operating the brakes and controlling the car's speed (and, in a manual transmission car, a clutch pedal), a shift lever or stick for changing gears, and a number of buttons and dials for turning on lights, ventilation, and other functions. Modern cars' controls are now standardised, such as the location for the accelerator and brake, but this was not always the case. Controls are evolving in response to new technologies, for example, the electric car and the integration of mobile communications.
Some of the original controls are no longer required. For example, all cars once had controls for the choke valve, clutch, ignition timing, and a crank instead of an electric starter. However, new controls have also been added to vehicles, making them more complex. These include air conditioning, navigation systems, and in-car entertainment. Another trend is the replacement of physical knobs and switches for secondary controls with touchscreen controls such as BMW's iDrive and Ford's MyFord Touch. Another change is that while early cars' pedals were physically linked to the brake mechanism and throttle, in the early 2020s, cars have increasingly replaced these physical linkages with electronic controls.
Cars are typically equipped with interior lighting which can be toggled manually or be set to light up automatically with doors open, an entertainment system which originated from car radios, sideways windows which can be lowered or raised electrically (manually on earlier cars), and one or multiple auxiliary power outlets for supplying portable appliances such as mobile phones, portable fridges, power inverters, and electrical air pumps from the on-board electrical system.[63][64][a] More costly upper-class and luxury cars are equipped with features earlier, such as massage seats and collision avoidance systems.[65][66]
Dedicated automotive fuses and circuit breakers prevent damage from electrical overload.
Cars are typically fitted with multiple types of lights. These include headlights, which are used to illuminate the way ahead and make the car visible to other users, so that the vehicle can be used at night; in some jurisdictions, daytime running lights; red brake lights to indicate when the brakes are applied; amber turn signal lights to indicate the turn intentions of the driver; white-coloured reverse lights to illuminate the area behind the car (and indicate that the driver will be or is reversing); and on some vehicles, additional lights (e.g., side marker lights) to increase the visibility of the car. Interior lights on the ceiling of the car are usually fitted for the driver and passengers. Some vehicles also have a boot light and, more rarely, an engine compartment light.
During the late 20th and early 21st century, cars increased in weight due to batteries,[68] modern steel safety cages, anti-lock brakes, airbags, and "more-powerful—if more efficient—engines"[69] and, as of 2019, typically weigh between 1 and 3 tonnes (1.1 and 3.3 short tons; 0.98 and 2.95 long tons).[70] Heavier cars are safer for the driver from a crash perspective, but more dangerous for other vehicles and road users.[69] The weight of a car influences fuel consumption and performance, with more weight resulting in increased fuel consumption and decreased performance. The Wuling Hongguang Mini EV, a typical city car, weighs about 700 kilograms (1,500 lb). Heavier cars include SUVs and extended-length SUVs like the Suburban. Cars have also become wider.[71]
Some places tax heavier cars more:[72] as well as improving pedestrian safety, this can encourage manufacturers to use materials such as recycled aluminium instead of steel.[73] It has been suggested that one benefit of subsidising charging infrastructure is that cars can use lighter batteries.[74]
Most cars are designed to carry multiple occupants, often with four or five seats. Cars with five seats typically seat two passengers in the front and three in the rear. Full-size cars and large sport utility vehicles can often carry six, seven, or more occupants depending on the arrangement of the seats. On the other hand, sports cars are most often designed with only two seats. Utility vehicles like pickup trucks combine seating with extra cargo or utility functionality. The differing needs for passenger capacity and their luggage or cargo space has resulted in the availability of a large variety of body styles to meet individual consumer requirements that include, among others, the sedan/saloon, hatchback, station wagon/estate, coupe, and minivan.
Traffic collisions are the largest cause of injury-related deaths worldwide.[17] Mary Ward became one of the first documented car fatalities in 1869 in Parsonstown, Ireland,[75] and Henry Bliss one of the US's first pedestrian car casualties in 1899 in New York City.[76] There are now standard tests for safety in new cars, such as the Euro and US NCAP tests,[77] and insurance-industry-backed tests by the Insurance Institute for Highway Safety (IIHS).[78] However, not all such tests consider the safety of people outside the car, such as drivers of other cars, pedestrians and cyclists.[79]
The costs of car usage, which may include the cost of acquiring the vehicle, repairs and auto maintenance, fuel, depreciation, driving time, parking fees, taxes, and insurance,[16] are weighed against the cost of the alternatives, and the value of the benefits—perceived and real—of vehicle usage. The benefits may include on-demand transportation, mobility, independence, and convenience,[18] and emergency power.[81] During the 1920s, cars had another benefit: "[c]ouples finally had a way to head off on unchaperoned dates, plus they had a private space to snuggle up close at the end of the night."[82]
Similarly, the costs to society of car use may include maintaining roads, land use, air pollution, noise pollution, road congestion, public health, health care, and disposing of the vehicle at the end of its life; these can be balanced against the value of the benefits to society that car use generates. Societal benefits may include economic benefits, such as job and wealth creation from car production and maintenance, transportation provision, societal well-being derived from leisure and travel opportunities, and revenue generation from tax opportunities. The ability of humans to move flexibly from place to place has far-reaching implications for the nature of societies.[19]
Car production and use has a large number of environmental impacts: it causes local air pollution and plastic pollution, and contributes to greenhouse gas emissions and climate change.[85] Cars and vans caused 10% of energy-related carbon dioxide emissions in 2022.[86] As of 2023, electric cars produce about half the emissions over their lifetime as diesel and petrol cars. This is set to improve as countries produce more of their electricity from low-carbon sources.[87] Cars consume almost a quarter of world oil production as of 2019.[55] Cities planned around cars are often less dense, which leads to further emissions, as they are less walkable, for instance.[85] A growing demand for large SUVs is driving up emissions from cars.[88]
Cars are a major cause of air pollution,[89] which stems from exhaust gas in diesel and petrol cars and from dust from brakes, tyres, and road wear. Electric cars do not produce tailpipe emissions, but are generally heavier and therefore produce slightly more particulate matter.[90] Heavy metals and microplastics (from tyres) are also released into the environment, during production, use and at the end of life. Mining related to car manufacturing and oil spills both cause water pollution.[85]
Animals and plants are often negatively affected by cars via habitat destruction and fragmentation from the road network and pollution. Animals are also killed every year on roads by cars, referred to as roadkill.[85] More recent road developments include significant environmental mitigation in their designs, such as green bridges (designed to allow wildlife crossings) and the creation of wildlife corridors.
Governments use fiscal policies, such as road tax, to discourage the purchase and use of more polluting cars;[91] vehicle emission standards ban the sale of new highly polluting cars.[92] Many countries plan to stop selling fossil-fuel cars altogether between 2025 and 2050.[93] Various cities have implemented low-emission zones, banning old fossil fuel cars, and Amsterdam is planning to ban fossil fuel cars completely.[94][95] Some cities make it easier for people to choose other forms of transport, such as cycling.[94] Many Chinese cities limit licensing of fossil fuel cars.[96]
Mass production of personal motor vehicles in the United States and other developed countries with extensive territories, such as Australia, Argentina, and France, vastly increased individual and group mobility and greatly increased and expanded economic development in urban, suburban, exurban and rural areas.[citation needed] Growth in the popularity of cars and commuting has led to traffic congestion.[97] Moscow, Istanbul, Bogotá, Mexico City and São Paulo were the world's most congested cities in 2018 according to INRIX, a data analytics company.[98]
In the United States, the transport divide and car dependency resulting from domination of car-based transport systems presents barriers to employment in low-income neighbourhoods,[99] with many low-income individuals and families forced to run cars they cannot afford in order to maintain their income.[100] Dependency on automobiles by African Americans may result in exposure to the hazards of driving while black and other types of racial discrimination related to buying, financing and insuring them.[101]
Air pollution from cars increases the risk of lung cancer and heart disease. It can also harm pregnancies: more children are born too early or with lower birth weight.[85] Children are especially vulnerable to air pollution, as their bodies are still developing, and air pollution in children is linked to the development of asthma, childhood cancer, and neurocognitive issues such as autism.[102][85] The growth in popularity of the car allowed cities to sprawl, therefore encouraging more travel by car, resulting in inactivity and obesity, which in turn can lead to increased risk of a variety of diseases.[103] When places are designed around cars, children have fewer opportunities to go places by themselves, and lose opportunities to become more independent.[104][85]
Although intensive development of conventional battery electric vehicles is continuing into the 2020s,[105] other car propulsion technologies that are under development include wireless charging,[106] hydrogen cars,[107][108] and hydrogen/electric hybrids.[109] Research into alternative forms of power includes using ammonia instead of hydrogen in fuel cells.[110]
New materials which may replace steel car bodies include aluminium,[111] fiberglass, carbon fiber, biocomposites, and carbon nanotubes.[112] Telematics technology is allowing more and more people to share cars, on a pay-as-you-go basis, through car share and carpool schemes. Communication is also evolving due to connected car systems.[113] Open-source cars are not widespread.[114]
Fully autonomous vehicles, also known as driverless cars, already exist as robotaxis[115][116] but have a long way to go before they are in general use.[117]
Car-share arrangements and carpooling are also increasingly popular, in the US and Europe.[118] For example, in the US, some car-sharing services experienced double-digit growth in revenue and membership between 2006 and 2007. Services like car sharing allow residents to "share" a vehicle rather than own a car in already congested neighbourhoods.[119]
The automotive industry designs, develops, manufactures, markets, and sells the world's motor vehicles, more than three-quarters of which are cars. In 2020, there were 56 million cars manufactured worldwide,[120] down from 67 million the previous year.[121] The automotive industry in China produces by far the most (20 million in 2020), followed by Japan (seven million), then Germany, South Korea and India.[122] The largest market is China, followed by the US.
Around the world, there are about a billion cars on the road;[123] they burn over a trillion litres (0.26×10^12 US gal; 0.22×10^12 imp gal) of petrol and diesel fuel yearly, consuming about 50 exajoules (14,000 TWh) of energy.[124] The numbers of cars are increasing rapidly in China and India.[125] In the opinion of some, urban transport systems based around the car have proved unsustainable, consuming excessive energy, affecting the health of populations, and delivering a declining level of service despite increasing investment. Many of these negative effects fall disproportionately on those social groups who are also least likely to own and drive cars.[126][127] The sustainable transport movement focuses on solutions to these problems. The car industry is also facing increasing competition from the public transport sector, as some people re-evaluate their private vehicle usage. In July 2021, the European Commission introduced the "Fit for 55" legislation package, outlining crucial directives for the automotive sector's future.[128][129] According to this package, by 2035, all newly sold cars in the European market must be zero-emissions vehicles.[130][131][132]
Established alternatives for some aspects of car use include public transport such as buses, trolleybuses, trains, subways, tramways, light rail, cycling, and walking. Bicycle sharing systems have been established in China and many European cities, including Copenhagen and Amsterdam. Similar programmes have been developed in large US cities.[133][134] Additional individual modes of transport, such as personal rapid transit, could serve as an alternative to cars if they prove to be socially accepted.[135] A study which examined the costs and benefits of introducing a Low Traffic Neighbourhood in London found that the benefits outweigh the costs by approximately 100 times over the first 20 years, and that the difference grows over time.[136]
|
https://en.wikipedia.org/wiki/Automobile
|
In cryptography, a Feistel cipher (also known as a Luby–Rackoff block cipher) is a symmetric structure used in the construction of block ciphers, named after the German-born physicist and cryptographer Horst Feistel, who did pioneering research while working for IBM; it is also commonly known as a Feistel network. A large number of block ciphers use the scheme, including the US Data Encryption Standard, the Soviet/Russian GOST and the more recent Blowfish and Twofish ciphers. In a Feistel cipher, encryption and decryption are very similar operations, and both consist of iteratively running a function called a "round function" a fixed number of times.
Many modern symmetric block ciphers are based on Feistel networks. Feistel networks were first seen commercially in IBM's Lucifer cipher, designed by Horst Feistel and Don Coppersmith in 1973. Feistel networks gained respectability when the U.S. Federal Government adopted the DES (a cipher based on Lucifer, with changes made by the NSA) in 1976. Like other components of the DES, the iterative nature of the Feistel construction makes implementing the cryptosystem in hardware easier (particularly on the hardware available at the time of DES's design).
A Feistel network uses a round function, a function which takes two inputs – a data block and a subkey – and returns one output of the same size as the data block.[1] In each round, the round function is run on half of the data to be encrypted, and its output is XORed with the other half of the data. This is repeated a fixed number of times, and the final output is the encrypted data. An important advantage of Feistel networks compared to other cipher designs such as substitution–permutation networks is that the entire operation is guaranteed to be invertible (that is, encrypted data can be decrypted), even if the round function is not itself invertible. The round function can be made arbitrarily complicated, since it does not need to be designed to be invertible.[2]: 465 [3]: 347 Furthermore, the encryption and decryption operations are very similar, even identical in some cases, requiring only a reversal of the key schedule. Therefore, the size of the code or circuitry required to implement such a cipher is nearly halved. Unlike substitution–permutation networks, Feistel networks also do not depend on a substitution box that could cause timing side-channels in software implementations.
The structure and properties of Feistel ciphers have been extensively analyzed by cryptographers.
Michael Luby and Charles Rackoff analyzed the Feistel cipher construction and proved that if the round function is a cryptographically secure pseudorandom function, with K_i used as the seed, then 3 rounds are sufficient to make the block cipher a pseudorandom permutation, while 4 rounds are sufficient to make it a "strong" pseudorandom permutation (which means that it remains pseudorandom even to an adversary who gets oracle access to its inverse permutation).[4] Because of this very important result of Luby and Rackoff, Feistel ciphers are sometimes called Luby–Rackoff block ciphers.
Further theoretical work has generalized the construction somewhat and given more precise bounds for security.[5][6]
Let F be the round function and let K_0, K_1, …, K_n be the sub-keys for the rounds 0, 1, …, n respectively.
Then the basic operation is as follows:
Split the plaintext block into two equal pieces: (L_0, R_0).
For each round i = 0, 1, …, n, compute
L_{i+1} = R_i
R_{i+1} = L_i ⊕ F(R_i, K_i),
where ⊕ means XOR. Then the ciphertext is (R_{n+1}, L_{n+1}).
Decryption of a ciphertext (R_{n+1}, L_{n+1}) is accomplished by computing, for i = n, n−1, …, 0,
R_i = L_{i+1}
L_i = R_{i+1} ⊕ F(L_{i+1}, K_i).
Then (L_0, R_0) is the plaintext again.
The diagram illustrates both encryption and decryption. Note the reversal of the subkey order for decryption; this is the only difference between encryption and decryption.
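To make the construction concrete, here is a minimal sketch of a balanced Feistel network in Python (not from the source, and not any standardised cipher): the 32-bit toy round function and the example subkeys are arbitrary assumptions, chosen only to show that decryption reverses encryption even though the round function itself is not invertible.

```python
def round_function(half: int, subkey: int) -> int:
    # Deliberately non-invertible toy round function on 32-bit halves.
    return ((half * 2654435761) ^ subkey) & 0xFFFFFFFF

def feistel_encrypt(left: int, right: int, subkeys):
    for k in subkeys:
        left, right = right, left ^ round_function(right, k)
    # Return (R_{n+1}, L_{n+1}) as in the description above.
    return right, left

def feistel_decrypt(left: int, right: int, subkeys):
    # Same structure as encryption, with the subkeys in reverse order.
    for k in reversed(subkeys):
        left, right = right, left ^ round_function(right, k)
    return right, left

subkeys = [0xA5A5A5A5, 0x3C3C3C3C, 0x0F0F0F0F, 0xF0F0F0F0]  # made-up example keys
plaintext = (0x01234567, 0x89ABCDEF)
ciphertext = feistel_encrypt(*plaintext, subkeys)
assert feistel_decrypt(*ciphertext, subkeys) == plaintext
print("ciphertext:", tuple(hex(h) for h in ciphertext))
```

Because each round only XORs the round-function output into one half, running the same loop with the subkeys in reverse order undoes each round, which is the property described above.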
Unbalanced Feistel ciphers use a modified structure where L_0 and R_0 are not of equal lengths.[7] The Skipjack cipher is an example of such a cipher. The Texas Instruments digital signature transponder uses a proprietary unbalanced Feistel cipher to perform challenge–response authentication.[8]
The Thorp shuffle is an extreme case of an unbalanced Feistel cipher in which one side is a single bit. This has better provable security than a balanced Feistel cipher but requires more rounds.[9]
The Feistel construction is also used in cryptographic algorithms other than block ciphers. For example, the optimal asymmetric encryption padding (OAEP) scheme uses a simple Feistel network to randomize ciphertexts in certain asymmetric-key encryption schemes.
A generalized Feistel algorithm can be used to create strong permutations on small domains of size not a power of two (see format-preserving encryption).[9]
Whether the entire cipher is a Feistel cipher or not, Feistel-like networks can be used as a component of a cipher's design. For example, MISTY1 is a Feistel cipher using a three-round Feistel network in its round function, Skipjack is a modified Feistel cipher using a Feistel network in its G permutation, and Threefish (part of Skein) is a non-Feistel block cipher that uses a Feistel-like MIX function.
|
https://en.wikipedia.org/wiki/Feistel_network
|
In statistics and machine learning, leakage (also known as data leakage or target leakage) is the use of information in the model training process which would not be expected to be available at prediction time, causing the predictive scores (metrics) to overestimate the model's utility when run in a production environment.[1]
Leakage is often subtle and indirect, making it hard to detect and eliminate. Leakage can cause a statistician or modeler to select a suboptimal model, which could be outperformed by a leakage-free model.[1]
Leakage can occur in many steps in the machine learning process. The leakage causes can be sub-classified into two possible sources of leakage for a model: features and training examples.[1]
Feature or column-wise leakage is caused by the inclusion of columns which are one of the following: a duplicate label, a proxy for the label, or the label itself. These features, known as anachronisms, will not be available when the model is used for predictions, and result in leakage if included when the model is trained.[2]
For example, including a "MonthlySalary" column when predicting "YearlySalary"; or "MinutesLate" when predicting "IsLate".
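The effect can be made concrete with a small synthetic experiment (an illustrative sketch, not from the source): a column that is effectively a noisy copy of the label is added to otherwise ordinary features, and the leaky model's test score becomes unrealistically high. The data, the model choice, and the scikit-learn usage here are assumptions made for the demonstration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = rng.standard_normal((n, 3))                       # legitimate features
y = (X @ np.array([0.5, -0.3, 0.2]) + rng.standard_normal(n) > 0).astype(int)

# Leaky column: a noisy copy of the label itself (an "anachronism"
# that would not exist at prediction time).
leak = y + 0.01 * rng.standard_normal(n)
X_leaky = np.column_stack([X, leak])

for name, data in [("with leaky feature", X_leaky), ("without leaky feature", X)]:
    X_tr, X_te, y_tr, y_te = train_test_split(data, y, random_state=0)
    acc = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: test accuracy = {acc:.2f}")
```

The near-perfect score with the leaky column would not be achievable in production, where that column does not exist.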
Row-wise leakage is caused by improper sharing of information between rows of data. Types of row-wise leakage include:
A 2023 review found data leakage to be "a widespread failure mode in machine-learning (ML)-based science", having affected at least 294 academic publications across 17 disciplines, and causing a potentialreproducibility crisis.[5]
Data leakage in machine learning can be detected through various methods, focusing on performance analysis, feature examination, data auditing, and model behavior analysis. Performance-wise, unusually high accuracy or significant discrepancies between training and test results often indicate leakage.[6] Inconsistent cross-validation outcomes may also signal issues.
Feature examination involves scrutinizing feature importance rankings and ensuring temporal integrity in time series data. A thorough audit of the data pipeline is crucial, reviewing pre-processing steps, feature engineering, and data splitting processes.[7] Detecting duplicate entries across dataset splits is also important.
For language models, the Min-K% method can detect whether data was present in a pretraining dataset. It takes a sentence suspected to be in the pretraining dataset, computes the log-likelihood of each token, and then computes the average of the lowest K% of these. If this average exceeds a threshold, the sentence is likely present.[8][9] This method is improved by comparing against a baseline of the mean and variance.[10]
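A minimal sketch of the Min-K% scoring step is shown below; it assumes the per-token log-likelihoods have already been obtained from the language model, and the value of K, the example numbers, and the decision threshold are all made up for illustration.

```python
import numpy as np

def min_k_percent_score(token_logprobs, k=0.2):
    """Average log-likelihood of the lowest k fraction of tokens.

    `token_logprobs` are the per-token log-likelihoods of the suspect sentence
    under the language model (obtained however the model exposes them).
    """
    logprobs = np.sort(np.asarray(token_logprobs))          # ascending order
    n_lowest = max(1, int(len(logprobs) * k))
    return logprobs[:n_lowest].mean()

# Hypothetical per-token log-likelihoods for two sentences.
seen_sentence   = [-0.4, -0.1, -0.3, -0.2, -0.5, -0.1]   # plausibly memorised
unseen_sentence = [-2.1, -0.3, -4.0, -0.8, -3.5, -1.2]

THRESHOLD = -1.0   # would be calibrated on known members/non-members
for name, lp in [("seen", seen_sentence), ("unseen", unseen_sentence)]:
    score = min_k_percent_score(lp)
    verdict = "likely in pretraining data" if score > THRESHOLD else "likely not"
    print(f"{name}: Min-K% score = {score:.2f} -> {verdict}")
```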
Analyzing model behavior can reveal leakage. Models relying heavily on counter-intuitive features or showing unexpected prediction patterns warrant investigation. Performance degradation over time when tested on new data may suggest earlier inflated metrics due to leakage.
Advanced techniques include backward feature elimination, where suspicious features are temporarily removed to observe performance changes. Using a separate hold-out dataset for final validation before deployment is advisable.[7]
|
https://en.wikipedia.org/wiki/Leakage_(machine_learning)
|
Post-Quantum Cryptography Standardization[1] is a program and competition by NIST to update their standards to include post-quantum cryptography.[2] It was announced at PQCrypto 2016.[3] 23 signature schemes and 59 encryption/KEM schemes were submitted by the initial submission deadline at the end of 2017,[4] of which 69 total were deemed complete and proper and participated in the first round. Seven of these, of which 3 are signature schemes, have advanced to the third round, which was announced on July 22, 2020.[citation needed]
On August 13, 2024, NIST released final versions of the first three Post Quantum Crypto Standards: FIPS 203, FIPS 204, and FIPS 205.[5]
Academic research on the potential impact of quantum computing dates back to at least 2001.[6] A NIST-published report from April 2016 cites experts who acknowledge the possibility of quantum technology rendering the commonly used RSA algorithm insecure by 2030.[7] As a result, a need to standardize quantum-secure cryptographic primitives was pursued. Since most symmetric primitives are relatively easy to modify in a way that makes them quantum resistant, efforts have focused on public-key cryptography, namely digital signatures and key encapsulation mechanisms. In December 2016 NIST initiated a standardization process by announcing a call for proposals.[8]
The competition is now in its third round out of an expected four, where in each round some algorithms are discarded and others are studied more closely. NIST hopes to publish the standardization documents by 2024, but may speed up the process if major breakthroughs in quantum computing are made.
It is currently undecided whether the future standards will be published as FIPS or as NIST Special Publication (SP).
Under consideration were:[9] (strikethrough means it had been withdrawn)
Candidates moving on to the second round were announced on January 30, 2019. They are:[33]
On July 22, 2020, NIST announced seven finalists ("first track"), as well as eight alternate algorithms ("second track"). The first track contains the algorithms which appear to have the most promise, and will be considered for standardization at the end of the third round. Algorithms in the second track could still become part of the standard, after the third round ends.[53] NIST expects some of the alternate candidates to be considered in a fourth round. NIST also suggests it may re-open the signature category for new scheme proposals in the future.[54]
On June 7–9, 2021, NIST conducted the third PQC standardization conference, virtually.[55] The conference included candidates' updates and discussions on implementations, on performance, and on security issues of the candidates. A small amount of focus was spent on intellectual property concerns.
After NIST's announcement regarding the finalists and the alternate candidates, various intellectual property concerns were voiced, notably surrounding lattice-based schemes such as Kyber and NewHope. NIST holds signed statements from submitting groups clearing any legal claims, but there is still a concern that third parties could raise claims. NIST claims that they will take such considerations into account while picking the winning algorithms.[56]
During this round, some candidates were shown to be vulnerable to certain attack vectors. This forced those candidates to adapt accordingly:
On July 5, 2022, NIST announced the first group of winners from its six-year competition.[60][61]
On July 5, 2022, NIST announced four candidates for PQC Standardization Round 4.[62]
On August 13, 2024, NIST released final versions of its first three Post Quantum Crypto Standards.[5] According to the release announcement:
While there have been no substantive changes made to the standards since the draft versions, NIST has changed the algorithms’ names to specify the versions that appear in the three finalized standards, which are:
On March 11, 2025, NIST released HQC as the fifth algorithm for post-quantum asymmetric encryption, as used for key encapsulation/exchange.[66] The new algorithm serves as a backup for ML-KEM, the main algorithm for general encryption. HQC is based on different mathematics than ML-KEM, mitigating the risk should a weakness be found in the latter.[67] The draft standard incorporating the HQC algorithm is expected in early 2026, with the final version in 2027.
NIST received 50 submissions and deemed 40 to be complete and proper according to the submission requirements.[68] Under consideration are:[69] (strikethrough means it has been withdrawn)
NIST deemed 14 submissions to pass to the second round.[127]
|
https://en.wikipedia.org/wiki/NIST_Post-Quantum_Cryptography_Standardization
|
In mathematics, an invariant subspace of a linear mapping T : V → V, i.e. from some vector space V to itself, is a subspace W of V that is preserved by T. More generally, an invariant subspace for a collection of linear mappings is a subspace preserved by each mapping individually.
Consider a vector space V and a linear map T : V → V. A subspace W ⊆ V is called an invariant subspace for T, or equivalently, T-invariant, if T transforms any vector v ∈ W back into W. In formulas, this can be written v ∈ W ⟹ T(v) ∈ W, or[1] TW ⊆ W.
In this case, T restricts to an endomorphism of W:[2] T|_W : W → W; T|_W(w) = T(w).
The existence of an invariant subspace also has a matrix formulation. Pick a basis C for W and complete it to a basis B of V. With respect to B, the operator T has the form
$$T = \begin{bmatrix} T|_W & T_{12} \\ 0 & T_{22} \end{bmatrix}$$
for some T_12 and T_22, where T|_W here denotes the matrix of T|_W with respect to the basis C.
Any linear map T : V → V admits the following invariant subspaces: the whole space V itself, and the zero subspace {0}.
These are the improper and trivial invariant subspaces, respectively. Certain linear operators have no proper non-trivial invariant subspace: for instance, rotation of a two-dimensional real vector space. However, the axis of a rotation in three dimensions is always an invariant subspace.
If U is a 1-dimensional invariant subspace for an operator T with vector v ∈ U, then the vectors v and Tv must be linearly dependent. Thus ∀ v ∈ U ∃ α ∈ ℝ : Tv = αv. In fact, the scalar α does not depend on v.
The equation above formulates an eigenvalue problem. Any eigenvector for T spans a 1-dimensional invariant subspace, and vice versa. In particular, a nonzero invariant vector (i.e. a fixed point of T) spans an invariant subspace of dimension 1.
As a consequence of the fundamental theorem of algebra, every linear operator on a nonzero finite-dimensional complex vector space has an eigenvector. Therefore, every such linear operator in at least two dimensions has a proper non-trivial invariant subspace.
Determining whether a given subspace W is invariant under T is ostensibly a problem of geometric nature. Matrix representation allows one to phrase this problem algebraically.
Write V as the direct sum W ⊕ W′; a suitable W′ can always be chosen by extending a basis of W. The associated projection operator P onto W then has the block-matrix representation
$$P = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} : W \oplus W' \to W \oplus W'.$$
A straightforward calculation shows that W is T-invariant if and only if PTP = TP.
If 1 is the identity operator, then 1 − P is the projection onto W′. The equation TP = PT holds if and only if both im(P) and im(1 − P) are invariant under T. In that case, T has the matrix representation
$$T = \begin{bmatrix} T_{11} & 0 \\ 0 & T_{22} \end{bmatrix} : \operatorname{im}(P) \oplus \operatorname{im}(1-P) \to \operatorname{im}(P) \oplus \operatorname{im}(1-P).$$
Colloquially, a projection that commutes with T "diagonalizes" T.
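The criterion PTP = TP is easy to check numerically. The small NumPy sketch below (an illustrative example, not from the source) uses a block upper-triangular operator on R^3 for which W = span{e1, e2} is invariant but its complement span{e3} is not.

```python
import numpy as np

# Operator in block upper-triangular form: span{e1, e2} is T-invariant.
T = np.array([[2.0, 1.0,  5.0],
              [0.0, 3.0, -4.0],
              [0.0, 0.0,  7.0]])

# Orthogonal projection onto W = span{e1, e2}.
P = np.diag([1.0, 1.0, 0.0])
Q = np.eye(3) - P          # projection onto span{e3}

print("W invariant under T: ", np.allclose(P @ T @ P, T @ P))   # True
print("W' invariant under T:", np.allclose(Q @ T @ Q, T @ Q))   # False: T is not block diagonal
```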
As the above examples indicate, the invariant subspaces of a given linear transformation T shed light on the structure of T. When V is a finite-dimensional vector space over an algebraically closed field, linear transformations acting on V are characterized (up to similarity) by the Jordan canonical form, which decomposes V into invariant subspaces of T. Many fundamental questions regarding T can be translated to questions about invariant subspaces of T.
The set of T-invariant subspaces of V is sometimes called the invariant-subspace lattice of T and written Lat(T). As the name suggests, it is a (modular) lattice, with meets and joins given by (respectively) set intersection and linear span. A minimal element in Lat(T) is said to be a minimal invariant subspace.
In the study of infinite-dimensional operators, Lat(T) is sometimes restricted to only the closed invariant subspaces.
Given a collection 𝒯 of operators, a subspace is called 𝒯-invariant if it is invariant under each T ∈ 𝒯.
As in the single-operator case, the invariant-subspace lattice of 𝒯, written Lat(𝒯), is the set of all 𝒯-invariant subspaces, and bears the same meet and join operations. Set-theoretically, it is the intersection Lat(𝒯) = ⋂_{T ∈ 𝒯} Lat(T).
Let End(V) be the set of all linear operators on V. Then Lat(End(V)) = {0, V}.
Given a representation of a group G on a vector space V, we have a linear transformation T(g) : V → V for every element g of G. If a subspace W of V is invariant with respect to all these transformations, then it is a subrepresentation and the group G acts on W in a natural way. The same construction applies to representations of an algebra.
As another example, let T ∈ End(V) and Σ be the algebra generated by {1, T}, where 1 is the identity operator. Then Lat(T) = Lat(Σ).
Just as the fundamental theorem of algebra ensures that every linear transformation acting on a finite-dimensional complex vector space has a non-trivial invariant subspace, the fundamental theorem of noncommutative algebra asserts that Lat(Σ) contains non-trivial elements for certain Σ.
Theorem (Burnside) — Assume V is a complex vector space of finite dimension. For every proper subalgebra Σ of End(V), Lat(Σ) contains a non-trivial element.
One consequence is that every commuting family in L(V) can be simultaneously upper-triangularized. To see this, note that an upper-triangular matrix representation corresponds to a flag of invariant subspaces, that a commuting family generates a commuting algebra, and that End(V) is not commutative when dim(V) ≥ 2.
If A is an algebra, one can define a left regular representation Φ on A by Φ(a)b = ab; this Φ is a homomorphism from A to L(A), the algebra of linear transformations on A.
The invariant subspaces of Φ are precisely the left ideals ofA. A left idealMofAgives a subrepresentation ofAonM.
IfMis a leftidealofAthen the left regular representation Φ onMnow descends to a representation Φ' on thequotient vector spaceA/M. If [b] denotes anequivalence classinA/M, Φ'(a)[b] = [ab]. The kernel of the representation Φ' is the set {a∈A|ab∈Mfor allb}.
The representation Φ' isirreducibleif and only ifMis amaximalleft ideal, since a subspaceV⊂A/Mis an invariant under {Φ'(a) |a∈A} if and only if itspreimageunder thequotient map,V+M, is a left ideal inA.
The invariant subspace problem concerns the case whereVis a separableHilbert spaceover thecomplex numbers, of dimension > 1, andTis abounded operator. The problem is to decide whether every suchThas a non-trivial, closed, invariant subspace. It is unsolved.
In the more general case whereVis assumed to be aBanach space,Per Enflo(1976) found an example of an operator without an invariant subspace. A concrete example of an operator without an invariant subspace was produced in 1985 byCharles Read.
Related to invariant subspaces are so-called almost-invariant-halfspaces (AIHS's). A closed subspaceY{\displaystyle Y}of a Banach spaceX{\displaystyle X}is said to bealmost-invariantunder an operatorT∈B(X){\displaystyle T\in {\mathcal {B}}(X)}ifTY⊆Y+E{\displaystyle TY\subseteq Y+E}for some finite-dimensional subspaceE{\displaystyle E}; equivalently,Y{\displaystyle Y}is almost-invariant underT{\displaystyle T}if there is afinite-rank operatorF∈B(X){\displaystyle F\in {\mathcal {B}}(X)}such that(T+F)Y⊆Y{\displaystyle (T+F)Y\subseteq Y}, i.e. ifY{\displaystyle Y}is invariant (in the usual sense) underT+F{\displaystyle T+F}. In this case, the minimum possible dimension ofE{\displaystyle E}(or rank ofF{\displaystyle F}) is called thedefect.
Clearly, every finite-dimensional and finite-codimensional subspace is almost-invariant under every operator. Thus, to make things non-trivial, we say that Y is a halfspace whenever it is a closed subspace with infinite dimension and infinite codimension.
The AIHS problem asks whether every operator admits an AIHS. In the complex setting it has already been solved; that is, if X is a complex infinite-dimensional Banach space and T ∈ B(X), then T admits an AIHS of defect at most 1. It is not currently known whether the same holds if X is a real Banach space. However, some partial results have been established: for instance, any self-adjoint operator on an infinite-dimensional real Hilbert space admits an AIHS, as does any strictly singular (or compact) operator acting on a real infinite-dimensional reflexive space.
|
https://en.wikipedia.org/wiki/Invariant_subspace
|
Incomputer science, theEarley parseris analgorithmforparsingstringsthat belong to a givencontext-free language, though (depending on the variant) it may suffer problems with certain nullable grammars.[1]The algorithm, named after its inventorJay Earley, is achart parserthat usesdynamic programming; it is mainly used for parsing incomputational linguistics. It was first introduced in his dissertation[2]in 1968 (and later appeared in abbreviated, more legible form in a journal).[3]
Earley parsers are appealing because they can parse all context-free languages, unlike LR parsers and LL parsers, which are more typically used in compilers but which can only handle restricted classes of languages. The Earley parser executes in cubic time, O(n³), in the general case, where n is the length of the parsed string, in quadratic time, O(n²), for unambiguous grammars,[4] and in linear time for all deterministic context-free grammars. It performs particularly well when the rules are written left-recursively.
The following algorithm describes the Earley recogniser. The recogniser can be modified to create a parse tree as it recognises, and in that way can be turned into a parser.
In the following descriptions, α, β, and γ represent anystringofterminals/nonterminals(including theempty string), X and Y represent single nonterminals, andarepresents a terminal symbol.
Earley's algorithm is a top-downdynamic programmingalgorithm. In the following, we use Earley's dot notation: given aproductionX → αβ, the notation X → α • β represents a condition in which α has already been parsed and β is expected.
Input position 0 is the position prior to input. Input position n is the position after accepting the nth token. (Informally, input positions can be thought of as locations at token boundaries.) For every input position, the parser generates a state set. Each state is a tuple (X → α • β, i), consisting of the production currently being matched (X → α β), the current position in that production (represented by the dot •), and the position i in the input at which matching of this production began: the origin position.
(Earley's original algorithm included a look-ahead in the state; later research showed this to have little practical effect on the parsing efficiency, and it has subsequently been dropped from most implementations.)
A state is finished when its current position is the last position of the right side of the production, that is, when there is no symbol to the right of the dot • in the visual representation of the state.
The state set at input positionkis called S(k). The parser is seeded with S(0) consisting of only the top-level rule. The parser then repeatedly executes three operations:prediction,scanning, andcompletion.
Duplicate states are not added to the state set, only new ones. These three operations are repeated until no new states can be added to the set. The set is generally implemented as a queue of states to process, with the operation to be performed depending on what kind of state it is.
The algorithm accepts if (X → γ •, 0) ends up in S(n), where (X → γ) is the top-level rule and n the input length; otherwise it rejects.
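The recogniser described above can be sketched compactly in Python. The following is an illustrative implementation of the prediction/scanning/completion loop, not a reference one; the grammar encoding, the augmented start symbol GAMMA, and the toy grammar are assumptions made for this example, and the simple completion step shares the nullable-grammar caveat mentioned earlier.

```python
def earley_recognise(grammar, start, tokens):
    """Minimal Earley recogniser. `grammar` maps a nonterminal to a list of
    right-hand sides (tuples of symbols); terminals are symbols that are not
    grammar keys. A state is (head, body, dot, origin)."""
    n = len(tokens)
    chart = [set() for _ in range(n + 1)]
    chart[0].add(("GAMMA", (start,), 0, 0))       # seed with an augmented top-level rule

    for k in range(n + 1):
        worklist = list(chart[k])
        while worklist:
            head, body, dot, origin = worklist.pop()
            if dot < len(body):
                sym = body[dot]
                if sym in grammar:                        # prediction
                    for rhs in grammar[sym]:
                        new = (sym, tuple(rhs), 0, k)
                        if new not in chart[k]:
                            chart[k].add(new)
                            worklist.append(new)
                elif k < n and tokens[k] == sym:          # scanning
                    chart[k + 1].add((head, body, dot + 1, origin))
            else:                                         # completion
                for h2, b2, d2, o2 in list(chart[origin]):
                    if d2 < len(b2) and b2[d2] == head:
                        new = (h2, b2, d2 + 1, o2)
                        if new not in chart[k]:
                            chart[k].add(new)
                            worklist.append(new)
    return ("GAMMA", (start,), 1, 0) in chart[n]

# Toy grammar: S -> S "+" S | "a"
grammar = {"S": [("S", "+", "S"), ("a",)]}
print(earley_recognise(grammar, "S", ["a", "+", "a"]))  # True
print(earley_recognise(grammar, "S", ["a", "+"]))       # False
```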
Adapted from Speech and Language Processing[5]byDaniel Jurafskyand James H. Martin,
Consider the following simple grammar for arithmetic expressions:
With the input:
This is the sequence of state sets:
The state (P → S •, 0) represents a completed parse. This state also appears in S(1) and S(3), since the input prefixes ending at those positions are themselves complete sentences.
Earley's dissertation[6]briefly describes an algorithm for constructing parse trees by adding a set of pointers from each non-terminal in an Earley item back to the items that caused it to be recognized. ButTomitanoticed[7]that this does not take into account the relations between symbols, so if we consider the grammar S → SS | b and the string bbb, it only notes that each S can match one or two b's, and thus produces spurious derivations for bb and bbbb as well as the two correct derivations for bbb.
Another method[8]is to build the parse forest as you go, augmenting each Earley item with a pointer to a shared packed parse forest (SPPF) node labelled with a triple (s, i, j) where s is a symbol or an LR(0) item (production rule with dot), and i and j give the section of the input string derived by this node. A node's contents are either a pair of child pointers giving a single derivation, or a list of "packed" nodes each containing a pair of pointers and representing one derivation. SPPF nodes are unique (there is only one with a given label), but may contain more than one derivation forambiguousparses. So even if an operation does not add an Earley item (because it already exists), it may still add a derivation to the item's parse forest.
SPPF nodes are never labeled with a completed LR(0) item: instead they are labelled with the symbol that is produced so that all derivations are combined under one node regardless of which alternative production they come from.
Philippe McLean and R. Nigel Horspool, in their paper "A Faster Earley Parser", combine Earley parsing with LR parsing and achieve a speed improvement of an order of magnitude.
|
https://en.wikipedia.org/wiki/Earley_parser
|
High-availability clusters(also known asHA clusters,fail-over clusters) are groups ofcomputersthat supportserverapplicationsthat can be reliably utilized witha minimum amount of down-time. They operate by usinghigh availability softwareto harnessredundantcomputers in groups orclustersthat provide continued service when system components fail. Without clustering, if a server running a particular application crashes, the application will be unavailable until the crashed server is fixed. HA clustering remedies this situation by detecting hardware/software faults, and immediately restarting the application on another system without requiring administrative intervention, a process known asfailover. As part of this process, clustering software may configure the node before starting the application on it. For example, appropriate file systems may need to be imported and mounted, network hardware may have to be configured, and some supporting applications may need to be running as well.[1]
HA clusters are often used for criticaldatabases, file sharing on a network, business applications, and customer services such aselectronic commercewebsites.
HA cluster implementations attempt to build redundancy into a cluster to eliminate single points of failure, including multiple network connections and data storage which is redundantly connected viastorage area networks.
HA clusters usually use aheartbeatprivate network connection which is used to monitor the health and status of each node in the cluster. One subtle but serious condition all clustering software must be able to handle issplit-brain, which occurs when all of the private links go down simultaneously, but the cluster nodes are still running. If that happens, each node in the cluster may mistakenly decide that every other node has gone down and attempt to start services that other nodes are still running. Having duplicate instances of services may cause data corruption on the shared storage.
HA clusters often also usequorumwitness storage (local or cloud) to avoid this scenario. A witness device cannot be shared between two halves of a split cluster, so in the event that all cluster members cannot communicate with each other (e.g., failed heartbeat), if a member cannot access the witness, it cannot become active.
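The witness rule can be illustrated with a small sketch. The following Python fragment is purely illustrative (all function names are hypothetical placeholders, not part of any clustering product): a node that loses the heartbeat to its peer only promotes itself if it can still reach the witness, which prevents both halves of a split cluster from becoming active at once.

```python
import time

def should_become_active(peer_heartbeat_ok, witness_reachable):
    """Toy decision rule for one node in a two-node cluster: if the heartbeat
    to the peer is lost, a node may only take over services when it can still
    reach the shared witness; otherwise it stays passive to avoid split-brain."""
    if peer_heartbeat_ok:
        return False          # peer is healthy, no failover needed
    return witness_reachable  # tie-breaker: only the side holding the witness activates

def monitor(probe_peer, probe_witness, interval_s=1.0):
    # Hypothetical monitoring loop; probe_peer/probe_witness are placeholder callables.
    while True:
        if should_become_active(probe_peer(), probe_witness()):
            print("heartbeat lost and witness acquired: starting failover")
            break
        time.sleep(interval_s)
```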
Not every application can run in a high-availability cluster environment, and the necessary design decisions need to be made early in the software design phase. In order to run in a high-availability cluster environment, an application must satisfy at least the following technical requirements, the last two of which are critical to its reliable function in a cluster, and are the most difficult to satisfy fully:
The most common size for an HA cluster is a two-node cluster, since that is the minimum required to provide redundancy, but many clusters consist of many more, sometimes dozens of nodes.
The attached diagram is a good overview of a classic HA cluster, with the caveat that it does not make any mention of quorum/witness functionality (see above).
Such configurations can sometimes be categorized into one of the following models:
The termslogical hostorcluster logical hostis used to describe thenetwork addressthat is used to access services provided by the cluster. This logical host identity is not tied to a single cluster node. It is actually a network address/hostname that is linked with the service(s) provided by the cluster. If a cluster node with a running database goes down, the database will be restarted on another cluster node.
HA clusters usually use all available techniques to make the individual systems and shared infrastructure as reliable as possible. These include:
These features help minimize the chances that the clustering failover between systems will be required. In such a failover, the service provided is unavailable for at least a little while, so measures to avoid failover are preferred.
Systems that handle failures in distributed computing have different strategies to cure a failure. For instance, theApache CassandraAPIHectordefines three ways to configure a failover:
|
https://en.wikipedia.org/wiki/High-availability_cluster
|
Crowdfixingis a specific way ofcrowdsourcing, in which people gather together to fix public spaces of thelocal community. The main aim is to fight against deterioration of public places. Crowdfixing actions include (but are not limited to) cleaningflashmobs, mowing, repairing structures, and removing unsafe elements.
Placemaking, a concept that originated in the 1960s and focused on the planning, management and design of public places, was the philosophical background to the crowdfixing movement. According to placemaking, in modern times all the resources needed to create community-friendly, enjoyable public spaces and keep them in good condition are available, but decision-making processes exclude citizens' preferences.
Crowdfixing promotes the idea of public spaces as belonging to the local community, in opposition to the concept of areas merely administrated and owned by theState.
Crowdfixing also tries to create better conditions for people to interact by providing them with onlinetoolsand mechanisms that allow them to set the different stages required to fix public spaces by improving the communication processes.
|
https://en.wikipedia.org/wiki/Crowdfixing
|
Asource portis a software project based on thesource codeof agame enginethat allows the game to be played onoperating systemsorcomputing platformswith which the game was not originally compatible.
Source ports are oftencreated by fansafter the original developer hands over the maintenance support for a game by releasing itssource codeto the public (seeList of commercial video games with later released source code). In some cases, the source code used to create a source port must be obtained throughreverse engineering, in situations where the original source was never formally released by the game's developers. The term was coined after the release of the source code toDoom. Due to copyright issues concerning the sound library used by the original DOS version, id Software released only the source code to the Linux version of the game.[1][2]Since the majority of Doom players were DOS users the first step for a fan project was toportthe Linuxsourcecode to DOS.[3]A source port typically only includes the engine portion of the game and requires that the data files of the game in question already be present on users' systems.
Source ports are similar to unofficial patches in that neither changes the original gameplay; projects that do are by definition mods. However, many source ports add support for gameplay mods, usually as an option (e.g. DarkPlaces consists of a source port engine and a gameplay mod that are even distributed separately[4]). While the primary goal of any source port is compatibility with newer hardware, many projects support other enhancements. Common examples of additions include support for higher video resolutions and different aspect ratios, hardware-accelerated renderers (OpenGL and/or Direct3D), enhanced input support (including the ability to map controls onto additional input devices), 3D character models (in the case of 2.5D games), higher-resolution textures, support to replace MIDI with digital audio (MP3, Ogg Vorbis, etc.), and enhanced multiplayer support using the Internet.
Several source ports have been created for various games specifically to address online multiplayer support. Most older games were not created to take advantage of the Internet and the low latency, high bandwidth Internet connections available to computer gamers today. Furthermore, old games may use outdated network protocols to create multiplayer connections, such asIPXprotocol, instead ofInternet Protocol. Another problem was games that required a specificIP addressfor connecting with another player. This requirement made it difficult to quickly find a group of strangers to play with — the way that online games are most commonly played today. To address this shortcoming, specific source ports such asSkulltagadded "lobbies", which are basically integratedchat roomsin which players can meet and post the location of games they are hosting or may wish to join. Similar facilities may be found in newer games and online game services such as Valve'sSteam, Blizzard'sbattle.net, andGameSpy Arcade.
If the source code of a program is not available, alternative approaches to achieving portability are emulation, engine remakes, and static recompilation.
|
https://en.wikipedia.org/wiki/Source_port
|
ASPARQCodeis amatrix code(or two-dimensionalbar code)encodingstandard that is based on the physicalQR Codedefinition created by Japanese corporationDenso-Wave.
The QR Code standard as defined by Denso-Wave in ISO/IEC 18004 covers the physical encoding method of a binary data stream.[1]However, the Denso-Wave standard lacks an encoding standard for interpreting the data stream on the application layer for decoding URLs, phone numbers, and all other data types.NTT Docomohas established de facto standards for encoding some data types such as URLs, and contact information in Japan, but not all applications in other countries adhere to this convention as listed by the open-source project "zxing" for QR Code data types.[2][3]
The SPARQCode encoding standard specifies a convention for the following encoding data types.
The SPARQCode convention also recommends but does not require the inclusion of visual pictograms to denote the type of encoded data.
The use of SPARQCode is not subject to any licence fee. The term SPARQCode itself is a trademark of MSKYNET, but the company has chosen to make its use royalty-free.[4]
|
https://en.wikipedia.org/wiki/SPARQCode
|
Achatbotis asoftwareapplication or web interface that is designed to mimic humanconversationthrough text or voice interactions.[1][2][3]Modern chatbots are typicallyonlineand usegenerative artificial intelligencesystems that are capable of maintaining a conversation with a user innatural languageand simulating the way a human would behave as a conversational partner. Such chatbots often usedeep learningandnatural language processing, but simpler chatbots have existed for decades.
Thislist of chatbotsis a general overview of notable chatbot applications and web interfaces.
|
https://en.wikipedia.org/wiki/List_of_chatbots
|
Incomputability theory, aprimitive recursive functionis, roughly speaking, a function that can be computed by acomputer programwhoseloopsare all"for" loops(that is, an upper bound of the number of iterations of every loop is fixed before entering the loop). Primitive recursive functions form a strictsubsetof thosegeneral recursive functionsthat are alsototal functions.
The importance of primitive recursive functions lies in the fact that mostcomputable functionsthat are studied innumber theory(and more generally in mathematics) are primitive recursive. For example,additionanddivision, thefactorialandexponential function, and the function which returns thenth prime are all primitive recursive.[1]In fact, for showing that a computable function is primitive recursive, it suffices to show that itstime complexityis bounded above by a primitive recursive function of the input size.[2]It is hence not particularly easy to devise acomputable functionthat isnotprimitive recursive; some examples are shown in section§ Limitationsbelow.
The set of primitive recursive functions is known asPRincomputational complexity theory.
A primitive recursive function takes a fixed number of arguments, each anatural number(nonnegative integer: {0, 1, 2, ...}), and returns a natural number. If it takesnarguments it is calledn-ary.
The basic primitive recursive functions are given by these axioms: the constant functions Cₙᵏ (for each k and n, the k-ary function that always returns n), the successor function S, which returns the successor S(x) = x + 1 of its argument, and the projection functions Pᵢᵏ (for each k and 1 ≤ i ≤ k, the k-ary function that returns its ith argument).
More complex primitive recursive functions can be obtained by applying the operations given by these axioms: composition, which from a k-ary function f and k m-ary functions g₁, …, gₖ forms the m-ary function f ∘ (g₁, …, gₖ) defined by (f ∘ (g₁, …, gₖ))(x₁, …, xₘ) = f(g₁(x₁, …, xₘ), …, gₖ(x₁, …, xₘ)), and primitive recursion ρ(g, h), which from a k-ary function g and a (k+2)-ary function h forms the (k+1)-ary function f defined by f(0, x₁, …, xₖ) = g(x₁, …, xₖ) and f(S(y), x₁, …, xₖ) = h(y, f(y, x₁, …, xₖ), x₁, …, xₖ).
Interpretation:
Theprimitive recursive functionsare the basic functions and those obtained from the basic functions by applying these operations a finite number of times.
A definition of the 2-ary function Add, to compute the sum of its arguments, can be obtained using the primitive recursion operator ρ. To this end, the well-known equations 0 + y = y and S(x) + y = S(x + y)
are "rephrased in primitive recursive function terminology": In the definition ofρ(g,h){\displaystyle \rho (g,h)}, the first equation suggests to chooseg=P11{\displaystyle g=P_{1}^{1}}to obtainAdd(0,y)=g(y)=y{\displaystyle Add(0,y)=g(y)=y}; the second equation suggests to chooseh=S∘P23{\displaystyle h=S\circ P_{2}^{3}}to obtainAdd(S(x),y)=h(x,Add(x,y),y)=(S∘P23)(x,Add(x,y),y)=S(Add(x,y)){\displaystyle Add(S(x),y)=h(x,Add(x,y),y)=(S\circ P_{2}^{3})(x,Add(x,y),y)=S(Add(x,y))}. Therefore, the addition function can be defined asAdd=ρ(P11,S∘P23){\displaystyle Add=\rho (P_{1}^{1},S\circ P_{2}^{3})}. As a computation example,
Given Add, the 1-ary function Add ∘ (P₁¹, P₁¹) doubles its argument: (Add ∘ (P₁¹, P₁¹))(x) = Add(x, x) = x + x.
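The constructions above translate directly into executable form. The following Python sketch is illustrative only; the helper names S, C, P, compose and rho are chosen here, not taken from the article. It encodes the basic functions and the composition and primitive-recursion operators, then defines Add and the doubling function exactly as above, using nothing but bounded loops.

```python
# Basic functions (illustrative encoding)
S = lambda x: x + 1                       # successor

def C(k, n):                              # constant function: value k, arity n (arity not enforced here)
    return lambda *args: k

def P(i, n):                              # projection P_i^n: returns the i-th of n arguments (1-indexed)
    return lambda *args: args[i - 1]

# Operators
def compose(f, *gs):                      # composition f ∘ (g_1, ..., g_k)
    return lambda *args: f(*(g(*args) for g in gs))

def rho(g, h):                            # primitive recursion ρ(g, h): f(0, ...) = g(...), f(S(x), ...) = h(x, f(x, ...), ...)
    def f(x, *ys):
        acc = g(*ys)
        for i in range(x):                # a bounded "for" loop is all that is needed
            acc = h(i, acc, *ys)
        return acc
    return f

Add = rho(P(1, 1), compose(S, P(2, 3)))   # Add = ρ(P_1^1, S ∘ P_2^3)
Double = compose(Add, P(1, 1), P(1, 1))   # Double(x) = Add(x, x)
print(Add(3, 4), Double(5))               # 7 10
```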
In a similar way as addition, multiplication can be defined by Mul = ρ(C₀¹, Add ∘ (P₂³, P₃³)). This reproduces the well-known multiplication equations Mul(0, y) = 0 and Mul(S(x), y) = Add(Mul(x, y), y).
The predecessor function acts as the "opposite" of the successor function and is recursively defined by the rules Pred(0) = 0 and Pred(S(n)) = n. A primitive recursive definition is Pred = ρ(C₀⁰, P₁²). As a computation example, Pred(3) = ρ(C₀⁰, P₁²)(S(2)) = P₁²(2, Pred(2)) = 2.
The limited subtraction function (also called "monus", and denoted "∸") is definable from the predecessor function. It satisfies the equations x ∸ 0 = x and x ∸ S(y) = Pred(x ∸ y).
Since the recursion runs over the second argument, we begin with a primitive recursive definition of the reversed subtraction, RSub(y, x) = x ∸ y. Its recursion then runs over the first argument, so its primitive recursive definition can be obtained, similarly to addition, as RSub = ρ(P₁¹, Pred ∘ P₂³). To get rid of the reversed argument order, then define Sub = RSub ∘ (P₂², P₁²). As a computation example, Sub(5, 2) = RSub(2, 5) = Pred(Pred(RSub(0, 5))) = Pred(Pred(5)) = 3.
In some settings it is natural to consider primitive recursive functions that take as inputs tuples that mix numbers with truth values (that is, t for true and f for false),[citation needed] or that produce truth values as outputs.[4] This can be accomplished by identifying the truth values with numbers in any fixed manner. For example, it is common to identify the truth value t with the number 1 and the truth value f with the number 0. Once this identification has been made, the characteristic function of a set A, which always returns 1 or 0, can be viewed as a predicate that tells whether a number is in the set A. Such an identification of predicates with numeric functions will be assumed for the remainder of this article.
As an example of a primitive recursive predicate, the 1-ary function IsZero shall be defined such that IsZero(x) = 1 if x = 0, and IsZero(x) = 0 otherwise. This can be achieved by defining IsZero = ρ(C₁⁰, C₀²). Then, IsZero(0) = ρ(C₁⁰, C₀²)(0) = C₁⁰(0) = 1 and, e.g., IsZero(8) = ρ(C₁⁰, C₀²)(S(7)) = C₀²(7, IsZero(7)) = 0.
Using the property x ≤ y ⟺ x ∸ y = 0, the 2-ary function Leq can be defined by Leq = IsZero ∘ Sub. Then Leq(x, y) = 1 if x ≤ y, and Leq(x, y) = 0 otherwise. As a computation example, Leq(3, 5) = IsZero(Sub(3, 5)) = IsZero(0) = 1.
Once a definition of Leq is obtained, the converse predicate can be defined as Geq = Leq ∘ (P₂², P₁²). Then, Geq(x, y) = Leq(y, x) is true (more precisely: has value 1) if, and only if, x ≥ y.
The 3-ary if-then-else operator known from programming languages can be defined by If = ρ(P₂², P₃⁴). Then, for arbitrary x, If(S(x), y, z) = P₃⁴(x, If(x, y, z), y, z) = y and If(0, y, z) = P₂²(y, z) = z. That is, If(x, y, z) returns the then-part, y, if the if-part, x, is true, and the else-part, z, otherwise.
Based on the If function, it is easy to define logical junctors. For example, defining And = If ∘ (P₁², P₂², C₀²), one obtains And(x, y) = If(x, y, 0); that is, And(x, y) is true if, and only if, both x and y are true (the logical conjunction of x and y).
Similarly, Or = If ∘ (P₁², C₁², P₂²) and Not = If ∘ (P₁¹, C₀¹, C₁¹) lead to appropriate definitions of disjunction and negation: Or(x, y) = If(x, 1, y) and Not(x) = If(x, 0, 1).
Using the above functions Leq, Geq and And, the definition Eq = And ∘ (Leq, Geq) implements the equality predicate. In fact, Eq(x, y) = And(Leq(x, y), Geq(x, y)) is true if, and only if, x equals y.
Similarly, the definition Lt = Not ∘ Geq implements the predicate "less-than", and Gt = Not ∘ Leq implements "greater-than".
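Continuing the illustrative sketch given earlier (it assumes the helpers S, C, P, compose and rho defined there), the predicates of this section can be written out and tested directly:

```python
# Continuing the sketch above: predecessor, truncated subtraction, and predicates.
Pred = rho(C(0, 0), P(1, 2))                     # Pred(0) = 0, Pred(S(n)) = n
RSub = rho(P(1, 1), compose(Pred, P(2, 3)))      # RSub(y, x) = x ∸ y
Sub = compose(RSub, P(2, 2), P(1, 2))            # Sub(x, y) = x ∸ y

IsZero = rho(C(1, 0), C(0, 2))                   # 1 if the argument is 0, else 0
Leq = compose(IsZero, Sub)                       # Leq(x, y) = 1 iff x <= y
Geq = compose(Leq, P(2, 2), P(1, 2))             # Geq(x, y) = Leq(y, x)

If = rho(P(2, 2), P(3, 4))                       # If(x, y, z): z if x = 0, else y
And = compose(If, P(1, 2), P(2, 2), C(0, 2))     # And(x, y) = If(x, y, 0)
Not = compose(If, P(1, 1), C(0, 1), C(1, 1))     # Not(x) = If(x, 0, 1)
Eq = compose(And, Leq, Geq)                      # Eq(x, y) = And(Leq(x, y), Geq(x, y))

print(Sub(7, 3), Leq(2, 5), Eq(4, 4), Eq(4, 5))  # 4 1 1 0
```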
Exponentiation and primality testing are primitive recursive. Given primitive recursive functions e, f, g, and h, a function that returns the value of g when e ≤ f and the value of h otherwise is primitive recursive.
By usingGödel numberings, the primitive recursive functions can be extended to operate on other objects such as integers andrational numbers. If integers are encoded by Gödel numbers in a standard way, the arithmetic operations including addition, subtraction, and multiplication are all primitive recursive. Similarly, if the rationals are represented by Gödel numbers then thefieldoperations are all primitive recursive.
The following examples and definitions are from Kleene (1952, pp. 222–231). Many appear with proofs. Most also appear with similar names, either as proofs or as examples, in Boolos, Burgess & Jeffrey (2002, pp. 63–70), who add the logarithm lo(x, y) or lg(x, y), depending on the exact derivation.
In the following the mark " ' ", e.g. a', is the primitive mark meaning "the successor of", usually thought of as " +1", e.g. a +1 =defa'. The functions 16–20 and #G are of particular interest with respect to converting primitive recursive predicates to, and extracting them from, their "arithmetical" form expressed asGödel numbers.
The broader class ofpartial recursive functionsis defined by introducing anunbounded search operator. The use of this operator may result in apartial function, that is, a relation withat mostone value for each argument, but does not necessarily haveanyvalue for any argument (seedomain). An equivalent definition states that a partial recursive function is one that can be computed by aTuring machine. A total recursive function is a partial recursive function that is defined for every input.
Every primitive recursive function is total recursive, but not all total recursive functions are primitive recursive. TheAckermann functionA(m,n) is a well-known example of a total recursive function (in fact, provable total), that is not primitive recursive. There is a characterization of the primitive recursive functions as a subset of the total recursive functions using the Ackermann function. This characterization states that a function is primitive recursiveif and only ifthere is a natural numbermsuch that the function can be computed by a Turingmachine that always haltswithin A(m,n) or fewer steps, wherenis the sum of the arguments of the primitive recursive function.[5]
An important property of the primitive recursive functions is that they are arecursively enumerablesubset of the set of alltotal recursive functions(which is not itself recursively enumerable). This means that there is a single computable functionf(m,n) that enumerates the primitive recursive functions, namely:
f can be explicitly constructed by iteratively repeating all possible ways of creating primitive recursive functions. Thus, it is provably total. One can use a diagonalization argument to show that f is not primitive recursive itself: had it been such, so would be h(n) = f(n,n) + 1. But if this equals some primitive recursive function, there is an m such that h(n) = f(m,n) for all n, and then h(m) = f(m,m), leading to a contradiction.
However, the set of primitive recursive functions is not thelargestrecursively enumerable subset of the set of all total recursive functions. For example, the set of provably total functions (in Peano arithmetic) is also recursively enumerable, as one can enumerate all the proofs of the theory. While all primitive recursive functions are provably total, the converse is not true.
Primitive recursive functions tend to correspond very closely with our intuition of what a computable function must be. Certainly the initial functions are intuitively computable (in their very simplicity), and the two operations by which one can create new primitive recursive functions are also very straightforward. However, the set of primitive recursive functions does not include every possible total computable function—this can be seen with a variant ofCantor's diagonal argument. This argument provides a total computable function that is not primitive recursive. A sketch of the proof is as follows:
Now define the "evaluator function"ev{\displaystyle ev}with two arguments, byev(i,j)=fi(j){\displaystyle ev(i,j)=f_{i}(j)}. Clearlyev{\displaystyle ev}is total and computable, since one can effectively determine the definition offi{\displaystyle f_{i}}, and being a primitive recursive functionfi{\displaystyle f_{i}}is itself total and computable, sofi(j){\displaystyle f_{i}(j)}is always defined and effectively computable. However a diagonal argument will show that the functionev{\displaystyle ev}of two arguments is not primitive recursive.
This argument can be applied to any class of computable (total) functions that can be enumerated in this way, as explained in the articleMachine that always halts. Note however that thepartialcomputable functions (those that need not be defined for all arguments) can be explicitly enumerated, for instance by enumerating Turing machine encodings.
Other examples of total recursive but not primitive recursive functions are known:
Instead of the constant functions Cₙᵏ, alternative definitions use just one 0-ary zero function C₀⁰ as a primitive function that always returns zero, and build the constant functions from the zero function, the successor function and the composition operator.
Robinson[6]considered various restrictions of the recursion rule. One is the so-callediteration rulewhere the functionhdoes not have access to the parametersxi(in this case, we may assume without loss of generality that the functiongis just the identity, as the general case can be obtained by substitution):
He proved that the class of all primitive recursive functions can still be obtained in this way.
Another restriction considered by Robinson[6]ispure recursion, wherehdoes not have access to the induction variabley:
Gladstone[7]proved that this rule is enough to generate all primitive recursive functions. Gladstone[8]improved this so that even the combination of these two restrictions, i.e., thepure iterationrule below, is enough:
Further improvements are possible: Severin[9] proved that even the pure iteration rule without parameters, namely
suffices to generate allunaryprimitive recursive functions if we extend the set of initial functions with truncated subtractionx ∸ y. We getallprimitive recursive functions if we additionally include + as an initial function.
Some additional forms of recursion also define functions that are in fact primitive recursive. Definitions in these forms may be easier to find or more natural for reading or writing. Course-of-values recursion defines primitive recursive functions. Some forms of mutual recursion also define primitive recursive functions.
The functions that can be programmed in theLOOP programming languageare exactly the primitive recursive functions. This gives a different characterization of the power of these functions. The main limitation of the LOOP language, compared to aTuring-complete language, is that in the LOOP language the number of times that each loop will run is specified before the loop begins to run.
An example of a primitive recursive programming language is one that contains basic arithmetic operators (e.g. + and −, or ADD and SUBTRACT), conditionals and comparison (IF-THEN, EQUALS, LESS-THAN), and bounded loops, such as the basicfor loop, where there is a known or calculable upper bound to all loops (FOR i FROM 1 TO n, with neither i nor n modifiable by the loop body). No control structures of greater generality, such aswhile loopsor IF-THEN plusGOTO, are admitted in a primitive recursive language.
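As a rough illustration of this restriction, the following sketch is written in Python rather than in an actual LOOP-style language, but it uses only bounded for loops whose iteration counts are fixed before each loop starts, mirroring the primitive recursive style; any program written this way always halts.

```python
def add(x, y):
    result = x
    for _ in range(y):            # the bound y is fixed before the loop begins
        result += 1
    return result

def mul(x, y):
    result = 0
    for _ in range(y):
        result = add(result, x)
    return result

def power(x, y):
    result = 1
    for _ in range(y):
        result = mul(result, x)
    return result

print(add(2, 3), mul(4, 5), power(2, 10))   # 5 20 1024
```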
TheLOOP language, introduced in a 1967 paper byAlbert R. MeyerandDennis M. Ritchie,[10]is such a language. Its computing power coincides with the primitive recursive functions. A variant of the LOOP language isDouglas Hofstadter'sBlooPinGödel, Escher, Bach. Adding unbounded loops (WHILE, GOTO) makes the languagegeneral recursiveandTuring-complete, as are all real-world computer programming languages.
The definition of primitive recursive functions implies that their computation halts on every input (after a finite number of steps). On the other hand, thehalting problemisundecidablefor general recursive functions.
The primitive recursive functions are closely related to mathematicalfinitism, and are used in several contexts in mathematical logic where a particularly constructive system is desired.Primitive recursive arithmetic(PRA), a formal axiom system for the natural numbers and the primitive recursive functions on them, is often used for this purpose.
PRA is much weaker thanPeano arithmetic, which is not a finitistic system. Nevertheless, many results innumber theoryand inproof theorycan be proved in PRA. For example,Gödel's incompleteness theoremcan be formalized into PRA, giving the following theorem:
Similarly, many of the syntactic results in proof theory can be proved in PRA, which implies that there are primitive recursive functions that carry out the corresponding syntactic transformations of proofs.
In proof theory andset theory, there is an interest in finitisticconsistency proofs, that is, consistency proofs that themselves are finitistically acceptable. Such a proof establishes that the consistency of a theoryTimplies the consistency of a theorySby producing a primitive recursive function that can transform any proof of an inconsistency fromSinto a proof of an inconsistency fromT. One sufficient condition for a consistency proof to be finitistic is the ability to formalize it in PRA. For example, many consistency results in set theory that are obtained byforcingcan be recast as syntactic proofs that can be formalized in PRA.
Recursive definitionshad been used more or less formally in mathematics before, but the construction of primitive recursion is traced back toRichard Dedekind's theorem 126 of hisWas sind und was sollen die Zahlen?(1888). This work was the first to give a proof that a certain recursive construction defines a unique function.[11][12][13]
Primitive recursive arithmeticwas first proposed byThoralf Skolem[14]in 1923.
The current terminology was coined byRózsa Péter(1934) afterAckermannhad proved in 1928 that the function which today is named after him was not primitive recursive, an event which prompted the need to rename what until then were simply called recursive functions.[12][13]
|
https://en.wikipedia.org/wiki/Primitive_recursive_function
|
The following is a list ofweb serviceprotocols.
|
https://en.wikipedia.org/wiki/List_of_web_service_protocols
|
Indecision theory, theodds algorithm(orBruss algorithm) is a mathematical method for computing optimal strategies for a class of problems that belong to the domain ofoptimal stoppingproblems. Their solution follows from theodds strategy, and the importance of the odds strategy lies in its optimality, as explained below.
The odds algorithm applies to a class of problems calledlast-successproblems. Formally, the objective in these problems is to maximize the probability of identifying in a sequence of sequentially observed independent events the last event satisfying a specific criterion (a "specific event"). This identification must be done at the time of observation. No revisiting of preceding observations is permitted. Usually, a specific event is defined by the decision maker as an event that is of true interest in the view of "stopping" to take a well-defined action. Such problems are encountered in several situations.
Two different situations exemplify the interest in maximizing the probability to stop on a last specific event.
Consider a sequence of n independent events. Associate with this sequence another sequence of independent events I₁, I₂, …, Iₙ with values 1 or 0. Here Iₖ = 1, called a success, stands for the event that the kth observation is interesting (as defined by the decision maker), and Iₖ = 0 stands for non-interesting.
These random variables I₁, I₂, …, Iₙ are observed sequentially, and the goal is to correctly select the last success when it is observed.
Let pₖ = P(Iₖ = 1) be the probability that the kth event is interesting. Further, let qₖ = 1 − pₖ and rₖ = pₖ/qₖ. Note that rₖ represents the odds of the kth event turning out to be interesting, which explains the name of the odds algorithm.
The odds algorithm sums up the odds in reverse order, rₙ + rₙ₋₁ + rₙ₋₂ + ⋯, until this sum reaches or exceeds the value 1 for the first time. If this happens at index s, it saves s and the corresponding sum Rₛ = rₙ + rₙ₋₁ + ⋯ + rₛ. If the sum of the odds does not reach 1, it sets s = 1. At the same time it computes the product Qₛ = qₛ qₛ₊₁ ⋯ qₙ. The output is the stopping threshold s together with the win probability w = Qₛ Rₛ.
The odds strategy is the rule to observe the events one after the other and to stop on the first interesting event from index s onwards (if any), where s is the stopping threshold computed above.
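A small Python sketch of the procedure just described (illustrative only; the function name and the handling of the degenerate case pₖ = 1 are choices made here): it sums the odds backwards until they reach 1, then returns the stopping threshold s and the win probability Qₛ Rₛ.

```python
def odds_algorithm(p):
    """Odds strategy for success probabilities p[0..n-1] (p_1..p_n in the text).
    Returns the stopping threshold s (1-indexed) and the win probability."""
    n = len(p)
    q = [1.0 - pk for pk in p]
    r = [pk / qk if qk > 0 else float("inf") for pk, qk in zip(p, q)]  # odds r_k = p_k / q_k
    R, s = 0.0, 1
    for k in range(n - 1, -1, -1):         # sum the odds in reverse order
        R += r[k]
        if R >= 1.0:
            s = k + 1                      # first index (counted from the back) where the sum reaches 1
            break
    Q = 1.0
    for k in range(s - 1, n):              # Q_s = q_s * q_{s+1} * ... * q_n
        Q *= q[k]
    return s, Q * sum(r[s - 1:])           # win probability Q_s * R_s

# Classical secretary problem with n = 5 candidates: p_k = 1/k.
s, w = odds_algorithm([1.0 / k for k in range(1, 6)])
print(s, round(w, 4))    # 3 0.4333: skip the first two candidates, then stop on the next relatively best one
```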
The importance of the odds strategy, and hence of the odds algorithm, lies in the following odds theorem.
The odds theorem states that the odds strategy is optimal, that is, it maximises the probability of stopping on the last success; that its win probability equals w = Qₛ Rₛ; and that, if Rₛ ≥ 1, this win probability always satisfies w ≥ 1/e ≈ 0.368.
The odds algorithm computes the optimal strategy and the optimal win probability at the same time. Also, the number of operations of the odds algorithm is (sub)linear in n. Hence no quicker algorithm can possibly exist for all sequences, so that the odds algorithm is, at the same time, optimal as an algorithm.
Bruss 2000devised the odds algorithm, and coined its name. It is also known as Bruss algorithm (strategy). Free implementations can be found on the web.
Applications reach from medical questions inclinical trialsover sales problems,secretary problems,portfolioselection, (one way) search strategies, trajectory problems and theparking problemto problems in online maintenance and others.
There exists, in the same spirit, an odds theorem for continuous-time arrival processes with independent increments such as the Poisson process (Bruss 2000). In some cases, the odds are not necessarily known in advance (as in Example 2 above), so that the application of the odds algorithm is not directly possible. In this case each step can use sequential estimates of the odds. This is meaningful if the number of unknown parameters is not large compared with the number n of observations. The question of optimality is then more complicated, however, and requires additional studies. Generalizations of the odds algorithm allow for different rewards for failing to stop and for wrong stops, as well as for replacing independence assumptions by weaker ones (Ferguson 2008).
Bruss & Paindaveine 2000discussed a problem of selecting the lastk{\displaystyle k}successes.
Tamaki 2010 proved a multiplicative odds theorem which deals with a problem of stopping at any of the last ℓ successes.
A tight lower bound of win probability is obtained byMatsui & Ano 2014.
Matsui & Ano 2017discussed a problem of selectingk{\displaystyle k}out of the lastℓ{\displaystyle \ell }successes and obtained a tight lower bound of win probability. Whenℓ=k=1,{\displaystyle \ell =k=1,}the problem is equivalent to Bruss' odds problem. Ifℓ=k≥1,{\displaystyle \ell =k\geq 1,}the problem is equivalent to that inBruss & Paindaveine 2000. A problem discussed byTamaki 2010is obtained by settingℓ≥k=1.{\displaystyle \ell \geq k=1.}
A player is allowed r choices, and he wins if any choice is the last success.
For the classical secretary problem, Gilbert & Mosteller 1966 discussed the cases r = 2, 3, 4.
The odds problem with r = 2, 3 is discussed by Ano, Kakinuma & Miyoshi 2010.
For further cases of the odds problem, see Matsui & Ano 2016.
An optimal strategy for this problem belongs to the class of strategies defined by a set of threshold numbers (a₁, a₂, ..., aᵣ), where a₁ > a₂ > ⋯ > aᵣ.
Specifically, imagine that you have r letters of acceptance labelled from 1 to r. You would have r application officers, each holding one letter. You keep interviewing the candidates and rank them on a chart that every application officer can see. Now officer i would send their letter of acceptance to the first candidate that is better than all candidates 1 to aᵢ. (Unsent letters of acceptance are by default given to the last applicants, the same as in the standard secretary problem.)
When r = 2, Ano, Kakinuma & Miyoshi 2010 showed that the tight lower bound of the win probability is equal to e^(−1) + e^(−3/2). For a general positive integer r, Matsui & Ano 2016 proved that the tight lower bound of the win probability is the win probability of the secretary problem variant where one must pick the top-k candidates using just k attempts.
When r = 3, 4, 5, the tight lower bounds of the win probabilities are equal to e^(−1) + e^(−3/2) + e^(−47/24), e^(−1) + e^(−3/2) + e^(−47/24) + e^(−2761/1152), and e^(−1) + e^(−3/2) + e^(−47/24) + e^(−2761/1152) + e^(−4162637/1474560), respectively.
For further numerical cases for r = 6, ..., 10, and an algorithm for general cases, see Matsui & Ano 2016.
|
https://en.wikipedia.org/wiki/Odds_algorithm
|
ARMInstruction Set Simulator, also known asARMulator, is one of the software development tools provided by the development systems business unit ofARM Limitedto all users of ARM-based chips. It owes its heritage to the early development of the instruction set bySophie Wilson. Part of this heritage is still visible in the provision of aTube BBC Micromodel in ARMulator.
ARMulator is written in C and provides more than just an instruction set simulator: it provides a virtual platform for system emulation. It comes ready to emulate an ARM processor and certain ARM coprocessors. If the processor is part of an embedded system, then licensees may extend ARMulator to add their own implementations of the additional hardware to the ARMulator model. ARMulator provides a number of services to help with time-based behaviour and event scheduling, and ships with examples of memory-mapped and co-processor expansions. This way, licensees can use ARMulator to emulate their entire embedded system. A key limitation of ARMulator is that it can only simulate a single ARM CPU at one time, although almost all ARM cores up to ARM11 are available.
Performance of ARMulator is good for the technology employed: it takes about 1,000 host (PC) instructions per emulated ARM instruction, which meant that emulated speeds of around 1 MHz were normal for PCs of the mid to late 1990s. Accuracy is good too, although it is classed as cycle-count accurate rather than cycle accurate, because the ARM pipeline is not fully modelled (although register interlocks are). Resolution is to an instruction; as a consequence, when single-stepping, the register interlocks are ignored and different cycle counts are returned than if the program had simply run. This was unavoidable.
Testing ARMulator was always a time-consuming challenge, with the full ARM architecture validation suites being employed. At over one million lines of C code, it was a fairly hefty product.
ARMulator allows runtime debugging using either armsd (ARM Symbolic Debugger), or either of the graphical debuggers that were shipped in SDT and the later ADS products. ARMulator suffered from being an invisible tool with a text file configuration (armul.conf) that many found complex to configure.
ARMulator II formed the basis for the high accuracy, cycle callable co-verification models of ARM processors, these CoVs models (seeCycle Accurate Simulator) were the basis of many CoVerification systems for ARM processors.
ARMulator was available on a very broad range of platforms through its life, includingMac,RISC OSplatforms,DEC Alpha,HP-UX,Solaris,SunOS,Windows,Linux. In the mid-1990s there was reluctance to support Windows platforms; pre-Windows 95 it was a relatively challenging platform. Through the late 1990s and early 2000s support was removed for all but Solaris, Windows and Linux - although undoubtedly the code base remains littered with pragmas such as #ifdef RISCOS.
ARMulator II shipped in early ARM toolkits as well as the later SDT 2.5, SDT 2.5.1, ADS 1.0, ADS 1.1, ADS 1.2, RVCT 1.0 and also separately as RVISS.
Special models were produced during the development of CPUs, notably theARM9E, ARM10 andARM11, these models helped with architectural decisions such as Thumb-2 and TrustZone.
ARMulator has been gradually phased out and has been replaced byJust-in-time compilation-based high performance CPU and system models (See FastSim link below).
ARMulator I was made open source and is the basis for the GNU version of ARMulator. Key differences are in the memory interface and services; the instruction decode is also done differently. The GNU ARMulator is available as part of the GDB debugger in the ARM GNU Tools.
Mentor Graphics' Seamless is the market-leading co-verification system of this kind, supporting many ARM cores as well as many other CPUs.
Key contributors to ARMulator II were Mike Williams, Louise Jameson, Charles Lavender, Donald Sinclair, Chris Lamb and Rebecca Bryan (who worked on ARMulator as both an engineer and later as product manager). Significant input was also made by Allan Skillman, who was working on ARM CoVerification models at the time.
A key contributor to ARMulator I wasDave Jaggar.
|
https://en.wikipedia.org/wiki/ARMulator
|
Reed's lawis the assertion ofDavid P. Reedthat theutilityof largenetworks, particularlysocial networks, canscale exponentiallywith the size of the network.[1]
The reason for this is that the number of possible sub-groups of network participants is 2^N − N − 1, where N is the number of participants. This grows much more rapidly than either the number of participants, N, or the number of possible pairwise connections, N(N − 1)/2 (which follows Metcalfe's law),
so that even if the utility of groups available to be joined is very small on a per-group basis, eventually thenetwork effectof potential group membership can dominate the overall economics of the system.
Given a set A of N people, it has 2^N possible subsets. This is not difficult to see, since we can form each possible subset by simply choosing for each element of A one of two possibilities: whether to include that element, or not.
However, this includes the (one) empty set and the N singletons, which are not proper subgroups. So 2^N − N − 1 subsets remain, which still grows exponentially, like 2^N.
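The growth rates being compared can be made concrete with a few lines of Python (illustrative only; the function names are chosen here):

```python
def reed(n):        # non-trivial subgroups: 2^N - N - 1
    return 2**n - n - 1

def metcalfe(n):    # possible pairwise connections: N(N - 1)/2
    return n * (n - 1) // 2

for n in (5, 10, 20, 30):
    print(n, metcalfe(n), reed(n))
# 2^N - N - 1 quickly dwarfs both N and the quadratic pair count as N grows.
```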
From David P. Reed's "The Law of the Pack" (Harvard Business Review, February 2001, pp. 23–24):
Reed's Law is often mentioned when explaining the competitive dynamics of internet platforms. Since the law states that a network becomes more valuable when people can easily form subgroups to collaborate, and that this value increases exponentially with the number of connections, a business platform that reaches a sufficient number of members can generate network effects that dominate the overall economics of the system.[2]
Other analysts of network value functions, includingAndrew Odlyzko, have argued that both Reed's Law and Metcalfe's Law[3]overstate network value because they fail to account for the restrictive impact of human cognitive limits on network formation. According to this argument, the research aroundDunbar's numberimplies a limit on the number of inbound and outbound connections a human in a group-forming network can manage, so that the actual maximum-value structure is much sparser than the set-of-subsets measured by Reed's law or the complete graph measured by Metcalfe's law.
|
https://en.wikipedia.org/wiki/Reed%27s_law
|
ThePact of Forgetting(Spanish:Pacto del Olvido) is the political decision by both leftist and rightist parties of Spain to avoid confronting directly the legacy ofFrancoismafter the death ofFrancisco Francoin 1975.[1]The Pact of Forgetting was an attempt to move on from theCivil Warand subsequent repression and to concentrate on the future of Spain.[2]In making a smooth transition from autocracy and totalitarianism to democracy, the Pact ensured that there were no prosecutions for persons responsible for human rights violations or similar crimes committed during the Francoist period.
On the other hand, Francoist public memorials, such as the mausoleum of theValley of the Fallen, fell into disuse for official occasions.[3]Also, the celebration of "Day of Victory" during the Franco era was changed to "Armed Forces Day" so respect was paid to bothNationalistandRepublicanparties of the Civil War.
The pact underpinned thetransition to democracyof the 1970s[4]and ensured that difficult questions about the recent past were suppressed for fear of endangering 'national reconciliation' and the restoration of liberal-democratic freedoms. Responsibility for the Spanish Civil War, and for the repression that followed, was not to be placed upon any particular social or political group. "In practice, this presupposed suppressing painful memories derived from the post civil war division of the population into 'victors' and 'vanquished'".[5]While many historians accept that the pact served a purpose at the time of transition,[6]there is more controversy as to whether it should still be adhered to. Paul Preston takes the view that Franco had time to impose his own version of history, which still prevents contemporary Spain from "looking upon its recent violent past in an open and honest way".[7]In 2006, two-thirds of Spaniards favored a "fresh investigation into the war".[8]
"It is estimated that 400,000 people spent time in prisons, camps, or forced labor battalions".[9]Some historians believe that the repression committed by the Francoist State was most severe and prevalent in the immediate years after theSpanish Civil Warand through the 1940s. During this time of the repression, there was an escalation of torture, illegal detention, and execution. This style of repression remained frequent until the end of theSpanish State. Especially during 1936–1939, Nationalist Forces seized control of cities and towns in the Franco-led military coup and would hunt down any protesters or those who were labeled as a threat to the government and believed to sympathize with the Republican cause.[10]"Waves of these individuals were condemned on mere hearsay without trial, loaded onto trucks, taken to deserted areas outside city boundaries, summarily shot, and buried in mass, shallow graves that began dotting the Spanish countryside in the wake of the advancing Nationalist."[11]
Advances in DNA technology gave scope for the identification of the remains of Republicans executed by Franco supporters.
The year 2000 saw the foundation of theAssociation for the Recovery of Historical Memorywhich grew out of the quest by a sociologist,Emilio Silva-Barrera, to locate and identify the remains of his grandfather, who was shot by Franco's forces in 1936.
Such projects have been the subject of political debate in Spain, and are referenced for example in the 2021 filmParallel Mothers.
There have been other notable references to the Civil War in the arts since the year 2000 (for example,Javier Cercas' 2001 novelSoldiers of Salamis). However, the subject of the Civil War had not been "off limits" in the arts in previous decades; for example, Francoist repression is referenced in the 1973 filmSpirit of the Beehive,[citation needed]and arguably[by whom?]the pact is mainly a political construct.
The clearest and most explicit expression of the Pact is theSpanish 1977 Amnesty Law.[12]
The Pact was challenged by the socialist government elected in 2004, which under prime ministerJose Luis Rodriguez Zapateropassed theHistorical Memory Law. Among other measures, the Historical Memory Law rejected the legitimacy of laws passed and trials conducted by the Francoist regime. The Law repealed some Francoist laws and ordered the removal of remainingsymbols of Francoismfrom public buildings.[8]
The Historical Memory Law has been criticised by some on the left (for not going far enough) and also by some on the right (for example, as a form of "vengeance").[13]After thePartido Populartook power in 2011 it did not repeal the Historical Memory Law, but it closed the government office dedicated to the exhumation of victims of Francoist repression.[14]UnderMariano Rajoy, the government was not willing to spend public money on exhumations in Spain,[15]although the Partido Popular supported the repatriation of the remains of Spanish soldiers who fought in theBlue Divisionfor Hitler.
In 2010 there was a judicial controversy pertaining to the 1977 Spanish Amnesty Law. Spanish judgeBaltasar Garzónchallenged the Pact of Forgetting by saying that those who committedcrimes against humanityduring theSpanish Stateare not subject to the amnesty law or statutes of limitation. Relatives of those who were executed or went missing during the Franco regime demanded justice for their loved ones. Some of those who were targeted and buried in mass graves during the Franco regime were teachers, farmers, shop owners, women who did not marry in church and those on the losing side of war.[16]However, the Spanish Supreme Court challenged the investigations by Garzón. They investigated the judge for alleged abuse of power, knowingly violating the amnesty law, following a complaint from Miguel Bernard, the secretary general of a far-right group in Spain called "Manos Limpias". Bernard had criticized Garzón by saying:[17]
[Garzón] cannot prosecute Francoism. It's already history, and only historians can judge that period. He uses justice for his own ego. He thought that, by prosecuting Francoism, he could become the head of the International Criminal Court and even win the Nobel Peace Prize.
Although Garzón was eventually cleared of abuse of power in this instance, the Spanish judiciary upheld the Amnesty Law, discontinuing his investigations into Francoist crimes.[7]
In 2022 theDemocratic Memory Lawenacted by the government ofPedro Sánchezfurther dealt with the legacy of Francoism and included measures such as to make the government responsible for exhuming and identifying the bodies of those killed by the fascist regime and buried in unmarked graves, to create an official register of victims and to remove a number of remaining Francoist symbols from the country.
TheUnited Nationshas repeatedly urged Spain to repeal the amnesty law, for example in 2012,[18]and most recently in 2013.[19]This is on the basis that under international law amnesties do not apply to crimes against humanity.
According to theInternational Covenant on Civil and Political Rights, Article 7, "no one shall be subjected to torture or to cruel,inhuman or degrading treatmentor punishment".[20]Furthermore, Judge Garzón had drawn attention to Article 15, which does not admit political exceptions to punishing individuals for criminal acts.
It has also been argued that crimes during the Franco era, or at least those of the Civil War period, were not yet illegal. This is because international law regarding crimes of humanity developed in the aftermath of the Second World War and for crimes prior to that period the principle ofnullum crimen sine lege, or "no crime without a law", could be said to apply.[20]
In 2013, an Argentinian judge was investigating Franco-era crimes under the international legal principle ofuniversal justice.[19][21]
In Poland, which underwent a later democratic transition, the Spanish agreement not to prosecute politically motivated wrongdoing juridically and not to use the past in daily politics was seen as the example to follow.[22] In the 1990s the progressive media hailed the Spanish model, which reportedly refrained from revanchism and from the vicious circle of "settling accounts".[23] The issue was closely related to the debate on "decommunization" in general and on "lustration" in particular; the latter was about measures aimed at individuals involved in the pre-1989 regime. Liberal and left-wing media firmly opposed any such plan, and they referred to the Spanish pattern as the civilized way of moving from one political system to another.[24] In a debate about the transition from communism, held by the two opinion leaders Vaclav Havel and Adam Michnik, the Spanish model was highly recommended.[25] Later, the policies of prime minister Zapatero were viewed as dangerous "playing with fire",[26] and pundits ridiculed him as one who was "rattling with skeletons pulled from cupboards" and "winning the civil war lost years ago"; they compared him to Jarosław Kaczyński[27] and to leaders of allegedly sectarian, fanatically anti-communist, nationalistic, Catholic groupings.[28] However, during the 2010s the left-wing media gradually abandoned their early criticism of prime minister Zapatero;[29] they were instead agonizing about Rajoy and his strategy of parking "historical memory" politics in obscurity.[30] With the threat of "lustration" now gone, progressive authors have effectively made a U-turn; currently they are rather skeptical about the alleged "pact of forgetting"[31] and advocate the further legislative steps advanced by the Sánchez government on the path towards "democratic memory".[32] The Polish right, which in the 1990s was rather muted about the solution adopted in Spain, has since remained consistently highly critical of the "historical memory" politics of both PSOE and PP governments.[33]
|
https://en.wikipedia.org/wiki/Pact_of_forgetting
|
In common usage, randomness is the apparent or actual lack of definite pattern or predictability in information.[1][2] A random sequence of events, symbols or steps often has no order and does not follow an intelligible pattern or combination. Individual random events are, by definition, unpredictable, but if there is a known probability distribution, the frequency of different outcomes over repeated events (or "trials") is predictable.[note 1] For example, when throwing two dice, the outcome of any particular roll is unpredictable, but a sum of 7 will tend to occur twice as often as 4. In this view, randomness is not haphazardness; it is a measure of uncertainty of an outcome. Randomness applies to concepts of chance, probability, and information entropy.
The fields of mathematics, probability, and statistics use formal definitions of randomness, typically assuming that there is some 'objective' probability distribution. In statistics, a random variable is an assignment of a numerical value to each possible outcome of an event space. This association facilitates the identification and the calculation of probabilities of the events. Random variables can appear in random sequences. A random process is a sequence of random variables whose outcomes do not follow a deterministic pattern, but follow an evolution described by probability distributions. These and other constructs are extremely useful in probability theory and the various applications of randomness.
Randomness is most often used in statistics to signify well-defined statistical properties. Monte Carlo methods, which rely on random input (such as from random number generators or pseudorandom number generators), are important techniques in science, particularly in the field of computational science.[3] By analogy, quasi-Monte Carlo methods use quasi-random number generators.
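To make the role of random input concrete, here is a minimal sketch (not drawn from any cited source; the sample size and seed are arbitrary choices) of a Monte Carlo method: estimating π by sampling uniformly random points in the unit square.

```python
import random

def estimate_pi(n_samples: int = 1_000_000, seed: int = 0) -> float:
    """Estimate pi from the fraction of uniformly random points in the unit
    square that fall inside the quarter circle of radius 1."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(n_samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / n_samples

print(estimate_pi())  # typically prints a value close to 3.1416
```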
Random selection, when narrowly associated with a simple random sample, is a method of selecting items (often called units) from a population where the probability of choosing a specific item is the proportion of those items in the population. For example, with a bowl containing just 10 red marbles and 90 blue marbles, a random selection mechanism would choose a red marble with probability 1/10. A random selection mechanism that selected 10 marbles from this bowl would not necessarily result in 1 red and 9 blue. In situations where a population consists of items that are distinguishable, a random selection mechanism requires equal probabilities for any item to be chosen. That is, if the selection process is such that each member of a population, say research subjects, has the same probability of being chosen, then we can say the selection process is random.[2]
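The marble example can be checked directly by simulation; the sketch below (an illustration, not part of the source) draws 10 marbles without replacement many times and tabulates how many red marbles each draw contains.

```python
import random
from collections import Counter

rng = random.Random(42)
bowl = ["red"] * 10 + ["blue"] * 90

# Draw 10 marbles without replacement, repeated 10,000 times,
# and count how many red marbles appear in each draw.
red_counts = Counter(rng.sample(bowl, 10).count("red") for _ in range(10_000))
print(sorted(red_counts.items()))
# On average a draw contains 1 red marble, but individual draws vary (0, 2, 3, ... reds).
```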
According to Ramsey theory, pure randomness (in the sense of there being no discernible pattern) is impossible, especially for large structures. Mathematician Theodore Motzkin suggested that "while disorder is more probable in general, complete disorder is impossible".[4] Misunderstanding this can lead to numerous conspiracy theories.[5] Cristian S. Calude stated that "given the impossibility of true randomness, the effort is directed towards studying degrees of randomness".[6] It can be proven that there is an infinite hierarchy (in terms of quality or strength) of forms of randomness.[6]
In ancient history, the concepts of chance and randomness were intertwined with that of fate. Many ancient peoples threw dice to determine fate, and this later evolved into games of chance. Most ancient cultures used various methods of divination to attempt to circumvent randomness and fate.[7][8] Beyond religion and games of chance, randomness has been attested for sortition since at least ancient Athenian democracy, in the form of a kleroterion.[9]
The formalization of odds and chance was perhaps first undertaken by the Chinese some 3,000 years ago. The Greek philosophers discussed randomness at length, but only in non-quantitative forms. It was only in the 16th century that Italian mathematicians began to formalize the odds associated with various games of chance. The invention of calculus had a positive impact on the formal study of randomness. In the 1888 edition of his book The Logic of Chance, John Venn wrote a chapter on The conception of randomness that included his view of the randomness of the digits of pi (π), by using them to construct a random walk in two dimensions.[10]
The early part of the 20th century saw a rapid growth in the formal analysis of randomness, as various approaches to the mathematical foundations of probability were introduced. In the mid-to-late 20th century, ideas of algorithmic information theory introduced new dimensions to the field via the concept of algorithmic randomness.
Although randomness had often been viewed as an obstacle and a nuisance for many centuries, in the 20th century computer scientists began to realize that the deliberate introduction of randomness into computations can be an effective tool for designing better algorithms. In some cases, such randomized algorithms even outperform the best deterministic methods.[11]
Many scientific fields are concerned with randomness:
In the 19th century, scientists used the idea of random motions of molecules in the development of statistical mechanics to explain phenomena in thermodynamics and the properties of gases.
According to several standard interpretations of quantum mechanics, microscopic phenomena are objectively random.[12] That is, in an experiment that controls all causally relevant parameters, some aspects of the outcome still vary randomly. For example, if a single unstable atom is placed in a controlled environment, it cannot be predicted how long it will take for the atom to decay—only the probability of decay in a given time.[13] Thus, quantum mechanics does not specify the outcome of individual experiments, but only the probabilities. Hidden variable theories reject the view that nature contains irreducible randomness: such theories posit that in the processes that appear random, properties with a certain statistical distribution are at work behind the scenes, determining the outcome in each case.
The modern evolutionary synthesis ascribes the observed diversity of life to random genetic mutations followed by natural selection. The latter retains some random mutations in the gene pool due to the systematically improved chance for survival and reproduction that those mutated genes confer on individuals who possess them. However, the location of mutations is not entirely random; for example, biologically important regions may be more protected from mutations.[14][15][16]
Several authors also claim that evolution (and sometimes development) requires a specific form of randomness, namely the introduction of qualitatively new behaviors. Instead of the choice of one possibility among several pre-given ones, this randomness corresponds to the formation of new possibilities.[17][18]
The characteristics of an organism arise to some extent deterministically (e.g., under the influence of genes and the environment), and to some extent randomly. For example, the density of freckles that appear on a person's skin is controlled by genes and exposure to light, whereas the exact location of individual freckles seems random.[19]
As far as behavior is concerned, randomness is important if an animal is to behave in a way that is unpredictable to others. For instance, insects in flight tend to move about with random changes in direction, making it difficult for pursuing predators to predict their trajectories.
The mathematical theory of probability arose from attempts to formulate mathematical descriptions of chance events, originally in the context of gambling, but later in connection with physics. Statistics is used to infer an underlying probability distribution of a collection of empirical observations. For the purposes of simulation, it is necessary to have a large supply of random numbers—or means to generate them on demand.
Algorithmic information theory studies, among other topics, what constitutes a random sequence. The central idea is that a string of bits is random if and only if it is shorter than any computer program that can produce that string (Kolmogorov randomness), which means that random strings are those that cannot be compressed. Pioneers of this field include Andrey Kolmogorov and his student Per Martin-Löf, Ray Solomonoff, and Gregory Chaitin. For the notion of an infinite sequence, mathematicians generally accept Per Martin-Löf's semi-eponymous definition: an infinite sequence is random if and only if it withstands all recursively enumerable null sets.[20] Other notions of random sequences include, among others, recursive randomness and Schnorr randomness, which are based on recursively computable martingales. It was shown by Yongge Wang that these randomness notions are generally different.[21]
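Kolmogorov complexity itself is uncomputable, but practical compressors give a rough, one-sided illustration of the idea that random data resists compression. The sketch below (an illustration only, using Python's standard zlib module) compresses a highly patterned string and an equally long string taken from the operating system's entropy source.

```python
import os
import zlib

structured = b"abc" * 10_000        # 30,000 bytes of highly patterned data
random_like = os.urandom(30_000)    # 30,000 bytes from the OS entropy source

print(len(zlib.compress(structured)))   # a few hundred bytes: easily compressed
print(len(zlib.compress(random_like)))  # about 30,000 bytes: essentially incompressible
```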
Randomness occurs in numbers such as log(2) and pi. The decimal digits of pi constitute an infinite sequence and "never repeat in a cyclical fashion." Numbers like pi are also considered likely to be normal:
Pi certainly seems to behave this way. In the first six billion decimal places of pi, each of the digits from 0 through 9 shows up about six hundred million times. Yet such results, conceivably accidental, do not prove normality even in base 10, much less normality in other number bases.[22]
In statistics, randomness is commonly used to create simple random samples. This allows surveys of completely random groups of people to provide realistic data that is reflective of the population. Common methods of doing this include drawing names out of a hat or using a random digit chart (a large table of random digits).
In information science, irrelevant or meaningless data is considered noise. Noise consists of numerous transient disturbances, with a statistically randomized time distribution.
In communication theory, randomness in a signal is called "noise", and is opposed to that component of its variation that is causally attributable to the source, the signal.
In terms of the development of random networks, for communication randomness rests on the two simple assumptions of Paul Erdős and Alfréd Rényi: that there is a fixed number of nodes which remains fixed for the life of the network, and that all nodes are equal and linked randomly to each other.[clarification needed][23]
The random walk hypothesis considers that asset prices in an organized market evolve at random, in the sense that the expected value of their change is zero but the actual value may turn out to be positive or negative. More generally, asset prices are influenced by a variety of unpredictable events in the general economic environment.
Random selection can be an official method to resolve tied elections in some jurisdictions.[24] Its use in politics has a long history: many offices in ancient Athens were chosen by lot instead of by modern voting.
Randomness can be seen as conflicting with the deterministic ideas of some religions, such as those where the universe is created by an omniscient deity who is aware of all past and future events. If the universe is regarded to have a purpose, then randomness can be seen as impossible. This is one of the rationales for religious opposition to evolution, which states that non-random selection is applied to the results of random genetic variation.
Hindu and Buddhist philosophies state that any event is the result of previous events, as is reflected in the concept of karma. As such, this conception is at odds with the idea of randomness, and any reconciliation between both of them would require an explanation.[25]
In some religious contexts, procedures that are commonly perceived as randomizers are used for divination. Cleromancy uses the casting of bones or dice to reveal what is seen as the will of the gods.
In most of its mathematical, political, social and religious uses, randomness is used for its innate "fairness" and lack of bias.
Politics: Athenian democracy was based on the concept of isonomia (equality of political rights), and used complex allotment machines to ensure that the positions on the ruling committees that ran Athens were fairly allocated. Allotment is now largely restricted to situations where "fairness" is approximated by randomization, such as selecting jurors in Anglo-Saxon legal systems and military draft lotteries.
Games: Random numbers were first investigated in the context of gambling, and many randomizing devices, such as dice, shuffling playing cards, and roulette wheels, were first developed for use in gambling. The ability to produce random numbers fairly is vital to electronic gambling, and, as such, the methods used to create them are usually regulated by government Gaming Control Boards. Random drawings are also used to determine lottery winners. In fact, randomness has been used for games of chance throughout history, and to select out individuals for an unwanted task in a fair way (see drawing straws).
Sports: Some sports, including American football, use coin tosses to randomly select starting conditions for games or seed tied teams for postseason play. The National Basketball Association uses a weighted lottery to order teams in its draft.
Mathematics: Random numbers are also employed where their use is mathematically important, such as sampling for opinion polls and for statistical sampling in quality control systems. Computational solutions for some types of problems use random numbers extensively, such as in the Monte Carlo method and in genetic algorithms.
Medicine: Random allocation of a clinical intervention is used to reduce bias in controlled trials (e.g., randomized controlled trials).
Religion: Although not intended to be random, various forms of divination such as cleromancy see what appears to be a random event as a means for a divine being to communicate their will (see also Free will and Determinism for more).
It is generally accepted that there exist three mechanisms responsible for (apparently) random behaviour in systems: randomness coming from the environment (for example, Brownian motion or hardware random number generators), randomness coming from the initial conditions (as studied in chaos theory), and randomness intrinsically generated by the system (such as the pseudorandomness of many computer algorithms).
The many applications of randomness have led to many different methods for generating random data. These methods may vary as to how unpredictable or statistically random they are, and how quickly they can generate random numbers.
Before the advent of computational random number generators, generating large amounts of sufficiently random numbers (which is important in statistics) required a lot of work. Results would sometimes be collected and distributed as random number tables.
There are many practical measures of randomness for a binary sequence. These include measures based on frequency, discrete transforms, complexity, or a mixture of these, such as the tests by Kak, Phillips, Yuen, Hopkins, Beth and Dai, Mund, and Marsaglia and Zaman.[26]
Quantum nonlocality has been used to certify the presence of a genuine or strong form of randomness in a given string of numbers.[27]
Popular perceptions of randomness are frequently mistaken, and are often based on fallacious reasoning or intuitions.
One such fallacious argument runs: "In a random selection of numbers, since all numbers eventually appear, those that have not come up yet are 'due', and thus more likely to come up soon." This logic is only correct if applied to a system where numbers that come up are removed from the system, such as when playing cards are drawn and not returned to the deck. In this case, once a jack is removed from the deck, the next draw is less likely to be a jack and more likely to be some other card. However, if the jack is returned to the deck, and the deck is thoroughly reshuffled, a jack is as likely to be drawn as any other card. The same applies in any other process where objects are selected independently and none are removed after each event, such as the roll of a die, a coin toss, or most lottery number selection schemes. Truly random processes such as these do not have memory, which makes it impossible for past outcomes to affect future outcomes. In fact, there is no finite number of trials that can guarantee a success.
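The difference between drawing with and without replacement can be made concrete by simulation. The sketch below (illustrative only; the deck model, seed and trial count are arbitrary) estimates the probability that the second card drawn is a jack, given that the first one was, with and without returning the first card to the deck.

```python
import random

rng = random.Random(1)

def p_second_jack(with_replacement: bool, trials: int = 200_000) -> float:
    """P(second card is a jack | first card drawn was a jack)."""
    hits = conditioned = 0
    for _ in range(trials):
        deck = ["jack"] * 4 + ["other"] * 48
        rng.shuffle(deck)
        first = deck.pop()
        if first != "jack":
            continue                      # condition on the first card being a jack
        if with_replacement:
            deck.append(first)
            rng.shuffle(deck)
        conditioned += 1
        hits += deck.pop() == "jack"
    return hits / conditioned

print(p_second_jack(with_replacement=False))  # about 3/51 ≈ 0.059: the past matters
print(p_second_jack(with_replacement=True))   # about 4/52 ≈ 0.077: no memory
```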
In a random sequence of numbers, a number may be said to be cursed because it has come up less often in the past, and so it is thought that it will occur less often in the future. A number may be assumed to be blessed because it has occurred more often than others in the past, and so it is thought likely to come up more often in the future. This logic is valid only if the randomisation might be biased, for example if a die is suspected to be loaded then its failure to roll enough sixes would be evidence of that loading. If the die is known to be fair, then previous rolls can give no indication of future events.
In nature, events rarely occur with a frequency that is knowna priori, so observing outcomes to determine which events are more probable makes sense. However, it is fallacious to apply this logic to systems designed and known to make all outcomes equally likely, such as shuffled cards, dice, and roulette wheels.
In the beginning of a scenario, one might calculate the probability of a certain event. However, as soon as one gains more information about the scenario, one may need to re-calculate the probability accordingly.
For example, when being told that a woman has two children, one might be interested in knowing if either of them is a girl, and if so, the probability that the other child is also a girl. Considering the two events independently, one might expect that the probability that the other child is female is ½ (50%), but by building a probability space illustrating all possible outcomes, one would notice that the probability is actually only ⅓ (33%).
To be sure, the probability space does illustrate four ways of having these two children: boy-boy, girl-boy, boy-girl, and girl-girl. But once it is known that at least one of the children is female, this rules out the boy-boy scenario, leaving only three ways of having the two children: boy-girl, girl-boy, girl-girl. From this, it can be seen that only ⅓ of these scenarios would have the other child also be a girl[28] (see Boy or girl paradox for more).
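The probability space described above can be enumerated in a few lines; the sketch below (illustrative only, assuming each birth is independently a boy or a girl with equal probability) reproduces the ⅓ figure.

```python
from itertools import product

families = list(product(["boy", "girl"], repeat=2))       # BB, BG, GB, GG, equally likely
at_least_one_girl = [f for f in families if "girl" in f]   # BG, GB, GG
both_girls = [f for f in at_least_one_girl if f == ("girl", "girl")]

print(len(both_girls) / len(at_least_one_girl))  # 1/3 ≈ 0.333
```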
In general, by using a probability space, one is less likely to miss out on possible scenarios, or to neglect the importance of new information. This technique can be used to provide insights in other situations such as the Monty Hall problem, a game show scenario in which a car is hidden behind one of three doors, and two goats are hidden as booby prizes behind the others. Once the contestant has chosen a door, the host opens one of the remaining doors to reveal a goat, eliminating that door as an option. With only two doors left (one with the car, the other with another goat), the player must decide to either keep their decision, or to switch and select the other door. Intuitively, one might think the player is choosing between two doors with equal probability, and that the opportunity to choose another door makes no difference. However, an analysis of the probability spaces would reveal that the contestant has received new information, and that changing to the other door would increase their chances of winning.[28]
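A short simulation makes the Monty Hall result tangible; the sketch below (illustrative only; trial count and seed are arbitrary) compares the win rate for staying with the original door versus switching.

```python
import random

def monty_hall(switch: bool, trials: int = 100_000, seed: int = 0) -> float:
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)       # door hiding the car
        pick = rng.randrange(3)      # contestant's initial choice
        # The host opens a door that is neither the contestant's pick nor the car.
        opened = rng.choice([d for d in range(3) if d != pick and d != car])
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += pick == car
    return wins / trials

print(monty_hall(switch=False))  # about 1/3
print(monty_hall(switch=True))   # about 2/3
```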
|
https://en.wikipedia.org/wiki/Randomness
|
The suffix -onym (from Ancient Greek: ὄνυμα, lit. 'name') is a bound morpheme that is attached to the end of a root word, thus forming a new compound word that designates a particular class of names. In linguistic terminology, compound words that are formed with the suffix -onym are most commonly used as designations for various onomastic classes. Most onomastic terms that are formed with the suffix -onym are classical compounds, whose word roots are taken from classical languages (Greek and Latin).[1][2]
For example, onomastic terms like toponym and linguonym are typical classical (or neoclassical) compounds, formed from the suffix -onym and classical (Greek and Latin) root words (Ancient Greek: τόπος 'place'; Latin: lingua 'language'). In some compounds, the -onym morpheme has been modified by replacing (or dropping) the "o". In compounds like ananym and metanym, the correct forms (anonym and metonym) were pre-occupied by other meanings. Other, late 20th-century examples, such as hypernym and characternym, are typically redundant neologisms, for which there are more traditional words formed with the full -onym (hyperonym and charactonym).
The English suffix -onym is from the Ancient Greek suffix -ώνυμον (ōnymon), neuter of the suffix ώνυμος (ōnymos), having a specified kind of name, from the Greek ὄνομα (ónoma), Aeolic Greek ὄνυμα (ónyma), "name". The form -ōnymos is that taken by ónoma when it is the end component of a bahuvrihi compound, but in English its use is extended to tatpuruṣa compounds.
The suffix is found in many modern languages with various spellings. Examples are: Dutch synoniem, German Synonym, Portuguese sinónimo, Russian синоним (sinonim), Polish synonim, Finnish synonyymi, Indonesian sinonim, Czech synonymum.
According to a 1988 study[3] of words ending in -onym, there are four discernible classes of -onym words: (1) historic, classic, or, for want of better terms, naturally occurring or common words; (2) scientific terminology, occurring in particular in linguistics, onomastics, etc.; (3) language games; and (4) nonce words. Older terms are known to gain new, sometimes contradictory, meanings (e.g., eponym and cryptonym). In many cases, two or more words describe the same phenomenon, but no precedence is discernible (e.g., necronym and penthonym). New words are sometimes created whose meaning duplicates that of existing terms. On occasion, new words are formed with little regard to historical principles.
|
https://en.wikipedia.org/wiki/-onym
|
This article discusses the methods and results of comparing different electoral systems. There are two broad ways to compare voting systems: by evaluating how well they perform in simulated or real elections, and by examining which formal logical criteria they satisfy.
Voting methods can be evaluated by measuring their accuracy under random simulated elections aiming to be faithful to the properties of elections in real life. The first such evaluation was conducted by Chamberlin and Cohen in 1978, who measured the frequency with which certain non-Condorcet systems elected Condorcet winners.[1]
The Marquis de Condorcet viewed elections as analogous to jury votes where each member expresses an independent judgement on the quality of candidates. Candidates differ in terms of their objective merit, but voters have imperfect information about the relative merits of the candidates. Such jury models are sometimes known as valence models. Condorcet and his contemporary Laplace demonstrated that, in such a model, voting theory could be reduced to probability by finding the expected quality of each candidate.[2]
The jury model implies several natural concepts of accuracy for voting systems under different models:
However, Condorcet's model is based on the extremely strong assumption of independent errors, i.e. voters will not be systematically biased in favor of one group of candidates or another. This is usually unrealistic: voters tend to communicate with each other, form parties or political ideologies, and engage in other behaviors that can result in correlated errors.
Duncan Black proposed a one-dimensional spatial model of voting in 1948, viewing elections as ideologically driven.[4] His ideas were later expanded by Anthony Downs.[5] Voters' opinions are regarded as positions in a space of one or more dimensions; candidates have positions in the same space; and voters choose candidates in order of proximity (measured under Euclidean distance or some other metric).
Spatial models imply a different notion of merit for voting systems: the more acceptable the winning candidate may be as a location parameter for the voter distribution, the better the system. A political spectrum is a one-dimensional spatial model.
Neutral voting models try to minimize the number of parameters, as an example of the nothing-up-my-sleeve principle. The most common such model is the impartial anonymous culture model (or Dirichlet model). These models assume voters assign each candidate a utility completely at random (from a uniform distribution).
Tideman and Plassmann conducted a study which showed that a two-dimensional spatial model gave a reasonable fit to 3-candidate reductions of a large set of electoral rankings. Jury models, neutral models, and one-dimensional spatial models were all inadequate.[6] They looked at Condorcet cycles in voter preferences (an example of which is A being preferred to B by a majority of voters, B to C and C to A) and found that the number of them was consistent with small-sample effects, concluding that "voting cycles will occur very rarely, if at all, in elections with many voters." The relevance of sample size had been studied previously by Gordon Tullock, who argued graphically that although finite electorates will be prone to cycles, the area in which candidates may give rise to cycling shrinks as the number of voters increases.[7]
A utilitarian model views voters as ranking candidates in order of utility. The rightful winner, under this model, is the candidate who maximizes overall social utility. A utilitarian model differs from a spatial model in several important ways:
It follows from the last property that no voting system which gives equal influence to all voters is likely to achieve maximum social utility. Extreme cases of conflict between the claims of utilitarianism and democracy are referred to as the 'tyranny of the majority'. See Laslier's, Merlin's, and Nurmi's comments in Laslier's write-up.[8]
James Mill seems to have been the first to claim the existence of an a priori connection between democracy and utilitarianism – see the Stanford Encyclopedia article.[9]
Suppose that the ith candidate in an election has merit xi (we may assume that xi ~ N(0, σ²)[10]), and that voter j's level of approval for candidate i may be written as xi + εij (we will assume that the εij are i.i.d. N(0, τ²)). We assume that a voter ranks candidates in decreasing order of approval. We may interpret εij as the error in voter j's valuation of candidate i and regard a voting method as having the task of finding the candidate of greatest merit.
Each voter will rank the better of two candidates higher than the less good with a determinate probability p (which, under the normal model outlined here, is equal to ½ + (1/π) arctan(σ/τ), as can be confirmed from a standard formula for Gaussian integrals over a quadrant[citation needed]). Condorcet's jury theorem shows that so long as p > ½, the majority vote of a jury will be a better guide to the relative merits of two candidates than is the opinion of any single member.
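The formula for p can be checked numerically. The sketch below (illustrative only; σ, τ, seed and trial count are arbitrary choices) simulates a single voter's pairwise comparison under the model just described and compares the observed frequency with ½ + (1/π) arctan(σ/τ).

```python
import math
import random

def pairwise_accuracy(sigma: float, tau: float, trials: int = 200_000, seed: int = 0) -> float:
    """Fraction of pairwise comparisons in which a voter ranks the
    objectively better of two candidates higher, under the jury model."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        x1, x2 = rng.gauss(0, sigma), rng.gauss(0, sigma)  # candidate merits
        v1 = x1 + rng.gauss(0, tau)                        # one voter's noisy valuations
        v2 = x2 + rng.gauss(0, tau)
        correct += (v1 > v2) == (x1 > x2)
    return correct / trials

sigma, tau = 1.0, 2.0
print(pairwise_accuracy(sigma, tau))              # simulated p
print(0.5 + math.atan(sigma / tau) / math.pi)     # theoretical p ≈ 0.6476
```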
Peyton Youngshowed that three further properties apply to votes between arbitrary numbers of candidates, suggesting that Condorcet was aware of the first and third of them.[11]
Robert F. Bordley constructed a 'utilitarian' model which is a slight variant of Condorcet's jury model.[12] He viewed the task of a voting method as that of finding the candidate who has the greatest total approval from the electorate, i.e. the highest sum of individual voters' levels of approval. This model makes sense even with σ² = 0, in which case p takes the value ½ + (1/π) arctan(1/(n−1)), where n is the number of voters. He performed an evaluation under this model, finding as expected that the Borda count was most accurate.
A simulated election can be constructed from a distribution of voters in a suitable space. The illustration shows voters satisfying a bivariate Gaussian distribution centred on O. There are 3 randomly generated candidates, A, B and C. The space is divided into 6 segments by 3 lines, with the voters in each segment having the same candidate preferences. The proportion of voters ordering the candidates in any way is given by the integral of the voter distribution over the associated segment.
The proportions corresponding to the 6 possible orderings of candidates determine the results yielded by different voting systems. Those which elect the best candidate, i.e. the candidate closest to O (who in this case is A), are considered to have given a correct result, and those which elect someone else have exhibited an error. By looking at results for large numbers of randomly generated candidates the empirical properties of voting systems can be measured.
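A minimal version of this construction can be coded directly. The sketch below (illustrative only; the voter count, trial count and restriction to plurality voting are arbitrary simplifications) draws voters from a bivariate Gaussian centred on O, generates three random candidates, and measures how often plurality voting elects the candidate closest to O.

```python
import math
import random

def plurality_accuracy(n_voters: int = 2_000, n_trials: int = 400, seed: int = 0) -> float:
    """Fraction of simulated elections in which plurality (first preferences only)
    elects the candidate closest to the centre of the voter distribution."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        candidates = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(3)]
        best = min(range(3), key=lambda c: math.hypot(*candidates[c]))   # closest to O
        tallies = [0, 0, 0]
        for _ in range(n_voters):
            voter = (rng.gauss(0, 1), rng.gauss(0, 1))
            tallies[min(range(3), key=lambda c: math.dist(voter, candidates[c]))] += 1
        correct += tallies.index(max(tallies)) == best
    return correct / n_trials

print(plurality_accuracy())   # noticeably below 1.0: plurality sometimes misses the best candidate
```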
The evaluation protocol outlined here is modelled on the one described by Tideman and Plassmann.[6] Evaluations of this type are commonest for single-winner electoral systems. Ranked voting systems fit most naturally into the framework, but other types of ballot (such as FPTP and approval voting) can be accommodated with lesser or greater effort.
The evaluation protocol can be varied in a number of ways:
One of the main uses of evaluations is to compare the accuracy of voting systems when voters vote sincerely. If an infinite number of voters satisfy a Gaussian distribution, then the rightful winner of an election can be taken to be the candidate closest to the mean/median, and the accuracy of a method can be identified with the proportion of elections in which the rightful winner is elected. The median voter theorem guarantees that all Condorcet systems will give 100% accuracy (and the same applies to Coombs' method[14]).
Evaluations published in research papers use multidimensional Gaussians, making the calculation numerically difficult.[1][15][16][17]The number of voters is kept finite and the number of candidates is necessarily small.
The computation is much more straightforward in a single dimension, which allows an infinite number of voters and an arbitrary number m of candidates. Results for this simple case are shown in the first table, which is directly comparable with Table 5 (1000 voters, medium dispersion) of the cited paper by Chamberlin and Cohen. The candidates were sampled randomly from the voter distribution and a single Condorcet method (Minimax) was included in the trials for confirmation.
The relatively poor performance of the Alternative Vote (IRV) is explained by the well known and common source of error illustrated by the diagram, in which the election satisfies a univariate spatial model and the rightful winner B will be eliminated in the first round. A similar problem exists in all dimensions.
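This failure mode is easy to reproduce. The sketch below (illustrative only; the candidate positions and voter distribution are hypothetical choices made to exhibit the effect) places a centrist candidate B between A and C under a univariate spatial model: B wins every pairwise comparison yet has the fewest first preferences, so IRV eliminates B in the first round.

```python
import random

rng = random.Random(0)
positions = {"A": -0.5, "B": 0.1, "C": 0.6}          # hypothetical candidate positions
voters = [rng.gauss(0, 1) for _ in range(100_000)]   # standard-normal electorate

def prefers(v: float, x: str, y: str) -> bool:
    return abs(positions[x] - v) < abs(positions[y] - v)

first_prefs = {c: 0 for c in positions}
for v in voters:
    first_prefs[min(positions, key=lambda c: abs(positions[c] - v))] += 1
print(first_prefs)   # B typically has the fewest first preferences, so IRV eliminates B first

b_beats_a = sum(prefers(v, "B", "A") for v in voters) / len(voters)
b_beats_c = sum(prefers(v, "B", "C") for v in voters) / len(voters)
print(b_beats_a, b_beats_c)   # both above 0.5: B is nonetheless the pairwise (Condorcet) winner
```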
An alternative measure of accuracy is the average distance of voters from the winner (in which smaller means better). This is unlikely to change the ranking of voting methods, but is preferred by people who interpret distance as disutility. The second table shows the average distance (in standard deviations) minus √(2/π) (which is the average distance of a variate from the centre of a standard Gaussian distribution) for 10 candidates under the same model.
James Green-Armytage et al. published a study in which they assessed the vulnerability of several voting systems to manipulation by voters.[18]They say little about how they adapted their evaluation for this purpose, mentioning simply that it "requires creative programming". An earlier paper by the first author gives a little more detail.[19]
The number of candidates in their simulated elections was limited to 3. This removes the distinction between certain systems; for instance Black's method and the Dasgupta–Maskin method are equivalent on 3 candidates.
The conclusions from the study are hard to summarise, but the Borda count performed badly; Minimax was somewhat vulnerable; and IRV was highly resistant. The authors showed that limiting any method to elections with no Condorcet winner (choosing the Condorcet winner when there was one) would never increase its susceptibility to tactical voting. They reported that the 'Condorcet-Hare' system, which uses IRV as a tie-break for elections not resolved by the Condorcet criterion, was as resistant to tactical voting as IRV on its own and more accurate. Condorcet-Hare is equivalent to Copeland's method with an IRV tie-break in elections with 3 candidates.
Some systems, and the Borda count in particular, are vulnerable when the distribution of candidates is displaced relative to the distribution of voters. The attached table shows the accuracy of the Borda count (as a percentage) when an infinite population of voters satisfies a univariate Gaussian distribution and m candidates are drawn from a similar distribution offset by x standard deviations. Red colouring indicates figures which are worse than random. Recall that all Condorcet methods give 100% accuracy for this problem. (And notice that the reduction in accuracy as x increases is not seen when there are only 3 candidates.)
Sensitivity to the distribution of candidates can be thought of as a matter either of accuracy or of resistance to manipulation. If one expects that in the course of things candidates will naturally come from the same distribution as voters, then any displacement will be seen as attempted subversion; but if one thinks that factors determining the viability of candidacy (such as financial backing) may be correlated with ideological position, then one will view it more in terms of accuracy.
Published evaluations take different views of the candidate distribution. Some simply assume that candidates are drawn from the same distribution as voters.[16][18]Several older papers assume equal means but allow the candidate distribution to be more or less tight than the voter distribution.[20][1]A paper by Tideman and Plassmann approximates the relationship between candidate and voter distributions based on empirical measurements.[15]This is less realistic than it may appear, since it makes no allowance for the candidate distribution to adjust to exploit any weakness in the voting system. A paper by James Green-Armytage looks at the candidate distribution as a separate issue, viewing it as a form of manipulation and measuring the effects of strategic entry and exit. Unsurprisingly he finds the Borda count to be particularly vulnerable.[19]
The task of a voting system under a spatial model is to identify the candidate whose position most accurately represents the distribution of voter opinions. This amounts to choosing a location parameter for the distribution from the set of alternatives offered by the candidates. Location parameters may be based on the mean, the median, or the mode; but since ranked preference ballots provide only ordinal information, the median is the only acceptable statistic.
This can be seen from the diagram, which illustrates two simulated elections with the same candidates but different voter distributions. In both cases the mid-point between the candidates is the 51st percentile of the voter distribution; hence 51% of voters prefer A and 49% prefer B. If we consider a voting method to be correct if it elects the candidate closest to the median of the voter population, then since the median is necessarily slightly to the left of the 51% line, a voting method will be considered to be correct if it elects A in each case.
The mean of the teal distribution is also slightly to the left of the 51% line, but the mean of the orange distribution is slightly to the right. Hence if we consider a voting method to be correct if it elects the candidate closest to the mean of the voter population, then a method will not be able to obtain full marks unless it produces different winners from the same ballots in the two elections. Clearly this will impute spurious errors to voting methods. The same problem will arise for any cardinal measure of location; only the median gives consistent results.
The median is not defined for multivariate distributions, but the univariate median has a property which generalizes conveniently. The median of a distribution is the position whose average distance from all points within the distribution is smallest. This definition generalizes to the geometric median in multiple dimensions. The distance is often defined as a voter's disutility function.
If we have a set of candidates and a population of voters, then it is not necessary to solve the computationally difficult problem of finding the geometric median of the voters and then identify the candidate closest to it; instead we can identify the candidate whose average distance from the voters is minimized. This is the metric which has been generally deployed since Merrill onwards;[20]see also Green-Armytage and Darlington.[19][16]
The candidate closest to the geometric median of the voter distribution may be termed the 'spatial winner'.
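In code, identifying the spatial winner in this sense reduces to scoring each candidate by mean distance to the voters and taking the minimiser. The sketch below (illustrative only; the voter sample and candidate positions are hypothetical) shows the computation.

```python
import math
import random

rng = random.Random(3)
voters = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(5_000)]    # hypothetical electorate
candidates = {"A": (0.2, -0.1), "B": (1.0, 0.8), "C": (-0.9, 0.4)}      # hypothetical positions

def mean_distance(pos):
    """Average Euclidean distance from the voters to a candidate position."""
    return sum(math.dist(v, pos) for v in voters) / len(voters)

print({c: round(mean_distance(p), 3) for c, p in candidates.items()})
print("spatial winner:", min(candidates, key=lambda c: mean_distance(candidates[c])))
```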
Data from real elections can be analysed to compare the effects of different systems, either by comparing between countries or by applying alternative electoral systems to the real election data. The electoral outcomes can be compared through democracy indices, measures of political fragmentation, voter turnout,[21][22] political efficacy and various economic and judicial indicators. The practical criteria to assess real elections include the share of wasted votes, the complexity of vote counting, proportionality of the representation elected based on parties' shares of votes, and barriers to entry for new political movements.[23] Additional opportunities for comparison of real elections arise through electoral reforms.
A Canadian example of such an opportunity is seen in the City of Edmonton, which went from first-past-the-post voting in the 1917 Alberta general election to five-member plurality block voting in the 1921 Alberta general election, to five-member single transferable voting in the 1926 Alberta general election, then back to FPTP in the 1959 Alberta general election. One party swept all the Edmonton seats in 1917, 1921 and 1959. Under STV in 1926, two Conservatives, one Liberal, one Labour and one United Farmers MLA were elected.
Traditionally the merits of different electoral systems have been argued by reference to logical criteria. These have the form of rules of inference for electoral decisions, licensing the deduction, for instance, that "if E and E' are elections such that R(E, E'), and if A is the rightful winner of E, then A is the rightful winner of E'".
The absolute criteria state that, if the set of ballots is a certain way, a certain candidate must or must not win.
These are criteria that state that, if a certain candidate wins in one circumstance, the same candidate must (or must not) win in a related circumstance.
These are criteria which relate to the process of counting votes and determining a winner.
These are criteria that relate to a voter's incentive to use certain forms of strategy. They could also be considered as relative result criteria; however, unlike the criteria in that section, these criteria are directly relevant to voters; the fact that a method passes these criteria can simplify the process of figuring out one's optimal strategic vote.
Ballots are broadly distinguishable into two categories, cardinal and ordinal, where cardinal ballots request individual measures of support for each candidate and ordinal ballots request relative measures of support. A few methods do not fall neatly into one category; for example STAR asks the voter to give independent ratings for each candidate, but uses both the absolute and relative ratings to determine the winner. Comparing two methods based on ballot type alone is mostly a matter of voter experience preference, unless the ballot type is connected back to one of the other mathematical criteria listed here.
Criterion A is "stronger" than B if satisfying A implies satisfying B. For instance, the Condorcet criterion is stronger than the majority criterion, because all majority winners are Condorcet winners. Thus, any voting method that satisfies the Condorcet criterion must satisfy the majority criterion.
The following table shows which of the above criteria are met by several single-winner methods. Not every criterion is listed.
The concerns raised above are used by social choice theorists to devise systems that are accurate and resistant to manipulation. However, there are also practical reasons why one system may be more socially acceptable than another, which fall under the fields of public choice and political science.[8][16] Important practical considerations include:
Other considerations include barriers to entry to political competition[28] and the likelihood of gridlocked government.[29]
Multi-winner electoral systems at their best seek to produce assemblies representative in a broader sense than that of making the same decisions as would be made by single-winner votes. They can also be a route to one-party sweeps of a city's seats, if a non-proportional system, such as plurality block voting or ticket voting, is used.
Evaluating the performance of multi-winner voting methods requires different metrics than are used for single-winner systems. The following have been proposed.
The following table shows which of the above criteria are met by several multiple winner methods.
|
https://en.wikipedia.org/wiki/Voting_system_criterion
|
Radical democracy is a type of democracy that advocates the radical extension of equality and liberty.[1] Radical democracy is concerned with a radical extension of equality and freedom, following the idea that democracy is an unfinished, inclusive, continuous and reflexive process.[1]
Within radical democracy there are three distinct strands, as articulated by Lincoln Dahlberg.[1]These strands can be labeled as agonistic, deliberative and autonomist.
The first and most noted strand of radical democracy is the agonistic perspective, which is associated with the work of Laclau and Mouffe. Radical democracy was articulated by Ernesto Laclau and Chantal Mouffe in their book Hegemony and Socialist Strategy: Towards a Radical Democratic Politics, written in 1985. They argue that social movements which attempt to create social and political change need a strategy which challenges neoliberal and neoconservative concepts of democracy.[2] This strategy is to expand the liberal definition of democracy, based on freedom and equality, to include difference.[2]
According to Laclau and Mouffe, "radical democracy" means "the root of democracy".[3] Laclau and Mouffe claim that liberal democracy and deliberative democracy, in their attempts to build consensus, oppress differing opinions, races, classes, genders, and worldviews.[2] In the world, in a country, and in a social movement there are many (a plurality of) differences which resist consensus. Radical democracy is not only accepting of difference, dissent and antagonisms, but is dependent on them.[2] Laclau and Mouffe argue from the assumption that there are oppressive power relations in society and that those oppressive relations should be made visible, re-negotiated and altered.[4] By building democracy around difference and dissent, oppressive power relations existing in societies are able to come to the forefront so that they can be challenged.[2]
The second strand, the deliberative, is mostly associated with the work of Jürgen Habermas. This strand of radical democracy is opposed to the agonistic perspective of Laclau and Mouffe. Habermas argues that political problems surrounding the organization of life can be resolved by deliberation,[5] that is, people coming together and deliberating on the best possible solution. In contrast with the agonistic perspective, this type of radical democracy is based on consensus and communicative means: there is a reflexive critical process of coming to the best solution.[5] Equality and freedom are at the root of Habermas's deliberative theory. The deliberation is established through institutions that can ensure free and equal participation of all.[5] Habermas is aware of the fact that different cultures, world-views and ethics can lead to difficulties in the deliberative process. Despite this fact he argues that communicative reason can create a bridge between opposing views and interests.[5]
The third strand of radical democracy is the autonomist strand, which is associated with left-communist and post-Marxist ideas. The difference between this type of radical democracy and the two noted above is the focus on "the community".[1] The community is seen as the pure constituted power instead of the deliberative rational individuals or the agonistic groups of the first two strands. The community resembles a "plural multitude" (of people) instead of the working class of traditional Marxist theory.[1] This plural multitude is the pure constituted power and reclaims this power by searching for and creating mutual understandings within the community.[1] This strand of radical democracy challenges the traditional thinking about equality and freedom in liberal democracies by stating that individual equality can be found in the singularities within the multitude, that equality overall is created by an all-inclusive multitude, and that freedom is created by restoring the multitude to its pure constituted power.[1] This strand of radical democracy is often a term used to refer to the post-Marxist perspectives of Italian radicalism – for example Paolo Virno.
Laclau and Mouffe have argued for a radical agonistic democracy, where different opinions and worldviews are not oppressed by the search for consensus in liberal and deliberative democracy. As this agonistic perspective has been most influential in the academic literature, it has been subject to most of the criticisms of the idea of radical democracy. Brockelman, for example, argues that the theory of radical democracy is a utopian idea.[15] Political theory, he argues, should not be used as offering a vision of a desirable society. In the same vein, it is argued that radical democracy might be useful at the local level, but does not offer a realistic perception of decision-making at the national level.[16] For example, people might know what they want to see changed in their town and feel the urge to participate in the decision-making process of future local policy. Developing an opinion about issues at the local level often does not require specific skills or education. Deliberation in order to combat the problem of groupthink, in which the view of the majority dominates over the view of the minority, can be useful in this setting. However, people might not be skilled enough or willing to decide about national or international problems. A radical democracy approach for overcoming the flaws of democracy is, it is argued, not suitable for levels higher than the local one.
Habermas and Rawls have argued for radical deliberative democracy, where consensus and communicative means are at the root of politics. However, some scholars identify multiple tensions between participation and deliberation. Three of these tensions are identified by Joshua Cohen, a student of the philosopher John Rawls:[17]
However, the concept of radical democracy is seen in some circles as colonial in nature due to its reliance on a western notion of democracy.[18]It is argued that liberal democracy is viewed by the West as the only legitimate form of governance.[19]
Since Laclau and Mouffe argued for a radical democracy, many other theorists and practitioners have adapted and changed the term.[2] For example, bell hooks and Henry Giroux have both written about the application of radical democracy in education. In hooks' book Teaching to Transgress: Education as the Practice of Freedom she argues for an education where educators teach students to go beyond the limits imposed by racial, sexual and class boundaries in order to "achieve the gift of freedom".[20] Paulo Freire's work, although initiated decades before Laclau and Mouffe, can also be read through similar lenses.[21][22][23] Theorists such as Paul Chatterton and Richard J. F. Day have written about the importance of radical democracy within some of the autonomous movements in Latin America (namely the EZLN—Zapatista Army of National Liberation in Mexico, the MST—Landless Workers' Movement in Brazil, and the Piquetero—Unemployed Workers' Movement in Argentina), although the term radical democracy is used differently in these contexts.[24][25]
With the rise of the internet in the years after the development of various strands of radical democracy theory, the relationship between the internet and the theory has been increasingly focused upon. The internet is regarded as an important aspect of radical democracy, as it provides a means for communication which is central to every approach to the theory.
The internet is believed to reinforce both the theory of radical democracy and the actual possibility of radical democracy in three distinct ways:[26]
The last of these refers to the concept of a radical public sphere where voice in the political debate is given to otherwise oppressed or marginalized groups.[27] Approached from radical democracy theory, the expression of such views on the internet can be understood as online activism. In current liberal representative democracies, certain voices and interests are always favored above others. Through online activism, excluded opinions and views can still be articulated. In this way, activists contribute to the ideal of a heterogeneity of positions. However, the digital age does not necessarily contribute to the notion of radical democracy: social media platforms have the power to shut down certain, often radical, voices, which is counterproductive to radical democracy.[28]
|
https://en.wikipedia.org/wiki/Radical_democracy
|
In mathematics and computer science, optimal radix choice is the problem of choosing the base, or radix, that is best suited for representing numbers. Various proposals have been made to quantify the relative costs of using different radices in representing numbers, especially in computer systems. One formula is the number of digits needed to express a number in that base, multiplied by the base (the number of possible values each digit could have). This expression also arises in questions regarding organizational structure, networking, and other fields.
The cost of representing a number N in a given base b can be defined as

E(b, N) = b ⌊log_b(N) + 1⌋,

where we use the floor function ⌊x⌋ and the base-b logarithm log_b.
If both b and N are positive integers, then the quantity E(b, N) is equal to the number of digits needed to express the number N in base b, multiplied by the base b.[1] This quantity thus measures the cost of storing or processing the number N in base b if the cost of each "digit" is proportional to b. A base with a lower average E(b, N) is therefore, in some senses, more efficient than a base with a higher average value.
For example, 100 in decimal has three digits, so its cost of representation is 10 × 3 = 30, while its binary representation has seven digits (1100100₂), so the analogous calculation gives 2 × 7 = 14. Likewise, in base 3 its representation has five digits (10201₃), for a value of 3 × 5 = 15, and in base 36 (2S₃₆) one finds 36 × 2 = 72.
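A short sketch (illustrative only) computes E(b, N) with exact integer arithmetic and reproduces the worked example above; it also checks that b/ln(b) is minimised at b = 3 among small integer bases.

```python
import math

def digit_cost(b: int, n: int) -> int:
    """E(b, N) = b * (number of base-b digits of N), for N >= 1 and b >= 2."""
    digits = 0
    while n:
        n //= b
        digits += 1
    return b * digits

print({b: digit_cost(b, 100) for b in (2, 3, 10, 36)})   # {2: 14, 3: 15, 10: 30, 36: 72}

# The asymptotic cost per unit of ln(N) is b / ln(b), minimised at b = 3 over the integers.
print(min(range(2, 13), key=lambda b: b / math.log(b)))  # 3
```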
If the number is imagined to be represented by a combination lock or a tally counter, in which each wheel has b digit faces, labelled 0, 1, ..., b−1, and which has ⌊log_b(N) + 1⌋ wheels, then E(b, N) is the total number of digit faces needed to inclusively represent any integer from 0 to N.
The quantity E(b, N) for large N can be approximated as follows:

E(b, N) ≈ b log_b(N) = b ln(N) / ln(b).

The asymptotically best value is obtained for base 3, since b/ln(b) attains a minimum for b = 3 in the positive integers: 3/ln(3) ≈ 2.731, compared with 2/ln(2) = 4/ln(4) ≈ 2.885.
For base 10, we have 10/ln(10) ≈ 4.343.
The closely related continuous optimization problem of finding the maximum of the function f(x) = x^(1/x), or equivalently, on taking logs and inverting, minimizing x/ln(x) for continuous rather than integer values of x, was posed and solved by Jakob Steiner in 1850.[2] The solution is Euler's number e ≈ 2.71828, the base of the natural logarithm, for which e/ln(e) = e ≈ 2.71828. Translating this solution back to Steiner's formulation, e^(1/e) ≈ 1.44467 is the unique maximum of f(x) = x^(1/x).[3]
This analysis has sometimes been used to argue that, in some sense, "base e is the most economical base for the representation and storage of numbers", despite the difficulty in understanding what that might mean in practice.[4]
This topic appears in Underwood Dudley's Mathematical Cranks. One of the eccentrics discussed in the book argues that e is the best base, based on a muddled understanding of Steiner's calculus problem, and with a greatly exaggerated sense of how important the choice of radix is.[5]
The values of E(b, N) for two bases b1 and b2 may be compared for a large value of N:

E(b1, N) / E(b2, N) ≈ (b1 / ln(b1)) / (b2 / ln(b2)) = b1 ln(b2) / (b2 ln(b1)).

Choosing e for b2 gives

E(b, N) / E(e, N) ≈ b / (e ln(b)).

The average E(b, N) of various bases up to several arbitrary numbers (avoiding proximity to powers of 2 through 12 and e) are given in the table below. Also shown are the values relative to that of base e. E(1, N) of any number N is just N, making unary the most economical for the first few integers, but this no longer holds as N climbs to infinity.
(Table columns: N = 1 to 6, N = 1 to 43, N = 1 to 182, and N = 1 to 5329.)
One result of the relative economy of base 3 is that ternary search trees offer an efficient strategy for retrieving elements of a database.[6] A similar analysis suggests that the optimum design of a large telephone menu system to minimise the number of menu choices that the average customer must listen to (i.e. the product of the number of choices per menu and the number of menu levels) is to have three choices per menu.[1]
In a d-ary heap, a priority queue data structure based on d-ary trees, the worst-case number of comparisons per operation in a heap containing n elements is d log_d(n) (up to lower-order terms), the same formula used above. It has been suggested that choosing d = 3 or d = 4 may offer optimal performance in practice.[7]
Brian Hayes suggests that E(b, N) may be the appropriate measure for the complexity of an interactive voice response menu: in a tree-structured phone menu with n outcomes and r choices per step, the time to traverse the menu is proportional to the product of r (the time to present the choices at each step) with log_r(n) (the number of choices that need to be made to determine the outcome). From this analysis, the optimal number of choices per step in such a menu is three.[1]
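The same trade-off can be tabulated directly; the sketch below (illustrative only; the number of menu outcomes is a hypothetical choice) counts, for each branching factor r, the number of menu levels needed to distinguish the outcomes and multiplies by r.

```python
def levels_needed(r: int, n: int) -> int:
    """Smallest number of menu levels, each with r choices, that can distinguish
    at least n outcomes (i.e. the ceiling of log_r(n)), using exact arithmetic."""
    levels, capacity = 0, 1
    while capacity < n:
        capacity *= r
        levels += 1
    return levels

n_outcomes = 10_000   # hypothetical number of destinations reachable through the menu
print({r: r * levels_needed(r, n_outcomes) for r in range(2, 11)})
# {2: 28, 3: 27, 4: 28, 5: 30, ...} — minimised at three choices per level
```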
The 1950 reference High-Speed Computing Devices describes a particular situation using contemporary technology. Each digit of a number would be stored as the state of a ring counter composed of several triodes. Whether vacuum tubes or thyratrons, the triodes were the most expensive part of a counter. For small radices r, less than about 7, a single digit required r triodes.[8] (Larger radices required 2r triodes arranged as r flip-flops, as in ENIAC's decimal counters.)[9]
So the number of triodes in a numerical register with n digits was rn. In order to represent numbers up to 10^6, the following numbers of tubes were needed:
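The figures can be recomputed from the rule just stated. The sketch below (an illustration, not a verbatim reproduction of the book's table) counts r triodes per digit for a register large enough for numbers up to 10^6, and optimistically 10 triodes per decimal ring for radix 10.

```python
def tubes(radix: int, limit: int = 10**6) -> int:
    """Triodes for a register of base-`radix` ring counters with enough digits
    for numbers up to `limit`, assuming `radix` triodes per digit."""
    digits, capacity = 0, 1
    while capacity < limit:
        capacity *= radix
        digits += 1
    return radix * digits

print({r: tubes(r) for r in (2, 3, 4, 5, 10)})
# {2: 40, 3: 39, 4: 40, 5: 45, 10: 60} — radix 3 narrowly wins, and radix 10 costs about
# one and a half times as much as radices 2, 3 or 4, as the quotation below notes.
```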
The authors conclude,
Under these assumptions, the radix 3, on the average, is the most economical choice, closely followed by radices 2 and 4. These assumptions are, of course, only approximately valid, and the choice of 2 as a radix is frequently justified on more complete analysis. Even with the optimistic assumption that 10 triodes will yield a decimal ring, radix 10 leads to about one and one-half times the complexity of radix 2, 3, or 4. This is probably significant despite the shallow nature of the argument used here.[10]
|
https://en.wikipedia.org/wiki/Radix_economy
|
Parallel tempering, in physics and statistics, is a computer simulation method typically used to find the lowest energy state of a system of many interacting particles. It addresses the problem that at high temperatures one may have a stable state different from that at low temperature, whereas simulations at low temperatures may become "stuck" in a metastable state. It does this by using the fact that the high temperature simulation may visit states typical of both stable and metastable low temperature states.
More specifically, parallel tempering (also known as replica exchange MCMC sampling) is a simulation method aimed at improving the dynamic properties of Monte Carlo method simulations of physical systems, and of Markov chain Monte Carlo (MCMC) sampling methods more generally. The replica exchange method was originally devised by Robert Swendsen and J. S. Wang,[1] then extended by Charles J. Geyer,[2] and later developed further by Giorgio Parisi,[3] Koji Hukushima and Koji Nemoto,[4] and others.[5][6] Y. Sugita and Y. Okamoto also formulated a molecular dynamics version of parallel tempering; this is usually known as replica-exchange molecular dynamics or REMD.[7]
Essentially, one runs N copies of the system, randomly initialized, at different temperatures. Then, based on the Metropolis criterion one exchanges configurations at different temperatures. The idea of this method is to make configurations at high temperatures available to the simulations at low temperatures and vice versa.
This results in a very robust ensemble which is able to sample both low and high energy configurations.
In this way, thermodynamical properties such as the specific heat, which is in general not well computed in the canonical ensemble, can be computed with great precision.
Typically a Monte Carlo simulation using a Metropolis–Hastings update consists of a single stochastic process that evaluates the energy of the system and accepts/rejects updates based on the temperature T. At high temperatures updates that change the energy of the system are comparatively more probable. When the system is highly correlated, updates are rejected and the simulation is said to suffer from critical slowing down.
If we were to run two simulations at temperatures separated by a ΔT, we would find that if ΔT is small enough, then the energy histograms obtained by collecting the values of the energies over a set of Monte Carlo steps N will create two distributions that will somewhat overlap. The overlap can be defined by the area of the histograms that falls over the same interval of energy values, normalized by the total number of samples. For ΔT = 0 the overlap should approach 1.
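A minimal sketch of one possible reading of this overlap measure, using NumPy histograms built on a shared set of bins; the energy samples below are synthetic stand-ins, not data from any real simulation.

```python
import numpy as np

def energy_histogram_overlap(energies_t1, energies_t2, bins=50):
    """Fraction of samples the two energy distributions have in common."""
    lo = min(energies_t1.min(), energies_t2.min())
    hi = max(energies_t1.max(), energies_t2.max())
    edges = np.linspace(lo, hi, bins + 1)          # shared bin edges
    h1, _ = np.histogram(energies_t1, bins=edges)
    h2, _ = np.histogram(energies_t2, bins=edges)
    n = min(len(energies_t1), len(energies_t2))
    return np.minimum(h1, h2).sum() / n            # shared area, normalized

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical energy samples from two nearby temperatures.
    e1 = rng.normal(loc=-100.0, scale=5.0, size=10_000)
    e2 = rng.normal(loc=-95.0, scale=6.0, size=10_000)
    print(f"overlap ~ {energy_histogram_overlap(e1, e2):.2f}")
```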
Another way to interpret this overlap is to say that system configurations sampled at temperature T1 are likely to appear during a simulation at T2. Because the Markov chain should have no memory of its past, we can create a new update for the system composed of the two systems at T1 and T2. At a given Monte Carlo step we can update the global system by swapping the configuration of the two systems, or alternatively trading the two temperatures. The update is accepted according to the Metropolis–Hastings criterion with probability

p = min{1, exp[(1/(kT1) - 1/(kT2)) (E1 - E2)]},

where E1 and E2 are the current energies of the configurations at T1 and T2 and k is the Boltzmann constant,
and otherwise the update is rejected. The detailed balance condition has to be satisfied by ensuring that the reverse update is equally likely, all else being equal. This can be ensured by appropriately choosing regular Monte Carlo updates or parallel tempering updates with probabilities that are independent of the configurations of the two systems or of the Monte Carlo step.[8]
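A minimal sketch of this two-replica swap move under the acceptance rule above, assuming Boltzmann statistics with k = 1; variable names are illustrative.

```python
import math
import random

def try_swap(energy_1, energy_2, temp_1, temp_2, rng=random):
    """Return True if the configurations at temp_1 and temp_2 should be
    exchanged, according to p = min(1, exp[(1/T1 - 1/T2) * (E1 - E2)])."""
    delta = (1.0 / temp_1 - 1.0 / temp_2) * (energy_1 - energy_2)
    return delta >= 0 or rng.random() < math.exp(delta)

# Example: a cold replica stuck at a higher energy than the hot replica is
# always swapped; the reverse move is accepted only occasionally.
print(try_swap(energy_1=-90.0, energy_2=-100.0, temp_1=1.0, temp_2=2.0))
```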
This update can be generalized to more than two systems.
By a careful choice of temperatures and number of systems one can achieve an improvement in the mixing properties of a set of Monte Carlo simulations that exceeds the extra computational cost of running parallel simulations.
There are other considerations: increasing the number of different temperatures can have a detrimental effect, as the 'lateral' movement of a given system across temperatures can be thought of as a diffusion process. The setup matters, since there must be practical histogram overlap to achieve a reasonable probability of lateral moves.
The parallel tempering method can be used as a super simulated annealing that does not need restart, since a system at high temperature can feed new local optimizers to a system at low temperature, allowing tunneling between metastable states and improving convergence to a global optimum.
|
https://en.wikipedia.org/wiki/Parallel_tempering
|
Hypertargeting refers to the ability to deliver advertising content to specific interest-based segments in a network. MySpace coined the term in November 2007[1] with the launch of their SelfServe advertising solution (later called myAds[2]), described on their site as "enabling online marketers to tap into self-expressed user information to target campaigns like never before."
Hypertargeting also refers to the ability of social networking sites to target ads based on very specific criteria, an important step towards precision performance marketing.
The first MySpace HyperTarget release offered advertisers the ability to direct their ads to 10 categories self-identified by users in their profiles, including music, sports, and movies. In July 2007 the targeting options expanded to 100 subcategories. Rather than simply targeting movie lovers, for example, advertisers could target ads by preferred genre, such as horror, romance, or comedy. By January 2010, MySpace HyperTarget involved 5 algorithms across 1,000 segments.
According to an article by Harry Gold in the online publisher ClickZ,[3] the general field of hypertargeting draws information from three sources:
Facebook, a popular social network, offers an ad targeting service through their Social Ads platform. Ads can be hypertargeted to users based on keywords from their profiles, pages they're fans of, events they responded to, or applications used. Some of these examples involve the use of behavioral targeting.[4]
By 2009, hypertargeting had become an accepted industry term.[5] In 2010, the International Consumer Electronics Show (CES), the world's largest consumer technology trade show, dedicated three sessions to the topic:[citation needed]
|
https://en.wikipedia.org/wiki/Hypertargeting
|
A potentially unwanted program (PUP) or potentially unwanted application (PUA) is software that a user may perceive as unwanted or unnecessary. It is used as a subjective tagging criterion by security and parental control products. Such software may use an implementation that can compromise privacy or weaken the computer's security. Companies often bundle a wanted program download with a wrapper application and may offer to install an unwanted application, in some cases without providing a clear opt-out method. Antivirus companies define the software bundled as potentially unwanted programs,[1][2] which can include software that displays intrusive advertising (adware), tracks the user's Internet usage to sell information to advertisers (spyware), injects its own advertising into web pages that a user looks at, or uses premium SMS services to rack up charges for the user.[3][1]

A growing number of open-source software projects have expressed dismay at third-party websites wrapping their downloads with unwanted bundles, without the project's knowledge or consent. Nearly every third-party free download site bundles their downloads with potentially unwanted software.[4] The practice is widely considered unethical because it violates the security interests of users without their informed consent.

Some unwanted software bundles install a root certificate on a user's device, which allows hackers to intercept private data such as banking details, without a browser giving security warnings. The United States Department of Homeland Security has advised removing an insecure root certificate, because such certificates make computers vulnerable to serious cyberattacks.[5] Software developers and security experts recommend that people always download the latest version from the official project website, or a trusted package manager or app store.
Historically, the first big companies generating revenue from potentially unwanted programs, such as Zango, emerged in the US in the mid-2000s. These activities declined after the companies were investigated, and in some cases indicted, by authorities for invasive and harmful installs.[6]
A major industry, dedicated to creating revenue by foisting potentially unwanted programs, has grown within the Israeli software industry and is frequently referred to as Download Valley. These companies are responsible for a large part of the download and install tools,[7] which place unwanted, additional software on users' systems.[8][9][10]
Unwanted programs have increased in recent years, and one study in 2014 classified unwanted programs as comprising 24.77% of total malware infections.[11] This malware includes adware according to Google.[12][13] Many programs include unwanted browser add-ons that track which websites a user goes to in order to sell this information to advertisers, or add advertising into web pages.[14] Five percent of computer browser visits to Google-owned websites are altered by computer programs that inject their own ads into pages.[15][16][17] Researchers have identified 50,870 Google Chrome extensions and 34,407 programs that inject ads. Thirty-eight percent of extensions and 17 percent of programs were catalogued as malicious software, the rest being potentially unwanted adware-type applications. Some Google Chrome extension developers have sold extensions they made to third-party companies who silently push unwanted updates that incorporate previously non-existent adware into the extensions.[18][19][20]
Spyware programs install a proxy server on a person's computer that monitors all web traffic passing through it, tracking user interests to build up a profile and sell that profile to advertisers.
Superfish is an advertising injector that creates its own root certificate in a computer operating system, allowing the tool to inject advertising into encrypted Google search pages and track the history of a user's search queries.
In February 2015, the United States Department of Homeland Security advised uninstalling Superfish and its associated root certificate from Lenovo computers, because they make computers vulnerable to serious cyberattacks, including interception of passwords and sensitive data being transmitted through browsers.[5][21] Heise Security revealed that the Superfish certificate is included in bundled downloads with a number of applications from companies including SAY Media and Lavasoft's Ad-Aware Web Companion.[22]
Many companies use browser hijacking to modify a user's home page and search page, to force Internet hits to a particular website and make money from advertisers.[citation needed] Some companies steal the cookies in a user's browser, hijacking their connections to websites they are logged into, and performing actions using their account, without the user's knowledge or consent (like installing Android apps).
Users with dial-up Internet access use modems in their computer to connect to the Internet, and these have been targeted by fraudulent applications that used security holes in the operating system to dial premium numbers.
Many Android devices are targeted by malware that uses premium SMS services to rack up charges for users.[23][24][25]
A few classes of software are usually installed knowingly by the user and do not show any automated abusive behavior. However, the enterprise controlling the computer or the antivirus vendor may consider the program unwanted due to the activities it allows.
Peer-to-peer file sharing programs are sometimes labelled as PUA and deleted due to their alleged links to piracy. In March 2021, Windows Defender started removing uTorrent and qBittorrent, causing widespread user confusion. Microsoft has since updated the PUA database to flag torrent clients on enterprise installations only.[26]
Keygens not tainted by actual malware are also commonly tagged as PUA due to piracy.[27]
In 2015, research by Emsisoft suggested that all free download providers bundled their downloads with potentially unwanted software, and that Download.com was the worst offender.[4] Lowell Heddings expressed dismay that "Sadly, even on Google all the top results for most open source and freeware are just ads for really terrible sites that are bundling crapware, adware, and malware on top of the installer."[28]
In December 2011, Gordon Lyon published his strong dislike of the way Download.com had started bundling grayware with their installation managers and concerns over the bundled software, causing many people to spread the post on social networks, and a few dozen media reports. The main problem is the confusion between Download.com-offered content[29][30] and software offered by original authors; the accusations included deception as well as copyright and trademark violation.[30]
In 2014, The Register and US-CERT warned that via Download.com's "foistware", an "attacker may be able to download and execute arbitrary code".[31]
Many open-source software developers have expressed frustration and dismay that their work is being packaged by companies that profit from it by using search advertising to occupy the first result on a search page. Increasingly, these pages are offering bundled installers that include unwanted software, and confuse users by presenting the bundled software as an official download page endorsed by the open source project.
In the case of SourceForge, as of early 2016 this is no longer the case.[32] Ownership of SourceForge transferred to SourceForge Media, LLC, a subsidiary of BIZX, LLC (BIZX).[33] After the sale, the new owners removed the DevShare program, which means bundled installers are no longer available.
In November 2013, GIMP, a free image manipulation program, removed its download from SourceForge, citing misleading download buttons that can potentially confuse customers, as well as SourceForge's own Windows installer, which bundles third-party offers. In a statement, GIMP called SourceForge a once "useful and trustworthy place to develop and host FLOSS applications" that now faces "a problem with the ads they allow on their sites ..."[34] In May 2015, the GIMP for Windows SourceForge project was transferred to the ownership of the "SourceForge Editorial Staff" account and adware downloads were re-enabled.[35] The same happened to the developers of nmap.[36][37]
In May 2015 SourceForge took control of projects which had migrated to other hosting sites and replaced the project downloads with adware-laden downloads.[38]
Gordon Lyon has lost control of the Nmap SourceForge page, with SourceForge taking over the project's page. Lyon stated "So far they seem to be providing just the official Nmap files (as long as you don't click on the fake download buttons) and we haven't caught them trojaning Nmap the way they did with GIMP. But we certainly don't trust them one bit! Sourceforge is pulling the same scheme that CNet Download.com tried back when they started circling the drain".[36][37]
VideoLAN has expressed dismay that users searching for their product see search advertising from websites that offer "bundled" downloads that include unwanted programs, while VideoLAN lacks the resources to sue the many companies abusing their trademarks.[28][39][40][41][42]
|
https://en.wikipedia.org/wiki/Potentially_unwanted_program
|
printf is a shell command that formats and outputs text like the same-named C function. It is available in a variety of Unix and Unix-like systems. Some shells implement the command as a builtin and some provide it as a utility program.[2]
The command has similar syntax and semantics to the library function. The command outputs text to standard output[3] as specified by a format string and a list of values. Characters of the format string are copied to the output verbatim except when a format specifier is found, which causes a value to be output per the specifier.
The command has some aspects unlike the library function. In addition to the library function's format specifiers, %b causes the command to expand backslash escape sequences (for example \n for newline), and %q outputs an item that can be used as shell input.[3] The value used for an unmatched specifier (too few values) is an empty string for %s or 0 for a numeric specifier. If there are more values than specifiers, then the command restarts processing the format string from its beginning.
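As a brief illustration (an assumed example, not one from the specification), the following shell session shows both behaviours: a missing numeric value is printed as 0, and surplus values cause the format string to be reused.

```sh
$ printf '%s=%d\n' width        # too few values: the numeric specifier gets 0
width=0
$ printf '%s-%s\n' a b c        # too many values: the format string is reused
a-b
c-
```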
The command is part of the X/Open Portability Guide since issue 4 of 1992. It was inherited into the first version of POSIX.1 and the Single Unix Specification.[4] It first appeared in 4.3BSD-Reno.[5]
The implementation bundled in GNU Core Utilities was written by David MacKenzie. It has an extension %q for escaping strings in POSIX-shell format.[3]
This prints a list of numbers:
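A representative invocation, given here as an illustration rather than the page's own example:

```sh
$ printf '%d\n' 1 2 3
1
2
3
```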
This produces output for a directory's content similar to ls:
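Again as an illustration rather than the page's original example, letting the shell expand * and printing one name per line (the file names shown are hypothetical):

```sh
$ printf '%s\n' *
Documents
Downloads
notes.txt
```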
|
https://en.wikipedia.org/wiki/Printf_(Unix)
|
A document collection for the summer 2025 EPFL CS-552 MNLP project by artdev99.
Contains Wikipedia pages.
97'459 chunks from 3300 documents (chunk size: 512, overlap: 64).