QUIC (/kwɪk/) is a general-purpose transport layer network protocol initially designed by Jim Roskind at Google.[1][2][3] It was first implemented and deployed in 2012[4] and was publicly announced in 2013 as experimentation broadened. It was also described at an IETF meeting.[5][6][7][8] The Chrome web browser,[9] Microsoft Edge,[10][11] Firefox,[12] and Safari all support it.[13] In Chrome, QUIC is used by more than half of all connections to Google's servers.[9]
QUIC improves the performance of connection-oriented web applications that previously used the Transmission Control Protocol (TCP).[2][9] It does this by establishing a number of multiplexed connections between two endpoints using the User Datagram Protocol (UDP), and is designed to obsolete TCP at the transport layer for many applications. Although its name was initially proposed as an acronym for Quick UDP Internet Connections, in the IETF's use of the word, QUIC is not an acronym; it is simply the name of the protocol.[3][8][1]
QUIC works hand-in-hand with HTTP/3's multiplexed connections, allowing multiple streams of data to reach all the endpoints independently, and hence independently of packet losses involving other streams. In contrast, HTTP/2 carried over TCP can suffer head-of-line blocking delays if multiple streams are multiplexed on a TCP connection and any of the TCP packets on that connection are delayed or lost.
QUIC's secondary goals include reduced connection and transport latency, and bandwidth estimation in each direction to avoid congestion. It also moves congestion control algorithms into the user space at both endpoints, rather than the kernel space, which it is claimed[14] will allow these algorithms to improve more rapidly. Additionally, the protocol can be extended with forward error correction (FEC) to further improve performance when errors are expected. It is designed with the intention of avoiding protocol ossification.
In June 2015, an Internet Draft of a specification for QUIC was submitted to the IETF for standardization.[15][16] A QUIC working group was established in 2016.[17] In October 2018, the IETF's HTTP and QUIC Working Groups jointly decided to call the HTTP mapping over QUIC "HTTP/3" in advance of making it a worldwide standard.[18] In May 2021, the IETF standardized QUIC in RFC 9000, supported by RFC 8999, RFC 9001 and RFC 9002.[19] DNS-over-QUIC is another application.
Transmission Control Protocol, or TCP, aims to provide an interface for sending streams of data between two endpoints. Data is sent to the TCP system, which ensures it reaches the other end in the exact same form; if any discrepancies occur, the connection will signal an error condition.[20]
To do this, TCP breaks up the data into network packets and adds small amounts of data to each packet. This additional data includes a sequence number that is used to detect packets that are lost or arrive out of order, and a checksum that allows errors within the packet data to be detected. When either problem occurs, TCP uses automatic repeat request (ARQ) to ask the sender to re-send the lost or damaged packet.[20]
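To make the mechanism concrete, here is a deliberately simplified sketch (a toy model, not TCP's actual implementation) of how a receiver can use sequence numbers and checksums to detect loss or corruption and ask for retransmission:

```python
import zlib

def make_packet(seq: int, payload: bytes) -> dict:
    """Toy 'packet': a sequence number, the payload, and a CRC32 checksum."""
    return {"seq": seq, "payload": payload, "checksum": zlib.crc32(payload)}

def receive(packets, expected_seq=0):
    """Accept in-order, intact packets; report which sequence numbers need resending."""
    delivered, resend = [], []
    for pkt in sorted(packets, key=lambda p: p["seq"]):
        if pkt["seq"] != expected_seq:
            resend.append(expected_seq)        # gap: a packet was lost or reordered
            expected_seq = pkt["seq"]
        if zlib.crc32(pkt["payload"]) != pkt["checksum"]:
            resend.append(pkt["seq"])          # corruption detected by the checksum
        else:
            delivered.append(pkt["payload"])
        expected_seq += 1
    return b"".join(delivered), resend

pkts = [make_packet(0, b"he"), make_packet(2, b"lo")]   # packet 1 was "lost" in transit
data, need_resend = receive(pkts)
print(data, need_resend)   # b'helo' [1]
```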
In most implementations, TCP will see any error on a connection as a blocking operation, stopping further transfers until the error is resolved or the connection is considered failed. If a single connection is being used to send multiple streams of data, as is the case in the HTTP/2 protocol, all of these streams are blocked although only one of them might have a problem. For instance, if a single error occurs while downloading a GIF image used for a favicon, the entire rest of the page will wait while that problem is resolved.[20] This phenomenon is known as head-of-line blocking.
As the TCP system is designed to look like a "data pipe", or stream, it deliberately has little information regarding the data it transmits. If that data has additional requirements, like encryption using TLS, this must be set up by systems running on top of TCP, using TCP to communicate with similar software on the other end of the connection. Each of these sorts of setup tasks requires its own handshake process. This often requires several round-trips of requests and responses until the connection is established. Due to the inherent latency of long-distance communications, this can add significant delay to the overall transmission.[20]
TCP has suffered from protocol ossification,[21] due to its wire image being in cleartext and hence visible to and malleable by middleboxes.[22] One measurement found that a third of paths across the Internet encounter at least one intermediary that modifies TCP metadata, and 6.5% of paths encounter harmful ossifying effects from intermediaries.[23] Extensions to TCP have been affected: the design of Multipath TCP (MPTCP) was constrained by middlebox behaviour,[24][25] and the deployment of TCP Fast Open has been likewise hindered.[26][21]
In the context of supporting encrypted HTTP traffic, QUIC serves a role similar to that of TCP, but with reduced latency during connection setup and more efficient loss recovery when multiple HTTP streams are multiplexed over a single connection. It does this primarily through two changes that rely on the understanding of the behaviour of HTTP traffic.[20]
The first change is to greatly reduce overhead during connection setup. As most HTTP connections will demand TLS, QUIC makes the exchange of setup keys and listing of supported protocols part of the initial handshake process. When a client opens a connection, the response packet includes the data needed for future packets to use encryption. This eliminates the need to set up an unencrypted pipe and then negotiate the security protocol as separate steps. Other protocols can be serviced in the same way, combining multiple steps into a single request–response pair. This data can then be used both for following requests in the initial setup and future requests that would otherwise be negotiated as separate connections.[20]
The second change is to use UDP rather than TCP as its basis; UDP does not include loss recovery. Instead, each QUIC stream is separately flow-controlled, and lost data is retransmitted at the level of QUIC, not UDP. This means that if an error occurs in one stream, like the favicon example above, the protocol stack can continue servicing other streams independently. This can be very useful in improving performance on error-prone links, as in most cases considerable additional data may be received before TCP notices a packet is missing or broken, and all of this data is blocked or even flushed while the error is corrected. In QUIC, this data is free to be processed while the single multiplexed stream is repaired.[27]
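The effect of per-stream delivery can be illustrated with a toy reassembly model (an illustrative sketch only, not real QUIC packet handling): a gap in one stream delays only that stream, while chunks arriving on other streams are delivered immediately.

```python
# Toy model: per-stream reassembly buffers mean a gap in one stream does not
# block delivery of data that arrived on other streams.
from collections import defaultdict

class StreamDemux:
    def __init__(self):
        self.buffers = defaultdict(dict)       # stream_id -> {offset: chunk}
        self.next_offset = defaultdict(int)    # next in-order byte offset per stream

    def on_packet(self, stream_id: int, offset: int, chunk: bytes) -> bytes:
        """Buffer a chunk and return whatever is now deliverable in order for this stream."""
        self.buffers[stream_id][offset] = chunk
        out = b""
        while self.next_offset[stream_id] in self.buffers[stream_id]:
            piece = self.buffers[stream_id].pop(self.next_offset[stream_id])
            out += piece
            self.next_offset[stream_id] += len(piece)
        return out

demux = StreamDemux()
print(demux.on_packet(1, 0, b"<html>"))    # stream 1 delivers immediately: b'<html>'
print(demux.on_packet(2, 4, b"icon"))      # stream 2 has a gap at offset 0: b'' (waits)
print(demux.on_packet(1, 6, b"</html>"))   # stream 1 keeps flowing despite stream 2's gap
```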
QUIC includes a number of other changes that improve overall latency and throughput. For instance, the packets are encrypted individually, so that encrypted data does not end up waiting for partial packets. This is not generally possible under TCP, where the encryption records are in a bytestream and the protocol stack is unaware of higher-layer boundaries within this stream. These can be negotiated by the layers running on top, but QUIC aims to do all of this in a single handshake process.[8]
Another goal of the QUIC system was to improve performance during network-switching events, like what happens when a user of a mobile device moves from a local Wi-Fi hotspot to a mobile network. When this occurs on TCP, a lengthy process starts where every existing connection times out one-by-one and is then re-established on demand. To solve this problem, QUIC includes a connection identifier to uniquely identify the connection to the server regardless of source. This allows the connection to be re-established simply by sending a packet, which always contains this ID, as the original connection ID will still be valid even if the user's IP address changes.[28]
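A minimal sketch of the idea (toy bookkeeping, not the actual QUIC state machine): sessions are looked up by the connection ID carried in every packet rather than by the client's address, so a change of IP address does not break the mapping.

```python
# Toy lookup keyed by connection ID instead of the client's (IP, port) pair.
sessions = {}   # connection_id -> session state

def handle_packet(connection_id: bytes, client_addr: tuple, payload: bytes) -> dict:
    session = sessions.setdefault(connection_id, {"bytes_received": 0})
    session["last_addr"] = client_addr          # the address may change between packets
    session["bytes_received"] += len(payload)
    return session

handle_packet(b"\x1a\x2b", ("192.0.2.10", 4433), b"hello over Wi-Fi")
s = handle_packet(b"\x1a\x2b", ("198.51.100.7", 50000), b"same connection over cellular")
print(s["bytes_received"])   # both packets were counted against the same connection
```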
QUIC can be implemented in the application space, as opposed to being in the operating system kernel. This generally incurs additional overhead due to context switches as data is moved between applications. However, in the case of QUIC, the protocol stack is intended to be used by a single application, with each application using QUIC having its own connections hosted on UDP. Ultimately the difference could be very small because much of the overall HTTP/2 stack is already in the applications (or their libraries, more commonly). Placing the remaining parts in those libraries, essentially the error correction, has little effect on the HTTP/2 stack's size or overall complexity.[8]
This organization allows future changes to be made more easily as it does not require changes to the kernel for updates. One of QUIC's longer-term goals is to add new systems for forward error correction (FEC) and improved congestion control.[28]
One concern about the move from TCP to UDP is that TCP is widely adopted and many of the "middleboxes" in the Internet infrastructure are tuned for TCP and rate-limit or even block UDP. Google carried out a number of exploratory experiments to characterize this and found that only a small number of connections were blocked in this manner.[3] This led to the use of a system for rapid fallback to TCP; Chromium's network stack starts both a QUIC and a conventional TCP connection at the same time, which allows it to fall back with negligible latency.[29]
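The race-and-fall-back idea can be sketched as follows (an illustrative asyncio example; `try_quic` and `try_tcp` are hypothetical placeholders standing in for real handshakes, not Chromium's API):

```python
import asyncio

async def try_quic(host):
    await asyncio.sleep(0.05)          # stand-in for a QUIC handshake
    return ("quic", host)

async def try_tcp(host):
    await asyncio.sleep(0.08)          # stand-in for a TCP+TLS handshake
    return ("tcp", host)

async def connect(host):
    """Start both attempts concurrently and keep whichever completes first."""
    tasks = [asyncio.create_task(try_quic(host)), asyncio.create_task(try_tcp(host))]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:               # cancel the loser so it stops consuming resources
        task.cancel()
    return done.pop().result()

print(asyncio.run(connect("example.com")))   # ('quic', 'example.com') unless QUIC is blocked
```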
QUIC has been specifically designed to be deployable and evolvable and to have anti-ossification properties;[30] it is the first IETF transport protocol to deliberately minimise its wire image for these ends.[31] Beyond encrypted headers, it is 'greased'[32] and it has protocol invariants explicitly specified.[33]
The security layer of QUIC is based on TLS 1.2 or TLS 1.3.[34] Earlier, insecure protocol versions such as TLS 1.0 are not allowed in the QUIC stack.
The protocol that was created by Google and taken to the IETF under the name QUIC (already in 2012 around QUIC version 20) is now quite different from the QUIC that has continued to evolve and be refined within the IETF. The original Google QUIC (gQUIC) was designed to be a general purpose web protocol, though it was initially deployed as a protocol to support HTTP(S) in Chromium. The current evolution of the IETF QUIC (iQUIC) protocol is a general purpose transport protocol. Chromium developers continued to track the evolution of IETF QUIC's standardization efforts to adopt and fully comply with the most recent internet standards for QUIC in Chromium.
QUIC was developed with HTTP in mind, and HTTP/3 was its first application.[35][36] DNS-over-QUIC is an application of QUIC to name resolution, providing security for data transferred between resolvers similar to DNS-over-TLS.[37] The IETF is developing applications of QUIC for secure network tunnelling[36] and streaming media delivery.[38] XMPP has experimentally been adapted to use QUIC.[39] Another application is SMB over QUIC, which, according to Microsoft, can offer an "SMB VPN" without affecting the user experience.[40] SMB clients use TCP by default and attempt QUIC only if the TCP attempt fails or if QUIC is intentionally required.
The QUIC code was experimentally developed in Google Chrome starting in 2012,[4] and was announced as part of Chromium version 29 (released on August 20, 2013).[18] It is currently enabled by default in Chromium and Chrome.[41]
Support in Firefox arrived in May 2021.[42][12]
Apple added experimental support in the WebKit engine through the Safari Technology Preview 104 in April 2020.[43] Official support was added in Safari 14, included in macOS Big Sur and iOS 14,[44] but the feature needed to be turned on manually.[45] It was later enabled by default in Safari 16.[13]
The Cronet library for QUIC and other protocols is available to Android applications as a module loadable via Google Play Services.[46]
cURL 7.66, released 11 September 2019, supports HTTP/3 (and thus QUIC).[47][48]
In October 2020, Facebook announced[49] that it had successfully migrated its apps, including Instagram, and server infrastructure to QUIC, with 75% of its Internet traffic already using QUIC. All mobile apps from Google support QUIC, including YouTube and Gmail.[50][51] Uber's mobile app also uses QUIC.[51]
As of 2017, there are several actively maintained implementations. Google servers support QUIC and Google has published a prototype server.[52] Akamai Technologies has been supporting QUIC since July 2016.[53][54] A Go implementation called quic-go[55] is also available, and powers experimental QUIC support in the Caddy server.[56] On July 11, 2017, LiteSpeed Technologies officially began supporting QUIC in their load balancer (WebADC)[57] and LiteSpeed Web Server products.[58] As of October 2019, 88.6% of QUIC websites used LiteSpeed and 10.8% used Nginx.[59] Although at first only Google servers supported HTTP-over-QUIC connections, Facebook also launched the technology in 2018,[18] and Cloudflare has been offering QUIC support on a beta basis since 2018.[60] The HAProxy load balancer added experimental support for QUIC in March 2022[61] and declared it production-ready in March 2023.[62] As of April 2023, 8.9% of all websites use QUIC,[63] up from 5% in March 2021. Microsoft Windows Server 2022 supports both HTTP/3[64] and SMB over QUIC[65][10] protocols via MsQuic. The Application Delivery Controller of Citrix (Citrix ADC, NetScaler) can function as a QUIC proxy since version 13.[66][67]
In addition, there are several stale community projects: libquic[68] was created by extracting the Chromium implementation of QUIC and modifying it to minimize dependency requirements, and goquic[69] provides Go bindings of libquic. Finally, quic-reverse-proxy[70] is a Docker image that acts as a reverse proxy server, translating QUIC requests into plain HTTP that can be understood by the origin server.
.NET 5 introduces experimental support for QUIC using the MsQuic library.[71]
|
https://en.wikipedia.org/wiki/QUIC
|
Cache-only memory architecture (COMA) is a computer memory organization for use in multiprocessors in which the local memories (typically DRAM) at each node are used as cache. This is in contrast to using the local memories as actual main memory, as in NUMA organizations.
In NUMA, each address in the global address space is typically assigned a fixed home node. When processors access some data, a copy is made in their local cache, but space remains allocated in the home node. With COMA, by contrast, there is no home. An access from a remote node may cause that data to migrate. Compared to NUMA, this reduces the number of redundant copies and may allow more efficient use of the memory resources. On the other hand, it raises the problems of how to find a particular piece of data (there is no longer a home node) and what to do if a local memory fills up (migrating some data into the local memory then requires evicting some other data, which does not have a home to go to). Hardware memory coherence mechanisms are typically used to implement the migration.
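As a rough illustration of the migration-and-eviction problem (a greatly simplified, software-only toy model, not a description of any real COMA hardware):

```python
# Toy "attraction memory": a block migrates to whichever node accesses it, and an
# evicted block must be re-homed on another node because there is no fixed home node.
class Node:
    def __init__(self, name, capacity):
        self.name, self.capacity, self.blocks = name, capacity, {}

class ToyCOMA:
    def __init__(self, nodes):
        self.nodes = nodes

    def locate(self, addr):
        return next((n for n in self.nodes if addr in n.blocks), None)

    def access(self, node, addr):
        owner = self.locate(addr)
        if owner is node:
            return node.blocks[addr]
        value = owner.blocks.pop(addr)            # migrate: no copy is left behind
        if len(node.blocks) >= node.capacity:     # local memory full: evict a victim
            victim_addr, victim_val = node.blocks.popitem()
            spare = next(n for n in self.nodes
                         if n is not node and len(n.blocks) < n.capacity)
            spare.blocks[victim_addr] = victim_val   # the victim must live somewhere else
        node.blocks[addr] = value
        return value

a, b = Node("A", capacity=2), Node("B", capacity=2)
a.blocks.update({0x10: "x", 0x20: "y"})
system = ToyCOMA([a, b])
system.access(b, 0x10)                          # block 0x10 migrates from A to B
print(sorted(a.blocks), sorted(b.blocks))       # [32] [16]
```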
A huge body of research has explored these issues. Various forms of directories, policies for maintaining free space in the local memories, migration policies, and policies for read-only copies have been developed. Hybrid NUMA-COMA organizations have also been proposed, such as Reactive NUMA, which allows pages to start in NUMA mode and switch to COMA mode if appropriate and is implemented in Sun Microsystems's WildFire.[1][2] A software-based hybrid NUMA-COMA implementation was proposed and implemented by ScaleMP,[3] allowing for the creation of a shared-memory multiprocessor system out of a cluster of commodity nodes.
|
https://en.wikipedia.org/wiki/Cache-only_memory_architecture
|
The software release life cycle is the process of developing, testing, and distributing a software product (e.g., an operating system). It typically consists of several stages, such as pre-alpha, alpha, beta, and release candidate, before the final version, or "gold", is released to the public.
Pre-alpha refers to the early stages of development, when the software is still being designed and built. Alpha testing is the first phase of formal testing, during which the software is tested internally using white-box techniques. Beta testing is the next phase, in which the software is tested by a larger group of users, typically outside of the organization that developed it. The beta phase is focused on reducing impacts on users and may include usability testing.
After beta testing, the software may go through one or more release candidate phases, in which it is refined and tested further, before the final version is released.
Some software, particularly in the internet and technology industries, is released in a perpetual beta state, meaning that it is continuously being updated and improved, and is never considered to be a fully completed product. This approach allows for a more agile development process and enables the software to be released and used by users earlier in the development cycle.
Pre-alpha refers to all activities performed during the software project before formal testing. These activities can include requirements analysis, software design, software development, and unit testing. In typical open source development, there are several types of pre-alpha versions. Milestone versions include specific sets of functions and are released as soon as the feature is complete.[citation needed]
The alpha phase of the release life cycle is the first phase of software testing (alpha is the first letter of the Greek alphabet, used as the number 1). In this phase, developers generally test the software using white-box techniques. Additional validation is then performed using black-box or gray-box techniques, by another testing team. Moving to black-box testing inside the organization is known as alpha release.[1][2]
Alpha software is not thoroughly tested by the developer before it is released to customers. Alpha software may contain serious errors, and any resulting instability could cause crashes or data loss.[3] Alpha software may not contain all of the features that are planned for the final version.[4] In general, external availability of alpha software is uncommon for proprietary software, while open source software often has publicly available alpha versions. The alpha phase usually ends with a feature freeze, indicating that no more features will be added to the software. At this time, the software is said to be feature-complete. A beta test is carried out following acceptance testing at the supplier's site (the alpha test) and immediately before the general release of the software as a product.[5]
A feature-complete (FC) version of a piece of software has all of its planned or primary features implemented but is not yet final due to bugs, performance or stability issues.[6] This occurs at the end of alpha testing in development.
Usually, feature-complete software still has to undergo beta testing and bug fixing, as well as performance or stability enhancement, before it can go to release candidate, and finally gold status.
Beta, named after the second letter of the Greek alphabet, is the software development phase following alpha. A beta phase generally begins when the software is feature-complete but likely to contain several known or unknown bugs.[7] Software in the beta phase will generally have many more bugs than completed software, as well as speed or performance issues, and may still cause crashes or data loss. The focus of beta testing is reducing impacts on users, often incorporating usability testing. The process of delivering a beta version to the users is called beta release and is typically the first time that the software is available outside of the organization that developed it. Software beta releases can be either open or closed, depending on whether they are openly available or only available to a limited audience. Beta version software is often useful for demonstrations and previews within an organization and to prospective customers. Some developers refer to this stage as a preview, preview release, prototype, technical preview or technology preview (TP),[8] or early access.
Beta testers are people who actively report issues with beta software. They are usually customers or representatives of prospective customers of the organization that develops the software. Beta testers tend to volunteer their services free of charge but often receive versions of the product they test, discounts on the release version, or other incentives.[9][10]
Some software is kept in so-called perpetual beta, where new features are continually added to the software without establishing a final "stable" release. As the Internet has facilitated the rapid and inexpensive distribution of software, companies have begun to take a looser approach to the use of the word beta.[11]
Developers may release either a closed beta or an open beta; closed beta versions are released to a restricted group of individuals for a user test by invitation, while open beta testers are from a larger group, or anyone interested. A private beta could be suitable for software that is capable of delivering value but is not ready to be used by everyone, whether because of scaling issues, a lack of documentation, or missing vital features. The testers report any bugs that they find, and sometimes suggest additional features they think should be available in the final version.
Open betas serve the dual purpose of demonstrating a product to potential consumers, and testing among a wide user base is likely to bring to light obscure errors that a much smaller testing team might not find.[citation needed]
A release candidate (RC), also known as gamma testing or "going silver", is a beta version with the potential to be a stable product, which is ready to release unless significant bugs emerge. In this stage of product stabilization, all product features have been designed, coded, and tested through one or more beta cycles with no known showstopper-class bugs. A release is called code complete when the development team agrees that no entirely new source code will be added to this release. There could still be source code changes to fix defects, changes to documentation and data files, and peripheral code for test cases or utilities.[citation needed]
Also called production release, the stable release is the last release candidate (RC) which has passed all stages of verification and tests. Any known remaining bugs are considered acceptable. This release goes to production.
Some software products (e.g. Linux distributions like Debian) also have long-term support (LTS) releases which are based on full releases that have already been tried and tested and receive only security updates.[citation needed]
Once released, the software is generally known as a "stable release". The formal term often depends on the method of release: physical media, online release, or a web application.[12]
The term "release to manufacturing" (RTM), also known as "going gold", is a term used when a software product is ready to be delivered. This build may be digitally signed, allowing the end user to verify the integrity and authenticity of the software purchase. The RTM build is known as the "gold master" or GM[13]is sent for mass duplication or disc replication if applicable. The terminology is taken from the audio record-making industry, specifically the process ofmastering. RTM precedes general availability (GA) when the product is released to the public. A golden master build (GM) is typically the final build of a piece of software in the beta stages for developers. Typically, foriOS, it is the final build before a major release, however, there have been a few exceptions.
RTM is typically used in certain retail mass-production software contexts—as opposed to a specialized software production or project in a commercial or government production and distribution—where the software is sold as part of a bundle in a related computer hardware sale and typically where the software and related hardware is ultimately to be available and sold on mass/public basis at retail stores to indicate that the software has met a defined quality level and is ready for mass retail distribution. RTM could also mean in other contexts that the software has been delivered or released to a client or customer for installation or distribution to the related hardware end user computers or machines. The term doesnotdefine the delivery mechanism or volume; it only states that the quality is sufficient for mass distribution. The deliverable from the engineering organization is frequently in the form of a golden master media used for duplication or to produce the image for the web.
General availability (GA) is the marketing stage at which all necessary commercialization activities have been completed and a software product is available for purchase, depending, however, on language, region, and electronic vs. media availability.[14] Commercialization activities could include security and compliance tests, as well as localization and worldwide availability. The time between RTM and GA can take from days to months before a generally available release can be declared, due to the time needed to complete all commercialization activities required by GA. At this stage, the software has "gone live".
Release to the Web (RTW) or Web release is a means of software delivery that utilizes the Internet for distribution. No physical media are produced in this type of release mechanism by the manufacturer. Web releases have become more common as Internet usage has grown.[citation needed]
During its supported lifetime, the software is sometimes subjected to service releases, patches or service packs, sometimes also called "interim releases" or "maintenance releases" (MR). For example, Microsoft released three major service packs for the 32-bit editions of Windows XP and two service packs for the 64-bit editions.[15] Such service releases contain a collection of updates, fixes, and enhancements, delivered in the form of a single installable package. They may also implement new features. Some software is released with the expectation of regular support. Classes of software that generally involve protracted support as the norm include anti-virus suites and massively multiplayer online games. Continuing with this Windows XP example, Microsoft did offer paid updates for five more years after the end of extended support. This means that support ended on April 8, 2019.[16]
When software is no longer sold or supported, the product is said to have reached end-of-life, to be discontinued, retired, deprecated, abandoned, or obsolete, but user loyalty may continue its existence for some time, even long after its platform is obsolete, e.g., the Common Desktop Environment[17] and Sinclair ZX Spectrum.[18]
After the end-of-life date, the developer will usually not implement any new features, fix existing defects, bugs, or vulnerabilities (whether known before that date or not), or provide any support for the product. If the developer wishes, they may release the source code, so that the platform may be maintained by volunteers.
Usage of the "alpha/beta" test terminology originated atIBM.[citation needed]Similar terminologies for IBM's software development were used by people involved with IBM from at least the 1950s (and probably earlier). "A" test was theverificationof a new product before the public announcement. The "B" test was the verification before releasing the product to be manufactured. The "C" test was the final test before the general availability of the product. As software became a significant part of IBM's offerings, the alpha test terminology was used to denote the pre-announcement test and the beta test was used to show product readiness for general availability. Martin Belsky, a manager on some of IBM's earlier software projects claimed to have invented the terminology. IBM dropped the alpha/beta terminology during the 1960s, but by then it had received fairly wide notice. The usage of "beta test" to refer to testing done by customers was not done in IBM. Rather, IBM used the term "field test".
Major public betas developed afterward, with early customers having purchased a "pioneer edition" of the WordVision word processor for theIBM PCfor $49.95. In 1984,Stephen Maneswrote that "in a brilliant marketing coup, Bruce and James Program Publishers managed to get people topayfor the privilege of testing the product."[19]In September 2000, aboxed versionofApple'sMac OS X Public Betaoperating system was released.[20]Between September 2005 and May 2006, Microsoft releasedcommunity technology previews (CTPs) forWindows Vista.[21]From 2009 to 2011,Minecraftwas in public beta.
In February 2005, ZDNet published an article about the phenomenon of a beta version often staying for years and being used as if it were at the production level.[22] It noted that Gmail and Google News, for example, had been in beta for a long time although widely used; Google News left beta in January 2006, followed by Google Apps (now named Google Workspace), including Gmail, in July 2009.[12] Since the introduction of Windows 8, Microsoft has called pre-release software a preview rather than beta. All pre-release builds released through the Windows Insider Program launched in 2014 are termed "Insider Preview builds". "Beta" may also indicate something more like a release candidate, a form of time-limited demo, or a marketing technique.[23]
|
https://en.wikipedia.org/wiki/Software_release_life_cycle
|
Emergent evolution is the hypothesis that, in the course of evolution, some entirely new properties, such as mind and consciousness, appear at certain critical points, usually because of an unpredictable rearrangement of the already existing entities. The term was originated by the psychologist C. Lloyd Morgan in 1922 in his Gifford Lectures at St. Andrews, which would later be published as the 1923 book Emergent Evolution.[1][2]
The hypothesis has been widely criticized for providing no mechanism for how entirely new properties emerge, and for its historical roots in teleology.[2][3][4] Historically, emergent evolution has been described as an alternative to materialism and vitalism.[5] Interest in emergent evolution was revived by biologist Robert G. B. Reid in 1985.[6][7][8]
Emergent evolution is distinct from the hypothesis of Emergent Evolutionary Potential (EEP), which was introduced in 2019 by Gene Levinson. In EEP, the scientific mechanism of Darwinian natural selection tends to preserve new, more complex entities that arise from interactions between previously existing entities, when those interactions prove useful, by trial and error, in the struggle for existence. Biological organization arising via EEP is complementary to organization arising via gradual accumulation of incremental variation.[9]
The term emergent was first used to describe the concept by George Lewes in volume two of his 1875 book Problems of Life and Mind (p. 412). Henri Bergson covered similar themes in his popular 1907 book Creative Evolution on the élan vital. Emergence was further developed by Samuel Alexander in his Gifford Lectures at Glasgow during 1916–18 and published as Space, Time, and Deity (1920). The related term emergent evolution was coined by C. Lloyd Morgan in his own Gifford lectures of 1921–22 at St. Andrews and published as Emergent Evolution (1923). In an appendix to a lecture in his book, Morgan acknowledged the contributions of Roy Wood Sellars's Evolutionary Naturalism (1922).
Charles Darwin and Alfred Russel Wallace's presentation of natural selection, coupled to the idea of evolution in Western thought, had gained acceptance due to the wealth of observational data provided and the seeming replacement of divine law with natural law in the affairs of men.[10] However, the mechanism of natural selection described at the time only explained how organisms adapted to variation. The cause of genetic variation was unknown at the time.
Darwin knew that nature had to produce variations before natural selection could act … The problem had been caught by other evolutionists almost as soon as The Origin of Species was first published. Sir Charles Lyell saw it clearly in 1860 before he even became an evolutionist … (Reid, p. 3)[10]
St. George Jackson Mivart's On the Genesis of Species (1872) and Edward Cope's Origin of the Fittest (1887) raised the need to address the origin of variation between members of a species. William Bateson in 1884 distinguished between the origin of novel variations and the action of natural selection (Materials for the Study of Variation Treated with Especial Regard to Discontinuity in the Origin of Species).[10]
Wallace throughout his life continued to support and extend the scope of Darwin's theory of evolution via the mechanism of natural selection. One of his works, Darwinism, was often cited in support of Darwin's theory, and he worked to elaborate and extend Darwin's ideas on natural selection. However, Wallace also realized that, as Darwin himself had admitted, the scope and claim of the theory was limited:
the most prominent feature is that I enter into popular yet critical examination of those underlying fundamental problems which Darwin purposely excluded from his works as being beyond the scope of his enquiry. Such are the nature and cause of Life itself, and more especially of its most fundamental and mysterious powers - growth and reproduction ...
Darwin always ... adduced the "laws of Growth with Reproduction," and of "Inheritance with Variability," as being fundamental facts of nature, without which Natural Selection would be powerless or even non-existent ...
... even if it were proved to be an exact representation of the facts, it would not be an explanation... because it would not account for the forces, the directive agency, and the organising power which are essential features of growth …[11]
In examining this aspect, excluded ab initio by Darwin, Wallace came to the conclusion that Life in its essence cannot be understood except through "an organising and directive Life-Principle." These necessarily involve a "Creative Power" possessed of a "directive Mind" working toward "an ultimate Purpose" (the development of Man). It supports the view of John Hunter that "life is the cause, not the consequence" of the organisation of matter. Thus, life precedes matter and infuses it to form living matter (protoplasm).
A very well-founded doctrine, and one which was often advocated by John Hunter, that life is the cause and not the consequence of organisation ... if so, life must be antecedent to organisation, and can only be conceived as indissolubly connected with spirit and with thought, and with the cause of the directive energy everywhere manifested in the growth of living things ... endowed with the mysterious organising power we term life ...[11]
Wallace then refers to the operation of another power called "mind" that utilizes the power of life and is connected with a higher realm than life or matter:
evidence of a foreseeing mind which...so directed and organised that life, in all its myriad forms, as, in the far-off future, to provide all that was most essential for the growth and development of man's spiritual nature ...[11]
Proceeding from Hunter's view that Life is the directive power above and behind living matter, Wallace argues that logically, Mind is the cause of consciousness, which exists in different degrees and kinds in living matter.
If, as John Hunter, T.H. Huxley, and other eminent thinkers have declared, "life is the cause, not the consequence, of organisation," so we may believe that mind is the cause, not the consequence, of brain development.
... So there are undoubtedly different degrees and probably also different kinds of mind in various grades of animal life ... And ... so the mind-giver ... enables each class or order of animals to obtain the amount of mind requisite for its place in nature ...[11]
The issue of how order emerged from primordial chaos, by chance or necessity, can be found in classical Greek thought. Aristotle asserted that a whole can be greater than the sum of its parts because of emergent properties. The second-century anatomist and physiologist Galen also distinguished between the resultant and emergent qualities of wholes. (Reid, p. 72)[10]
Hegel spoke of the revolutionary progression of life from non-living to conscious and then to the spiritual, and Kant perceived that simple parts of an organism interact to produce a progressively complex series of emergences of functional forms, a distinction that carried over to John Stuart Mill (1843), who stated that even chemical compounds have novel features that cannot be predicted from their elements. (Reid, p. 72)[10]
The idea of an entirely novel emergent quality was further taken up by George Henry Lewes (1874–1875), who reiterated Galen's distinction between evolutionary "emergent" qualities and adaptive, additive "resultants." Henry Drummond in The Ascent of Man (1894) stated that emergence can be seen in the fact that the laws of nature are different for the organic or vital compared to inert inorganic matter.
When we pass from the inorganic to the organic we come upon a new set of laws - but the reason why the lower set do not seem to operate in the higher sphere is not that they are annihilated, but that they are overruled. (Drummond 1883, p. 405, quoted in Reid)[10]
As Reid points out, Drummond also realized that greater complexity brought greater adaptability. (Reid, p. 73)[10]
Samuel Alexander took up the idea that emergences had properties that overruled the demands of the lower levels of organization. And more recently, this theme is taken up by John Holland (1998):
If we turn reductionism on its head we add levels. More carefully, we add new laws that satisfy the constraints imposed by laws already in place. Moreover these new laws apply to complex phenomena that are consequences of the original laws; they are at a new level.[12]
Another major scientist to question natural selection as the motive force of evolution was C. Lloyd Morgan, a zoologist and student of T. H. Huxley, who had a strong influence on Samuel Alexander. His Emergent Evolution (1923) established the central idea that an emergence might have the appearance of saltation but was best regarded as "a qualitative change of direction or critical turning point." (quoted in Reid, p. 73-74)[10] Morgan, due to his work in animal psychology, had earlier (1894) questioned the continuity view of mental evolution, and held that there were various discontinuities in cross-species mental abilities. To offset any attempt to read anthropomorphism into his view, he created the famous, but often misunderstood, methodological canon:
In no case may we interpret an action as the outcome of the exercise of a higher psychical faculty, if it can be interpreted as the outcome of the exercise of one which stands lower in the psychological scale.
However, Morgan, realizing that this was being misused to advocate reductionism (rather than as a general methodological caution), introduced a qualification into the second edition of his An Introduction to Comparative Psychology (1903):
To this, however, it should be added, lest the range of the principle be misunderstood, that the canon by no means excludes the interpretation of a particular activity in terms of the higher processes, if we already have independent evidence of the occurrence of these higher processes in the animal under observation.
As Reid observes,
While the so-called historiographical "rehabilitation of the canon" has been underway for some time now, Morgan's emergent evolutionist position (which was the highest expression of his attempt to place the study of mind back into such a "wider" natural history) is seldom mentioned in more than passing terms even within contemporary history of psychology textbooks.[10]
Morgan also fought against the behaviorist school and clarified even more his emergent views on evolution:
An influential school of 'behaviorists' roundly deny that mental relations, if such there be, are in any sense or in any manner effective... My message is that one may speak of mental relations as effective no less 'scientifically' than... physical relations...
His Animal Conduct (1930) explicitly distinguishes between three "grades" or "levels of mentality", which he labeled 'percipient, perceptive, and reflective.' (p. 42)
Morgan's idea of a polaric relationship between lower and higher was taken up by Samuel Alexander, who argued that the mental process is not reducible to the neural processes on which it depends at the physical-material level. Instead, they are two poles of a unity of function. Further, the neural process that expresses mental process itself possesses a quality (mind) that the other neural processes do not. At the same time, the mental process, because it is functionally identical to this particular neural process, is also a vital one.[13]
And mental process is also "something new", "a fresh creation", which precludes a psycho-physiological parallelism. Reductionism is also contrary to empirical fact. At the same time, Alexander stated that his view was not one of animism or vitalism, where the mind is an independent entity acting on the brain, or conversely, acted upon by the brain. Mental activity is an emergent, new "thing" not reducible to its initial neural parts.
All the available evidence of fact leads to the conclusion that the mental element is essential to the neural process which it is said to accompany...and is not accidental to it, nor is it in turn indifferent to the mental feature. Epiphenomenalism is a mere fallacy of observation.[13]
For Alexander, the world unfolds in space-time, which has the inherent quality of motion. This motion through space-time results in new “complexities of motion” in the form of a new quality or emergent. The emergent retains the qualities of the prior “complexities of motion” but also has something new that was not there before. This something new comes with its own laws of behavior. Time is the quality that creates motion through Space, and matter is simply motion expressed in forms in Space, or as Alexander says a little later, “complexes of motion.” Matter arises out of the basic ground of Space-Time continuity and has an element of “body” (lower order) and an element of “mind” (higher order), or “the conception that a secondary quality is the mind of its primary substrate.”
Mind is an emergent from life and life itself is an emergent from matter. Each level contains and is interconnected with the level and qualities below it, and to the extent that it contains lower levels, these aspects are subject to the laws of that level. All mental functions are living, but not all living functions are mental; all living functions are physico-chemical, but not all physico-chemical processes are living - just as we could say that all people living in Ohio are Americans, but not all Americans live in Ohio. Thus, there are levels of existence, or natural jurisdictions, within a given higher level such that the higher level contains elements of each of the previous levels of existence. The physical level contains the pure dimensionality of Space-Time in addition to the emergent of physico-chemical processes; the next emergent level, life, also contains Space-Time as well as the physico-chemical in addition to the quality of life; the level of mind contains all of the previous three levels, plus consciousness. As a result of this nesting and inter-action of emergents, like fluid Russian dolls, higher emergents cannot be reduced to lower ones, and different laws and methods of inquiry are required for each level.
Life is not an epiphenomenon of matter but an emergent from it ... The new character or quality which the vital physico-chemical complex possesses stands to it as soul or mind to the neural basis.[13]
For Alexander, the "directing agency" or entelechy is found "in the principle or plan".
a given stage of material complexity is characterised by such and such special features…By accepting this we at any rate confine ourselves to noting the facts…and do not invent entities for which there seems to be no other justification than that something is done in life which is not done in matter.[13]
While an emergent is a higher complexity, it also results in a new simplicity as it brings a higher order into what was previously less ordered (a new simplex out of a complex). This new simplicity does not carry any of the qualities or aspects of that emergent level prior to it, but as noted, does still carry within it such lower levels so can be understood to that extent through the science of such levels, yet not itself be understood except by a science that is able to reveal the new laws and principles applicable to it.
Ascent takes place, it would seem, through complexity.[increasing order] But at each change of quality the complexity as it were gathers itself together and is expressed in a new simplicity.
Within a given level of emergence, there are degrees of development.
... There are on one level degrees of perfection or development; and at the same time there is affinity by descent between the existents belonging to the level. This difference of perfection is not the same thing as difference of order or rank such as subsists between matter and life or life and mind ...[13]
The concept or idea of mind, the highest emergent known to us, being at our level, extends all the way down to pure dimensionality or Space-Time. In other words, time is the “mind” of motion, materialising is the “mind” of matter, living the “mind” of life. Motion through pure time (or life astronomical, mind ideational) emerges as matter “materialising” (geological time, life geological, mind existential), and this emerges as life “living” (biological time, life biological, mind experiential), which in turn give us mind “minding” (historical time, life historical, mind cognitional). But there is also an extension possible upwards of mind to what we call Deity.
let us describe the empirical quality of any kind of finite which performs to it the office of consciousness or mind as its 'mind.' Yet at the same time let us remember that the 'mind' of a living thing is not conscious mind but is life, and has not the empirical character of consciousness at all, and that life is not merely a lower degree of mind or consciousness, but something different. We are using 'mind' metaphorically by transference from real minds and applying it to the finites on each level in virtue of their distinctive quality; down to Space-Time itself whose existent complexes of bare space-time have for their mind bare time in its empirical variations.[13]
Alexander goes back to the Greek idea of knowledge being "out there" in the object being contemplated. In that sense, there is no mental object (concept) "distinct" (that is, different in state of being) from the physical object, but only an apparent split between the two, which can then be brought together by proper compresence or participation of the consciousness in the object itself.
There is no consciousness lodged, as I have supposed, in the organism as a quality of the neural response; consciousness belongs to the totality of objects, of what are commonly called the objects of consciousness or the field of consciousness ... Consciousness is therefore "out there" where the objects are, by a new version of Berkleyanism ... Obviously for this doctrine as for mine there is no mental object as distinct from a physical object: the image of a tree is a tree in an appropriate form...[13]
Because of the interconnectedness of the universe by virtue of Space-Time, and because the mind apprehends space, time and motion through a unity of sense and mind experience, there is a form of knowing that is intuitive (participative) - sense and reason are outgrowths from it.
In being conscious of its own space and time, the mind is conscious of the space and time of external things and vice versa. This is a direct consequence of the continuity of Space-Time in virtue of which any point-instant is connected sooner or later, directly or indirectly, with every other...
The mind therefore does not apprehend the space of its objects, that is their shape, size and locality, by sensation, for it depends for its character on mere spatio-temporal conditions, though it is not to be had as consciousness in the absence of sensation (or else of course ideation). It is clear without repeating these considerations that the same proposition is true of Time; and of motion ... I shall call this mode of apprehension in its distinction from sensation, intuition. ... Intuition is different from reason, but reason and sense alike are outgrowths from it, empirical determinations of it...[13]
In a sense, the universe is a participative one and open to participation by mind as well so that mind can intuitively know an object, contrary to what Kant asserted. Participation (togetherness) is something that is “enjoyed” (experienced) not contemplated, though in the higher level of consciousness, it would be contemplated.
The universe for Alexander is essentially in process, with Time as its ongoing aspect, and the ongoing process consists in the formation of changing complexes of motions. These complexes become ordered in repeatable ways displaying what he calls "qualities." There is a hierarchy of kinds of organized patterns of motions, in which each level depends on the subvening level, but also displays qualities not shown at the subvening level nor predictable from it… On this there sometimes supervenes a further level with the quality called "life"; and certain subtle syntheses which carry life are the foundation for a further level with a new quality, "mind." This is the highest level known to us, but not necessarily the highest possible level. The universe has a forward thrust, called its "nisus" (broadly to be identified with the Time aspect) in virtue of which further levels are to be expected...[14]
Emergent evolution was revived by Robert G. B. Reid (March 20, 1939 – May 28, 2016), a biology professor at the University of Victoria (in British Columbia, Canada). In his book Evolutionary Theory: The Unfinished Synthesis (1985), he stated that the modern evolutionary synthesis with its emphasis on natural selection is an incomplete picture of evolution, and that emergent evolution can explain the origin of genetic variation.[6][7][8] Biologist Ernst Mayr heavily criticized the book, claiming it was a misinformed attack on natural selection. Mayr commented that Reid was working from an "obsolete conceptual framework", provided no solid evidence, and was arguing for a teleological process of evolution.[15] In 2004, biologist Samuel Scheiner stated that Reid's "presentation is both a caricature of evolutionary theory and severely out of date."[16]
Reid later published the book Biological Emergences (2007) with a theory on how emergent novelties are generated in evolution.[17][18] According to Massimo Pigliucci, "Biological Emergences by Robert Reid is an interesting contribution to the ongoing debate on the status of evolutionary theory, but it is hard to separate the good stuff from the more dubious claims." Pigliucci noted that a dubious claim in the book is that natural selection has no role in evolution.[19] It was positively reviewed by biologist Alexander Badyaev, who commented that "the book succeeds in drawing attention to an under appreciated aspect of the evolutionary process".[20] Others have criticized Reid's unorthodox views on emergence and evolution.
|
https://en.wikipedia.org/wiki/Emergent_evolution
|
In mathematics, the Kronecker delta (named after Leopold Kronecker) is a function of two variables, usually just non-negative integers. The function is 1 if the variables are equal, and 0 otherwise:
$$\delta_{ij} = \begin{cases} 0 & \text{if } i \neq j, \\ 1 & \text{if } i = j, \end{cases}$$
or, with use of Iverson brackets,
$$\delta_{ij} = [i = j].$$
For example, $\delta_{12} = 0$ because $1 \neq 2$, whereas $\delta_{33} = 1$ because $3 = 3$.
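The definition translates directly into code; the helper below is only an illustrative one-liner, not a standard library function:

```python
def kronecker_delta(i: int, j: int) -> int:
    """Return 1 if the two indices are equal, 0 otherwise."""
    return 1 if i == j else 0

print(kronecker_delta(1, 2))   # 0, since 1 != 2
print(kronecker_delta(3, 3))   # 1, since 3 == 3
```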
The Kronecker delta appears naturally in many areas of mathematics, physics, engineering and computer science, as a means of compactly expressing its definition above.
In linear algebra, the $n \times n$ identity matrix $\mathbf{I}$ has entries equal to the Kronecker delta:
$$I_{ij} = \delta_{ij},$$
where $i$ and $j$ take the values $1, 2, \ldots, n$, and the inner product of vectors can be written as
$$\mathbf{a} \cdot \mathbf{b} = \sum_{i,j=1}^{n} a_i \delta_{ij} b_j = \sum_{i=1}^{n} a_i b_i.$$
Here the Euclidean vectors are defined as $n$-tuples $\mathbf{a} = (a_1, a_2, \dots, a_n)$ and $\mathbf{b} = (b_1, b_2, \dots, b_n)$, and the last step is obtained by using the values of the Kronecker delta to reduce the summation over $j$.
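As a quick numerical check of this identity (a sketch using NumPy, where the identity matrix plays the role of $\delta_{ij}$):

```python
import numpy as np

n = 4
delta = np.eye(n)                      # delta[i, j] == 1 when i == j, else 0

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([5.0, 6.0, 7.0, 8.0])

# sum_{i,j} a_i * delta_{ij} * b_j collapses to the ordinary dot product sum_i a_i b_i
lhs = np.einsum("i,ij,j->", a, delta, b)
rhs = a @ b
print(lhs, rhs)                        # 70.0 70.0
```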
It is common for $i$ and $j$ to be restricted to a set of the form $\{1, 2, \ldots, n\}$ or $\{0, 1, \ldots, n-1\}$, but the Kronecker delta can be defined on an arbitrary set.
The following equations are satisfied:
$$\sum_{j} \delta_{ij} a_j = a_i, \qquad \sum_{i} a_i \delta_{ij} = a_j, \qquad \sum_{k} \delta_{ik} \delta_{kj} = \delta_{ij}.$$
Therefore, the matrix $\delta$ can be considered as an identity matrix.
Another useful representation is the following form:
$$\delta_{nm} = \lim_{N \to \infty} \frac{1}{N} \sum_{k=1}^{N} e^{2\pi i \frac{k}{N}(n-m)}.$$
This can be derived using the formula for the geometric series.
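To spell out the step that the geometric series supplies (a short verification, not part of the original text): for fixed integers with $n \neq m$, take $N > |n-m|$, so that the ratio $r = e^{2\pi i (n-m)/N} \neq 1$; then
$$\frac{1}{N}\sum_{k=1}^{N} r^{k} = \frac{r}{N}\cdot\frac{r^{N}-1}{r-1} = 0, \qquad \text{since } r^{N} = e^{2\pi i (n-m)} = 1,$$
while for $n = m$ every term equals 1, so the average is 1. The limit therefore reproduces $\delta_{nm}$.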
Using the Iverson bracket:
$$\delta_{ij} = [i = j].$$
Often, a single-argument notation $\delta_i$ is used, which is equivalent to setting $j = 0$:
$$\delta_i = \delta_{i0} = \begin{cases} 0, & \text{if } i \neq 0, \\ 1, & \text{if } i = 0. \end{cases}$$
In linear algebra, it can be thought of as a tensor, and is written $\delta^i_j$. Sometimes the Kronecker delta is called the substitution tensor.[1]
In the study of digital signal processing (DSP), the Kronecker delta function sometimes means the unit sample function $\delta[n]$, which represents a special case of the 2-dimensional Kronecker delta function $\delta_{ij}$ where the Kronecker indices include the number zero, and where one of the indices is zero:
$$\delta[n] \equiv \delta_{n0} \equiv \delta_{0n}, \qquad \text{where } -\infty < n < \infty.$$
Or more generally:
$$\delta[n-k] \equiv \delta[k-n] \equiv \delta_{nk} \equiv \delta_{kn}, \qquad \text{where } -\infty < n < \infty,\ -\infty < k < \infty.$$
For discrete-time signals, it is conventional to place a single integer index in square braces; in contrast, the Kronecker delta, $\delta_{ij}$, can have any number of indices. In LTI system theory, the discrete unit sample function is typically used as an input to a discrete-time system for determining the impulse response function of the system, which characterizes the system for any general input. In contrast, the typical purpose of the Kronecker delta function is for filtering terms from an Einstein summation convention.
The discrete unit sample function is more simply defined as:
$$\delta[n] = \begin{cases} 1 & n = 0, \\ 0 & n \text{ is another integer.} \end{cases}$$
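Its role as a probe of LTI systems can be shown with a toy three-tap moving-average filter (chosen only for illustration):

```python
def unit_sample(n: int) -> int:
    """Discrete unit sample: 1 at n == 0, 0 for every other integer."""
    return 1 if n == 0 else 0

def moving_average(x):
    """Toy LTI system: three-tap moving average, treating samples before n = 0 as zero."""
    padded = [0, 0] + list(x)
    return [(padded[n] + padded[n + 1] + padded[n + 2]) / 3 for n in range(len(x))]

# Feeding the unit sample into the system reads off its impulse response.
impulse = [unit_sample(n) for n in range(6)]
print(moving_average(impulse))   # [0.333..., 0.333..., 0.333..., 0.0, 0.0, 0.0]
```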
In comparison, in continuous-time systems the Dirac delta function is often confused with both the Kronecker delta function and the unit sample function. The Dirac delta is defined as:
$$\begin{cases} \int_{-\varepsilon}^{+\varepsilon} \delta(t)\,dt = 1 & \forall\, \varepsilon > 0, \\ \delta(t) = 0 & \forall\, t \neq 0. \end{cases}$$
Unlike the Kronecker delta function $\delta_{ij}$ and the unit sample function $\delta[n]$, the Dirac delta function $\delta(t)$ does not have an integer index; it has a single continuous non-integer value $t$.
The term "unit impulse function" refers to the Dirac delta function $\delta(t)$ in continuous-time systems and to the Kronecker delta function $\delta[n]$ in discrete-time systems.
The Kronecker delta has the so-called sifting property that for $j \in \mathbb{Z}$:
$$\sum_{i=-\infty}^{\infty} a_i \delta_{ij} = a_j,$$
and if the integers are viewed as a measure space, endowed with the counting measure, then this property coincides with the defining property of the Dirac delta function,
$$\int_{-\infty}^{\infty} \delta(x-y) f(x)\,dx = f(y),$$
and in fact Dirac's delta was named after the Kronecker delta because of this analogous property.[2] In signal processing it is usually the context (discrete or continuous time) that distinguishes the Kronecker and Dirac "functions". By convention, $\delta(t)$ generally indicates continuous time (Dirac), whereas arguments like $i$, $j$, $k$, $l$, $m$, and $n$ are usually reserved for discrete time (Kronecker). Another common practice is to represent discrete sequences with square brackets; thus: $\delta[n]$. The Kronecker delta is not the result of directly sampling the Dirac delta function.
The Kronecker delta forms the multiplicative identity element of an incidence algebra.[3]
Inprobability theoryandstatistics, the Kronecker delta andDirac delta functioncan both be used to represent adiscrete distribution. If thesupportof a distribution consists of pointsx={x1,⋯,xn}{\displaystyle \mathbf {x} =\{x_{1},\cdots ,x_{n}\}}, with corresponding probabilitiesp1,⋯,pn{\displaystyle p_{1},\cdots ,p_{n}}, then theprobability mass functionp(x){\displaystyle p(x)}of the distribution overx{\displaystyle \mathbf {x} }can be written, using the Kronecker delta, asp(x)=∑i=1npiδxxi.{\displaystyle p(x)=\sum _{i=1}^{n}p_{i}\delta _{xx_{i}}.}
Equivalently, theprobability density functionf(x){\displaystyle f(x)}of the distribution can be written using the Dirac delta function asf(x)=∑i=1npiδ(x−xi).{\displaystyle f(x)=\sum _{i=1}^{n}p_{i}\delta (x-x_{i}).}
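As a small worked example (with made-up support points and probabilities), the Kronecker-delta form of the probability mass function can be evaluated directly:

```python
support = [1, 2, 5]        # x_1, x_2, x_3 (example values)
probs   = [0.2, 0.3, 0.5]  # p_1, p_2, p_3

def pmf(x):
    # p(x) = sum_i p_i * delta_{x, x_i}
    return sum(p * (1 if x == xi else 0) for xi, p in zip(support, probs))

print(pmf(2))   # 0.3
print(pmf(4))   # 0.0, since 4 is not in the support
```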
Under certain conditions, the Kronecker delta can arise from sampling a Dirac delta function. For example, if a Dirac delta impulse occurs exactly at a sampling point and is ideally lowpass-filtered (with cutoff at the critical frequency) per theNyquist–Shannon sampling theorem, the resulting discrete-time signal will be a Kronecker delta function.
If it is considered as a type(1,1){\displaystyle (1,1)}tensor, the Kronecker tensor can be writtenδji{\displaystyle \delta _{j}^{i}}with acovariantindexj{\displaystyle j}andcontravariantindexi{\displaystyle i}:δji={0(i≠j),1(i=j).{\displaystyle \delta _{j}^{i}={\begin{cases}0&(i\neq j),\\1&(i=j).\end{cases}}}
This tensor represents the identity mapping (or identity matrix), regarded as a linear mapping of a vector space to itself.
Thegeneralized Kronecker deltaormulti-index Kronecker deltaof order2p{\displaystyle 2p}is a type(p,p){\displaystyle (p,p)}tensor that is completelyantisymmetricin itsp{\displaystyle p}upper indices, and also in itsp{\displaystyle p}lower indices.
Two definitions that differ by a factor ofp!{\displaystyle p!}are in use. The version presented below has nonzero components scaled to be±1{\displaystyle \pm 1}. The second version has nonzero components that are±1/p!{\displaystyle \pm 1/p!}, with consequent changes to the scaling factors in formulae, such as the scaling factors of1/p!{\displaystyle 1/p!}in§ Properties of the generalized Kronecker deltabelow disappearing.[4]
In terms of the indices, the generalized Kronecker delta is defined as:[5][6]δν1…νpμ1…μp={−1ifν1…νpare distinct integers and are an even permutation ofμ1…μp−1ifν1…νpare distinct integers and are an odd permutation ofμ1…μp−0in all other cases.{\displaystyle \delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}={\begin{cases}{\phantom {-}}1&\quad {\text{if }}\nu _{1}\dots \nu _{p}{\text{ are distinct integers and are an even permutation of }}\mu _{1}\dots \mu _{p}\\-1&\quad {\text{if }}\nu _{1}\dots \nu _{p}{\text{ are distinct integers and are an odd permutation of }}\mu _{1}\dots \mu _{p}\\{\phantom {-}}0&\quad {\text{in all other cases}}.\end{cases}}}
LetSp{\displaystyle \mathrm {S} _{p}}be thesymmetric groupof degreep{\displaystyle p}, then:δν1…νpμ1…μp=∑σ∈Spsgn(σ)δνσ(1)μ1⋯δνσ(p)μp=∑σ∈Spsgn(σ)δν1μσ(1)⋯δνpμσ(p).{\displaystyle \delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}=\sum _{\sigma \in \mathrm {S} _{p}}\operatorname {sgn}(\sigma )\,\delta _{\nu _{\sigma (1)}}^{\mu _{1}}\cdots \delta _{\nu _{\sigma (p)}}^{\mu _{p}}=\sum _{\sigma \in \mathrm {S} _{p}}\operatorname {sgn}(\sigma )\,\delta _{\nu _{1}}^{\mu _{\sigma (1)}}\cdots \delta _{\nu _{p}}^{\mu _{\sigma (p)}}.}
Usinganti-symmetrization:δν1…νpμ1…μp=p!δ[ν1μ1…δνp]μp=p!δν1[μ1…δνpμp].{\displaystyle \delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}=p!\delta _{[\nu _{1}}^{\mu _{1}}\dots \delta _{\nu _{p}]}^{\mu _{p}}=p!\delta _{\nu _{1}}^{[\mu _{1}}\dots \delta _{\nu _{p}}^{\mu _{p}]}.}
In terms of ap×p{\displaystyle p\times p}determinant:[7]δν1…νpμ1…μp=|δν1μ1⋯δνpμ1⋮⋱⋮δν1μp⋯δνpμp|.{\displaystyle \delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}={\begin{vmatrix}\delta _{\nu _{1}}^{\mu _{1}}&\cdots &\delta _{\nu _{p}}^{\mu _{1}}\\\vdots &\ddots &\vdots \\\delta _{\nu _{1}}^{\mu _{p}}&\cdots &\delta _{\nu _{p}}^{\mu _{p}}\end{vmatrix}}.}
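The determinant form lends itself to a quick numerical cross-check; the sketch below evaluates the generalized Kronecker delta for small index tuples (the specific tuples are arbitrary examples).

```python
import numpy as np

def gen_kronecker(mu, nu):
    """Generalized Kronecker delta via the p x p determinant of ordinary deltas."""
    p = len(mu)
    m = [[1.0 if mu[i] == nu[j] else 0.0 for j in range(p)] for i in range(p)]
    return int(round(float(np.linalg.det(np.array(m)))))

print(gen_kronecker((1, 2), (1, 2)))   #  1 : identical tuples (even permutation)
print(gen_kronecker((1, 2), (2, 1)))   # -1 : odd permutation
print(gen_kronecker((1, 2), (1, 3)))   #  0 : index sets differ
```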
Using theLaplace expansion(Laplace's formula) of determinant, it may be definedrecursively:[8]δν1…νpμ1…μp=∑k=1p(−1)p+kδνkμpδν1…νˇk…νpμ1…μk…μˇp=δνpμpδν1…νp−1μ1…μp−1−∑k=1p−1δνkμpδν1…νk−1νpνk+1…νp−1μ1…μk−1μkμk+1…μp−1,{\displaystyle {\begin{aligned}\delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}&=\sum _{k=1}^{p}(-1)^{p+k}\delta _{\nu _{k}}^{\mu _{p}}\delta _{\nu _{1}\dots {\check {\nu }}_{k}\dots \nu _{p}}^{\mu _{1}\dots \mu _{k}\dots {\check {\mu }}_{p}}\\&=\delta _{\nu _{p}}^{\mu _{p}}\delta _{\nu _{1}\dots \nu _{p-1}}^{\mu _{1}\dots \mu _{p-1}}-\sum _{k=1}^{p-1}\delta _{\nu _{k}}^{\mu _{p}}\delta _{\nu _{1}\dots \nu _{k-1}\,\nu _{p}\,\nu _{k+1}\dots \nu _{p-1}}^{\mu _{1}\dots \mu _{k-1}\,\mu _{k}\,\mu _{k+1}\dots \mu _{p-1}},\end{aligned}}}where the caron,ˇ{\displaystyle {\check {}}}, indicates an index that is omitted from the sequence.
Whenp=n{\displaystyle p=n}(the dimension of the vector space), in terms of theLevi-Civita symbol:δν1…νnμ1…μn=εμ1…μnεν1…νn.{\displaystyle \delta _{\nu _{1}\dots \nu _{n}}^{\mu _{1}\dots \mu _{n}}=\varepsilon ^{\mu _{1}\dots \mu _{n}}\varepsilon _{\nu _{1}\dots \nu _{n}}\,.}More generally, form=n−p{\displaystyle m=n-p}, using theEinstein summation convention:δν1…νpμ1…μp=1m!εκ1…κmμ1…μpεκ1…κmν1…νp.{\displaystyle \delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}={\tfrac {1}{m!}}\varepsilon ^{\kappa _{1}\dots \kappa _{m}\mu _{1}\dots \mu _{p}}\varepsilon _{\kappa _{1}\dots \kappa _{m}\nu _{1}\dots \nu _{p}}\,.}
Kronecker Delta contractions depend on the dimension of the space. For example,δμ1ν1δν1ν2μ1μ2=(d−1)δν2μ2,{\displaystyle \delta _{\mu _{1}}^{\nu _{1}}\delta _{\nu _{1}\nu _{2}}^{\mu _{1}\mu _{2}}=(d-1)\delta _{\nu _{2}}^{\mu _{2}},}wheredis the dimension of the space. From this relation the full contracted delta is obtained asδμ1μ2ν1ν2δν1ν2μ1μ2=2d(d−1).{\displaystyle \delta _{\mu _{1}\mu _{2}}^{\nu _{1}\nu _{2}}\delta _{\nu _{1}\nu _{2}}^{\mu _{1}\mu _{2}}=2d(d-1).}The generalization of the preceding formulas is[citation needed]δμ1…μnν1…νnδν1…νpμ1…μp=n!(d−p+n)!(d−p)!δνn+1…νpμn+1…μp.{\displaystyle \delta _{\mu _{1}\dots \mu _{n}}^{\nu _{1}\dots \nu _{n}}\delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}=n!{\frac {(d-p+n)!}{(d-p)!}}\delta _{\nu _{n+1}\dots \nu _{p}}^{\mu _{n+1}\dots \mu _{p}}.}
The generalized Kronecker delta may be used foranti-symmetrization:1p!δν1…νpμ1…μpaν1…νp=a[μ1…μp],1p!δν1…νpμ1…μpaμ1…μp=a[ν1…νp].{\displaystyle {\begin{aligned}{\frac {1}{p!}}\delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}a^{\nu _{1}\dots \nu _{p}}&=a^{[\mu _{1}\dots \mu _{p}]},\\{\frac {1}{p!}}\delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}a_{\mu _{1}\dots \mu _{p}}&=a_{[\nu _{1}\dots \nu _{p}]}.\end{aligned}}}
From the above equations and the properties ofanti-symmetric tensors, we can derive the properties of the generalized Kronecker delta:1p!δν1…νpμ1…μpa[ν1…νp]=a[μ1…μp],1p!δν1…νpμ1…μpa[μ1…μp]=a[ν1…νp],1p!δν1…νpμ1…μpδκ1…κpν1…νp=δκ1…κpμ1…μp,{\displaystyle {\begin{aligned}{\frac {1}{p!}}\delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}a^{[\nu _{1}\dots \nu _{p}]}&=a^{[\mu _{1}\dots \mu _{p}]},\\{\frac {1}{p!}}\delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}a_{[\mu _{1}\dots \mu _{p}]}&=a_{[\nu _{1}\dots \nu _{p}]},\\{\frac {1}{p!}}\delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}\delta _{\kappa _{1}\dots \kappa _{p}}^{\nu _{1}\dots \nu _{p}}&=\delta _{\kappa _{1}\dots \kappa _{p}}^{\mu _{1}\dots \mu _{p}},\end{aligned}}}which are the generalized version of formulae written in§ Properties. The last formula is equivalent to theCauchy–Binet formula.
Reducing the order via summation of the indices may be expressed by the identity[9]δν1…νsμs+1…μpμ1…μsμs+1…μp=(n−s)!(n−p)!δν1…νsμ1…μs.{\displaystyle \delta _{\nu _{1}\dots \nu _{s}\,\mu _{s+1}\dots \mu _{p}}^{\mu _{1}\dots \mu _{s}\,\mu _{s+1}\dots \mu _{p}}={\frac {(n-s)!}{(n-p)!}}\delta _{\nu _{1}\dots \nu _{s}}^{\mu _{1}\dots \mu _{s}}.}
Using both the summation rule for the casep=n{\displaystyle p=n}and the relation with the Levi-Civita symbol,the summation rule of the Levi-Civita symbolis derived:δν1…νpμ1…μp=1(n−p)!εμ1…μpκp+1…κnεν1…νpκp+1…κn.{\displaystyle \delta _{\nu _{1}\dots \nu _{p}}^{\mu _{1}\dots \mu _{p}}={\frac {1}{(n-p)!}}\varepsilon ^{\mu _{1}\dots \mu _{p}\,\kappa _{p+1}\dots \kappa _{n}}\varepsilon _{\nu _{1}\dots \nu _{p}\,\kappa _{p+1}\dots \kappa _{n}}.}The 4D version of the last relation appears in Penrose'sspinor approach to general relativity[10]that he later generalized, while he was developing Aitken's diagrams,[11]to become part of the technique ofPenrose graphical notation.[12]Also, this relation is extensively used inS-dualitytheories, especially when written in the language ofdifferential formsandHodge duals.
For any integersj{\displaystyle j}andk{\displaystyle k}, the Kronecker delta can be written as a complexcontour integralusing a standardresiduecalculation. The integral is taken over theunit circlein thecomplex plane, oriented counterclockwise. An equivalent representation of the integral arises by parameterizing the contour by an angle around the origin.δjk=12πi∮|z|=1zj−k−1dz=12π∫02πei(j−k)φdφ{\displaystyle \delta _{jk}={\frac {1}{2\pi i}}\oint _{|z|=1}z^{j-k-1}\,dz={\frac {1}{2\pi }}\int _{0}^{2\pi }e^{i(j-k)\varphi }\,d\varphi }
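The angular form of this representation can be checked numerically with simple quadrature, as in the sketch below (the sample count is chosen arbitrarily).

```python
import numpy as np

def delta_from_integral(j, k, samples=100_000):
    """(1 / 2*pi) * integral of exp(i*(j - k)*phi) over [0, 2*pi), by a Riemann sum."""
    phi = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    return float(np.mean(np.exp(1j * (j - k) * phi)).real)

print(round(delta_from_integral(4, 4)))   # 1
print(round(delta_from_integral(4, 7)))   # 0
```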
The Kronecker comb function with periodN{\displaystyle N}is defined (usingDSPnotation) as:ΔN[n]=∑k=−∞∞δ[n−kN],{\displaystyle \Delta _{N}[n]=\sum _{k=-\infty }^{\infty }\delta [n-kN],}whereN≠0{\displaystyle N\neq 0}andn{\displaystyle n}are integers. The Kronecker comb thus consists of an infinite series of unit impulses that areNunits apart, aligned so one of the impulses occurs at zero. It may be considered to be the discrete analog of theDirac comb.
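A short Python sketch of the comb over a finite window (the period and window are example choices):

```python
import numpy as np

def kronecker_comb(n, N):
    """1 when n is an integer multiple of N (N != 0), else 0."""
    return np.where(np.asarray(n) % N == 0, 1, 0)

n = np.arange(-6, 7)
print(kronecker_comb(n, 3))   # impulses at n = -6, -3, 0, 3, 6
```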
|
https://en.wikipedia.org/wiki/Kronecker_delta
|
Surveillance capitalismis a concept inpolitical economicswhich denotes the widespread collection andcommodificationofpersonal databy corporations. This phenomenon is distinct fromgovernment surveillance, although the two can be mutually reinforcing. The concept of surveillance capitalism, as described byShoshana Zuboff, is driven by aprofit-makingincentive, and arose as advertising companies, led by Google'sAdWords, saw the possibilities of using personal data to target consumers more precisely.[1]
Increased data collection may have various benefits for individuals and society, such asself-optimization(thequantified self),[2]societal optimizations (e.g., bysmart cities) and optimized services (including variousweb applications). However, ascapitalismfocuses on expanding the proportion of social life that is open todata collectionanddata processing,[2]this can have significant implications for vulnerability and control of society, as well as forprivacy.
The economic pressures of capitalism are driving the intensification of online connection andmonitoring, with spaces of social life opening up to saturation by corporate actors directed at making profits and/or regulating behavior. Personal data points therefore increased in value once the possibilities oftargeted advertisingbecame known.[3]As a result, the rising price of data has limited the purchase of personaldata pointsto the richest in society.[4]
Shoshana Zuboff writes that "analysing massive data sets began as a way to reduce uncertainty by discovering the probabilities of future patterns in the behavior of people and systems".[5]In 2014, Vincent Mosco referred to the marketing of information about customers and subscribers to advertisers assurveillance capitalismand made note of thesurveillance statealongside it.[6]Christian Fuchsfound that the surveillance state fuses with surveillance capitalism.[7]
Similarly, Zuboff notes that the issue is further complicated by highly invisible collaborative arrangements with state security apparatuses. According to Trebor Scholz, companies recruit people as informants for this type of capitalism.[8]Zuboff contrasts themass productionofindustrial capitalismwith surveillance capitalism: the former was interdependent with its populations, who were its consumers and employees, whereas the latter preys on dependent populations, who are neither its consumers nor its employees and are largely ignorant of its procedures.[9]
Their research shows that the capitalist addition to the analysis of massive amounts of data has taken its original purpose in an unexpected direction.[1]Surveillancehas been changing power structures in the information economy, potentially shifting the balance of power further from nation-states and towards large corporations employing the surveillance capitalist logic.[10]
Zuboff notes that surveillance capitalism extends beyond the conventional institutional terrain of the private firm, accumulating not only surveillance assets and capital but also rights, and operating without meaningful mechanisms of consent.[9]In other words, analysing massive data sets was at some point not only executed by the state apparatuses but also companies. Zuboff claims that bothGoogleandFacebookhave invented surveillance capitalism and translated it into "a new logic of accumulation".[1][11][12]
This mutation resulted in both companies collecting very large numbers of data points about their users, with the core purpose of making a profit. By selling these data points to external users (particularly advertisers), it has become an economic mechanism. The combination of the analysis of massive data sets and the use of these data sets as a market mechanism has shaped the concept of surveillance capitalism. Surveillance capitalism has been heralded as the successor toneoliberalism.[13][14]
Oliver Stone, creator of the filmSnowden, pointed to thelocation-based gamePokémon Goas the "latest sign of the emerging phenomenon and demonstration of surveillance capitalism". Stone criticized that the location of its users was used not only for game purposes, but also to retrieve more information about its players. By tracking users' locations, the game collected far more information than just users' names and locations: "it can access the contents of your USB storage, your accounts, photographs, network connections, and phone activities, and can even activate your phone, when it is in standby mode". This data can then be analysed and commodified by companies such as Google (which significantly invested in the game's development) to improve the effectiveness oftargeted advertisement.[15][16]
Another aspect of surveillance capitalism is its influence onpolitical campaigning. Personal data retrieved bydata minerscan enable various companies (most notoriouslyCambridge Analytica) to improve the targeting ofpoliticaladvertising, a step beyond the commercial aims of previous surveillance capitalist operations. In this way, it is possible that political parties will be able to produce far more targeted political advertising to maximise its impact on voters. However,Cory Doctorowwrites that the misuse of these data sets "will lead us towards totalitarianism".[17][better source needed]This may resemble acorporatocracy, andJoseph Turowwrites that "the centrality ofcorporate poweris a direct reality at the very heart of thedigital age".[2][18]: 17
The terminology "surveillance capitalism" was popularized by Harvard Professor Shoshana Zuboff.[19]: 107In Zuboff's theory, surveillance capitalism is a novel market form and a specific logic ofcapitalistaccumulation. In her 2014 essayA Digital Declaration: Big Data as Surveillance Capitalism, she characterized it as a "radically disembedded and extractive variant of information capitalism" based on thecommodificationof "reality" and its transformation into behavioral data for analysis and sales.[20][21][22][23]
In a subsequent article in 2015, Zuboff analyzed the societal implications of thismutationof capitalism. She distinguished between "surveillance assets", "surveillance capital", and "surveillance capitalism" and their dependence on a global architecture of computer mediation that she calls "Big Other", a distributed and largely uncontested new expression of power that constitutes hidden mechanisms of extraction, commodification, and control that threatens core values such asfreedom,democracy, andprivacy.[24][2]
According to Zuboff, surveillance capitalism was pioneered by Google and later Facebook, just asmass-productionand managerial capitalism were pioneered byFordandGeneral Motorsa century earlier, and has now become the dominant form of information capitalism.[9]Zuboff emphasizes that behavioral changes enabled by artificial intelligence have become aligned with the financial goals of American internet companies such as Google, Facebook, and Amazon.[19]: 107
In herOxford Universitylecture published in 2016, Zuboff identified the mechanisms and practices of surveillance capitalism, including the production of "prediction products" for sale in new "behavioral futures markets." She introduced the concept "dispossession bysurveillance", arguing that it challenges the psychological and political bases ofself-determinationby concentrating rights in the surveillance regime. This is described as a "coup from above."[25]
Zuboff's bookThe Age of Surveillance Capitalism[26]is a detailed examination of the unprecedented power of surveillance capitalism and the quest by powerful corporations to predict and control human behavior.[26]Zuboff identifies four key features in the logic of surveillance capitalism and explicitly follows the four key features identified by Google's chief economist,Hal Varian:[27]
Zuboff compares demanding privacy from surveillance capitalists or lobbying for an end to commercial surveillance on the Internet to askingHenry Fordto make eachModel Tby hand and states that such demands are existential threats that violate the basic mechanisms of the entity's survival.[9]
Zuboff warns that principles of self-determination might be forfeited due to "ignorance, learned helplessness, inattention, inconvenience, habituation, or drift" and states that "we tend to rely on mental models, vocabularies, and tools distilled from past catastrophes," referring to the twentieth century'stotalitariannightmares or themonopolisticpredations ofGilded Agecapitalism, with countermeasures that have been developed to fight those earlier threats not being sufficient or even appropriate to meet the novel challenges.[9]
She also poses the question: "will we be the masters of information, or will we be its slaves?" and states that "if the digital future is to be our home, then it is we who must make it so".[28]
In her book, Zuboff discusses the differences between industrial capitalism and surveillance capitalism. Zuboff writes that as industrial capitalism exploited nature, surveillance capitalism exploits human nature.[29]
The term "surveillance capitalism" has also been used bypolitical economistsJohn Bellamy FosterandRobert W. McChesney, although with a different meaning. In an article published inMonthly Reviewin 2014, they apply it to describe the manifestation of the "insatiable need for data" offinancialization, which they explain is "the long-term growth speculation on financial assets relative to GDP" introduced in the United States by industry and government in the 1980s that evolved out of themilitary-industrial complexand the advertising industry.[30]
Numerous organizations have been struggling forfree speechandprivacy rightsunder the new surveillance capitalism,[31]and various national governments have enactedprivacy laws. It is also conceivable that new capabilities and uses for mass surveillance require structural changes towards a new system to create accountability and prevent misuse.[32]Government attention to the dangers of surveillance capitalism increased especially after the exposure of theFacebook-Cambridge Analytica data scandalin early 2018.[4]In response to the misuse of mass surveillance, multiple states have taken preventive measures. TheEuropean Union, for example, has reacted to these events and tightened its rules and regulations on the misuse of big data.[33]Surveillance capitalism has become considerably harder to practice under these rules, known as theGeneral Data Protection Regulation.[33]However, implementing preventive measures against the misuse of mass surveillance is hard for many countries, as it requires structural change of the system.[34]
Bruce Sterling's 2014 lecture atStrelka Institute"The epic struggle of theinternet of things"[35]explained how consumer products could become surveillance objects that track people's everyday lives. In his talk, Sterling highlighted the alliances between multinational corporations that develop Internet of Things-based surveillance systems, which feed surveillance capitalism.[35][36][37]
In 2015, Tega Brain and Surya Mattu's satirical artworkUnfit Bitsencouraged users to subvert fitness data collected byFitbits. They suggested ways to fake datasets by attaching the device to, for example, a metronome or a bicycle wheel.[38][39]In 2018, Brain created a project withSam LavignecalledNew Organs, which collects people's stories of being monitored online and offline.[40][41]
The 2019 documentary filmThe Great Hacktells the story of how a company named Cambridge Analytica used Facebook to manipulate the2016 U.S. presidential election. Extensive profiling of users and news feeds ordered by black-box algorithms were presented as the main source of the problem, which is also mentioned in Zuboff's book.[42]The use of personal data to categorize individuals and potentially influence them politically shows how people can become voiceless in the face of data misuse. This underscores the crucial role surveillance capitalism can play in social injustice, as it can affect all aspects of life.[43]
|
https://en.wikipedia.org/wiki/Surveillance_capitalism
|
Wafflesis a collection of command-line tools for performingmachine learningoperations developed atBrigham Young University. These tools are written inC++, and are available under theGNU Lesser General Public License.
The Waffles machine learning toolkit[1]contains command-line tools for performing various operations related tomachine learning,data mining, andpredictive modeling. The primary focus of Waffles is to provide tools that are simple to use in scripted experiments or processes. For example, the supervised learning algorithms included in Waffles are all designed to support multi-dimensional labels,classificationandregression, automatically impute missing values, and automatically apply necessary filters to transform the data to a type that the algorithm can support, such that arbitrary learning algorithms can be used with arbitrary data sets. Many other machine learning toolkits provide similar functionality, but require the user to explicitly configure data filters and transformations to make it compatible with a particular learning algorithm. The algorithms provided in Waffles also have the ability to automatically tune their own parameters (with the cost of additional computational overhead).
Because Waffles is designed for scriptability, it deliberately avoids presenting its tools in a graphical environment. It does, however, include a graphical "wizard" tool that guides the user in generating a command that will perform a desired task. This wizard does not actually perform the operation; instead, it requires the user to paste the command that it generates into a command terminal or a script. The idea motivating this design is to prevent the user from becoming "locked in" to a graphical interface.
All of the Waffles tools are implemented as thin wrappers around functionality in a C++ class library. This makes it possible to convert scripted processes into native applications with minimal effort.
Waffles was first released as an open source project in 2005. Since that time, it has been developed atBrigham Young University, with a new version having been released approximately every 6–9 months. Waffles is not an acronym—the toolkit was named after the food for historical reasons.
Some of the advantages of Waffles in contrast with other popular open source machine learning toolkits include:
|
https://en.wikipedia.org/wiki/Waffles_(machine_learning)
|
Layerorlayeredmay refer to:
|
https://en.wikipedia.org/wiki/Layer_(disambiguation)
|
AUnix shellis acommand-line interpreterorshellthat provides a command lineuser interfaceforUnix-likeoperating systems. The shell is both an interactivecommand languageand ascripting language, and is used by the operating system to control the execution of the system usingshell scripts.[2]
Users typically interact with a Unix shell using aterminal emulator; however, direct operation via serial hardware connections orSecure Shellare common for server systems. All Unix shells provide filenamewildcarding,piping,here documents,command substitution,variablesandcontrol structuresforcondition-testinganditeration.
Generally, ashellis a program that executes other programs in response to text commands. A sophisticated shell can also change the environment in which other programs execute by passingnamed variables, a parameter list, or an input source.
In Unix-like operating systems, users typically have many choices of command-line interpreters for interactive sessions. When a userlogs intothe system interactively, a shell program is automatically executed for the duration of the session. The type of shell, which may be customized for each user, is typically stored in the user's profile, for example in the localpasswdfile or in a distributed configuration system such asNISorLDAP; however, the user may execute any other available shell interactively.
On operating systems with awindowing system, such asmacOSand desktopLinux distributions, some users may never use the shell directly. On Unix systems, the shell has historically been the implementation language of system startup scripts, including the program that starts a windowing system, configures networking, and many other essential functions. However, some system vendors have replaced the traditional shell-based startup system (init) with different approaches, such assystemd.
The first Unix shell was theThompson shell,sh, written byKen ThompsonatBell Labsand distributed with Versions 1 through 6 of Unix, from 1971 to 1975.[3]Though rudimentary by modern standards, it introduced many of the basic features common to all later Unix shells, including piping, simple control structures usingifandgoto, and filename wildcarding. Though not in current use, it is still available as part of someAncient UNIXsystems.
It was modeled after theMulticsshell, developed in 1965 by American software engineerGlenda Schroeder. Schroeder's Multics shell was itself modeled after theRUNCOMprogramLouis Pouzinshowed to the Multics team. The "rc" suffix on some Unix configuration files (e.g., ".bashrc" or ".vimrc") is a remnant of the RUNCOM ancestry of Unix shells.[1][4]
ThePWB shellor Mashey shell,sh, was an upward-compatible version of the Thompson shell, augmented byJohn Masheyand others and distributed with theProgrammer's Workbench UNIX, circa 1975–1977. It focused on making shell programming practical, especially in large shared computing centers. It added shell variables (precursors ofenvironment variables, including the search path mechanism that evolved into $PATH), user-executable shell scripts, and interrupt-handling. Control structures were extended from if/goto to if/then/else/endif, switch/breaksw/endsw, and while/end/break/continue. As shell programming became widespread, these external commands were incorporated into the shell itself for performance.
But the most widely distributed and influential of the early Unix shells were theBourne shelland theC shell. Both shells have been used as the coding base and model for many derivative and work-alike shells with extended feature sets.[5]
TheBourne shell,sh, was a new Unix shell byStephen Bourneat Bell Labs.[6]Distributed as the shell for UNIX Version 7 in 1979, it introduced the rest of the basic features considered common to all the later Unix shells, includinghere documents,command substitution, more genericvariablesand more extensive builtincontrol structures. The language, including the use of a reversed keyword to mark the end of a block, was influenced byALGOL 68.[7]Traditionally, the Bourne shell program name isshand its path in the Unix file system hierarchy is/bin/sh. But a number of compatible work-alikes are also available with various improvements and additional features. On many systems, sh may be asymbolic linkorhard linkto one of these alternatives:
ThePOSIXstandard specifies its standard shell as a strict subset of theKorn shell, an enhanced version of the Bourne shell. From a user's perspective the Bourne shell was immediately recognized when active by its characteristic default command line prompt character, the dollar sign ($).
TheC shell,csh, was modeled on the C programming language, including the control structures and the expression grammar. It was written byBill Joyas a graduate student atUniversity of California, Berkeley, and was widely distributed withBSD Unix.[9][better source needed]
The C shell also introduced many features for interactive work, including thehistoryandeditingmechanisms,aliases,directory stacks,tilde notation,cdpath,job controlandpath hashing. On many systems, csh may be asymbolic linkorhard linktoTENEX C shell(tcsh), an improved version of Joy's original version. Although the interactive features of csh have been copied to most other shells, the language structure has not been widely copied. The only work-alike isHamilton C shell, written by Nicole Hamilton, first distributed onOS/2in 1988 and onWindowssince 1992.[10]
Shells read configuration files in various circumstances. These files usually contain commands for the shell and are executed when loaded; they are usually used to set important variables used to find executables, like$PATH, and others that control the behavior and appearance of the shell. The table in this section shows the configuration files for popular shells.[11]
Explanation:
Variations on the Unix shell concept that don't derive from Bourne shell or C shell include the following:[15]
|
https://en.wikipedia.org/wiki/Unix_shell
|
Hyperelliptic curve cryptographyis similar toelliptic curve cryptography(ECC) insofar as theJacobianof ahyperelliptic curveis anabelian groupin which to do arithmetic, just as we use thegroupof points on an elliptic curve in ECC.
An(imaginary) hyperelliptic curveofgenusg{\displaystyle g}over a fieldK{\displaystyle K}is given by the equationC:y2+h(x)y=f(x)∈K[x,y]{\displaystyle C:y^{2}+h(x)y=f(x)\in K[x,y]}whereh(x)∈K[x]{\displaystyle h(x)\in K[x]}is a polynomial of degree not larger thang{\displaystyle g}andf(x)∈K[x]{\displaystyle f(x)\in K[x]}is a monic polynomial of degree2g+1{\displaystyle 2g+1}. From this definition it follows that elliptic curves are hyperelliptic curves of genus 1. In hyperelliptic curve cryptographyK{\displaystyle K}is often afinite field. The Jacobian ofC{\displaystyle C}, denotedJ(C){\displaystyle J(C)}, is aquotient group, thus the elements of the Jacobian are not points, they are equivalence classes ofdivisorsof degree 0 under the relation oflinear equivalence. This agrees with the elliptic curve case, because it can be shown that the Jacobian of an elliptic curve is isomorphic with the group of points on the elliptic curve.[1]The use of hyperelliptic curves in cryptography came about in 1989 fromNeal Koblitz. Although introduced only 3 years after ECC, not many cryptosystems implement hyperelliptic curves because the implementation of the arithmetic isn't as efficient as with cryptosystems based on elliptic curves or factoring (RSA). The efficiency of implementing the arithmetic depends on the underlying finite fieldK{\displaystyle K}, in practice it turns out that finite fields ofcharacteristic2 are a good choice for hardware implementations while software is usually faster in odd characteristic.[2]
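For concreteness, the sketch below enumerates the affine points of one made-up imaginary hyperelliptic curve of genus 2 over a small prime field; the curve, h, f and the field size are illustrative assumptions only, not parameters of cryptographic size.

```python
p = 11                            # field F_p (toy example)
g = 2                             # genus
h = lambda x: 0                   # h(x) = 0, degree at most g
f = lambda x: x**5 + 3*x + 1      # monic of degree 2g + 1 = 5 (assumed example)

# Affine points (x, y) satisfying y^2 + h(x)*y = f(x) over F_p
points = [(x, y) for x in range(p) for y in range(p)
          if (y * y + h(x) * y - f(x)) % p == 0]
print(len(points), points[:5])
```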
The Jacobian on a hyperelliptic curve is an Abelian group and as such it can serve as group for thediscrete logarithm problem(DLP). In short, suppose we have an Abelian groupG{\displaystyle G}andg{\displaystyle g}an element ofG{\displaystyle G}, the DLP onG{\displaystyle G}entails finding the integera{\displaystyle a}given two elements ofG{\displaystyle G}, namelyg{\displaystyle g}andga{\displaystyle g^{a}}. The first type of group used was the multiplicative group of a finite field, later also Jacobians of (hyper)elliptic curves were used. If the hyperelliptic curve is chosen with care, thenPollard's rho methodis the most efficient way to solve DLP. This means that, if the Jacobian hasn{\displaystyle n}elements, that the running time is exponential inlog(n){\displaystyle \log(n)}. This makes it possible to use Jacobians of a fairly smallorder, thus making the system more efficient. But if the hyperelliptic curve is chosen poorly, the DLP will become quite easy to solve. In this case there are known attacks which are more efficient than generic discrete logarithm solvers[3]or even subexponential.[4]Hence these hyperelliptic curves must be avoided. Considering various attacks on DLP, it is possible to list the features of hyperelliptic curves that should be avoided.
Allgeneric attackson thediscrete logarithm problemin finite abelian groups such as thePohlig–Hellman algorithmandPollard's rho methodcan be used to attack the DLP in the Jacobian of hyperelliptic curves. The Pohlig-Hellman attack reduces the difficulty of the DLP by looking at the order of the group we are working with. Suppose the groupG{\displaystyle G}that is used hasn=p1r1⋯pkrk{\displaystyle n=p_{1}^{r_{1}}\cdots p_{k}^{r_{k}}}elements, wherep1r1⋯pkrk{\displaystyle p_{1}^{r_{1}}\cdots p_{k}^{r_{k}}}is the prime factorization ofn{\displaystyle n}. Pohlig-Hellman reduces the DLP inG{\displaystyle G}to DLPs in subgroups of orderpi{\displaystyle p_{i}}fori=1,...,k{\displaystyle i=1,...,k}. So forp{\displaystyle p}the largest prime divisor ofn{\displaystyle n}, the DLP inG{\displaystyle G}is just as hard to solve as the DLP in the subgroup of orderp{\displaystyle p}. Therefore, we would like to chooseG{\displaystyle G}such that the largest prime divisorp{\displaystyle p}of#G=n{\displaystyle \#G=n}is almost equal ton{\displaystyle n}itself. Requiringnp≤4{\textstyle {\frac {n}{p}}\leq 4}usually suffices.
Theindex calculus algorithmis another algorithm that can be used to solve DLP under some circumstances. For Jacobians of (hyper)elliptic curves there exists an index calculus attack on DLP. If the genus of the curve becomes too high, the attack will be more efficient than Pollard's rho. Today it is known that even a genus ofg=3{\displaystyle g=3}cannot assure security.[5]Hence we are left with elliptic curves and hyperelliptic curves of genus 2.
Another restriction on the hyperelliptic curves we can use comes from the Menezes-Okamoto-Vanstone-attack / Frey-Rück-attack. The first, often called MOV for short, was developed in 1993, the second came about in 1994. Consider a (hyper)elliptic curveC{\displaystyle C}over a finite fieldFq{\displaystyle \mathbb {F} _{q}}whereq{\displaystyle q}is the power of a prime number. Suppose the Jacobian of the curve hasn{\displaystyle n}elements andp{\displaystyle p}is the largest prime divisor ofn{\displaystyle n}. Fork{\displaystyle k}the smallest positive integer such thatp|qk−1{\displaystyle p|q^{k}-1}there exists a computableinjectivegroup homomorphismfrom the subgroup ofJ(C){\displaystyle J(C)}of orderp{\displaystyle p}toFqk∗{\displaystyle \mathbb {F} _{q^{k}}^{*}}. Ifk{\displaystyle k}is small, we can solve DLP inJ(C){\displaystyle J(C)}by using the index calculus attack inFqk∗{\textstyle \mathbb {F} _{q^{k}}^{*}}. For arbitrary curvesk{\displaystyle k}is very large (around the size ofqg{\displaystyle q^{g}}); so even though the index calculus attack is quite fast for multiplicative groups of finite fields this attack is not a threat for most curves. The injective function used in this attack is apairingand there are some applications in cryptography that make use of them. In such applications it is important to balance the hardness of the DLP inJ(C){\displaystyle J(C)}andFqk∗{\textstyle \mathbb {F} _{q^{k}}^{*}}; depending on thesecurity levelvalues ofk{\displaystyle k}between 6 and 12 are useful.
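The embedding degree used by these attacks is simply the multiplicative order of q modulo p; a minimal sketch with toy numbers is shown below.

```python
def embedding_degree(q, p):
    """Smallest k >= 1 with p | q**k - 1, i.e. the order of q modulo p.
    Assumes p is prime and does not divide q."""
    k, acc = 1, q % p
    while acc != 1:
        acc = (acc * q) % p
        k += 1
    return k

print(embedding_degree(q=2, p=127))   # 7, since 2**7 - 1 = 127
```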
The subgroup ofFqk∗{\textstyle \mathbb {F} _{q^{k}}^{*}}is atorus. There exists some independent usage intorus based cryptography.
There is also a problem ifp{\displaystyle p}, the largest prime divisor of the order of the Jacobian, is equal to the characteristic ofFq.{\displaystyle \mathbb {F} _{q}.}By a different injective map we could then consider the DLP in the additive groupFq{\displaystyle \mathbb {F} _{q}}instead of the DLP on the Jacobian. However, the DLP in this additive group is trivial to solve, as can easily be seen. These curves, called anomalous curves, should therefore also be avoided for DLP-based systems.
Hence, in order to choose a good curve and a good underlying finite field, it is important to know the order of the Jacobian. Consider a hyperelliptic curveC{\textstyle C}of genusg{\textstyle g}over the fieldFq{\textstyle \mathbb {F} _{q}}whereq{\textstyle q}is the power of a prime number and defineCk{\textstyle C_{k}}asC{\textstyle C}but now over the fieldFqk{\textstyle \mathbb {F} _{q^{k}}}. It can be shown that the order of the Jacobian ofCk{\textstyle C_{k}}lies in the interval[(qk−1)2g,(qk+1)2g]{\textstyle [({\sqrt {q}}^{k}-1)^{2g},({\sqrt {q}}^{k}+1)^{2g}]}, called the Hasse-Weil interval.[6]
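A floating-point sketch of the Hasse-Weil bounds (the parameters are illustrative only):

```python
import math

def hasse_weil_interval(q, k, g):
    """Approximate bounds [(sqrt(q)^k - 1)^(2g), (sqrt(q)^k + 1)^(2g)]."""
    r = math.sqrt(q) ** k
    return (r - 1) ** (2 * g), (r + 1) ** (2 * g)

lo, hi = hasse_weil_interval(q=2**31 - 1, k=1, g=2)
print(f"{lo:.3e}  {hi:.3e}")   # a genus-2 Jacobian order over F_q lies in this range
```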
But there is more, we can compute the order using the zeta-function on hyperelliptic curves. LetAk{\textstyle A_{k}}be the number of points onCk{\textstyle C_{k}}. Then we define the zeta-function ofC=C1{\textstyle C=C_{1}}asZC(t)=exp(∑i=1∞Aitii){\textstyle Z_{C}(t)=\exp(\sum _{i=1}^{\infty }{A_{i}{\frac {t^{i}}{i}}})}. For this zeta-function it can be shown thatZC(t)=P(t)(1−t)(1−qt){\textstyle Z_{C}(t)={\frac {P(t)}{(1-t)(1-qt)}}}whereP(t){\textstyle P(t)}is a polynomial of degree2g{\textstyle 2g}with coefficients inZ{\textstyle \mathbb {Z} }.[7]FurthermoreP(t){\textstyle P(t)}factors asP(t)=∏i=1g(1−ait)(1−ai¯t){\textstyle P(t)=\prod _{i=1}^{g}{(1-a_{i}t)(1-{\bar {a_{i}}}t)}}whereai∈C{\textstyle a_{i}\in \mathbb {C} }for alli=1,...,g{\textstyle i=1,...,g}. Herea¯{\textstyle {\bar {a}}}denotes thecomplex conjugateofa{\displaystyle a}. Finally we have that the order ofJ(Ck){\textstyle J(C_{k})}equals∏i=1g|1−aik|2{\textstyle \prod _{i=1}^{g}{|1-a_{i}^{k}|^{2}}}. Hence orders of Jacobians can be found by computing the roots ofP(t){\textstyle P(t)}.
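Assuming the coefficients of P(t) are already known (for instance from point counts), the order of J(C_k) can be recovered numerically as in the sketch below; the degree-2 polynomial used here is a genus-1 sanity check rather than a real genus-2 example.

```python
import numpy as np

def jacobian_order(P_coeffs, k):
    """Order of J(C_k) from the zeta numerator P(t) = c0 + c1*t + ... + c_{2g}*t^{2g}.
    The reciprocal roots a_i come in conjugate pairs, so the product of |1 - a**k|
    over all 2g reciprocal roots equals prod_i |1 - a_i**k|**2."""
    roots = np.roots(P_coeffs[::-1])   # np.roots expects the highest-degree coefficient first
    a = 1.0 / roots
    return float(np.prod(np.abs(1.0 - a ** k)))

# Genus-1 check: P(t) = 1 - 2t + 5t^2 gives order P(1) = 5 + 1 - 2 = 4 over F_5.
print(round(jacobian_order([1, -2, 5], k=1)))   # 4
print(round(jacobian_order([1, -2, 5], k=2)))   # 32, the order over F_25
```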
|
https://en.wikipedia.org/wiki/Hyperelliptic_curve_cryptography
|
Themathematicalconcept of afunctiondates from the 17th century in connection with the development ofcalculus; for example, the slopedy/dx{\displaystyle dy/dx}of agraphat a point was regarded as a function of thex-coordinate of the point. Functions were not explicitly considered in antiquity, but some precursors of the concept can perhaps be seen in the work of medieval philosophers and mathematicians such asOresme.
Mathematicians of the 18th century typically regarded a function as being defined by ananalytic expression. In the 19th century, the demands of the rigorous development ofanalysisbyKarl Weierstrassand others, the reformulation ofgeometryin terms of analysis, and the invention ofset theorybyGeorg Cantor, eventually led to the much more general modern concept of a function as a single-valued mapping from onesetto another.
In the 12th century, mathematicianSharaf al-Din al-Tusianalyzed the equationx3+d=b⋅x2in the formx2⋅ (b–x) =d,stating that the left hand side must at least equal the value ofdfor the equation to have a solution. He then determined the maximum value of this expression. It is arguable that the isolation of this expression is an early approach to the notion of a "function". A value less thandmeans no positive solution; a value equal todcorresponds to one solution, while a value greater thandcorresponds to two solutions. Sharaf al-Din's analysis of this equation was a notable development inIslamic mathematics, but his work was not pursued any further at that time, neither in the Muslim world nor in Europe.[1]
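In modern notation (a reconstruction, not Sharaf al-Din's own argument), the maximum in question is easily located:

```latex
\frac{d}{dx}\,x^{2}(b-x) \;=\; 2bx - 3x^{2} \;=\; x\,(2b-3x) \;=\; 0
\quad\Longrightarrow\quad x=\tfrac{2b}{3},
\qquad
\max_{0<x<b} x^{2}(b-x) \;=\; \Bigl(\tfrac{2b}{3}\Bigr)^{2}\Bigl(b-\tfrac{2b}{3}\Bigr) \;=\; \frac{4b^{3}}{27},
```

so a positive solution of x^2 (b − x) = d exists precisely when d ≤ 4b^3/27, matching the case analysis above.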
According toJean Dieudonné[2]and Ponte,[3]the concept of a function emerged in the 17th century as a result of the development ofanalytic geometryand theinfinitesimal calculus. Nevertheless, Medvedev suggests that the implicit concept of a function is one with an ancient lineage.[4]Ponte also sees more explicit approaches to the concept in theMiddle Ages:
The development of analytical geometry around 1640 allowed mathematicians to go between geometric problems about curves and algebraic relations between "variable coordinatesxandy."[6]Calculus was developed using the notion of variables, with their associated geometric meaning, which persisted well into the eighteenth century.[7]However, the terminology of "function" came to be used in interactions between Leibniz and Bernoulli towards the end of the 17th century.[8]
The term "function" was literally introduced byGottfried Leibniz, in a 1673 letter, to describe a quantity related to points of acurve, such as acoordinateor curve'sslope.[9][10]Johann Bernoullistarted calling expressions made of a single variable "functions." In 1698, he agreed with Leibniz that any quantity formed "in an algebraic and transcendental manner" may be called a function ofx.[11]By 1718, he came to regard as a function "any expression made up of a variable and some constants."[12]Alexis Claude Clairaut(in approximately 1734) andLeonhard Eulerintroduced the familiar notationf(x){\displaystyle {f(x)}}for the value of a function.[13]
The functions considered in those times are called todaydifferentiable functions. For this type of function, one can talk aboutlimitsand derivatives; both are measurements of the output or the change in the output as it depends on the input or the change in the input. Such functions are the basis ofcalculus.
In the first volume of his fundamental textIntroductio in analysin infinitorum, published in 1748, Euler gave essentially the same definition of a function as his teacher Bernoulli, as anexpressionorformulainvolving variables and constants e.g.,x2+3x+2{\displaystyle {x^{2}+3x+2}}.[14]Euler's own definition reads:
Euler also allowed multi-valued functions whose values are determined by an implicit equation.
In 1755, however, in hisInstitutiones calculi differentialis,Euler gave a more general concept of a function:
Medvedev[17]considers that "In essence this is the definition that became known as Dirichlet's definition." Edwards[18]also credits Euler with a general concept of a function and says further that
In hisThéorie Analytique de la Chaleur,[19]Joseph Fourierclaimed that an arbitrary function could be represented by aFourier series.[20]Fourier had a general conception of a function, which included functions that were neithercontinuousnor defined by an analytical expression.[21]Related questions on the nature and representation of functions, arising from the solution of thewave equationfor a vibrating string, had already been the subject of dispute betweenJean le Rond d'Alembertand Euler, and they had a significant impact in generalizing the notion of a function.Luzinobserves that:
During the 19th century, mathematicians started to formalize all the different branches of mathematics. One of the first to do so wasAugustin-Louis Cauchy; his somewhat imprecise results were later made completely rigorous by Weierstrass, who advocated building calculus onarithmeticrather than ongeometry, which favoured Euler's definition over Leibniz's (seearithmetization of analysis). According to Smithies, Cauchy thought of functions as being defined by equations involvingrealorcomplex numbers, and tacitly assumed they were continuous:
Nikolai Lobachevsky[24]andPeter Gustav Lejeune Dirichlet[25]are traditionally credited with independently giving the modern "formal" definition of a function as arelationin which every first element has a unique second element.
Lobachevsky (1834) writes that
while Dirichlet (1837) writes
Eves asserts that "the student of mathematics usually meets the Dirichlet definition of function in his introductory course in calculus."[28]
Dirichlet's claim to this formalization has been disputed byImre Lakatos:
However, Gardiner says
"...it seems to me that Lakatos goes too far, for example, when he asserts that 'there is ample evidence that [Dirichlet] had no idea of [the modern function] concept'."[30]Moreover, as noted above, Dirichlet's paper does appear to include a definition along the lines of what is usually ascribed to him, even though (like Lobachevsky) he states it only for continuous functions of a real variable.
Similarly, Lavine observes that:
Because Lobachevsky and Dirichlet have been credited as among the first to introduce the notion of an arbitrary correspondence, this notion is sometimes referred to as the Dirichlet or Lobachevsky-Dirichlet definition of a function.[32]A general version of this definition was later used byBourbaki(1939), and some in the education community refer to it as the "Dirichlet–Bourbaki" definition of a function.
Dieudonné, who was one of the founding members of the Bourbaki group, credits a precise and general modern definition of a function to Dedekind in his workWas sind und was sollen die Zahlen,[33]which appeared in 1888 but had already been drafted in 1878. Dieudonné observes that instead of confining himself, as in previous conceptions, to real (or complex) functions, Dedekind defines a function as a single-valued mapping between any two sets:
Hardy 1908, pp. 26–28 defined a function as a relation between two variablesxandysuch that "to some values ofxat any rate correspond values ofy." He neither required the function to be defined for all values ofxnor to associate each value ofxto a single value ofy. This broad definition of a function encompasses more relations than are ordinarily considered functions in contemporary mathematics. For example, Hardy's definition includesmultivalued functionsand what incomputability theoryare calledpartial functions.
Logiciansof this time were primarily involved with analyzingsyllogisms(the 2000-year-old Aristotelian forms and otherwise), or asAugustus De Morgan(1847) stated it: "the examination of that part of reasoning which depends upon the manner in which inferences are formed, and the investigation of general maxims and rules for constructing arguments".[35]At this time the notion of (logical) "function" is not explicit, but at least in the work of De Morgan andGeorge Booleit is implied: we see abstraction of the argument forms, the introduction of variables, the introduction of a symbolic algebra with respect to these variables, and some of the notions of set theory.
De Morgan's 1847 "FORMAL LOGIC OR, The Calculus of Inference, Necessary and Probable" observes that "[a]logical truthdepends upon thestructure of the statement, and not upon the particular matters spoken of"; he wastes no time (preface page i) abstracting: "In the form of the proposition, the copula is made as abstract as the terms". He immediately (p. 1) casts what he calls "the proposition" (present-day propositionalfunctionorrelation) into a form such as "X is Y", where the symbols X, "is", and Y represent, respectively, thesubject,copula, andpredicate.While the word "function" does not appear, the notion of "abstraction" is there, "variables" are there, the notion of inclusion in his symbolism "all of the Δ is in the О" (p. 9) is there, and lastly a new symbolism for logical analysis of the notion of "relation" (he uses the word with respect to this example " X)Y " (p. 75) ) is there:
In his 1848The Nature of LogicBoole asserts that "logic . . . is in a more especial sense the science of reasoning by signs", and he briefly discusses the notions of "belonging to" and "class": "An individual may possess a great variety of attributes and thus belonging to a great variety of different classes".[36]Like De Morgan he uses the notion of "variable" drawn from analysis; he gives an example of "represent[ing] the class oxen byxand that of horses byyand the conjunctionandby the sign + . . . we might represent the aggregate class oxen and horses byx+y".[37]
In the context of "the Differential Calculus" Boole defined (circa 1849) the notion of a function as follows:
Eves observes "that logicians have endeavored to push down further the starting level of the definitional development of mathematics and to derive the theory ofsets, orclasses, from a foundation in the logic of propositions and propositional functions".[39]But by the late 19th century the logicians' research into the foundations of mathematics was undergoing a major split. The direction of the first group, theLogicists, can probably be summed up best by Bertrand Russell1903– "to fulfil two objects, first, to show that all mathematics follows from symbolic logic, and secondly to discover, as far as possible, what are the principles of symbolic logic itself."
The second group of logicians, the set-theorists, emerged withGeorg Cantor's "set theory" (1870–1890) but were driven forward partly as a result of Russell's discovery of a paradox that could be derived from Frege's conception of "function", but also as a reaction against Russell's proposed solution.[40]Ernst Zermelo's set-theoretic response was his 1908Investigations in the foundations of set theory I– the firstaxiomatic set theory; here too the notion of "propositional function" plays a role.
In hisAn Investigation into the laws of thoughtBoole now defined a function in terms of a symbolxas follows:
Boole then usedalgebraicexpressions to define both algebraic andlogicalnotions, e.g., 1 −xis logical NOT(x),xyis the logical AND(x,y),x+yis the logical OR(x,y),x(x+y) isxx+xy, and "the special law"xx=x2=x.[42]
In his 1881Symbolic LogicVenn was using the words "logical function" and the contemporary symbolism (x=f(y),y=f−1(x), cf page xxi) plus the circle-diagrams historically associated withVennto describe "class relations",[43]the notions "'quantifying' our predicate", "propositions in respect of their extension", "the relation of inclusion and exclusion of two classes to one another", and "propositional function" (all on p. 10), the bar over a variable to indicate not-x(page 43), etc. Indeed he equated unequivocally the notion of "logical function" with "class" [modern "set"]: "... on the view adopted in this book,f(x) never stands for anything but a logical class. It may be a compound class aggregated of many simple classes; it may be a class indicated by certain inverse logical operations, it may be composed of two groups of classes equal to one another, or what is the same thing, their difference declared equal to zero, that is, a logical equation. But however composed or derived,f(x) with us will never be anything else than a general expression for such logical classes of things as may fairly find a place in ordinary Logic".[44]
Gottlob Frege'sBegriffsschrift(1879) precededGiuseppe Peano(1889), but Peano had no knowledge ofFrege 1879until after he had published his 1889.[45]Both writers strongly influencedRussell (1903). Russell in turn influenced much of 20th-century mathematics and logic through hisPrincipia Mathematica(1913) jointly authored withAlfred North Whitehead.
At the outset Frege abandons the traditional "conceptssubjectandpredicate", replacing them withargumentandfunctionrespectively, which he believes "will stand the test of time. It is easy to see how regarding a content as a function of an argument leads to the formation of concepts. Furthermore, the demonstration of the connection between the meanings of the wordsif, and, not, or, there is, some, all,and so forth, deserves attention".[46]
Frege begins his discussion of "function" with an example: Begin with the expression[47]"Hydrogen is lighter than carbon dioxide". Now remove the sign for hydrogen (i.e., the word "hydrogen") and replace it with the sign for oxygen (i.e., the word "oxygen"); this makes a second statement. Do this again (using either statement) and substitute the sign for nitrogen (i.e., the word "nitrogen") and note that "This changes the meaning in such a way that "oxygen" or "nitrogen" enters into the relations in which "hydrogen" stood before".[48]There are three statements:
Now observe in all three a "stable component, representing the totality of [the] relations";[49]call thisthe function, i.e.,
Frege calls theargumentof the function "[t]he sign [e.g., hydrogen, oxygen, or nitrogen], regarded as replaceable by others that denotes the object standing in these relations".[50]He notes that we could have derived the function as "Hydrogen is lighter than . . .." as well, with an argument position on theright; the exact observation is made by Peano (see more below). Finally, Frege allows for the case of two (or more) arguments. For example, remove "carbon dioxide" to yield the invariant part (the function) as:
The one-argument function Frege generalizes into the form Φ(A) where A is the argument and Φ( ) represents the function, whereas the two-argument function he symbolizes as Ψ(A, B) with A and B the arguments and Ψ( , ) the function and cautions that "in general Ψ(A, B) differs from Ψ(B, A)". Using his unique symbolism he translates for the reader the following symbolism:
Peano defined the notion of "function" in a manner somewhat similar to Frege, but without the precision.[52]First Peano defines the sign "K meansclass, or aggregate of objects",[53]the objects of which satisfy three simple equality-conditions,[54]a=a, (a=b) = (b=a), IF ((a=b) AND (b=c)) THEN (a=c). He then introduces φ, "a sign or an aggregate of signs such that ifxis an object of the classs, the expression φxdenotes a new object". Peano adds two conditions on these new objects: First, that the three equality-conditions hold for the objects φx; secondly, that "ifxandyare objects of classsand ifx=y, we assume it is possible to deduce φx= φy".[55]Given all these conditions are met, φ is a "function presign". Likewise he identifies a "function postsign". For example ifφis the function presigna+, then φxyieldsa+x, or if φ is the function postsign +athenxφ yieldsx+a.[54]
While the influence of Cantor and Peano was paramount,[56]in Appendix A "The Logical and Arithmetical Doctrines of Frege" ofThe Principles of Mathematics, Russell arrives at a discussion of Frege's notion offunction, "...a point in which Frege's work is very important, and requires careful examination".[57]In response to his 1902 exchange of letters with Frege about the contradiction he discovered in Frege'sBegriffsschriftRussell tacked this section on at the last moment.
For Russell the bedeviling notion is that ofvariable: "6. Mathematical propositions are not only characterized by the fact that they assert implications, but also by the fact that they containvariables. The notion of the variable is one of the most difficult with which logic has to deal. For the present, I openly wish to make it plain that there are variables in all mathematical propositions, even where at first sight they might seem to be absent. . . . We shall find always, in all mathematical propositions, that the wordsanyorsomeoccur; and these words are the marks of a variable and a formal implication".[58]
As expressed by Russell "the process of transforming constants in a proposition into variables leads to what is called generalization, and gives us, as it were, the formal essence of a proposition ... So long as any term in our proposition can be turned into a variable, our proposition can be generalized; and so long as this is possible, it is the business of mathematics to do it";[59]these generalizations Russell namedpropositional functions.[60]Indeed he cites and quotes from Frege'sBegriffsschriftand presents a vivid example from Frege's 1891Function und Begriff: That "the essence of the arithmetical function 2x3+xis what is left when thexis taken away, i.e., in the above instance 2( )3+ ( ). The argumentxdoes not belong to the function but the two taken together make the whole".[57]Russell agreed with Frege's notion of "function" in one sense: "He regards functions – and in this I agree with him – as more fundamental thanpredicatesandrelations" but Russell rejected Frege's "theory of subject and assertion", in particular "he thinks that, if a termaoccurs in a proposition, the proposition can always be analysed intoaand an assertion abouta".[57]
Russell would carry his ideas forward in his 1908Mathematical logical as based on the theory of typesand into his and Whitehead's 1910–1913Principia Mathematica. By the time ofPrincipia MathematicaRussell, like Frege, considered the propositional function fundamental: "Propositional functions are the fundamental kind from which the more usual kinds of function, such as "sinx" or logxor "the father ofx" are derived. These derivative functions . . . are called "descriptive functions". The functions of propositions . . . are a particular case of propositional functions".[61]
Propositional functions: Because his terminology is different from contemporary usage, the reader may be confused by Russell's "propositional function". An example may help. Russell writes a propositional function in its raw form, e.g., as φŷ: "ŷ is hurt". (Observe the circumflex or "hat" over the variable y.) For our example, we will assign just four values to the variable ŷ: "Bob", "This bird", "Emily the rabbit", and "y". Substitution of one of these values for the variable ŷ yields a proposition; this proposition is called a "value" of the propositional function. In our example there are four values of the propositional function, e.g., "Bob is hurt", "This bird is hurt", "Emily the rabbit is hurt" and "y is hurt." A proposition, if it is significant—i.e., if its truth is determinate—has a truth-value of truth or falsity. If a proposition's truth value is "truth" then the variable's value is said to satisfy the propositional function. Finally, per Russell's definition, "a class [set] is all objects satisfying some propositional function" (p. 23). Note the word "all" – this is how the contemporary notions of "For all ∀" and "there exists at least one instance ∃" enter the treatment (p. 15).
To continue the example: Suppose (from outside the mathematics/logic) one determines that the proposition "Bob is hurt" has a truth value of "falsity", "This bird is hurt" has a truth value of "truth", "Emily the rabbit is hurt" has an indeterminate truth value because "Emily the rabbit" doesn't exist, and "y is hurt" is ambiguous as to its truth value because the argument y itself is ambiguous. While the two propositions "Bob is hurt" and "This bird is hurt" are significant (both have truth values), only the value "This bird" of the variable ŷ satisfies the propositional function φŷ: "ŷ is hurt". When one goes to form the class α: φŷ: "ŷ is hurt", only "This bird" is included, given the four values "Bob", "This bird", "Emily the rabbit" and "y" for the variable ŷ and their respective truth-values: falsity, truth, indeterminate, ambiguous.
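Russell's "class of all objects satisfying some propositional function" translates naturally into a predicate and a set comprehension. A minimal sketch in Python (the domain and truth assignments below are the illustrative ones from this example, not Russell's notation; returning None stands in for an indeterminate or ambiguous truth value):

```python
# The propositional function phi(y): "y is hurt", modeled as a predicate.
def phi(y):
    truth = {"Bob": False, "This bird": True}   # the other two values have no determinate truth value
    return truth.get(y)                         # None models "indeterminate" / "ambiguous"

domain = ["Bob", "This bird", "Emily the rabbit", "y"]

# Russell's class: all objects of the domain that satisfy phi.
the_class = {y for y in domain if phi(y) is True}
print(the_class)   # {'This bird'}
```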
Russell definesfunctions of propositions with arguments, andtruth-functionsf(p).[62]For example, suppose one were to form the "function of propositions with arguments"p1: "NOT(p) ANDq" and assign its variables the values ofp: "Bob is hurt" andq: "This bird is hurt". (We are restricted to the logical linkages NOT, AND, OR and IMPLIES, and we can only assign "significant" propositions to the variablespandq). Then the "function of propositions with arguments" isp1: NOT("Bob is hurt") AND "This bird is hurt". To determine the truth value of this "function of propositions with arguments" we submit it to a "truth function", e.g.,f(p1):f( NOT("Bob is hurt") AND "This bird is hurt" ), which yields a truth value of "truth".
The notion of a "many-one functional relation": Russell first discusses the notion of "identity", then defines a descriptive function (pages 30ff) as the unique value ιx that satisfies the (2-variable) propositional function (i.e., "relation") φŷ.
Russell symbolizes the descriptive function as "the object standing in relation to y": R'y =DEF (ιx)(x R y). Russell repeats that "R'y is a function of y, but not a propositional function [sic]; we shall call it a descriptive function. All the ordinary functions of mathematics are of this kind. Thus in our notation "sin y" would be written "sin'y", and "sin" would stand for the relation sin'y has to y".[64]
David Hilbert set himself the goal of "formalizing" classical mathematics "as a formal axiomatic theory, and this theory shall be proved to be consistent, i.e., free from contradiction".[65] In Hilbert 1927 The Foundations of Mathematics he frames the notion of function in terms of the existence of an "object":
Hilbert then illustrates the three ways in which the ε-function is to be used: firstly to express the "for all" and "there exists" notions, secondly to represent the "object of which [a proposition] holds", and lastly how to cast it into the choice function.
Recursion theory and computability: But the unexpected outcome of Hilbert's and his studentBernays's effort was failure; seeGödel's incompleteness theoremsof 1931. At about the same time, in an effort to solve Hilbert'sEntscheidungsproblem, mathematicians set about to define what was meant by an "effectively calculable function" (Alonzo Church1936), i.e., "effective method" or "algorithm", that is, an explicit, step-by-step procedure that would succeed in computing a function. Various models for algorithms appeared, in rapid succession, including Church'slambda calculus(1936),Stephen Kleene'sμ-recursive functions(1936) andAlan Turing's (1936–7) notion of replacing human "computers" with utterly-mechanical "computing machines" (seeTuring machines). It was shown that all of these models could compute the same class ofcomputable functions.Church's thesisholds that this class of functions exhausts all thenumber-theoretic functionsthat can be calculated by an algorithm. The outcomes of these efforts were vivid demonstrations that, in Turing's words, "there can be no general process for determining whether a given formulaUof the functional calculusK[Principia Mathematica] is provable";[67]see more atIndependence (mathematical logic)andComputability theory.
Set theory began with the work of the logicians with the notion of "class" (modern "set") for exampleDe Morgan (1847),Jevons(1880),Venn (1881),Frege (1879)andPeano (1889). It was given a push byGeorg Cantor's attempt to define the infinite in set-theoretic treatment (1870–1890) and a subsequent discovery of anantinomy(contradiction, paradox) in this treatment (Cantor's paradox), by Russell's discovery (1902) of an antinomy in Frege's 1879 (Russell's paradox), by the discovery of more antinomies in the early 20th century (e.g., the 1897Burali-Forti paradoxand the 1905Richard paradox), and by resistance to Russell's complex treatment of logic[68]and dislike of hisaxiom of reducibility[69](1908, 1910–1913) that he proposed as a means to evade the antinomies.
In 1902 Russell sent a letter to Frege pointing out that Frege's 1879Begriffsschriftallowed a function to be an argument of itself: "On the other hand, it may also be that the argument is determinate and the function indeterminate . . .."[70]From this unconstrained situation Russell was able to form a paradox:
Frege responded promptly that "Your discovery of the contradiction caused me the greatest surprise and, I would almost say, consternation, since it has shaken the basis on which I intended to build arithmetic".[72]
From this point forward development of the foundations of mathematics became an exercise in how to dodge "Russell's paradox", framed as it was in "the bare [set-theoretic] notions of set and element".[73]
The notion of "function" appears as Zermelo's axiom III—the Axiom of Separation (Axiom der Aussonderung). This axiom constrains us to use a propositional function Φ(x) to "separate" asubsetMΦfrom a previously formed setM:
As there is nouniversal set— sets originate by way of Axiom II from elements of (non-set)domain B– "...this disposes of the Russell antinomy so far as we are concerned".[75]But Zermelo's "definite criterion" is imprecise, and is fixed byWeyl,Fraenkel,Skolem, andvon Neumann.[76]
In fact Skolem in his 1922 referred to this "definite criterion" or "property" as a "definite proposition":
van Heijenoortsummarizes:
In this quote the reader may observe a shift in terminology: nowhere is mentioned the notion of "propositional function", but rather one sees the words "formula", "predicate calculus", "predicate", and "logical calculus." This shift in terminology is discussed more in the section that covers "function" in contemporary set theory.
The history of the notion of "ordered pair" is not clear. As noted above, Frege (1879) proposed an intuitive ordering in his definition of a two-argument function Ψ(A, B).Norbert Wienerin his 1914 (see below) observes that his own treatment essentially "revert(s) toSchröder'streatment of a relation as a class of ordered couples".[79]Russell (1903)considered the definition of a relation (such as Ψ(A, B)) as a "class of couples" but rejected it:
By 1910–1913 andPrincipia MathematicaRussell had given up on the requirement for anintensionaldefinition of a relation, stating that "mathematics is always concerned with extensions rather than intensions" and "Relations, like classes, are to be taken inextension".[81]To demonstrate the notion of a relation inextensionRussell now embraced the notion ofordered couple: "We may regard a relation ... as a class of couples ... the relation determined by φ(x, y) is the class of couples (x, y) for which φ(x, y) is true".[82]In a footnote he clarified his notion and arrived at this definition:
But he goes on to say that he would not introduce the ordered couples further into his "symbolic treatment"; he proposes his "matrix" and his unpopular axiom of reducibility in their place.
An attempt to solve the problem of the antinomies led Russell to propose his "doctrine of types" in an appendix B of his 1903 The Principles of Mathematics.[83] In a few years he would refine this notion and propose in his 1908 The Theory of Types two axioms of reducibility, the purpose of which was to reduce (single-variable) propositional functions and (dual-variable) relations to a "lower" form (and ultimately into a completely extensional form); he and Alfred North Whitehead would carry this treatment over to Principia Mathematica 1910–1913 with a further refinement called "a matrix".[84] The first axiom is *12.1; the second is *12.11. To quote Wiener, the second axiom *12.11 "is involved only in the theory of relations".[85] Both axioms, however, were met with skepticism and resistance; see more at Axiom of reducibility. By 1914 Norbert Wiener, using Whitehead and Russell's symbolism, eliminated axiom *12.11 (the "two-variable" (relational) version of the axiom of reducibility) by expressing a relation as an ordered pair using the null set. At approximately the same time, Hausdorff (1914, p. 32) gave the definition of the ordered pair (a, b) as {{a, 1}, {b, 2}}. A few years later Kuratowski (1921) offered a definition that has been widely used ever since, namely {{a, b}, {a}}.[86] As noted by Suppes (1960), "This definition . . . was historically important in reducing the theory of relations to the theory of sets".[87]
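Kuratowski's definition can be checked mechanically: encode each pair as a set of sets and verify that equality of encoded pairs tracks component-wise equality, while a plain two-element set does not. A small sketch in Python, with frozensets standing in for sets (the helper name is ours):

```python
def kuratowski_pair(a, b):
    # (a, b) := {{a}, {a, b}}
    return frozenset([frozenset([a]), frozenset([a, b])])

assert kuratowski_pair(1, 2) == kuratowski_pair(1, 2)   # equal pairs have equal encodings
assert kuratowski_pair(1, 2) != kuratowski_pair(2, 1)   # order matters for the encoded pair ...
assert frozenset([1, 2]) == frozenset([2, 1])           # ... but not for a bare two-element set
```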
Observe that while Wiener "reduced" the relational *12.11 form of the axiom of reducibility hedid notreduce nor otherwise change the propositional-function form *12.1; indeed he declared this "essential to the treatment of identity, descriptions, classes and relations".[88]
Where exactly the general notion of "function" as a many-one correspondence derives from is unclear. Russell in his 1920 Introduction to Mathematical Philosophy states that "It should be observed that all mathematical functions result from one-many [sic – contemporary usage is many-one] relations . . . Functions in this sense are descriptive functions".[89] A reasonable possibility is the Principia Mathematica notion of "descriptive function" – R'y =DEF (ιx)(x R y): "the singular object that has a relation R to y". Whatever the case, by 1924, Moses Schönfinkel expressed the notion, claiming it to be "well known":
According toWillard Quine,Schönfinkel 1924"provide[s] for ... the whole sweep of abstract set theory. The crux of the matter is that Schönfinkel lets functions stand as arguments. For Schönfinkel, substantially as for Frege, classes are special sorts of functions. They are propositional functions, functions whose values are truth values. All functions, propositional and otherwise, are for Schönfinkel one-place functions".[91]Remarkably, Schönfinkel reduces all mathematics to an extremely compactfunctional calculusconsisting of only three functions: Constancy, fusion (i.e., composition), and mutual exclusivity. Quine notes thatHaskell Curry(1958) carried this work forward "under the head ofcombinatory logic".[92]
By 1925Abraham Fraenkel(1922) andThoralf Skolem(1922) had amended Zermelo's set theory of 1908. But von Neumann was not convinced that this axiomatization could not lead to the antinomies.[93]So he proposed his own theory, his 1925An axiomatization of set theory.[94]It explicitly contains a "contemporary", set-theoretic version of the notion of "function":
At the outset he begins with I-objects and II-objects, two objects A and B that are I-objects (first axiom), and two types of "operations" that assume ordering as a structural property[96] of the resulting objects [x, y] and (x, y). The two "domains of objects" are called "arguments" (I-objects) and "functions" (II-objects); where they overlap are the "argument functions" (he calls them I-II objects). He introduces two "universal two-variable operations" – (i) the operation [x, y]: ". . . read 'the value of the function x for the argument y' . . . it itself is a type I object", and (ii) the operation (x, y): ". . . (read 'the ordered pair x, y') whose variables x and y must both be arguments and that itself produces an argument (x, y). Its most important property is that x1 = x2 and y1 = y2 follow from (x1, y1) = (x2, y2)". To clarify the function pair he notes that "Instead of f(x) we write [f, x] to indicate that f, just like x, is to be regarded as a variable in this procedure". To avoid the "antinomies of naive set theory, in Russell's first of all . . . we must forgo treating certain functions as arguments".[97] He adopts a notion from Zermelo to restrict these "certain functions".[98]
Suppes[99]observes that von Neumann's axiomatization was modified by Bernays "in order to remain nearer to the original Zermelo system . . . He introduced two membership relations: one between sets, and one between sets and classes". Then Gödel [1940][100]further modified the theory: "his primitive notions are those of set, class and membership (although membership alone is sufficient)".[101]This axiomatization is now known asvon Neumann–Bernays–Gödel set theory.
In 1939, the collaboration Nicolas Bourbaki, in addition to giving the well-known ordered-pair definition of a function as a certain subset of the cartesian product E × F, gave the following:
"LetEandFbe two sets, which may or may not be distinct. A relation between a variable elementxofEand a variable elementyofFis called a functional relation inyif, for allx∈E, there exists a uniquey∈Fwhich is in the given relation withx.
We give the name of function to the operation which in this way associates with every elementx∈Ethe elementy∈Fwhich is in the given relation withx, and the function is said to be determined by the given functional relation. Two equivalent functional relations determine the same function."
Both axiomatic and naive forms of Zermelo's set theory as modified by Fraenkel (1922) and Skolem (1922) define "function" as a relation, define a relation as a set of ordered pairs, and define an ordered pair as a set of two "dissymmetric" sets.
While the reader ofSuppes (1960)Axiomatic Set TheoryorHalmos (1970)Naive Set Theoryobserves the use of function-symbolism in theaxiom of separation, e.g., φ(x) (in Suppes) and S(x) (in Halmos), they will see no mention of "proposition" or even "first order predicate calculus". In their place are "expressionsof the object language", "atomic formulae", "primitive formulae", and "atomic sentences".
Kleene (1952)defines the words as follows: "In word languages, a proposition is expressed by a sentence. Then a 'predicate' is expressed by an incomplete sentence or sentence skeleton containing an open place. For example, "___ is a man" expresses a predicate ... The predicate is apropositional function of one variable. Predicates are often called 'properties' ... The predicate calculus will treat of the logic of predicates in this general sense of 'predicate', i.e., as propositional function".[102]
In 1954, Bourbaki, on p. 76 in Chapitre II of Théorie des ensembles (theory of sets), gave a definition of a function as a triple f = (F, A, B).[103] Here F is a functional graph, meaning a set of pairs where no two pairs have the same first member. On p. 77 (op. cit.) Bourbaki states (literal translation): "Often we shall use, in the remainder of this Treatise, the word function instead of functional graph."
Suppes (1960)inAxiomatic Set Theory, formally defines arelation(p. 57) as a set of pairs, and afunction(p. 86) as a relation where no two pairs have the same first member.
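Suppes's two definitions translate directly into a test on a set of ordered pairs. A sketch in Python (the function and variable names are ours):

```python
def is_function(relation):
    """A relation (a set of ordered pairs) is a function iff no two pairs share a first member."""
    firsts = [x for (x, _) in relation]
    return len(firsts) == len(set(firsts))

r1 = {(1, "a"), (2, "b"), (3, "a")}   # a function: first members 1, 2, 3 are all distinct
r2 = {(1, "a"), (1, "b")}             # not a function: 1 is paired with two different values
print(is_function(r1), is_function(r2))   # True False
```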
The reason for the disappearance of the words "propositional function", e.g., in Suppes (1960) and Halmos (1970), is explained by Tarski (1946), along with further discussion of the terminology:
For his partTarskicalls the relational form of function a "FUNCTIONAL RELATION or simply a FUNCTION".[105]After a discussion of this "functional relation" he asserts that:
See more about "truth under an interpretation" atAlfred Tarski.
|
https://en.wikipedia.org/wiki/History_of_the_function_concept
|
VisualRank is a system for finding and ranking images by analysing and comparing their content, rather than searching image names, Web links or other text. Google scientists made their VisualRank work public in a paper describing the application of PageRank to Google image search, presented at the International World Wide Web Conference in Beijing in 2008.[1][2]
Both computer vision techniques and locality-sensitive hashing (LSH) are used in the VisualRank algorithm. Consider an image search initiated by a text query. An existing search technique based on image metadata and surrounding text is used to retrieve the initial result candidates (PageRank), which, along with other images in the index, are clustered in a graph according to their (precomputed) similarity. Centrality is then measured on the clustering, which will return the most canonical image(s) with respect to the query. The idea is that agreement between users of the web about the image and its related concepts will result in those images being deemed more similar. VisualRank is defined iteratively by VR=S∗×VR{\displaystyle VR=S^{*}\times VR}, where S∗{\displaystyle S^{*}} is the image similarity matrix. As matrices are used, eigenvector centrality is the measure applied, with repeated multiplication of VR{\displaystyle VR} and S∗{\displaystyle S^{*}} producing the desired eigenvector. Clearly, the image similarity measure is crucial to the performance of VisualRank, since it determines the underlying graph structure.
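The fixed point VR = S* × VR is the leading eigenvector of the (suitably normalized) similarity matrix, and repeated multiplication is exactly power iteration. A toy sketch with NumPy (the 4 × 4 similarity matrix and the column normalization are illustrative assumptions; any damping/bias term used in the published system is omitted here):

```python
import numpy as np

# Toy symmetric image-similarity matrix for 4 images (made-up values).
S = np.array([[1.0, 0.8, 0.1, 0.0],
              [0.8, 1.0, 0.2, 0.1],
              [0.1, 0.2, 1.0, 0.7],
              [0.0, 0.1, 0.7, 1.0]])

S_star = S / S.sum(axis=0, keepdims=True)   # normalize columns so each sums to 1

vr = np.full(4, 0.25)                       # start from uniform scores
for _ in range(100):                        # repeated multiplication = power iteration
    vr = S_star @ vr
    vr /= vr.sum()

print(np.argsort(-vr))                      # image indices ranked by VisualRank score
```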
The main VisualRank system begins with local feature vectors being extracted from images using the scale-invariant feature transform (SIFT). Local feature descriptors are used instead of color histograms because they allow similarity to be considered between images with potential rotation, scale, and perspective transformations. Locality-sensitive hashing is then applied to these feature vectors using the p-stable distribution scheme. In addition, LSH amplification using AND/OR constructions is applied. As part of the applied scheme, a Gaussian distribution is used under the ℓ2{\displaystyle \ell _{2}} norm.
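Under the ℓ2 norm, each p-stable hash has the form h(v) = ⌊(a·v + b)/w⌋ with a drawn from a Gaussian distribution; an AND-construction concatenates k such hashes into one bucket key, and an OR-construction repeats this over L tables. A minimal NumPy sketch (the descriptor dimension, bucket width w, and the values of k and L are arbitrary choices for illustration, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)
d, w = 128, 4.0            # SIFT-like descriptor dimension and bucket width
k, L = 8, 10               # AND: k hashes per table; OR: L independent tables

A = [rng.normal(size=(k, d)) for _ in range(L)]    # Gaussian projections (p-stable for l2)
B = [rng.uniform(0, w, size=k) for _ in range(L)]  # random offsets

def lsh_keys(v):
    """One bucket key per table for a descriptor v."""
    return [tuple(np.floor((A[t] @ v + B[t]) / w).astype(int)) for t in range(L)]

v1 = rng.normal(size=d)
v2 = v1 + 0.01 * rng.normal(size=d)   # a near-duplicate descriptor
# Near-duplicates tend to collide in at least one table (the OR-construction).
print(any(k1 == k2 for k1, k2 in zip(lsh_keys(v1), lsh_keys(v2))))
```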
|
https://en.wikipedia.org/wiki/VisualRank
|
A generative adversarial network (GAN) is a class of machine learning frameworks and a prominent approach to generative artificial intelligence. The concept was initially developed by Ian Goodfellow and his colleagues in June 2014.[1] In a GAN, two neural networks compete with each other in the form of a zero-sum game, where one agent's gain is another agent's loss.
Given a training set, this technique learns to generate new data with the same statistics as the training set. For example, a GAN trained on photographs can generate new photographs that look at least superficially authentic to human observers, having many realistic characteristics. Though originally proposed as a form ofgenerative modelforunsupervised learning, GANs have also proved useful forsemi-supervised learning,[2]fullysupervised learning,[3]andreinforcement learning.[4]
The core idea of a GAN is based on the "indirect" training through the discriminator, another neural network that can tell how "realistic" the input seems, which itself is also being updated dynamically.[5]This means that the generator is not trained to minimize the distance to a specific image, but rather to fool the discriminator. This enables the model to learn in an unsupervised manner.
GANs are similar tomimicryinevolutionary biology, with anevolutionary arms racebetween both networks.
The original GAN is defined as the followinggame:[1]
Eachprobability space(Ω,μref){\displaystyle (\Omega ,\mu _{\text{ref}})}defines a GAN game.
There are 2 players: generator and discriminator.
The generator'sstrategy setisP(Ω){\displaystyle {\mathcal {P}}(\Omega )}, the set of all probability measuresμG{\displaystyle \mu _{G}}onΩ{\displaystyle \Omega }.
The discriminator's strategy set is the set ofMarkov kernelsμD:Ω→P[0,1]{\displaystyle \mu _{D}:\Omega \to {\mathcal {P}}[0,1]}, whereP[0,1]{\displaystyle {\mathcal {P}}[0,1]}is the set of probability measures on[0,1]{\displaystyle [0,1]}.
The GAN game is azero-sum game, with objective functionL(μG,μD):=Ex∼μref,y∼μD(x)[lny]+Ex∼μG,y∼μD(x)[ln(1−y)].{\displaystyle L(\mu _{G},\mu _{D}):=\operatorname {E} _{x\sim \mu _{\text{ref}},y\sim \mu _{D}(x)}[\ln y]+\operatorname {E} _{x\sim \mu _{G},y\sim \mu _{D}(x)}[\ln(1-y)].}The generator aims to minimize the objective, and the discriminator aims to maximize the objective.
The generator's task is to approachμG≈μref{\displaystyle \mu _{G}\approx \mu _{\text{ref}}}, that is, to match its own output distribution as closely as possible to the reference distribution. The discriminator's task is to output a value close to 1 when the input appears to be from the reference distribution, and to output a value close to 0 when the input looks like it came from the generator distribution.
Thegenerativenetworkgenerates candidates while thediscriminativenetworkevaluates them.[1]The contest operates in terms of data distributions. Typically, the generative network learns to map from alatent spaceto a data distribution of interest, while the discriminative network distinguishes candidates produced by the generator from the true data distribution. The generative network's training objective is to increase the error rate of the discriminative network (i.e., "fool" the discriminator network by producing novel candidates that the discriminator thinks are not synthesized (are part of the true data distribution)).[1][6]
A known dataset serves as the initial training data for the discriminator. Training involves presenting it with samples from the training dataset until it achieves acceptable accuracy. The generator is trained based on whether it succeeds in fooling the discriminator. Typically, the generator is seeded with randomized input that is sampled from a predefinedlatent space(e.g. amultivariate normal distribution). Thereafter, candidates synthesized by the generator are evaluated by the discriminator. Independentbackpropagationprocedures are applied to both networks so that the generator produces better samples, while the discriminator becomes more skilled at flagging synthetic samples.[7]When used for image generation, the generator is typically adeconvolutional neural network, and the discriminator is aconvolutional neural network.
GANs areimplicit generative models,[8]which means that they do not explicitly model the likelihood function nor provide a means for finding the latent variable corresponding to a given sample, unlike alternatives such asflow-based generative model.
Compared to fully visible belief networks such as WaveNet and PixelRNN, and autoregressive models in general, GANs can generate one complete sample in a single pass, rather than requiring multiple passes through the network.
Compared toBoltzmann machinesand linearICA, there is no restriction on the type of function used by the network.
Since neural networks are universal approximators, GANs are asymptotically consistent. Variational autoencoders might be universal approximators, but this had not been proven as of 2017.[9]
This section provides some of the mathematical theory behind these methods.
Inmodern probability theorybased onmeasure theory, a probability space also needs to be equipped with aσ-algebra. As a result, a more rigorous definition of the GAN game would make the following changes:
Each probability space(Ω,B,μref){\displaystyle (\Omega ,{\mathcal {B}},\mu _{\text{ref}})}defines a GAN game.
The generator's strategy set isP(Ω,B){\displaystyle {\mathcal {P}}(\Omega ,{\mathcal {B}})}, the set of all probability measuresμG{\displaystyle \mu _{G}}on the measure-space(Ω,B){\displaystyle (\Omega ,{\mathcal {B}})}.
The discriminator's strategy set is the set ofMarkov kernelsμD:(Ω,B)→P([0,1],B([0,1])){\displaystyle \mu _{D}:(\Omega ,{\mathcal {B}})\to {\mathcal {P}}([0,1],{\mathcal {B}}([0,1]))}, whereB([0,1]){\displaystyle {\mathcal {B}}([0,1])}is theBorel σ-algebraon[0,1]{\displaystyle [0,1]}.
Since issues of measurability never arise in practice, these will not concern us further.
In the most generic version of the GAN game described above, the strategy set for the discriminator contains all Markov kernelsμD:Ω→P[0,1]{\displaystyle \mu _{D}:\Omega \to {\mathcal {P}}[0,1]}, and the strategy set for the generator contains arbitraryprobability distributionsμG{\displaystyle \mu _{G}}onΩ{\displaystyle \Omega }.
However, as shown below, the optimal discriminator strategy against anyμG{\displaystyle \mu _{G}}is deterministic, so there is no loss of generality in restricting the discriminator's strategies to deterministic functionsD:Ω→[0,1]{\displaystyle D:\Omega \to [0,1]}. In most applications,D{\displaystyle D}is adeep neural networkfunction.
As for the generator, whileμG{\displaystyle \mu _{G}}could theoretically be any computable probability distribution, in practice, it is usually implemented as apushforward:μG=μZ∘G−1{\displaystyle \mu _{G}=\mu _{Z}\circ G^{-1}}. That is, start with a random variablez∼μZ{\displaystyle z\sim \mu _{Z}}, whereμZ{\displaystyle \mu _{Z}}is a probability distribution that is easy to compute (such as theuniform distribution, or theGaussian distribution), then define a functionG:ΩZ→Ω{\displaystyle G:\Omega _{Z}\to \Omega }. Then the distributionμG{\displaystyle \mu _{G}}is the distribution ofG(z){\displaystyle G(z)}.
Consequently, the generator's strategy is usually defined as justG{\displaystyle G}, leavingz∼μZ{\displaystyle z\sim \mu _{Z}}implicit. In this formalism, the GAN game objective isL(G,D):=Ex∼μref[lnD(x)]+Ez∼μZ[ln(1−D(G(z)))].{\displaystyle L(G,D):=\operatorname {E} _{x\sim \mu _{\text{ref}}}[\ln D(x)]+\operatorname {E} _{z\sim \mu _{Z}}[\ln(1-D(G(z)))].}
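In this parametrization, training alternates gradient steps on D and G. A minimal PyTorch sketch on a toy one-dimensional reference distribution (the network sizes, learning rates, and data are illustrative; the generator step uses the non-saturating loss discussed later rather than the literal minimax objective):

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))                 # z -> x
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())   # x -> [0, 1]
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(2000):
    x_real = torch.randn(64, 1) * 0.5 + 2.0      # toy reference distribution mu_ref
    z = torch.randn(64, 8)                       # z ~ mu_Z
    x_fake = G(z)

    # Discriminator step: ascend ln D(x) + ln(1 - D(G(z))).
    loss_D = -(torch.log(D(x_real)) + torch.log(1 - D(x_fake.detach()))).mean()
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator step (non-saturating variant): ascend ln D(G(z)).
    loss_G = -torch.log(D(G(z))).mean()
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
```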
The GAN architecture has two main components. One is casting optimization into a game, of formminGmaxDL(G,D){\displaystyle \min _{G}\max _{D}L(G,D)}, which is different from the usual kind of optimization, of formminθL(θ){\displaystyle \min _{\theta }L(\theta )}. The other is the decomposition ofμG{\displaystyle \mu _{G}}intoμZ∘G−1{\displaystyle \mu _{Z}\circ G^{-1}}, which can be understood as a reparametrization trick.
To see its significance, one must compare GAN with previous methods for learning generative models, which were plagued with "intractable probabilistic computations that arise in maximum likelihood estimation and related strategies".[1]
At the same time, Kingma and Welling[10]and Rezende et al.[11]developed the same idea of reparametrization into a general stochastic backpropagation method. Among its first applications was thevariational autoencoder.
In the original paper, as well as most subsequent papers, it is usually assumed that the generatormoves first, and the discriminatormoves second, thus giving the following minimax game:minμGmaxμDL(μG,μD):=Ex∼μref,y∼μD(x)[lny]+Ex∼μG,y∼μD(x)[ln(1−y)].{\displaystyle \min _{\mu _{G}}\max _{\mu _{D}}L(\mu _{G},\mu _{D}):=\operatorname {E} _{x\sim \mu _{\text{ref}},y\sim \mu _{D}(x)}[\ln y]+\operatorname {E} _{x\sim \mu _{G},y\sim \mu _{D}(x)}[\ln(1-y)].}
If both the generator's and the discriminator's strategy sets are spanned by a finite number of strategies, then by theminimax theorem,minμGmaxμDL(μG,μD)=maxμDminμGL(μG,μD){\displaystyle \min _{\mu _{G}}\max _{\mu _{D}}L(\mu _{G},\mu _{D})=\max _{\mu _{D}}\min _{\mu _{G}}L(\mu _{G},\mu _{D})}that is, the move order does not matter.
However, since the strategy sets are both not finitely spanned, the minimax theorem does not apply, and the idea of an "equilibrium" becomes delicate. To wit, there are the following different concepts of equilibrium:
For general games, these equilibria do not have to agree, or even to exist. For the original GAN game, these equilibria all exist, and are all equal. However, for more general GAN games, these do not necessarily exist, or agree.[12]
The original GAN paper proved the following two theorems:[1]
Theorem(the optimal discriminator computes the Jensen–Shannon divergence)—For any fixed generator strategyμG{\displaystyle \mu _{G}}, let the optimal reply beD∗=argmaxDL(μG,D){\displaystyle D^{*}=\arg \max _{D}L(\mu _{G},D)}, then
D∗(x)=dμrefd(μref+μG)L(μG,D∗)=2DJS(μref;μG)−2ln2{\displaystyle {\begin{aligned}D^{*}(x)&={\frac {d\mu _{\text{ref}}}{d(\mu _{\text{ref}}+\mu _{G})}}\\[6pt]L(\mu _{G},D^{*})&=2D_{JS}(\mu _{\text{ref}};\mu _{G})-2\ln 2\end{aligned}}}
where the derivative is theRadon–Nikodym derivative, andDJS{\displaystyle D_{JS}}is theJensen–Shannon divergence.
By Jensen's inequality,
Ex∼μref,y∼μD(x)[lny]≤Ex∼μref[lnEy∼μD(x)[y]]{\displaystyle \operatorname {E} _{x\sim \mu _{\text{ref}},y\sim \mu _{D}(x)}[\ln y]\leq \operatorname {E} _{x\sim \mu _{\text{ref}}}[\ln \operatorname {E} _{y\sim \mu _{D}(x)}[y]]}and similarly for the other term. Therefore, the optimal reply can be deterministic, i.e.μD(x)=δD(x){\displaystyle \mu _{D}(x)=\delta _{D(x)}}for some functionD:Ω→[0,1]{\displaystyle D:\Omega \to [0,1]}, in which case
L(μG,μD):=Ex∼μref[lnD(x)]+Ex∼μG[ln(1−D(x))].{\displaystyle L(\mu _{G},\mu _{D}):=\operatorname {E} _{x\sim \mu _{\text{ref}}}[\ln D(x)]+\operatorname {E} _{x\sim \mu _{G}}[\ln(1-D(x))].}
To define suitable density functions, we define a base measureμ:=μref+μG{\displaystyle \mu :=\mu _{\text{ref}}+\mu _{G}}, which allows us to take the Radon–Nikodym derivatives
ρref=dμrefdμρG=dμGdμ{\displaystyle \rho _{\text{ref}}={\frac {d\mu _{\text{ref}}}{d\mu }}\quad \rho _{G}={\frac {d\mu _{G}}{d\mu }}}withρref+ρG=1{\displaystyle \rho _{\text{ref}}+\rho _{G}=1}.
We then have
L(μG,μD):=∫μ(dx)[ρref(x)ln(D(x))+ρG(x)ln(1−D(x))].{\displaystyle L(\mu _{G},\mu _{D}):=\int \mu (dx)\left[\rho _{\text{ref}}(x)\ln(D(x))+\rho _{G}(x)\ln(1-D(x))\right].}
The integrand is just the negativecross-entropybetween two Bernoulli random variables with parametersρref(x){\displaystyle \rho _{\text{ref}}(x)}andD(x){\displaystyle D(x)}. We can write this as−H(ρref(x))−DKL(ρref(x)∥D(x)){\displaystyle -H(\rho _{\text{ref}}(x))-D_{KL}(\rho _{\text{ref}}(x)\parallel D(x))}, whereH{\displaystyle H}is thebinary entropy function, so
L(μG,μD)=−∫μ(dx)(H(ρref(x))+DKL(ρref(x)∥D(x))).{\displaystyle L(\mu _{G},\mu _{D})=-\int \mu (dx)(H(\rho _{\text{ref}}(x))+D_{KL}(\rho _{\text{ref}}(x)\parallel D(x))).}
This means that the optimal strategy for the discriminator is D(x)=ρref(x){\displaystyle D(x)=\rho _{\text{ref}}(x)}, with L(μG,μD∗)=−∫μ(dx)H(ρref(x))=2DJS(μref∥μG)−2ln2{\displaystyle L(\mu _{G},\mu _{D}^{*})=-\int \mu (dx)H(\rho _{\text{ref}}(x))=2D_{JS}(\mu _{\text{ref}}\parallel \mu _{G})-2\ln 2}
after routine calculation.
Interpretation: For any fixed generator strategyμG{\displaystyle \mu _{G}}, the optimal discriminator keeps track of the likelihood ratio between the reference distribution and the generator distribution:D(x)1−D(x)=dμrefdμG(x)=μref(dx)μG(dx);D(x)=σ(lnμref(dx)−lnμG(dx)){\displaystyle {\frac {D(x)}{1-D(x)}}={\frac {d\mu _{\text{ref}}}{d\mu _{G}}}(x)={\frac {\mu _{\text{ref}}(dx)}{\mu _{G}(dx)}};\quad D(x)=\sigma (\ln \mu _{\text{ref}}(dx)-\ln \mu _{G}(dx))}whereσ{\displaystyle \sigma }is thelogistic function.
In particular, if the prior probability for an imagex{\displaystyle x}to come from the reference distribution is equal to12{\displaystyle {\frac {1}{2}}}, thenD(x){\displaystyle D(x)}is just the posterior probability thatx{\displaystyle x}came from the reference distribution:D(x)=Pr(xcame from reference distribution∣x).{\displaystyle D(x)=\Pr(x{\text{ came from reference distribution}}\mid x).}
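Both closed forms can be checked numerically on a small discrete example: plugging D*(x) = ρ_ref(x) into the objective should give 2·D_JS(μ_ref; μ_G) − 2 ln 2. A sketch with NumPy (the two distributions are arbitrary):

```python
import numpy as np

p_ref = np.array([0.5, 0.3, 0.2])   # arbitrary reference distribution
p_g   = np.array([0.2, 0.2, 0.6])   # arbitrary generator distribution

D_star = p_ref / (p_ref + p_g)      # optimal discriminator d(mu_ref)/d(mu_ref + mu_G)
L = np.sum(p_ref * np.log(D_star)) + np.sum(p_g * np.log(1 - D_star))

m = (p_ref + p_g) / 2               # mixture used by the Jensen–Shannon divergence
jsd = 0.5 * np.sum(p_ref * np.log(p_ref / m)) + 0.5 * np.sum(p_g * np.log(p_g / m))

print(np.isclose(L, 2 * jsd - 2 * np.log(2)))   # True
```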
Theorem(the unique equilibrium point)—For any GAN game, there exists a pair(μ^D,μ^G){\displaystyle ({\hat {\mu }}_{D},{\hat {\mu }}_{G})}that is both a sequential equilibrium and a Nash equilibrium:
L(μ^G,μ^D)=minμGmaxμDL(μG,μD)=maxμDminμGL(μG,μD)=−2ln2μ^D∈argmaxμDminμGL(μG,μD),μ^G∈argminμGmaxμDL(μG,μD)μ^D∈argmaxμDL(μ^G,μD),μ^G∈argminμGL(μG,μ^D)∀x∈Ω,μ^D(x)=δ12,μ^G=μref{\displaystyle {\begin{aligned}&L({\hat {\mu }}_{G},{\hat {\mu }}_{D})=\min _{\mu _{G}}\max _{\mu _{D}}L(\mu _{G},\mu _{D})=&\max _{\mu _{D}}\min _{\mu _{G}}L(\mu _{G},\mu _{D})=-2\ln 2\\[6pt]&{\hat {\mu }}_{D}\in \arg \max _{\mu _{D}}\min _{\mu _{G}}L(\mu _{G},\mu _{D}),&\quad {\hat {\mu }}_{G}\in \arg \min _{\mu _{G}}\max _{\mu _{D}}L(\mu _{G},\mu _{D})\\[6pt]&{\hat {\mu }}_{D}\in \arg \max _{\mu _{D}}L({\hat {\mu }}_{G},\mu _{D}),&\quad {\hat {\mu }}_{G}\in \arg \min _{\mu _{G}}L(\mu _{G},{\hat {\mu }}_{D})\\[6pt]&\forall x\in \Omega ,{\hat {\mu }}_{D}(x)=\delta _{\frac {1}{2}},&\quad {\hat {\mu }}_{G}=\mu _{\text{ref}}\end{aligned}}}
That is, the generator perfectly mimics the reference, and the discriminator outputs12{\displaystyle {\frac {1}{2}}}deterministically on all inputs.
From the previous proposition,
argminμGmaxμDL(μG,μD)=μref;minμGmaxμDL(μG,μD)=−2ln2.{\displaystyle \arg \min _{\mu _{G}}\max _{\mu _{D}}L(\mu _{G},\mu _{D})=\mu _{\text{ref}};\quad \min _{\mu _{G}}\max _{\mu _{D}}L(\mu _{G},\mu _{D})=-2\ln 2.}
For any fixed discriminator strategyμD{\displaystyle \mu _{D}}, anyμG{\displaystyle \mu _{G}}concentrated on the set
{x∣Ey∼μD(x)[ln(1−y)]=infxEy∼μD(x)[ln(1−y)]}{\displaystyle \{x\mid \operatorname {E} _{y\sim \mu _{D}(x)}[\ln(1-y)]=\inf _{x}\operatorname {E} _{y\sim \mu _{D}(x)}[\ln(1-y)]\}}is an optimal strategy for the generator. Thus,
argmaxμDminμGL(μG,μD)=argmaxμDEx∼μref,y∼μD(x)[lny]+infxEy∼μD(x)[ln(1−y)].{\displaystyle \arg \max _{\mu _{D}}\min _{\mu _{G}}L(\mu _{G},\mu _{D})=\arg \max _{\mu _{D}}\operatorname {E} _{x\sim \mu _{\text{ref}},y\sim \mu _{D}(x)}[\ln y]+\inf _{x}\operatorname {E} _{y\sim \mu _{D}(x)}[\ln(1-y)].}
By Jensen's inequality, the discriminator can only improve by adopting the deterministic strategy of always playingD(x)=Ey∼μD(x)[y]{\displaystyle D(x)=\operatorname {E} _{y\sim \mu _{D}(x)}[y]}. Therefore,
argmaxμDminμGL(μG,μD)=argmaxDEx∼μref[lnD(x)]+infxln(1−D(x)){\displaystyle \arg \max _{\mu _{D}}\min _{\mu _{G}}L(\mu _{G},\mu _{D})=\arg \max _{D}\operatorname {E} _{x\sim \mu _{\text{ref}}}[\ln D(x)]+\inf _{x}\ln(1-D(x))}
By Jensen's inequality,
lnEx∼μref[D(x)]+infxln(1−D(x))=lnEx∼μref[D(x)]+ln(1−supxD(x))=ln[Ex∼μref[D(x)](1−supxD(x))]≤ln[supxD(x))(1−supxD(x))]≤ln14,{\displaystyle {\begin{aligned}&\ln \operatorname {E} _{x\sim \mu _{\text{ref}}}[D(x)]+\inf _{x}\ln(1-D(x))\\[6pt]={}&\ln \operatorname {E} _{x\sim \mu _{\text{ref}}}[D(x)]+\ln(1-\sup _{x}D(x))\\[6pt]={}&\ln[\operatorname {E} _{x\sim \mu _{\text{ref}}}[D(x)](1-\sup _{x}D(x))]\leq \ln[\sup _{x}D(x))(1-\sup _{x}D(x))]\leq \ln {\frac {1}{4}},\end{aligned}}}
with equality ifD(x)=12{\displaystyle D(x)={\frac {1}{2}}}, so
∀x∈Ω,μ^D(x)=δ12;maxμDminμGL(μG,μD)=−2ln2.{\displaystyle \forall x\in \Omega ,{\hat {\mu }}_{D}(x)=\delta _{\frac {1}{2}};\quad \max _{\mu _{D}}\min _{\mu _{G}}L(\mu _{G},\mu _{D})=-2\ln 2.}
Finally, to check that this is a Nash equilibrium, note that whenμG=μref{\displaystyle \mu _{G}=\mu _{\text{ref}}}, we have
L(μG,μD):=Ex∼μref,y∼μD(x)[ln(y(1−y))]{\displaystyle L(\mu _{G},\mu _{D}):=\operatorname {E} _{x\sim \mu _{\text{ref}},y\sim \mu _{D}(x)}[\ln(y(1-y))]}which is always maximized byy=12{\displaystyle y={\frac {1}{2}}}.
When∀x∈Ω,μD(x)=δ12{\displaystyle \forall x\in \Omega ,\mu _{D}(x)=\delta _{\frac {1}{2}}}, any strategy is optimal for the generator.
While the GAN game has a unique global equilibrium point when both the generator and discriminator have access to their entire strategy sets, the equilibrium is no longer guaranteed when they have a restricted strategy set.[12]
In practice, the generator has access only to measures of formμZ∘Gθ−1{\displaystyle \mu _{Z}\circ G_{\theta }^{-1}}, whereGθ{\displaystyle G_{\theta }}is a function computed by a neural network with parametersθ{\displaystyle \theta }, andμZ{\displaystyle \mu _{Z}}is an easily sampled distribution, such as the uniform or normal distribution. Similarly, the discriminator has access only to functions of formDζ{\displaystyle D_{\zeta }}, a function computed by a neural network with parametersζ{\displaystyle \zeta }. These restricted strategy sets take up avanishingly small proportionof their entire strategy sets.[13]
Further, even if an equilibrium still exists, it can only be found by searching in the high-dimensional space of all possible neural network functions. The standard strategy of usinggradient descentto find the equilibrium often does not work for GAN, and often the game "collapses" into one of several failure modes. To improve the convergence stability, some training strategies start with an easier task, such as generating low-resolution images[14]or simple images (one object with uniform background),[15]and gradually increase the difficulty of the task during training. This essentially translates to applying a curriculum learning scheme.[16]
GANs often suffer frommode collapsewhere they fail to generalize properly, missing entire modes from the input data. For example, a GAN trained on theMNISTdataset containing many samples of each digit might only generate pictures of digit 0. This was termed "the Helvetica scenario".[1]
One way this can happen is if the generator learns too fast compared to the discriminator. If the discriminator D{\displaystyle D} is held constant, then the optimal generator would only output elements of argmaxxD(x){\displaystyle \arg \max _{x}D(x)}.[17] So, for example, if during GAN training to generate MNIST digits the discriminator for a few epochs somehow prefers the digit 0 slightly more than other digits, the generator may seize the opportunity to generate only digit 0, and then be unable to escape the local minimum after the discriminator improves.
Some researchers perceive the root problem to be a weak discriminative network that fails to notice the pattern of omission, while others assign blame to a bad choice ofobjective function. Many solutions have been proposed, but it is still an open problem.[18][19]
Even the state-of-the-art architecture, BigGAN (2019), could not avoid mode collapse. The authors resorted to "allowing collapse to occur at the later stages of training, by which time a model is sufficiently trained to achieve good results".[20]
The two time-scale update rule (TTUR) was proposed to make GAN convergence more stable by making the learning rate of the generator lower than that of the discriminator. The authors argued that the generator should move slower than the discriminator, so that it does not "drive the discriminator steadily into new regions without capturing its gathered information".
They proved that a general class of games that included the GAN game, when trained under TTUR, "converges under mild assumptions to a stationary local Nash equilibrium".[21]
They also proposed using Adam stochastic optimization[22] to avoid mode collapse, as well as the Fréchet inception distance for evaluating GAN performance.
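In code, TTUR amounts to nothing more than giving the discriminator's optimizer a larger learning rate than the generator's. A sketch in PyTorch (the networks are placeholders and the 1e-4 / 4e-4 split is a commonly used but illustrative choice, not a prescription from the paper):

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))   # placeholder generator
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))    # placeholder discriminator

# TTUR: two time scales, i.e. two different learning rates, with Adam as the optimizer.
opt_G = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=4e-4)
```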
Conversely, if the discriminator learns too fast compared to the generator, then the discriminator could almost perfectly distinguish μGθ,μref{\displaystyle \mu _{G_{\theta }},\mu _{\text{ref}}}. In that case, the generator Gθ{\displaystyle G_{\theta }} could be stuck with a very high loss no matter which direction it changes its θ{\displaystyle \theta }, meaning that the gradient ∇θL(Gθ,Dζ){\displaystyle \nabla _{\theta }L(G_{\theta },D_{\zeta })} would be close to zero. In such a case the generator cannot learn, an instance of the vanishing gradient problem.[13]
Intuitively speaking, the discriminator is too good, and since the generator cannot take any small step (only small steps are considered in gradient descent) to improve its payoff, it does not even try.
One important method for solving this problem is theWasserstein GAN.
GANs are usually evaluated byInception score(IS), which measures how varied the generator's outputs are (as classified by an image classifier, usuallyInception-v3), orFréchet inception distance(FID), which measures how similar the generator's outputs are to a reference set (as classified by a learned image featurizer, such as Inception-v3 without its final layer). Many papers that propose new GAN architectures for image generation report how their architectures break thestate of the arton FID or IS.
Another evaluation method is the Learned Perceptual Image Patch Similarity (LPIPS), which starts with a learned image featurizerfθ:Image→Rn{\displaystyle f_{\theta }:{\text{Image}}\to \mathbb {R} ^{n}}, and finetunes it by supervised learning on a set of(x,x′,perceptualdifference(x,x′)){\displaystyle (x,x',\operatorname {perceptual~difference} (x,x'))}, wherex{\displaystyle x}is an image,x′{\displaystyle x'}is a perturbed version of it, andperceptualdifference(x,x′){\displaystyle \operatorname {perceptual~difference} (x,x')}is how much they differ, as reported by human subjects. The model is finetuned so that it can approximate‖fθ(x)−fθ(x′)‖≈perceptualdifference(x,x′){\displaystyle \|f_{\theta }(x)-f_{\theta }(x')\|\approx \operatorname {perceptual~difference} (x,x')}. This finetuned model is then used to defineLPIPS(x,x′):=‖fθ(x)−fθ(x′)‖{\displaystyle \operatorname {LPIPS} (x,x'):=\|f_{\theta }(x)-f_{\theta }(x')\|}.[23]
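Given feature vectors for real and generated images, FID is the Fréchet distance between the two fitted Gaussians, ‖μ1 − μ2‖² + Tr(Σ1 + Σ2 − 2(Σ1Σ2)^{1/2}). A sketch with NumPy/SciPy, where random vectors stand in for actual Inception-v3 activations:

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real, feats_fake):
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):      # discard tiny imaginary parts from numerical error
        covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(s1 + s2 - 2 * covmean))

rng = np.random.default_rng(0)
real = rng.normal(size=(1000, 64))               # stand-ins for featurizer outputs
fake = rng.normal(loc=0.1, size=(1000, 64))
print(fid(real, fake))
```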
Other evaluation methods are reviewed in the literature.[24]
There is a veritable zoo of GAN variants.[25]Some of the most prominent are as follows:
Conditional GANs are similar to standard GANs except they allow the model to conditionally generate samples based on additional information. For example, if we want to generate a cat face given a dog picture, we could use a conditional GAN.
The generator in a GAN game generatesμG{\displaystyle \mu _{G}}, a probability distribution on the probability spaceΩ{\displaystyle \Omega }. This leads to the idea of a conditional GAN, where instead of generating one probability distribution onΩ{\displaystyle \Omega }, the generator generates a different probability distributionμG(c){\displaystyle \mu _{G}(c)}onΩ{\displaystyle \Omega }, for each given class labelc{\displaystyle c}.
For example, for generating images that look like ImageNet, the generator should be able to generate a picture of a cat when given the class label "cat".
In the original paper,[1]the authors noted that GAN can be trivially extended to conditional GAN by providing the labels to both the generator and the discriminator.
Concretely, the conditional GAN game is just the GAN game with class labels provided:L(μG,D):=Ec∼μC,x∼μref(c)[lnD(x,c)]+Ec∼μC,x∼μG(c)[ln(1−D(x,c))]{\displaystyle L(\mu _{G},D):=\operatorname {E} _{c\sim \mu _{C},x\sim \mu _{\text{ref}}(c)}[\ln D(x,c)]+\operatorname {E} _{c\sim \mu _{C},x\sim \mu _{G}(c)}[\ln(1-D(x,c))]}whereμC{\displaystyle \mu _{C}}is a probability distribution over classes,μref(c){\displaystyle \mu _{\text{ref}}(c)}is the probability distribution of real images of classc{\displaystyle c}, andμG(c){\displaystyle \mu _{G}(c)}the probability distribution of images generated by the generator when given class labelc{\displaystyle c}.
In 2017, a conditional GAN learned to generate 1000 image classes ofImageNet.[26]
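One common way to provide the label to both networks is to concatenate a label embedding to the generator's noise vector and to the discriminator's input. A sketch in PyTorch (the dimensions and the embedding-based conditioning are illustrative choices, not the construction from the original paper):

```python
import torch
import torch.nn as nn

n_classes, z_dim, emb_dim, x_dim = 10, 32, 8, 64
embed = nn.Embedding(n_classes, emb_dim)

G = nn.Sequential(nn.Linear(z_dim + emb_dim, 128), nn.ReLU(), nn.Linear(128, x_dim))
D = nn.Sequential(nn.Linear(x_dim + emb_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

c = torch.randint(0, n_classes, (16,))             # class labels c ~ mu_C
z = torch.randn(16, z_dim)
x_fake = G(torch.cat([z, embed(c)], dim=1))        # generator conditioned on c
score = D(torch.cat([x_fake, embed(c)], dim=1))    # discriminator sees (x, c)
```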
The GAN game is a general framework and can be run with any reasonable parametrization of the generatorG{\displaystyle G}and discriminatorD{\displaystyle D}. In the original paper, the authors demonstrated it usingmultilayer perceptronnetworks andconvolutional neural networks. Many alternative architectures have been tried.
Deep convolutional GAN (DCGAN):[27]For both generator and discriminator, uses only deep networks consisting entirely of convolution-deconvolution layers, that is, fully convolutional networks.[28]
Self-attention GAN (SAGAN):[29]Starts with the DCGAN, then adds residually-connected standardself-attention modulesto the generator and discriminator.
Variational autoencoder GAN (VAEGAN):[30]Uses avariational autoencoder(VAE) for the generator.
Transformer GAN (TransGAN):[31]Uses the puretransformerarchitecture for both the generator and discriminator, entirely devoid of convolution-deconvolution layers.
Flow-GAN:[32]Usesflow-based generative modelfor the generator, allowing efficient computation of the likelihood function.
Many GAN variants are merely obtained by changing the loss functions for the generator and discriminator.
Original GAN:
We recast the original GAN objective into a form more convenient for comparison:{minDLD(D,μG)=−Ex∼μG[lnD(x)]−Ex∼μref[ln(1−D(x))]minGLG(D,μG)=−Ex∼μG[ln(1−D(x))]{\displaystyle {\begin{cases}\min _{D}L_{D}(D,\mu _{G})=-\operatorname {E} _{x\sim \mu _{G}}[\ln D(x)]-\operatorname {E} _{x\sim \mu _{\text{ref}}}[\ln(1-D(x))]\\\min _{G}L_{G}(D,\mu _{G})=-\operatorname {E} _{x\sim \mu _{G}}[\ln(1-D(x))]\end{cases}}}
Original GAN, non-saturating loss:
This objective for the generator was recommended in the original paper for faster convergence.[1]LG=Ex∼μG[lnD(x)]{\displaystyle L_{G}=\operatorname {E} _{x\sim \mu _{G}}[\ln D(x)]}The effect of using this objective is analyzed in Section 2.2.2 of Arjovsky et al.[33]
Original GAN, maximum likelihood:
LG=Ex∼μG[(exp∘σ−1∘D)(x)]{\displaystyle L_{G}=\operatorname {E} _{x\sim \mu _{G}}[({\exp }\circ \sigma ^{-1}\circ D)(x)]}whereσ{\displaystyle \sigma }is the logistic function. When the discriminator is optimal, the generator gradient is the same as inmaximum likelihood estimation, even though GAN cannot perform maximum likelihood estimationitself.[34][35]
Hinge lossGAN:[36]LD=−Ex∼pref[min(0,−1+D(x))]−Ex∼μG[min(0,−1−D(x))]{\displaystyle L_{D}=-\operatorname {E} _{x\sim p_{\text{ref}}}\left[\min \left(0,-1+D(x)\right)\right]-\operatorname {E} _{x\sim \mu _{G}}\left[\min \left(0,-1-D\left(x\right)\right)\right]}LG=−Ex∼μG[D(x)]{\displaystyle L_{G}=-\operatorname {E} _{x\sim \mu _{G}}[D(x)]}Least squares GAN:[37]LD=Ex∼μref[(D(x)−b)2]+Ex∼μG[(D(x)−a)2]{\displaystyle L_{D}=\operatorname {E} _{x\sim \mu _{\text{ref}}}[(D(x)-b)^{2}]+\operatorname {E} _{x\sim \mu _{G}}[(D(x)-a)^{2}]}LG=Ex∼μG[(D(x)−c)2]{\displaystyle L_{G}=\operatorname {E} _{x\sim \mu _{G}}[(D(x)-c)^{2}]}wherea,b,c{\displaystyle a,b,c}are parameters to be chosen. The authors recommendeda=−1,b=1,c=0{\displaystyle a=-1,b=1,c=0}.
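These variants differ only in the per-sample losses applied to the discriminator's outputs. A sketch of the hinge and least-squares losses as drop-in functions (here the discriminator is assumed to output raw, unbounded scores, and a = −1, b = 1, c = 0 follow the recommendation above):

```python
import torch

def hinge_losses(d_real, d_fake):
    # L_D = E[relu(1 - D(x_real))] + E[relu(1 + D(x_fake))],  L_G = -E[D(x_fake)]
    loss_D = torch.relu(1 - d_real).mean() + torch.relu(1 + d_fake).mean()
    loss_G = -d_fake.mean()
    return loss_D, loss_G

def lsgan_losses(d_real, d_fake, a=-1.0, b=1.0, c=0.0):
    # L_D = E[(D(x_real) - b)^2] + E[(D(x_fake) - a)^2],  L_G = E[(D(x_fake) - c)^2]
    loss_D = ((d_real - b) ** 2).mean() + ((d_fake - a) ** 2).mean()
    loss_G = ((d_fake - c) ** 2).mean()
    return loss_D, loss_G
```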
The Wasserstein GAN modifies the GAN game at two points:
One of its purposes is to solve the problem of mode collapse (see above).[13]The authors claim "In no experiment did we see evidence of mode collapse for the WGAN algorithm".
An adversarial autoencoder (AAE)[38]is more autoencoder than GAN. The idea is to start with a plainautoencoder, but train a discriminator to discriminate the latent vectors from a reference distribution (often the normal distribution).
In conditional GAN, the generator receives both a noise vectorz{\displaystyle z}and a labelc{\displaystyle c}, and produces an imageG(z,c){\displaystyle G(z,c)}. The discriminator receives image-label pairs(x,c){\displaystyle (x,c)}, and computesD(x,c){\displaystyle D(x,c)}.
When the training dataset is unlabeled, conditional GAN does not work directly.
The idea of InfoGAN is to decree that every latent vector in the latent space can be decomposed as (z,c){\displaystyle (z,c)}: an incompressible noise part z{\displaystyle z} and an informative label part c{\displaystyle c}, and to encourage the generator to comply with the decree by encouraging it to maximize I(c,G(z,c)){\displaystyle I(c,G(z,c))}, the mutual information between c{\displaystyle c} and G(z,c){\displaystyle G(z,c)}, while making no demands on the mutual information between z{\displaystyle z} and G(z,c){\displaystyle G(z,c)}.
Unfortunately, I(c,G(z,c)){\displaystyle I(c,G(z,c))} is intractable in general. The key idea of InfoGAN is Variational Mutual Information Maximization:[39] maximize it indirectly by maximizing a lower bound I^(G,Q)=Ez∼μZ,c∼μC[lnQ(c∣G(z,c))];I(c,G(z,c))≥supQI^(G,Q){\displaystyle {\hat {I}}(G,Q)=\mathbb {E} _{z\sim \mu _{Z},c\sim \mu _{C}}[\ln Q(c\mid G(z,c))];\quad I(c,G(z,c))\geq \sup _{Q}{\hat {I}}(G,Q)} where Q{\displaystyle Q} ranges over all Markov kernels of type Q:ΩY→P(ΩC){\displaystyle Q:\Omega _{Y}\to {\mathcal {P}}(\Omega _{C})}.
The InfoGAN game is defined as follows:[40]
Three probability spaces define an InfoGAN game:
There are 3 players in 2 teams: generator, Q, and discriminator. The generator and Q are on one team, and the discriminator on the other team.
The objective function isL(G,Q,D)=LGAN(G,D)−λI^(G,Q){\displaystyle L(G,Q,D)=L_{GAN}(G,D)-\lambda {\hat {I}}(G,Q)}whereLGAN(G,D)=Ex∼μref,[lnD(x)]+Ez∼μZ[ln(1−D(G(z,c)))]{\displaystyle L_{GAN}(G,D)=\operatorname {E} _{x\sim \mu _{\text{ref}},}[\ln D(x)]+\operatorname {E} _{z\sim \mu _{Z}}[\ln(1-D(G(z,c)))]}is the original GAN game objective, andI^(G,Q)=Ez∼μZ,c∼μC[lnQ(c∣G(z,c))]{\displaystyle {\hat {I}}(G,Q)=\mathbb {E} _{z\sim \mu _{Z},c\sim \mu _{C}}[\ln Q(c\mid G(z,c))]}
Generator-Q team aims to minimize the objective, and discriminator aims to maximize it:minG,QmaxDL(G,Q,D){\displaystyle \min _{G,Q}\max _{D}L(G,Q,D)}
The standard GAN generator is a function of typeG:ΩZ→ΩX{\displaystyle G:\Omega _{Z}\to \Omega _{X}}, that is, it is a mapping from a latent spaceΩZ{\displaystyle \Omega _{Z}}to the image spaceΩX{\displaystyle \Omega _{X}}. This can be understood as a "decoding" process, whereby every latent vectorz∈ΩZ{\displaystyle z\in \Omega _{Z}}is a code for an imagex∈ΩX{\displaystyle x\in \Omega _{X}}, and the generator performs the decoding. This naturally leads to the idea of training another network that performs "encoding", creating anautoencoderout of the encoder-generator pair.
Already in the original paper,[1]the authors noted that "Learned approximate inference can be performed by training an auxiliary network to predictz{\displaystyle z}givenx{\displaystyle x}". The bidirectional GAN architecture performs exactly this.[41]
The BiGAN is defined as follows:
Two probability spaces define a BiGAN game:
There are 3 players in 2 teams: generator, encoder, and discriminator. The generator and encoder are on one team, and the discriminator on the other team.
The generator's strategies are functions G:ΩZ→ΩX{\displaystyle G:\Omega _{Z}\to \Omega _{X}}, and the encoder's strategies are functions E:ΩX→ΩZ{\displaystyle E:\Omega _{X}\to \Omega _{Z}}. The discriminator's strategies are functions D:ΩX×ΩZ→[0,1]{\displaystyle D:\Omega _{X}\times \Omega _{Z}\to [0,1]}, since the discriminator evaluates image–code pairs.
The objective function isL(G,E,D)=Ex∼μX[lnD(x,E(x))]+Ez∼μZ[ln(1−D(G(z),z))]{\displaystyle L(G,E,D)=\mathbb {E} _{x\sim \mu _{X}}[\ln D(x,E(x))]+\mathbb {E} _{z\sim \mu _{Z}}[\ln(1-D(G(z),z))]}
Generator-encoder team aims to minimize the objective, and discriminator aims to maximize it:minG,EmaxDL(G,E,D){\displaystyle \min _{G,E}\max _{D}L(G,E,D)}
In the paper, they gave a more abstract definition of the objective as: L(G,E,D)=E(x,z)∼μE,X[lnD(x,z)]+E(x,z)∼μG,Z[ln(1−D(x,z))]{\displaystyle L(G,E,D)=\mathbb {E} _{(x,z)\sim \mu _{E,X}}[\ln D(x,z)]+\mathbb {E} _{(x,z)\sim \mu _{G,Z}}[\ln(1-D(x,z))]} where μE,X(dx,dz)=μX(dx)⋅δE(x)(dz){\displaystyle \mu _{E,X}(dx,dz)=\mu _{X}(dx)\cdot \delta _{E(x)}(dz)} is the probability distribution on ΩX×ΩZ{\displaystyle \Omega _{X}\times \Omega _{Z}} obtained by pushing μX{\displaystyle \mu _{X}} forward via x↦(x,E(x)){\displaystyle x\mapsto (x,E(x))}, and μG,Z(dx,dz)=δG(z)(dx)⋅μZ(dz){\displaystyle \mu _{G,Z}(dx,dz)=\delta _{G(z)}(dx)\cdot \mu _{Z}(dz)} is the probability distribution on ΩX×ΩZ{\displaystyle \Omega _{X}\times \Omega _{Z}} obtained by pushing μZ{\displaystyle \mu _{Z}} forward via z↦(G(z),z){\displaystyle z\mapsto (G(z),z)}.
Applications of bidirectional models includesemi-supervised learning,[42]interpretable machine learning,[43]andneural machine translation.[44]
CycleGAN is an architecture for performing translations between two domains, such as between photos of horses and photos of zebras, or photos of night cities and photos of day cities.
The CycleGAN game is defined as follows:[45]
There are two probability spaces (ΩX,μX),(ΩY,μY){\displaystyle (\Omega _{X},\mu _{X}),(\Omega _{Y},\mu _{Y})}, corresponding to the two domains between which the translations go back and forth.
There are 4 players in 2 teams: generatorsGX:ΩX→ΩY,GY:ΩY→ΩX{\displaystyle G_{X}:\Omega _{X}\to \Omega _{Y},G_{Y}:\Omega _{Y}\to \Omega _{X}}, and discriminatorsDX:ΩX→[0,1],DY:ΩY→[0,1]{\displaystyle D_{X}:\Omega _{X}\to [0,1],D_{Y}:\Omega _{Y}\to [0,1]}.
The objective function isL(GX,GY,DX,DY)=LGAN(GX,DX)+LGAN(GY,DY)+λLcycle(GX,GY){\displaystyle L(G_{X},G_{Y},D_{X},D_{Y})=L_{GAN}(G_{X},D_{X})+L_{GAN}(G_{Y},D_{Y})+\lambda L_{cycle}(G_{X},G_{Y})}
whereλ{\displaystyle \lambda }is a positive adjustable parameter,LGAN{\displaystyle L_{GAN}}is the GAN game objective, andLcycle{\displaystyle L_{cycle}}is thecycle consistency loss:Lcycle(GX,GY)=Ex∼μX‖GX(GY(x))−x‖+Ey∼μY‖GY(GX(y))−y‖{\displaystyle L_{cycle}(G_{X},G_{Y})=E_{x\sim \mu _{X}}\|G_{X}(G_{Y}(x))-x\|+E_{y\sim \mu _{Y}}\|G_{Y}(G_{X}(y))-y\|}The generators aim to minimize the objective, and the discriminators aim to maximize it:minGX,GYmaxDX,DYL(GX,GY,DX,DY){\displaystyle \min _{G_{X},G_{Y}}\max _{D_{X},D_{Y}}L(G_{X},G_{Y},D_{X},D_{Y})}
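The cycle term penalizes a round trip through both translators for not returning to the starting image. A sketch of just that term in PyTorch (the tiny networks, the naming G_xy / G_yx for the two directions, and the use of the L1 norm are our illustrative choices):

```python
import torch
import torch.nn as nn

G_xy = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 3))   # domain X -> domain Y
G_yx = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 3))   # domain Y -> domain X

def cycle_consistency_loss(x_batch, y_batch):
    # x -> G_xy(x) -> G_yx(G_xy(x)) should come back to x, and symmetrically for y.
    loss_x = (G_yx(G_xy(x_batch)) - x_batch).abs().mean()
    loss_y = (G_xy(G_yx(y_batch)) - y_batch).abs().mean()
    return loss_x + loss_y

x = torch.randn(8, 3)   # toy "domain X" samples
y = torch.randn(8, 3)   # toy "domain Y" samples
print(cycle_consistency_loss(x, y))
```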
Unlike previous work like pix2pix,[46]which requires paired training data, cycleGAN requires no paired data. For example, to train a pix2pix model to turn a summer scenery photo to winter scenery photo and back, the dataset must contain pairs of the same place in summer and winter, shot at the same angle; cycleGAN would only need a set of summer scenery photos, and an unrelated set of winter scenery photos.
The BigGAN is essentially a self-attention GAN trained on a large scale (up to 80 million parameters) to generate large images of ImageNet (up to 512 x 512 resolution), with numerous engineering tricks to make it converge.[20][47]
When there is insufficient training data, the reference distribution μref{\displaystyle \mu _{\text{ref}}} cannot be well-approximated by the empirical distribution given by the training dataset. In such cases, data augmentation can be applied to allow training a GAN on smaller datasets. Naïve data augmentation, however, brings its own problems.
Consider the original GAN game, slightly reformulated as follows:{minDLD(D,μG)=−Ex∼μref[lnD(x)]−Ex∼μG[ln(1−D(x))]minGLG(D,μG)=−Ex∼μG[ln(1−D(x))]{\displaystyle {\begin{cases}\min _{D}L_{D}(D,\mu _{G})=-\operatorname {E} _{x\sim \mu _{\text{ref}}}[\ln D(x)]-\operatorname {E} _{x\sim \mu _{G}}[\ln(1-D(x))]\\\min _{G}L_{G}(D,\mu _{G})=-\operatorname {E} _{x\sim \mu _{G}}[\ln(1-D(x))]\end{cases}}}Now we use data augmentation by randomly sampling semantic-preserving transformsT:Ω→Ω{\displaystyle T:\Omega \to \Omega }and applying them to the dataset, to obtain the reformulated GAN game:{minDLD(D,μG)=−Ex∼μref,T∼μtrans[lnD(T(x))]−Ex∼μG[ln(1−D(x))]minGLG(D,μG)=−Ex∼μG[ln(1−D(x))]{\displaystyle {\begin{cases}\min _{D}L_{D}(D,\mu _{G})=-\operatorname {E} _{x\sim \mu _{\text{ref}},T\sim \mu _{\text{trans}}}[\ln D(T(x))]-\operatorname {E} _{x\sim \mu _{G}}[\ln(1-D(x))]\\\min _{G}L_{G}(D,\mu _{G})=-\operatorname {E} _{x\sim \mu _{G}}[\ln(1-D(x))]\end{cases}}}This is equivalent to a GAN game with a different distributionμref′{\displaystyle \mu _{\text{ref}}'}, sampled byT(x){\displaystyle T(x)}, withx∼μref,T∼μtrans{\displaystyle x\sim \mu _{\text{ref}},T\sim \mu _{\text{trans}}}. For example, ifμref{\displaystyle \mu _{\text{ref}}}is the distribution of images in ImageNet, andμtrans{\displaystyle \mu _{\text{trans}}}samples identity-transform with probability 0.5, and horizontal-reflection with probability 0.5, thenμref′{\displaystyle \mu _{\text{ref}}'}is the distribution of images in ImageNet and horizontally-reflected ImageNet, combined.
The result of such training would be a generator that mimicsμref′{\displaystyle \mu _{\text{ref}}'}. For example, it would generate images that look like they are randomly cropped, if the data augmentation uses random cropping.
The solution is to apply data augmentation to both generated and real images:{minDLD(D,μG)=−Ex∼μref,T∼μtrans[lnD(T(x))]−Ex∼μG,T∼μtrans[ln(1−D(T(x)))]minGLG(D,μG)=−Ex∼μG,T∼μtrans[ln(1−D(T(x)))]{\displaystyle {\begin{cases}\min _{D}L_{D}(D,\mu _{G})=-\operatorname {E} _{x\sim \mu _{\text{ref}},T\sim \mu _{\text{trans}}}[\ln D(T(x))]-\operatorname {E} _{x\sim \mu _{G},T\sim \mu _{\text{trans}}}[\ln(1-D(T(x)))]\\\min _{G}L_{G}(D,\mu _{G})=-\operatorname {E} _{x\sim \mu _{G},T\sim \mu _{\text{trans}}}[\ln(1-D(T(x)))]\end{cases}}}The authors demonstrated high-quality generation using just 100-picture-large datasets.[48]
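Concretely, the same kind of randomly sampled transform is applied to the real batch and to the generated batch before either reaches the discriminator. A sketch of the discriminator-side loss in PyTorch (the flip augmentation, image size, and the trivial discriminator are illustrative placeholders):

```python
import torch
import torch.nn as nn

D = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 1), nn.Sigmoid())   # placeholder

def augment(x):
    # Semantic-preserving transform T: a per-sample random horizontal flip (illustrative).
    flip = torch.rand(x.shape[0], 1, 1, 1) < 0.5
    return torch.where(flip, x.flip(dims=[3]), x)

def loss_D(x_real, x_fake):
    # Augment BOTH real and generated batches, so the generator's optimal target
    # remains mu_ref rather than the augmented distribution mu_ref'.
    return -(torch.log(D(augment(x_real)))
             + torch.log(1 - D(augment(x_fake.detach())))).mean()

x_real = torch.rand(8, 3, 32, 32)
x_fake = torch.rand(8, 3, 32, 32)
print(loss_D(x_real, x_fake))
```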
The StyleGAN-2-ADA paper points out a further requirement on data augmentation: it must be invertible.[49] Continuing with the example of generating ImageNet pictures, if the data augmentation is "randomly rotate the picture by 0, 90, 180, 270 degrees with equal probability", then there is no way for the generator to know which is the true orientation: consider two generators G, G′ such that for any latent z, the generated image G(z) is a 90-degree rotation of G′(z). They would have exactly the same expected loss, and so neither is preferred over the other.
The solution is to use only invertible data augmentation: instead of "randomly rotate the picture by 0, 90, 180, 270 degrees with equal probability", use "randomly rotate the picture by 90, 180, 270 degrees with 0.1 probability each, and keep the picture as it is with 0.7 probability". This way, the generator is still rewarded for keeping images oriented the same way as un-augmented ImageNet pictures.
Abstractly, the effect of randomly sampling transformations T : Ω → Ω from the distribution μ_trans is to define a Markov kernel K_trans : Ω → P(Ω). Then, the data-augmented GAN game pushes the generator to find some \hat{\mu}_G ∈ P(Ω), such that

K_{\text{trans}} * \mu_{\text{ref}} = K_{\text{trans}} * \hat{\mu}_G

where * is the Markov kernel convolution.
A data-augmentation method is defined to be invertible if its Markov kernel K_trans satisfies

K_{\text{trans}} * \mu = K_{\text{trans}} * \mu' \implies \mu = \mu' \quad \forall \mu, \mu' \in \mathcal{P}(\Omega)

Immediately by definition, we see that composing multiple invertible data-augmentation methods results in yet another invertible method. Also by definition, if the data-augmentation method is invertible, then using it in a GAN game does not change the optimal strategy for the generator, which is still μ_ref.
There are two prototypical examples of invertible Markov kernels:
Discrete case: invertible stochastic matrices, when Ω is finite.
For example, if Ω = {↑, ↓, ←, →} is the set of four images of an arrow pointing in 4 directions, and the data augmentation is "randomly rotate the picture by 90, 180, 270 degrees with probability p, and keep the picture as it is with probability (1 − 3p)", then the Markov kernel K_trans can be represented as a stochastic matrix:

[K_{\text{trans}}] = \begin{bmatrix} 1-3p & p & p & p \\ p & 1-3p & p & p \\ p & p & 1-3p & p \\ p & p & p & 1-3p \end{bmatrix}

and K_trans is an invertible kernel iff [K_trans] is an invertible matrix, that is, p ≠ 1/4.
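This condition is easy to check numerically; a short sketch with NumPy, building the 4×4 rotation kernel above and testing whether its determinant vanishes (the function name rotation_kernel and the probability values are only illustrative):

import numpy as np

def rotation_kernel(p):
    # Stochastic matrix for "rotate by 90/180/270 degrees with probability p each, keep unchanged with probability 1-3p".
    K = np.full((4, 4), p)
    np.fill_diagonal(K, 1.0 - 3.0 * p)
    return K

for p in [0.1, 0.25]:
    det = np.linalg.det(rotation_kernel(p))
    print(f"p={p}: determinant={det:.4f}, invertible={not np.isclose(det, 0.0)}")
# p=0.1 gives a nonzero determinant (invertible); p=0.25 gives determinant 0 (not invertible).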
Continuous case: the Gaussian kernel, when Ω = R^n for some n ≥ 1.
For example, if Ω = R^{256²} is the space of 256×256 images, and the data-augmentation method is "generate a Gaussian noise z ∼ N(0, I_{256²}), then add εz to the image", then K_trans is just convolution by the density function of N(0, ε²I_{256²}). This is invertible, because convolution by a Gaussian is just convolution by the heat kernel: given any μ ∈ P(R^n), the convolved distribution K_trans ∗ μ can be obtained by heating up R^n precisely according to μ, then waiting for time ε²/4. With that, we can recover μ by running the heat equation backwards in time for ε²/4.
More examples of invertible data augmentations are found in the paper.[49]
SinGAN pushes data augmentation to the limit, by using only a single image as training data and performing data augmentation on it. The GAN architecture is adapted to this training method by using a multi-scale pipeline.
The generator G is decomposed into a pyramid of generators G = G_1 ∘ G_2 ∘ ⋯ ∘ G_N, with the lowest one generating the image G_N(z_N) at the lowest resolution; the generated image is then scaled up to r(G_N(z_N)) and fed to the next level to generate an image G_{N−1}(z_{N−1} + r(G_N(z_N))) at a higher resolution, and so on. The discriminator is decomposed into a pyramid as well.[50]
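A minimal sketch of this multi-scale generation loop, assuming PyTorch, single-convolution placeholder generators, bilinear upscaling for r(·), and an illustrative base resolution; the real SinGAN networks and scale factors are not reproduced here:

import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical pyramid of per-scale generators G_1 (finest) ... G_N (coarsest), as placeholders.
num_scales = 4
generators = [nn.Conv2d(3, 3, kernel_size=3, padding=1) for _ in range(num_scales)]
base_size = 16  # resolution at the coarsest scale

def upscale(image, size):
    # r(.): scale the previous output up to the next resolution.
    return F.interpolate(image, size=size, mode='bilinear', align_corners=False)

# Coarsest level: generate purely from noise, G_N(z_N).
image = generators[-1](torch.randn(1, 3, base_size, base_size))
# Finer levels: add new noise to the upscaled previous output and refine it, G_k(z_k + r(previous)).
for level in reversed(range(num_scales - 1)):
    size = base_size * 2 ** (num_scales - 1 - level)
    upscaled = upscale(image, size)
    image = generators[level](torch.randn_like(upscaled) + upscaled)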
The StyleGAN family is a series of architectures published by Nvidia's research division.
Progressive GAN[14] is a method for stably training a GAN for large-scale image generation, by growing the GAN generator from small to large scale in a pyramidal fashion. Like SinGAN, it decomposes the generator as G = G_1 ∘ G_2 ∘ ⋯ ∘ G_N, and the discriminator as D = D_1 ∘ D_2 ∘ ⋯ ∘ D_N.
During training, at first only G_N, D_N are used in a GAN game to generate 4x4 images. Then G_{N−1}, D_{N−1} are added to reach the second stage of the GAN game, generating 8x8 images, and so on, until we reach a GAN game generating 1024x1024 images.
To avoid shock between stages of the GAN game, each new layer is "blended in" (Figure 2 of the paper[14]). For example, this is how the second stage GAN game starts:
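The figure is not reproduced here; the following is a minimal sketch, in PyTorch-style Python, of the commonly described fade-in, assuming a blending weight alpha that is ramped from 0 to 1 while the new 8x8 layers are introduced (the ramp length and tensors below are illustrative):

import torch
import torch.nn.functional as F

def blended_generator_output(old_rgb_4x4, new_rgb_8x8, alpha):
    # alpha = 0: output is just the old 4x4 image, naively upscaled to 8x8.
    # alpha = 1: output comes entirely from the new 8x8 layers.
    upscaled_old = F.interpolate(old_rgb_4x4, scale_factor=2, mode='nearest')
    return (1.0 - alpha) * upscaled_old + alpha * new_rgb_8x8

current_step, fade_in_steps = 1000, 10_000      # training-loop bookkeeping (illustrative values)
alpha = min(1.0, current_step / fade_in_steps)  # alpha is typically increased linearly over many iterations
out = blended_generator_output(torch.randn(1, 3, 4, 4), torch.randn(1, 3, 8, 8), alpha)

The discriminator side is blended symmetrically, so that at the start of the new stage both networks behave almost exactly as they did at the end of the previous stage.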
StyleGAN-1 is designed as a combination of Progressive GAN with neural style transfer.[51]
The key architectural choice of StyleGAN-1 is a progressive growth mechanism, similar to Progressive GAN. Each generated image starts as a constant 4×4×512 array and is repeatedly passed through style blocks. Each style block applies a "style latent vector" via an affine transform ("adaptive instance normalization"), similar to how neural style transfer uses the Gramian matrix. It then adds noise and normalizes (subtracting the mean, then dividing by the variance).
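A minimal sketch of one such style block in PyTorch, assuming the adaptive-instance-normalization form described above (per-channel normalization followed by a scale and bias produced from the style latent vector by a learned affine map); the class name StyleBlock, the noise strength, and the exact ordering of the operations are illustrative and differ in detail from the published layer:

import torch
import torch.nn as nn

class StyleBlock(nn.Module):
    def __init__(self, channels, style_dim):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.to_scale_bias = nn.Linear(style_dim, 2 * channels)  # affine map from the style latent vector

    def forward(self, x, w):
        x = self.conv(x)
        # Adaptive instance normalization: normalize each channel, then re-style it from w.
        mean = x.mean(dim=(2, 3), keepdim=True)
        std = x.std(dim=(2, 3), keepdim=True) + 1e-8
        x = (x - mean) / std
        scale, bias = self.to_scale_bias(w).chunk(2, dim=1)
        x = x * (1 + scale[:, :, None, None]) + bias[:, :, None, None]
        return x + 0.1 * torch.randn_like(x)   # per-pixel noise injection (illustrative strength)

block = StyleBlock(channels=512, style_dim=512)
features = torch.randn(1, 512, 4, 4)   # stands in for the constant 4x4x512 starting array
w = torch.randn(1, 512)                # a style latent vector
out = block(features, w)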
At training time, usually only one style latent vector is used per image generated, but sometimes two ("mixing regularization") in order to encourage each style block to independently perform its stylization without expecting help from other style blocks (since they might receive an entirely different style latent vector).
After training, multiple style latent vectors can be fed into each style block. Those fed to the lower layers control the large-scale styles, and those fed to the higher layers control the fine-detail styles.
Style-mixing between two images x, x′ can be performed as well. First, run a gradient descent to find z, z′ such that G(z) ≈ x, G(z′) ≈ x′. This is called "projecting an image back to style latent space". Then, z can be fed to the lower style blocks, and z′ to the higher style blocks, to generate a composite image that has the large-scale style of x and the fine-detail style of x′. Multiple images can also be composed this way.
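A minimal sketch of the mixing step, reusing the hypothetical StyleBlock class from the sketch above and assuming the latent codes z and z′ have already been found by projection (here they are random placeholders); the number of blocks and the split point are illustrative:

import torch

# Hypothetical stack of style blocks, coarse (low-resolution) blocks first.
blocks = [StyleBlock(channels=512, style_dim=512) for _ in range(6)]

def synthesize(style_per_block):
    x = torch.randn(1, 512, 4, 4)          # stands in for the learned constant input
    for block, w in zip(blocks, style_per_block):
        x = block(x, w)
    return x

z, z_prime = torch.randn(1, 512), torch.randn(1, 512)   # assumed to come from projecting x and x'
# Large-scale style from z (lower blocks), fine-detail style from z_prime (higher blocks):
mixed = synthesize([z] * 3 + [z_prime] * 3)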
StyleGAN-2 improves upon StyleGAN-1, by using the style latent vector to transform the convolution layer's weights instead, thus solving the "blob" problem.[52]
This was updated by the StyleGAN-2-ADA ("ADA" stands for "adaptive"),[49]which uses invertible data augmentation as described above. It also tunes the amount of data augmentation applied by starting at zero, and gradually increasing it until an "overfitting heuristic" reaches a target level, thus the name "adaptive".
StyleGAN-3[53]improves upon StyleGAN-2 by solving the "texture sticking" problem, which can be seen in the official videos.[54]They analyzed the problem by theNyquist–Shannon sampling theorem, and argued that the layers in the generator learned to exploit the high-frequency signal in the pixels they operate upon.
To solve this, they proposed imposing strict lowpass filters between each generator's layers, so that the generator is forced to operate on the pixels in a way faithful to the continuous signals they represent, rather than operate on them as merely discrete signals. They further imposed rotational and translational invariance by using more signal filters. The resulting StyleGAN-3 solves the texture-sticking problem and generates images that rotate and translate smoothly.
Other than for generative and discriminative modelling of data, GANs have been used for a variety of other tasks.
GANs have been used fortransfer learningto enforce the alignment of the latent feature space, such as indeep reinforcement learning.[55]This works by feeding the embeddings of the source and target task to the discriminator which tries to guess the context. The resulting loss is then (inversely) backpropagated through the encoder.
GAN-generated molecules were validated experimentally in mice.[72][73]
One of the major concerns in medical imaging is preserving patient privacy, and for this reason researchers often face difficulties in obtaining medical images for their research. GANs have been used to generate synthetic medical images, such as MRI and PET images, to address this challenge.[74]
GANs can be used to detect glaucomatous images, helping early diagnosis, which is essential to avoid partial or total loss of vision.[75]
GANs have been used to createforensic facial reconstructionsof deceased historical figures.[76]
Concerns have been raised about the potential use of GAN-basedhuman image synthesisfor sinister purposes, e.g., to produce fake, possibly incriminating, photographs and videos.[77]GANs can be used to generate unique, realistic profile photos of people who do not exist, in order to automate creation of fake social media profiles.[78]
In 2019 the state of California considered[79]and passed on October 3, 2019, thebill AB-602, which bans the use of human image synthesis technologies to make fake pornography without the consent of the people depicted, andbill AB-730, which prohibits distribution of manipulated videos of a political candidate within 60 days of an election. Both bills were authored by Assembly memberMarc Bermanand signed by GovernorGavin Newsom. The laws went into effect in 2020.[80]
DARPA's Media Forensics program studies ways to counteract fake media, including fake media produced using GANs.[81]
GANs can be used to generate art; The Verge wrote in March 2019 that "The images created by GANs have become the defining look of contemporary AI art."[82]
Some have used GANs for artistic creativity, as a "creative adversarial network".[88][89] A GAN trained on a set of 15,000 portraits from WikiArt from the 14th to the 19th century created the 2018 painting Edmond de Belamy, which sold for US$432,500.[90]
GANs were used by thevideo game moddingcommunity toup-scalelow-resolution 2D textures in old video games by recreating them in4kor higher resolutions via image training, and then down-sampling them to fit the game's native resolution (resemblingsupersamplinganti-aliasing).[91]
In 2020,Artbreederwas used to create the main antagonist in the sequel to the psychological web horror seriesBen Drowned. The author would later go on to praise GAN applications for their ability to help generate assets for independent artists who are short on budget and manpower.[92][93]
In May 2020,Nvidiaresearchers taught an AI system (termed "GameGAN") to recreate the game ofPac-Mansimply by watching it being played.[94][95]
In August 2019, a large dataset consisting of 12,197 MIDI songs each with paired lyrics and melody alignment was created for neural melody generation from lyrics using conditional GAN-LSTM (refer to sources at GitHubAI Melody Generation from Lyrics).[96]
In 1991, Juergen Schmidhuber published "artificial curiosity", neural networks in a zero-sum game.[108] The first network is a generative model that models a probability distribution over output patterns. The second network learns by gradient descent to predict the reactions of the environment to these patterns. GANs can be regarded as a case where the environmental reaction is 1 or 0 depending on whether the first network's output is in a given set.[109]
Other people had similar ideas but did not develop them similarly. An idea involving adversarial networks was published in a 2010 blog post by Olli Niemitalo.[110]This idea was never implemented and did not involvestochasticityin the generator and thus was not a generative model. It is now known as a conditional GAN or cGAN.[111]An idea similar to GANs was used to model animal behavior by Li, Gauci and Gross in 2013.[112]
Another inspiration for GANs was noise-contrastive estimation,[113]which uses the same loss function as GANs and which Goodfellow studied during his PhD in 2010–2014.
Adversarial machine learninghas other uses besides generative modeling and can be applied to models other than neural networks. In control theory, adversarial learning based on neural networks was used in 2006 to train robust controllers in a game theoretic sense, by alternating the iterations between a minimizer policy, the controller, and a maximizer policy, the disturbance.[114][115]
In 2017, a GAN was used for image enhancement focusing on realistic textures rather than pixel-accuracy, producing a higher image quality at high magnification.[116]In 2017, the first faces were generated.[117]These were exhibited in February 2018 at the Grand Palais.[118][119]Faces generated byStyleGAN[120]in 2019 drew comparisons withDeepfakes.[121][122][123]
|
https://en.wikipedia.org/wiki/Generative_adversarial_network
|
Rule-based machine translation (RBMT) is a classical approach to machine translation based on linguistic information about the source and target languages. Such information is retrieved from (unilingual, bilingual or multilingual) dictionaries and grammars covering the main semantic, morphological, and syntactic regularities of each language. Given input sentences, an RBMT system generates output sentences on the basis of an analysis of both the source and the target languages involved. RBMT has been progressively superseded by more efficient methods, particularly neural machine translation.[1]
The first RBMT systems were developed in the early 1970s. The most important steps of this evolution were the emergence of the following RBMT systems:
Today, other common RBMT systems include:
There are three different types of rule-based machine translation systems:
RBMT systems can also be characterized as being in contrast to example-based machine translation systems, whereas hybrid machine translation systems make use of many principles derived from RBMT.
The main approach of RBMT systems is based on linking the structure of the given input sentence with the structure of the demanded output sentence, necessarily preserving their unique meaning. The following example can illustrate the general frame of RBMT:
Minimally, to get a German translation of this English sentence one needs:
And finally, we need rules according to which one can relate these two structures together.
Accordingly, we can state the followingstages of translation:
Often only partial parsing is sufficient to get to the syntactic structure of the source sentence and to map it onto the structure of the target sentence.
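As an illustration of this analysis–transfer–generation pattern, here is a heavily simplified, hypothetical pipeline in Python; the dictionary entries, rules, and example sentence are invented for the illustration and bear no relation to any production RBMT system:

# Hypothetical bilingual dictionary with part-of-speech tags (invented for illustration).
lexicon = {
    "the": {"de": "der", "pos": "DET"},
    "dog": {"de": "Hund", "pos": "NOUN"},
    "sleeps": {"de": "schläft", "pos": "VERB"},
}

def analyse(sentence):
    # 1. Source-language analysis: tokenize and tag using the lexicon.
    return [(word, lexicon[word]["pos"]) for word in sentence.lower().split()]

def transfer(analysis):
    # 2. Transfer: map each source word to its target-language equivalent.
    #    Real systems also transform the syntactic structure here (word order, agreement, inflection).
    return [lexicon[word]["de"] for word, pos in analysis]

def generate(target_words):
    # 3. Target-language generation: produce the surface sentence.
    return " ".join(target_words).capitalize() + "."

print(generate(transfer(analyse("The dog sleeps"))))   # toy output; no agreement or inflection rules applied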
Anontologyis a formal representation of knowledge that includes the concepts (such as objects, processes etc.) in a domain and some relations between them. If the stored information is of linguistic nature, one can speak of a lexicon.[6]InNLP, ontologies can be used as a source of knowledge for machine translation systems. With access to a large knowledge base, rule-based systems can be enabled to resolve many (especially lexical) ambiguities on their own. In the following classic examples, as humans, we are able to interpret theprepositional phraseaccording to the context because we use our world knowledge, stored in our lexicons:
I saw a man/star/molecule with a microscope/telescope/binoculars.[6]
Since the syntax does not change, a traditional rule-based machine translation system may not be able to differentiate between the meanings. With a large enough ontology as a source of knowledge however, the possible interpretations of ambiguous words in a specific context can be reduced.[6]
The ontology generated for the PANGLOSS knowledge-based machine translation system in 1993 may serve as an example of how an ontology forNLPpurposes can be compiled:[7][8]
The RBMT system contains:
The RBMT system makes use of the following:
|
https://en.wikipedia.org/wiki/Rule-based_machine_translation
|
Animpersonatoris someone who imitates or copies the behavior or actions of another.[1]There are many reasons for impersonating someone:
Celebrity impersonatorsare impostors who look similar tocelebritiesand dress in such a way as to imitate them. Impersonators are known as sound-alikes, look-alikes, impressionists, imitators, tribute artists, and wannabees. The interest may have originated with the need or desire to see a celebrity who has died.[citation needed]One of the most prominent examples of this phenomenon is the case ofElvis Presley.Edward Mosshas appeared in movies and sitcoms, impersonatingMichael Jackson.[3][4]Tom Joneshas attracted his share ofimpersonatorsfrom different places around the world. From the United States, to South East Asia, to the UK, there are performers who either sound like him or imitate his act.[5][6][7][8]
In England and Wales, thePoor Law Amendment Act 1851, section 3, made it an offence to impersonate a "person entitled to vote" at an election. In the case of Whiteley v Chappell (1868), theliteral ruleofstatutory interpretationwas employed to find that a dead person was not a "person entitled to vote" and consequently a person accused of this offence wasacquitted.[9]
Although in aColoradocase, an immigrant was charged with "criminal impersonation" for using another person'sSocial Security numberwhen signing up for a job,[citation needed]some courts have ruled that supplying this wrong information may not be criminal.[10]The ruling hinges on whether there was harm to the other person.[citation needed]
Audio deepfakeshave been used as part ofsocial engineeringscams, fooling people into thinking they are receiving instructions from a trusted individual.[11]In 2019, a U.K.-based energy firm's CEO was scammed over the phone when he was ordered to transfer €220,000 into a Hungarian bank account by an individual who used audio deepfake technology to impersonate the voice of the firm's parent company's chief executive.[12]
As of 2023, the combination of advances in deepfake technology, which could clone an individual's voice from a recording of a few seconds to a minute, and new text generation tools enabled automated impersonation scams, targeting victims using a convincing digital clone of a friend or relative.[13]
|
https://en.wikipedia.org/wiki/Impersonation
|
An acrostic is a poem or other word composition in which the first letter (or syllable, or word) of each new line (or paragraph, or other recurring feature in the text) spells out a word, message or the alphabet.[1] The term comes from the French acrostiche, from post-classical Latin acrostichis, from Koine Greek ἀκροστιχίς, from Ancient Greek ἄκρος "highest, topmost" and στίχος "verse".[2] As a form of constrained writing, an acrostic can be used as a mnemonic device to aid memory retrieval. When the last letter of each new line (or other recurring feature) forms a word it is called a telestich (or telestic); the combination of an acrostic and a telestich in the same composition is called a double acrostic (e.g. the first-century Latin Sator Square).
Acrostics are common in medieval literature, where they usually serve to highlight the name of the poet or his patron, or to make a prayer to a saint. They are most frequent in verse works but can also appear in prose. The Middle High German poetRudolf von Emsfor example opens all his great works with an acrostic of his name, and his world chronicle marks the beginning of each age with an acrostic of the key figure (Moses, David, etc.). In chronicles, acrostics are common in German and English but rare in other languages.[3]
Relatively simple acrostics may merely spell out the letters of the alphabet in order; such an acrostic may be called an 'alphabetical acrostic' or abecedarius. These acrostics occur in the Hebrew Bible in the first four of the five chapters of the Book of Lamentations, in the praise of the good wife in Proverbs 31:10-31, and in Psalms 9-10, 25, 34, 37, 111, 112, 119 and 145.[4] Notable among the acrostic Psalms is the long Psalm 119, which typically is printed in subsections named after the 22 letters of the Hebrew alphabet, each section consisting of 8 verses, each of which begins with the same letter of the alphabet, the entire psalm consisting of 22 x 8 = 176 verses; and Psalm 145, which is recited three times a day in the Jewish services. Some acrostic psalms are technically imperfect. For example, Psalm 9 and Psalm 10 appear to constitute a single acrostic psalm together, but the length assigned to each letter is unequal, five of the 22 letters of the Hebrew alphabet are not represented, and the sequence of two letters is reversed. In Psalm 25 one Hebrew letter is not represented and the following letter (Resh) is repeated. In Psalm 34 the current final verse, 23, does fit verse 22 in content, but adds an additional line to the poem. In Psalms 37 and 111 the numbering of verses and the division into lines interfere with each other; as a result, in Psalm 37, for the letters Daleth and Kaph there is only one verse, and the letter Ayin is not represented. Psalms 111 and 112 have 22 lines, but 10 verses. Psalm 145 does not represent the letter Nun, having 21 verses, but one Qumran manuscript of this Psalm does have that missing line, which agrees with the Septuagint. Some, like O Palmer Robertson, see the acrostic Psalms of book 1 and book 5 of Psalms as teaching and memory devices as well as transitions between subjects in the structure of the Psalms.[5]
Often the ease of detectability of an acrostic can depend on the intention of its creator. In some cases an author may desire an acrostic to have a better chance of being perceived by an observant reader, such as the acrostic contained in theHypnerotomachia Poliphili(where the key capital letters are decorated with ornate embellishments). However, acrostics may also be used as a form ofsteganography, where the author seeks to conceal the message rather than proclaim it. This might be achieved by making the key letters uniform in appearance with the surrounding text, or by aligning the words in such a way that the relationship between the key letters is less obvious. These are referred to asnull ciphersin steganography, using the first letter of each word to form a hidden message in an otherwise innocuous text.[6]Using letters to hide a message, as in acrostic ciphers, was popular during theRenaissance, and could employ various methods of enciphering, such as selecting other letters than initials based on a repeating pattern (equidistant letter sequences), or even concealing the message by starting at the end of the text and working backwards.[7]
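The basic mechanism of such a null cipher is simple to demonstrate; a short Python sketch that recovers a hidden word from the first letters of each word of an invented cover sentence (both the function name and the cover text are made up for the example):

def acrostic_message(cover_text):
    # Take the first letter of each word to recover the concealed message.
    return "".join(word[0] for word in cover_text.split()).upper()

# Invented cover sentence whose word-initials conceal the word "HIDDEN".
cover = "Horses in deep dales eat nettles"
print(acrostic_message(cover))   # -> HIDDEN

The same extraction applied to line initials, last letters, or every n-th letter covers the other acrostic and telestich variants described in this article.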
A well-known acrostic in Greek is for the phraseJESUS CHRIST, GOD’S SON, SAVIOUR, the initial letters of which spellΙΧΘΥΣ(ICHTHYS), which meansfish:
According to Cicero, acrostics were a regular feature of Sibylline prophecies (which were written in Greek hexameters). The type of acrostic is that known as a "gamma acrostic" (from the shape of the Greek letter Γ), where the same words are found both horizontally and vertically.[8] Cicero refers to an acrostic in this passage using the Greek word ἀκροστιχίς.
The 3rd-century BC didactic poetAratus, who was much admired and imitated by Cicero, Virgil and other Latin writers, appears to have started a fashion for using acrostics. One example is the famous passage inPhaenomena783–7 where the wordλεπτή'slender, subtle'occurs as a gamma acrostic and also twice in the text, as well as diagonally in the text and even cryptically taking the initial letters of certain words in lines 2 and 1:[9]
Several acrostics have recently been discovered in Roman poets, especially inVirgil. Among others, inEclogue9the acrosticVNDIS'in the waves'(lines 34–38) immediately precedes the wordsquis est nam ludus in undis?'for what is your game in the waves?'', andDEA DIO(i.e.dea Dione'the goddess Dione') (lines 46–51) in a passage which mentions the goddessDione(another name forVenus).[10]InEclogue8, alongside a passage dedicating the poem to an unnamed person and asking him to accept it, Neil Adkin reads the wordsTV SI ES ACI(i.e.accipe) ('if you are the one, accept!').[10]
InAeneid7.601–4, a passage which mentionsMarsand war, describing the custom of opening the gates of theTemple of Janus, the nameMARS(the god of war) appears in acrostic form as well as in the text as follows:[11]
InGeorgics1 429–433, next to a passage which contains the wordsnamque is certissimus auctor'for he is the most certain author', the double-letter reverse acrosticMA VE PV(i.e. Publius Vergilius Maro) is found on alternate lines.[10]
InEclogue 6, 13–24 Virgil uses a double acrostic, with the same wordLAESIS'for those who have been harmed'going both upwards and downwards starting from the same letter L in line 19.[12]Another double acrostic is found inAeneid 2, where the wordPITHI(i.e.πείθει, Greek for he ‘persuades’ or ‘he deceives’) is found first backwards at 103–107, then forwards at 142–146, at the beginning and end of a speech by Sinon persuading the Trojans to bring the wooden horse into the city.[13]The discoverer of this acrostic, Neil Adkin, points out that the same wordπείθειoccurs at more or less exactly the same line-numbers in a repeated line describing how Odysseus’ wife Penelope deceived the suitors inOdyssey2.106 and 24.141.
Another transliterated Greek word used as an acrostic in a pseudo-Sibylline prophecy has recently been noticed in the syllablesDE CA TE(i.e. Greekδεκάτη'tenth') inEclogue 4, 9–11, with the sameDEC A TErepeated cryptically both forwards and backwards in line 11.[14]
In another pseudo-Sibylline prophecy in poem 5 ofTibullus book 2the wordsAVDI ME‘hear me!’ are picked out in the first letters of alternate lines at the beginning of the prophecy.[15]
Virgil’s friendHoracealso made occasional use of acrostics, but apparently much less than Virgil. Examples areDISCE‘learn!’ (Odes1.18.11–15) (forming a gamma acrostic with the worddiscernunt'they discern'in line 18) andOTIA'leisure'inSatires1.2.7–10, which appears just after Horace has been advised to take a rest from writing satire. The acrosticOTIAalso occurs inOvid,Metamorphoses15.478–81, a passage describing the return of the peace-loving kingNuma Pompiliusto Rome.[16]Odes4.2, which starts with the wordPindarum'(the poet) Pindar' has next to it the truncated acrostic PIN in a gamma formation.[17]In the first poem of Horace'sEpodes(which were also known asIambi'iambics'), the first two lines beginibis ... amice, and it has been suggested that these words were deliberately chosen so that their initial letters IBI ... AM could be rearranged to read IAMBI.[18]
Towards the end of the 2nd century AD[19]a verse-summary of the plot was added to each of the plays ofPlautus. Each of these has an acrostic of the name of the play, for example:
The 3rd century AD poetCommodianwrote a series of 80 short poems on Christian themes calledInstructiones. Each of these is fully acrostic (with the exception of poem 60, where the initial letters are in alphabetical order), starting withPRAEFATIO‘preface’ andINDIGNATIO DEI‘the wrath of God’. The initials of poem 80, read backwards, giveCOMMODIANUS MENDICUS CHRISTI‘Commodian, Christ’s beggar’.
Chapters 2–5 of Book 12 in theRight Ginza, aMandaic text, are acrostic hymns, with each stanza ordered according to a letter of theMandaic alphabet.[20]
There is an acrostic secreted in the Dutch national anthemWilhelmus[21](William): the first letters of its fifteen stanzas spell WILLEM VAN NASSAU. This was one of the hereditary titles of William of Orange (William the Silent), who introduces himself in the poem to the Dutch people. This title also returned in the 2010speech from the throne, during theDutch State Opening of Parliament, whose first 15 lines also formed WILLEM VAN NASSOV.
Vladimir Nabokov's short story "The Vane Sisters" is known for its acrostic final paragraph, which contains a message from beyond the grave.
In 1829,Edgar Allan Poewrote an acrostic and simply titled itAn Acrostic, possibly dedicated to his cousin Elizabeth Rebecca Herring (though the initials L.E.L. refer toLetitia Elizabeth Landon):
Elizabeth it is in vain you say
"Love not" — thou sayest it in so sweet a way:
In vain those words from thee or L.E.L.
Zantippe's talents had enforced so well:
Ah! if that language from thy heart arise,
Breath it less gently forth — and veil thine eyes.
Endymion, recollect, when Luna tried
To cure his love — was cured of all beside —
His folly — pride — and passion — for he died.
In 1939,Rolfe Humphriesreceived a lifelong ban from contributing toPoetrymagazineafter he penned and attempted to publish "a poem containing a concealed scurrilous phrase aimed at a well-known person", namelyNicholas Murray Butler. The poem, entitled "An ode for a Phi Beta Kappa affair", was inunrhymed iambic pentameter, contained oneclassicalreferenceper line, and ran as follows:
Niobe's daughters yearn to the womb again,
Ionians bright and fair, to the chill stone;
Chaos in cry, Actaeon's angry pack,
Hounds of Molossus, shaggy wolves driven
Over Ampsanctus' vale and Pentheus' glade,
Laelaps and Ladon, Dromas, Canace,
As these in fury harry brake and hill
So the great dogs of evil bay the world.
Memory, Mother of Muses, be resigned
Until King Saturn comes to rule again!
Remember now no more the golden day
Remember now no more the fading gold,
Astraea fled, Proserpina in hell;
You searchers of the earth be reconciled!
Because, through all the blight of human woe,
Under Robigo's rust, and Clotho's shears,
The mind of man still keeps its argosies,
Lacedaemonian Helen wakes her tower,
Echo replies, and lamentation loud
Reverberates from Thrace to Delos Isle;
Itylus grieves, for whom the nightingale
Sweetly as ever tunes her Daulian strain.
And over Tenedos the flagship burns.
How shall men loiter when the great moon shines
Opaque upon the sail, and Argive seas
Rear like blue dolphins their cerulean curves?
Samos is fallen, Lesbos streams with fire,
Etna in rage, Canopus cold in hate,
Summon the Orphic bard to stranger dreams.
And so for us who raise Athene's torch.
Sufficient to her message in this hour:
Sons of Columbia, awake, arise!
Acrostic: Nicholas Murray Butler is a horse's ass.
In October 2009,CaliforniagovernorArnold Schwarzeneggersent anoteto assemblymanTom Ammianoin which the first letters of lines 3-9 spell "Fuck You"; Schwarzenegger claimed that the acrostic message was coincidental, which mathematicians Stephen Devlin and Philip Stark disputed as statistically implausible.[22][23][24]
In January 2010,Jonathan I. Schwartz, the CEO ofSun Microsystems, sent an email to Sun employees on the completion of the acquisition of Sun byOracle Corporation. The initial letters of the first seven paragraphs spelled "BeatIBM".[25]
James May, former presenter on the BBC programTop Gear, was fired from the publicationAutocarfor spelling out a message using the large redinitialat the beginning of each review in the publication'sRoad Test Yearbook Issuefor 1992. Properly punctuated, the message reads: "So you think it's really good, yeah? You should try making the bloody thing up; it's a real pain in the arse."[26]
In the 2012 third novel of hisCaged Flower[27]series, author Cullman Wallace used acrostics as a plot device. The parents of a protagonist send e-mails where the first letters of the lines reveal their situation in a concealed message.
On 19 August 2017, the members of presidentDonald Trump'sCommittee on Arts and Humanitiesresigned in protest over his response to theUnite the Right rallyincident in Charlottesville, Virginia. The members' letter of resignation contained the acrostic "RESIST" formed from the first letter of each paragraph.[28]
On 23 August 2017,University of California, Berkeleyenergy professor Daniel Kammen resigned from his position as a State Department science envoy with a resignation letter in which the word "IMPEACH" was spelled out by the first letters of each paragraph.[29]
In the video gameZorkthe first letters of sentences in a prayer spelled "Odysseus" which was a possible solution to aCyclopsencounter in another room.[30]
On 4 May 2024,Noelia Voigtresigned asMiss USA 2023with a resignation letter containing an acrostic spelling out "I am silenced".[31]
A double acrostic may have words at the beginning and end of its lines, as in this example, on the name of Stroud, by Paul Hansford:
The first letters make up the acrostic and the last letters the telestich; in this case they are identical.
Another example of a double acrostic is the first-century LatinSator Square.[32]
As well as being a double acrostic, the square contains severalpalindromes, and it can be read as a 25-letter palindromic sentence (of an obscure meaning).[33][34]
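The square's structure is easy to verify mechanically; a short Python sketch that checks that the well-known arrangement reads the same along rows and columns and that the full 25-letter string is a palindrome:

square = ["SATOR", "AREPO", "TENET", "OPERA", "ROTAS"]

rows = square
columns = ["".join(row[i] for row in square) for i in range(5)]
flat = "".join(square)

print(rows == columns)      # True: the acrostic and the telestich coincide with the rows themselves
print(flat == flat[::-1])   # True: the 25 letters read the same forwards and backwards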
The poemBehold, O God!, by William Browne,[35]can be considered a complex kind of acrostic.
In the manuscript, some letters are capitalized and written extra-large, non-italic, and in red, and the lines are shifted left or right and internally spaced out as necessary to position the red letters within three crosses that extend through all the lines of the poem.
The letters within each cross spell out a verse from theNew Testament:
The "INRI" at the top of the middle cross stands forIēsus Nazarēnus,Rēx Iūdaeōrum, Latin for "Jesus of Nazareth, King of the Jews" (John 19:19). The three quotes represent the three figures crucified on Golgotha, as recorded in the gospels of Matthew and Luke.
(The text of the manuscript shown differs significantly from the text usually published, including in the reference.[35]Many of the lines have somewhat different wording; and while the acrostics are the same as far as they go, the published text is missing the last four lines, truncating the acrostics to "Lord, remember me when thou comest into thy kin", "O God, my God, why hast thou forsak", and "If thou art the Christ, save thyself". The manuscript text is printed below, first as normal poetry, then spaced and bolded to bring out the acrostics. The word "Thou" in line 8 is not visible in this photograph, but is in the published version and is included in a cross-stitch sampler of the poem from 1793.[36])
Behold, O God! In rivers of my tears
I come to thee! bow down thy blessed ears
To hear my Plaint; and let thine eyes which keep
Continual watch behold a Sinner weep:
Let not, O God my God my Sins, tho' great,
And numberless, between thy Mercy's-Seat
And my poor Soul have place; since we are taught,
[Thou] Lord, remember'st thyne, if Thou art sought.
I come not, Lord, with any other merit
Than what I by my Saviour Christ inherit:
Be then his wounds my balm— his stripes my Bliss;
His thorns my crown; my death be blest in his.
And thou, my blest Redeemer, Saviour, God,
Quit my accounts, withhold thy vengeful rod!
O beg for me, my hopes on Thee are set;
And Christ forgive me, since thou'st paid my debt
The living font, the Life, the Way, I know,
And but to thee, O whither shall I go?
All other helps are vain: grant thine to me,
For in thy cross my saving health I see.
O hearken then, that I with faith implore,
Lest Sin and Death sink me to rise no more.
Lastly, O God, my course direct and guide,
In Death defend me, that I never slide;
And at Doomsday let me be rais'd again,
To live with thee sweet Jesus say, Amen.
|
https://en.wikipedia.org/wiki/Acrostic
|
Apassenger name record(PNR) is a record in the database of acomputer reservation system(CRS) that contains the itinerary for a passenger or a group of passengers travelling together. The concept of a PNR was first introduced byairlinesthat needed to exchange reservation information in case passengers required flights of multiple airlines to reach their destination ("interlining"). For this purpose,IATAandATAhave defined standards for interline messaging of PNR and other data through the "ATA/IATA Reservations Interline Message Procedures - Passenger" (AIRIMP). There is no general industry standard for the layout and content of a PNR. In practice, each CRS or hosting system has its own proprietary standards, although common industry needs, including the need to map PNR data easily to AIRIMP messages, has resulted in many general similarities in data content and format between all of the major systems.
When a passenger books an itinerary, the travel agent or travel website user will create a PNR in the computer reservation system it uses. This is typically one of the largeglobal distribution systems, such asAmadeus,Sabre, orTravelport(Apollo, Galileo, and Worldspan) but if the booking is made directly with an airline the PNR can also be in the database of the airline's CRS. This PNR is called the Master PNR for the passenger and the associated itinerary. The PNR is identified in the particular database by arecord locator.
When portions of the travel are not provided by the holder of the master PNR, then copies of the PNR information are sent to the CRSs of the airlines that will be providing transportation. These CRSs will open copies of the original PNR in their own database to manage the portion of the itinerary for which they are responsible. Many airlines have their CRS hosted by one of the GDSs, which allows sharing of the PNR.
The record locators of the copied PNRs are communicated back to the CRS that owns the Master PNR, so all records remain tied together. This allows exchanging updates of the PNR when the status of trip changes in any of the CRSs.
Although PNRs were originally introduced for air travel, airlines systems can now also be used for bookings ofhotels,car rental, airport transfers, andtraintrips.
From a technical point of view, there are five parts of a PNR required before the booking can be completed. They are:
Other information, such as a timestamp and the agency'spseudo-city code, will go into the booking automatically. All entered information will be retained in the "history" of the booking.
Once the booking has been completed to this level, the CRS will issue a unique all alpha or alpha-numeric record locator, which will remain the same regardless of any further changes made (except if a multi-person PNR is split). Each airline will create their own booking record with a unique record locator, which, depending on service level agreement between the CRS and the airline(s) involved, will be transmitted to the CRS and stored in the booking. If an airline uses the same CRS as the travel agency, the record locator will be the same for both.
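As an illustration only, a minimal sketch in Python of how such a booking record might be modelled; the field names and values are hypothetical and do not follow any particular CRS's internal format:

from dataclasses import dataclass, field
from typing import List

@dataclass
class FlightSegment:
    airline: str
    flight_number: str
    origin: str
    destination: str
    departure: str          # ISO date-time string, e.g. "2024-07-01T09:30"

@dataclass
class PassengerNameRecord:
    record_locator: str                 # e.g. a 6-character alphanumeric code issued by the CRS
    passenger_names: List[str]
    itinerary: List[FlightSegment]
    contact: str
    history: List[str] = field(default_factory=list)   # every change is retained in the booking history

pnr = PassengerNameRecord(
    record_locator="X1Y2Z3",
    passenger_names=["DOE/JOHN MR"],
    itinerary=[FlightSegment("XX", "123", "AMS", "JFK", "2024-07-01T09:30")],
    contact="AGENCY 020-555-0100",
)
pnr.history.append("PNR created by hypothetical agency ABC")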
A considerable amount of other information is often desired by both the airlines and the travel agent to ensure efficient travel. This includes:
In more recent times,[when?] many governments have required the airline to provide further information intended to assist investigators in tracing criminals or terrorists. These include:
The components of a PNR are identified internally in a CRS by a one-character code. This code is often used when creating a PNR via direct entry into a terminal window (as opposed to using a graphical interface). The following codes are standard across all CRSs based on the original PARS system:
The majority of airlines and travel agencies choose to host theirPNRdatabases with acomputer reservations system(CRS) orglobal distribution system(GDS) company such asSabre,Galileo,WorldspanandAmadeus.[2]
Some privacy organizations are concerned at the amount of personal data that a PNR might contain. While the minimum data for completing a booking is quite small, a PNR will typically contain much more information of a sensitive nature.
This will include the passenger's full name, date of birth, home and work address, telephone number, e-mail address, credit card details, IP address if booked online, as well as the names and personal information of emergency contacts.
Designed to "facilitate easy global sharing of PNR data," the CRS-GDS companies "function both as data warehouses and data aggregators, and have a relationship to travel data analogous to that of credit bureaus to financial data.".[3]A canceled or completed trip does not erase the record since "copies of the PNRs are ‘purged’ from live to archival storage systems, and can be retained indefinitely by CRSs, airlines, and travel agencies."[4]Further, CRS-GDS companies maintain web sites that allow almost unrestricted access to PNR data – often, the information is accessible by just the reservation number printed on the ticket.
Additionally, "[t]hrough billing, meeting, and discount eligibility codes, PNRs contain detailed information on patterns of association between travelers. PNRs can contain religious meal preferences and special service requests that describe details of physical and medical conditions (e.g., "Uses wheelchair, can control bowels and bladder") – categories of information that have special protected status in the European Union and some other countries as“sensitive” personal data.”[5][6]Despite the sensitive character of the information they contain, PNRs are generally not recognized as deserving the same privacy protection afforded to medical and financial records. Instead, they are treated as a form of commercial transaction data.[5]
On 16 January 2004, theArticle 29 Working Partyreleased theirOpinion 1/2004 (WP85)on the level of PNR protection ensured in Australia for the transmission of Passenger Name Record data from airlines.
Customs applies a general policy of non-retention for these data. For those 0.05% to 0.1% of passengers who are referred to Customs for further evaluation, the airline PNR data are temporarily retained, but not stored, pending resolution of the border evaluation. After resolution, their PNR data are erased from the PC of the Customs PAU officer concerned and are not entered into Australian databases.
In 2010 the European Commission'sDirectorate-General for Justice, Freedom and Securitywas split in two. The resulting bodies were theDirectorate-General for Justice (European Commission)and theDirectorate-General for Home Affairs (European Commission).
On 4 May 2011,Stefano Manservisi, Director-General at theDirectorate-General for Home Affairs (European Commission)wrote to theEuropean Data Protection Supervisor(EDPS) with regards to a PNR sharing agreement with Australia,[7]a close ally of the US and signatory to theUKUSA Agreementonsignals intelligence.
The EDPS responded on 5 May inLetter 0420 D845:[7]
I am writing to you in reply to your letter of 4 May concerning the two draft Proposals for Council Decisions on (i) the conclusion and (ii) the signature of the Agreement between the European Union and Australia on the processing and transfer of Passenger Name Record (PNR) data by air carriers to the Australian Customs and Border Protection Service. We understand that the consultation of the EDPS takes place in the context of a fast track procedure. However, we regret that the time available for us to analyse the Proposal is reduced to a single day. Such a deadline precludes the EDPS from being able to exercise its competences in an appropriate way, even in the context of a file which we have been closely following since 2007.
TheArticle 29 Working PartydocumentOpinion 1/2005 on the level of protection ensured in Canada for the transmission of Passenger Name Record and Advance Passenger Information from airlines (WP 103), 19 January 2005, offers information on the nature of PNR agreements withCanada.Archived2014-11-27 at theWayback Machine.
|
https://en.wikipedia.org/wiki/Passenger_name_record
|
Multi Expression Programming (MEP) is an evolutionary algorithm for generating mathematical functions that describe a given set of data. MEP is a Genetic Programming variant that encodes multiple solutions in the same chromosome. The MEP representation is not fixed (multiple representations have been tested). In the simplest variant, MEP chromosomes are linear strings of instructions, a representation inspired by three-address code. MEP's strength is its ability to encode multiple solutions of a problem in the same chromosome, which lets the search explore larger zones of the search space. For most problems this advantage comes with no running-time penalty compared with genetic programming variants that encode a single solution per chromosome.[1][2][3]
MEP chromosomes are arrays of instructions represented inThree-address codeformat.
Each instruction contains a variable, a constant, or a function. If the instruction is a function, then the arguments (given as instruction's addresses) are also present.
Here is a simple MEP chromosome (labels on the left side are not a part of the chromosome):
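For illustration, one such chromosome, with terminal symbols a and b and function symbols + and *, could be the following hypothetical example (the operands of a function instruction refer to the indices of earlier instructions):

0: a
1: b
2: + 0, 1
3: * 2, 2
4: + 3, 0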
When the chromosome is evaluated it is unclear which instruction will provide the output of the program. In many cases, a set of programs is obtained, some of them being completely unrelated (they do not have common instructions).
For the above chromosome, here is the list of possible programs obtained during decoding:
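For the hypothetical chromosome sketched above, decoding yields one candidate expression per instruction:

E0 = a
E1 = b
E2 = a + b
E3 = (a + b) * (a + b)
E4 = (a + b) * (a + b) + a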
Each instruction is evaluated as a possible output of the program.
The fitness (or error) is computed in a standard manner. For instance, in the case ofsymbolic regression, the fitness is the sum of differences (in absolute value) between the expected output (called target) and the actual output.
Which expression will represent the chromosome? Which one will give the fitness of the chromosome?
In MEP, the best of them (which has the lowest error) will represent the chromosome. This is different from other GP techniques: InLinear genetic programmingthe last instruction will give the output. InCartesian Genetic Programmingthe gene providing the output is evolved like all other genes.
Note that, for many problems, this evaluation has the same complexity as in the case of encoding a single solution in each chromosome. Thus, there is no penalty in running time compared to other techniques.
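A minimal sketch of this evaluation scheme for a symbolic-regression problem in Python, assuming the tuple encoding below for the hypothetical chromosome above; the test cases and targets are invented for illustration, and real MEP implementations differ in details such as constant handling and the genetic operators:

import operator

FUNCTIONS = {"+": operator.add, "*": operator.mul}

def evaluate_chromosome(chromosome, cases, targets):
    # Every instruction is evaluated as a possible program output; the chromosome's
    # fitness is the error of its best instruction (sum of absolute differences).
    errors = [0.0] * len(chromosome)
    for case, expected in zip(cases, targets):
        values = []
        for gene in chromosome:
            if gene[0] in FUNCTIONS:            # function gene: apply it to two earlier results
                symbol, i, j = gene
                values.append(FUNCTIONS[symbol](values[i], values[j]))
            else:                               # terminal gene: read an input variable
                values.append(case[gene[0]])
        for k, value in enumerate(values):
            errors[k] += abs(value - expected)
    best = min(range(len(errors)), key=errors.__getitem__)
    return best, errors[best]

# The hypothetical chromosome from above: a, b, a+b, (a+b)*(a+b), (a+b)*(a+b)+a
chromosome = [("a",), ("b",), ("+", 0, 1), ("*", 2, 2), ("+", 3, 0)]
cases = [{"a": 1.0, "b": 2.0}, {"a": 2.0, "b": 3.0}]
targets = [9.0, 25.0]                           # this toy data happens to follow (a + b)**2
print(evaluate_chromosome(chromosome, cases, targets))   # -> (3, 0.0): instruction 3 fits perfectly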
MEPX is a cross-platform (Windows, macOS, and Linux Ubuntu) free software for the automatic generation of computer programs. It can be used for data analysis, particularly for solvingsymbolic regression,statistical classificationandtime-seriesproblems.
Libmep is a free and open-source library implementing the Multi Expression Programming technique. It is written in C++.
hmep is another open-source library implementing the Multi Expression Programming technique, written in the Haskell programming language.
|
https://en.wikipedia.org/wiki/Multi_expression_programming
|
Amateur radio, also known asham radio, is the use of theradio frequencyspectrumfor purposes ofnon-commercialexchange of messages,wirelessexperimentation, self-training, private recreation,Radiosport,contesting, andemergency communications.[1]The term"radio amateur"is used to specify"a duly authorized person interested in radioelectric practice with a purely personal aim and withoutpecuniaryinterest"[2](either direct monetary or other similar reward); and to differentiate it fromcommercial broadcasting, public safety (police and fire), ortwo-way radioprofessional services (maritime, aviation, taxis, etc.).
The amateur radio service (amateur serviceandamateur-satellite service) is established by theInternational Telecommunication Union(ITU) through their recommendedRadio Regulations. National governments regulate technical and operational characteristics of transmissions and issue individual station licenses with a unique identifyingcall sign, which must be used in all transmissions.Amateur operatorsmust hold anamateur radio licenseobtained by successfully passing an official examination that demonstrates adequate technical and theoretical knowledge of amateur radio, electronics, and related topics essential for the hobby; it also assesses sufficient understanding of the laws and regulations governing amateur radio within the country issuing the license.
Radio amateurs are privileged to transmit on a limited specific set of frequency bands – theamateur radio bands– allocated internationally, throughout theradio spectrum, but within these bands are allowed to transmit on anyfrequency; although on some of those frequencies they are limited to one or a few of a variety of modes of voice, text, image, anddata communications. This enables communication across a city, region, country, continent, the world, or even into space. In many countries, amateur radio operators may also send, receive, or relay radio communications betweencomputersortransceiversconnected to securevirtual private networkson theInternet.
Amateur radio is officially represented and coordinated by the International Amateur Radio Union (IARU), which is organized in three regions and has as its members the national amateur radio societies which exist in most countries. According to a 2011 estimate by the ARRL (the U.S. national amateur radio society), two million people throughout the world are regularly involved with amateur radio.[3] About 830,000 amateur radio stations are located in IARU Region 2 (the Americas), followed by IARU Region 3 (South and East Asia and the Pacific Ocean) with about 750,000 stations. Significantly fewer, about 400,000 stations, are located in IARU Region 1 (Europe, Middle East, CIS, Africa).
The origins of amateur radio can be traced to the late 19th century, but amateur radio as practised today began in the early 20th century. TheFirst Annual Official Wireless Blue Book of the Wireless Association of America, produced in 1909, contains a list of amateur radio stations.[4]This radiocallbooklistswireless telegraphstations in Canada and the United States, including 89 amateur radio stations. As with radio in general, amateur radio was associated with various amateur experimenters and hobbyists. Amateur radio enthusiasts have significantly contributed toscience,engineering, industry, andsocial services. Research by amateur operators has founded new industries,[5]built economies,[6]empowered nations,[7]and saved lives in times of emergency.[8][9]Ham radio can also be used in the classroom to teach English, map skills, geography, math, science, and computer skills.[10]
The term"ham"was first apejorativeterm used in professionalwired telegraphyduring the 19th century, to mock operators with poorMorse code-sending skills ("ham-fisted").[11][12][13][14]This term continued to be used after the invention of radio, and the proliferation of amateur experimentation with wireless telegraphy; among land- and sea-based professional radio telegraphers,"ham"amateurs were considered a nuisance. The use of"ham"meaning"amateurish or unskilled"survives today sparsely in other disciplines (e.g."ham actor").
The amateur radio community subsequently reclaimed the word as a label of pride,[15]and by the mid-20th century it had lost its pejorative meaning. Although not an acronym or initialism, it is occasionally written as "HAM" in capital letters.
The many facets of amateur radio attract practitioners with a wide range of interests. Many amateurs begin with a fascination with radio communication and then combine other personal interests to make pursuit of the hobby rewarding. Some of the focal areas amateurs pursue includeradio contesting,radio propagationstudy,public service communication,technical experimentation, andcomputer networking.
Hobbyist radio enthusiasts employ avariety of transmission methods for interaction. The primary modes for vocal communications arefrequency modulation(FM) andsingle sideband(SSB). FM is recognized for its superior audio quality, whereas SSB is more efficient both for long-range communication and for limitedbandwidthconditions.[16]The most efficient for both distance and limited bandwidth remains CW, and lately, some digital modes.
Radiotelegraphy using International Morse code, also known as "CW" from "continuous wave", is the wireless extension of landline (wired) telegraphy first developed by Samuel Morse, and greatly revised by Alfred Vail, Friedrich Gerke, and a committee of the ITU; in one revision or another, it dates to the earliest days of radio. Although computer-based (digital) modes and methods have largely replaced CW for commercial and military applications, many amateur radio operators still use the CW mode – particularly on the shortwave bands and for experimental work, such as Earth–Moon–Earth communication – because of its inherent advantage in signal-to-noise ratio. Morse, using internationally agreed message encodings such as the Q code, enables communication between amateurs who speak different languages. It is also popular with homebrewers and in particular with "QRP" or very-low-power enthusiasts, as CW-only transmitters are simpler to construct, and the human ear-brain signal processing system can pull weak CW signals out of the noise where voice signals would be effectively inaudible. Similarly, the "legacy" amplitude modulation (AM) mode is popular with some home constructors because of its simpler modulation-demodulation circuitry; it is also pursued by many vintage amateur radio enthusiasts and aficionados of vacuum tube technology.
Demonstrating a proficiency in Morse code was for many years a requirement to obtain an amateur license to transmit on frequencies below 30 MHz. Following changes in international regulations in 2003, countries are no longer required to demand proficiency.[17]The United StatesFederal Communications Commission, for example, phased out this requirement for all license classes on 23 February 2007.[18][19]
Modern personal computers have encouraged the use ofdigitalmodes such asradioteletype(RTTY) which previously required cumbersome mechanical equipment.[20]Hams led the development ofpacket radioin the 1970s, which has employed protocols such asAX.25andTCP/IP. Specialized digital modes such asPSK31allow real-time, low-power communications on the shortwave bands but have been losing favor in place of newer digital modes such asFT8.
Radio over IP, or RoIP, is similar toVoice over IP(VoIP), but augments two-way radio communications rather than telephone calls.EchoLinkusing VoIP technology has enabled amateurs to communicate through local Internet-connected repeaters and radio nodes,[21]whileIRLPhas allowed the linking of repeaters to provide greater coverage area.
Automatic link establishment (ALE) has enabled continuous amateur radio networks to operate on thehigh frequencybands with global coverage. Other modes, such as FSK441 using software such asWSJT, are used for weak signal modes includingmeteor scatterandmoonbouncecommunications.[22]
Fast scanamateur televisionhas gained popularity as hobbyists adapt inexpensive consumer video electronics like camcorders and video cards inPCs. Because of the widebandwidthand stable signals required, amateur television is typically found in the70 cm(420–450 MHz) wavelength range, though there is also limited use on33 cm(902–928 MHz),23 cm(1240–1300 MHz) and shorter. These requirements also effectively limit the signal range to between 20 and 60 miles (30–100 km).
Linkedrepeatersystems, however, can allow transmissions ofVHFand higher frequencies across hundreds of miles.[23]Repeaters are usually located on heights of land or on tall structures, and allow operators to communicate over hundreds of miles using hand-held or mobiletransceivers. Repeaters can also be linked together by using otheramateur radio bands,landline, or theInternet.
Amateur radio satellitescan be accessed, some using a hand-held transceiver (HT), even, at times, using the factory "rubber duck" antenna.[24]Hams also use themoon, theaurora borealis, and the ionized trails ofmeteorsas reflectors of radio waves.[25]Hams can also contact theInternational Space Station(ISS) because manyastronautsare licensed as amateur radio operators.[26][27]
Amateur radio operators use theiramateur radio stationto make contacts with individual hams as well as participate in round-table discussion groups or "rag chew sessions" on the air. Some join in regularly scheduled on-air meetings with other amateur radio operators, called "nets" (as in "networks"), which are moderated by a station referred to as "Net Control".[28]Nets can allow operators to learn procedures for emergencies, be an informal round table, or cover specific interests shared by a group.[29]
Amateur radio operators, using battery- or generator-powered equipment, often provide essential communications services when regular channels are unavailable due to natural disaster or other disruptive events.[30]
Many amateur radio operators participate in radio contests, during which an individual or team of operators typically seek to contact and exchange information with as many other amateur radio stations as possible in a given period of time. In addition to contests, a number ofamateur radio operating awardschemes exist, sometimes suffixed with "on the Air", such asSummits on the Air, Islands on the Air,Worked All StatesandJamboree on the Air.
Amateur radio operators may also act ascitizen scientistsfor propagation research andatmospheric science.[31]
Radio transmission permits are closely controlled by nations' governments because radio waves propagate beyond national boundaries, and therefore radio is of international concern.[32]
Both the requirements for and privileges granted to a licensee vary from country to country, but generally follow the international regulations and standards established by theInternational Telecommunication Union[33]andWorld Radio Conferences.
All countries that license citizens to use amateur radio require operators to display knowledge and understanding of key concepts, usually by passing an exam.[34]The licenses grant hams the privilege to operate in larger segments of theradio frequencyspectrum, with a wider variety of communication techniques, and with higher power levels relative to unlicensed personal radio services (such asCB radio,FRS, andPMR446), which require type-approved equipment restricted in mode, range, and power.[35]
Amateur licensing is a routine civil administrative matter in many countries. Amateurs therein must pass an examination to demonstrate technical knowledge, operating competence, and awareness of legal and regulatory requirements, in order to avoid interfering with other amateurs and other radio services.[36]A series of exams are often available, each progressively more challenging and granting more privileges: greater frequency availability, higher power output, permitted experimentation, and, in some countries, distinctive call signs.[37][38]Some countries, such as the United Kingdom and Australia, have begun requiring a practical assessment in addition to the written exams in order to obtain a beginner's license, which they call a Foundation License.[39]
In most countries, an operator will be assigned acall signwith their license.[40]In some countries, a separate "station license" is required for any station used by an amateur radio operator. Amateur radio licenses may also be granted to organizations or clubs. In some countries, hams were allowed to operate only club stations.[41]
An amateur radio license is valid only in the country where it is issued or in another country that has a reciprocal licensing agreement with the issuing country.[42][43]
In some countries, an amateur radio license is necessary in order to purchase or possess amateur radio equipment.[44]
Amateur radio licensing in the United Statesexemplifies the way in which some countries[which?]award different levels of amateur radio licenses based on technical knowledge: three sequential levels of licensing exams (Technician Class, General Class, and Amateur Extra Class) are currently offered, which allow operators who pass them access to larger portions of the Amateur Radio spectrum and more desirable (shorter) call signs. An exam, authorized by the Federal Communications Commission (FCC), is required for all levels of the Amateur Radio license. These exams are administered by Volunteer Examiners, accredited by the FCC-recognized Volunteer Examiner Coordinator (VEC) system. The Technician Class and General Class exams consist of 35 multiple-choice questions, drawn randomly from a pool of at least 350. To pass, 26 of the 35 questions must be answered correctly.[45]The Extra Class exam has 50 multiple choice questions (drawn randomly from a pool of at least 500), 37 of which must be answered correctly.[45]The tests cover regulations, customs, and technical knowledge, such as FCC provisions, operating practices, advanced electronics theory, radio equipment design, and safety. Morse Code is no longer tested in the U.S. Once the exam is passed, the FCC issues an Amateur Radio license which is valid for ten years. Studying for the exam is made easier because the entire question pools for all license classes are posted in advance. The question pools are updated every four years by the National Conference of VECs.[45]
Prospective amateur radio operators are examined on understanding of the key concepts of electronics, radio equipment, antennas,radio propagation,RFsafety, and the radio regulations of the government granting the license.[1]These examinations are sets of questions typically posed in either a short answer or multiple-choice format. Examinations can be administered bybureaucrats, non-paid certified examiners, or previously licensed amateur radio operators.[1]
The ease with which an individual can acquire an amateur radio license varies from country to country. In some countries, examinations may be offered only once or twice a year in the national capital and can be inordinately bureaucratic (for example in India) or challenging because some amateurs must undergo difficult security approval (as inIran). Currently, onlyYemenandNorth Koreado not issue amateur radio licenses to their citizens.[46][47]Some developing countries, especially those in Africa, Asia, andLatin America, require the payment of annual license fees that can be prohibitively expensive for most of their citizens. A few small countries may not have a national licensing process and may instead require prospective amateur radio operators to take the licensing examinations of a foreign country. In countries with the largest numbers of amateur radio licensees, such as Japan, the United States, Thailand, Canada, and most of the countries in Europe, there are frequent license examination opportunities in major cities.
Granting a separate license to a club or organization generally requires that an individual with a current and valid amateur radio license who is in good standing with the telecommunications authority assumes responsibility for any operations conducted under the club license or club call sign.[38]A few countries may issue special licenses to novices or beginners that do not assign the individual a call sign but instead require the newly licensed individual to operate from stations licensed to a club or organization for a period of time before a higher class of license can be acquired.[1]
A reciprocal licensing agreement between two countries allows bearers of an amateur radio license in one country, under certain conditions, to legally operate an amateur radio station in the other country without having to obtain a license from the country being visited; alternatively, the bearer of a valid license in one country can receive a separate license and a call sign in the other country under a mutually agreed reciprocal licensing arrangement. Reciprocal licensing requirements vary from country to country. Some countries have bilateral or multilateral reciprocal operating agreements allowing hams to operate within their borders with a single set of requirements. Some countries lack reciprocal licensing systems. Others use international bodies such as the Organization of American States to facilitate licensing reciprocity.[48]
When traveling abroad, visiting amateur operators must follow the rules of the country in which they wish to operate. Some countries have reciprocalinternational operatingagreements allowing hams from other countries to operate within their borders with just their home country license. Other host countries require that the visiting ham apply for a formal permit, or even a new host country-issued license, in advance.
The reciprocal recognition of licenses frequently not only depends on the involved licensing authorities, but also on the nationality of the bearer. As an example, in the US, foreign licenses are recognized only if the bearer does not have US citizenship and holds no US license (which may differ in terms of operating privileges and restrictions). Conversely, a US citizen may operate in Canada under reciprocal agreements, but a non-US citizen holding a US license may not.
Many people start their involvement in amateur radio on social media or by finding a local club. Clubs often provide information about licensing, local operating practices, and technical advice. Newcomers also often study independently by purchasing books or other materials, sometimes with the help of a mentor, teacher, or friend. In North America, established amateurs who help newcomers are often referred to as "Elmers", as coined by Rodney Newkirk (W9BRD),[49]within the ham community.[50][51]In addition, many countries have national amateur radio societies which encourage newcomers and work with government communications regulation authorities for the benefit of all radio amateurs. The oldest of these societies is theWireless Institute of Australia, formed in 1910; other notable societies are theRadio Society of Great Britain, theAmerican Radio Relay League,Radio Amateurs of Canada,Bangladesh NGOs Network for Radio and Communication, theNew Zealand Association of Radio TransmittersandSouth African Radio League. (SeeCategory:Amateur radio organizations)
An amateur radio operator uses acall signon the air to legally identify the operator or station.[52]In some countries, the call sign assigned to the station must always be used, whereas in other countries, the call sign of either the operator or the station may be used.[53]In certain jurisdictions, an operator may also select a"vanity"call sign although these must also conform to the issuing government's allocation and structure used for amateur radio call signs.[54]Some jurisdictions require a fee to obtain a vanity call sign; in others, such as the UK, a fee is not required and the vanity call sign may be selected when the license is applied for. The FCC in the U.S. discontinued its fee for vanity call sign applications in September 2015, but reinstated it at $35 in 2022.[55]
Call sign structure as prescribed by the ITU consists of three parts which break down as follows, using the call signZS1NATas an example: the prefixZSidentifies the country that issued the license (in this case South Africa), the digit1typically denotes a region or license class within that country, and the suffixNATidentifies the individual station.
The combination of the three parts identifies the specific transmitting station, and the station's identification (its call sign) is determined by the license held by its operator. In the case of commercial stations and amateur club stations, the operator is a corporation; in the case of amateur radio operators, the license-holder is a resident of the country identified by the first part of the call sign.
Many countries do not follow the ITU convention for the second-part digit. In the United Kingdom the original call signsG0xxx,G2xxx,G3xxx,G4xxxbelonged to Full (A) Licence holders, along with the lastM0xxxfull call signs issued by theCity & Guildsexamination authority in December 2003. Full Licences were also granted to the original (B) Licence holders withG1xxx,G6xxx,G7xxx,G8xxxand, from 1991 onward,M1xxxcall signs. The newer three-level Intermediate Licence holders are assigned2E0xxxand2E1xxx, and the basic Foundation Licence holders are granted call signsM3xxx,M6xxxorM7xxx.[56]
Instead of using numbers, in the U.K. a second letter after the initial 'G' or 'M' identifies the station's location; for example, the call signG7OOEbecomesGM7OOEandM0RDMbecomesMM0RDMwhen the licence holder operates their station in Scotland. The prefixesGMandMMdenote Scotland,GWandMWWales,GIandMINorthern Ireland,GDandMDthe Isle of Man,GJandMJJersey, andGUandMUGuernsey. Intermediate licence call signs are slightly different: they begin2z0and2z1, where thezis replaced with one of the country letters above, so for example2M0and2M1are Scotland,2W0and2W1are Wales, and so on. England is the exception: its letter 'E' appears only in intermediate-level call signs such as2E0and2E1, while foundation and full licence call signs beginning 'G' or 'M' never use it.[57]
In the United States, for non-vanity licenses, the numeral indicates the geographical district the holder resided in when the license was first issued. Prior to 1978, US hams were required to obtain a new call sign if they moved out of their geographic district.
In Canada, call signs start withVA,VE,VY,VO, andCY. Call signs starting with 'V' are followed by a number indicating the political region, whereas the prefixCYindicates geographic islands. PrefixesVA1andVE1are used forNova Scotia;VA2&VE2forQuebec;VA3&VE3forOntario;VA4&VE4forManitoba;VA5&VE5forSaskatchewan;VA6&VE6forAlberta;VA7&VE7forBritish Columbia;VE8for theNorthwest Territories;VE9forNew Brunswick;VY0forNunavut;VY1for theYukon;VY2forPrince Edward Island;VO1forNewfoundland; andVO2forLabrador.CYis for amateurs operating fromSable Island(CY0) orSt. Paul Island(CY9). Special permission is required to access either of these: fromParks Canadafor Sable and theCoast Guardfor St. Paul. The last two or three letters of the call sign are typically the operator's choice (upon completing the licensing test, the ham lists three preferred options). Two-letter call sign suffixes require a ham to have already been licensed for 5 years. Call signs in Canada can be requested for a fee.
Also, for smaller geopolitical entities, the digit at the second or third character might be part of the country identification. For example,VP2xxxis in the British West Indies, which is subdivided intoVP2ExxAnguilla,VP2MxxMontserrat, andVP2VxxBritish Virgin Islands.VP5xxxis in the Turks and Caicos Islands,VP6xxxis on Pitcairn Island,VP8xxxis in the Falklands, andVP9xxxis in Bermuda.
Onlinecallbooksor call sign databases can be browsed or searched to find out who holds a specific call sign.[58]An example of an online callbook isQRZ.com. Various partial lists of famous people who hold or held amateur radio call signs have been compiled and published.[59]
Many jurisdictions (but not the U.K. or Europe) may issue specialtyvehicle registration platesto licensed amateur radio operators.[60][61]The fees for application and renewal are usually less than the standard rate for specialty plates.[60][62]
In most administrations, unlike other RF spectrum users, radio amateurs may build or modify transmitting equipment for their own use within the amateur spectrum without the need to obtain government certification of the equipment.[63][a][64][b]Licensed amateurs can also use any frequency in their bands (rather than being allocated fixed frequencies or channels) and can operate medium-to-high-powered equipment on a wide range of frequencies[65]so long as they meet certain technical parameters including occupied bandwidth, power, and prevention ofspurious emission.
Radio amateurs have access to frequency allocations throughout the RF spectrum, usually allowing choice of an effective frequency for communications across a local, regional, or worldwide path. The shortwave bands, orHF, are suitable for worldwide communication, and theVHFandUHFbands normally provide local or regional communication, while themicrowavebands have enough space, orbandwidth, for amateur television transmissions and high-speedcomputer networks.
In most countries, an amateur radio license grants permission to the license holder to own, modify, and operate equipment that is not certified by a governmental regulatory agency. This encourages amateur radio operators to experiment with home-constructed or modified equipment. The use of such equipment must still satisfy national and international standards onspurious emissions.
Amateur radio operators are encouraged both by regulations and tradition of respectful use of the spectrum to use as little power as possible to accomplish the communication.[66]This is to minimise interference orEMCto any other device. Although allowablepowerlevels are moderate by commercial standards, they are sufficient to enable global communication. Lower license classes usually have lower power limits; for example, the lowest license class in the UK (Foundation licence) has a limit of 25 W.[67]
Power limits vary from country to country and between license classes within a country. For example, thepeak envelope powerlimits for the highest available license classes in a few selected countries are: 2.25kWin Canada;[68]1.5 kW in the United States; 1.0 kW in Belgium,Luxembourg, Switzerland, South Africa, the United Kingdom, and New Zealand; 750 W in Germany; 500 W in Italy; 400 W in Australia and India; and 150 W inOman.
Output power limits may also depend on the mode of transmission. In Australia, for example, 400 W may be used forSSBtransmissions, but FM and other modes are limited to 120 W.
The point at which power output is measured may also affect transmissions: The United Kingdom measures at the point the antenna is connected to the signal feed cable, which means the radio system may transmit more than 400 W to overcome signal loss in the cable; conversely, the U.S. and Germany measure power at the output of the final amplification stage, which results in a loss in radiated power with longer cable feeds.[citation needed]
Certain countries permit amateur radio licence holders to hold a Notice of Variation that allows higher power than normally permitted for certain specific purposes. For example, in the UK some amateur radio licence holders are allowed to transmit using 2.0 kW (33 dBW) for experiments that use the moon as a passive radio reflector (known asEarth–Moon–Earth communication, or EME).
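For readers unfamiliar with the decibel notation used above, the following Python sketch converts between watts and dBW; the function names are illustrative, not drawn from any standard, and 2.0 kW works out to roughly 33 dBW.

```python
import math

def watts_to_dbw(power_watts):
    """Convert transmitter output power in watts to dBW (decibels relative to 1 W)."""
    return 10 * math.log10(power_watts)

def dbw_to_watts(power_dbw):
    """Convert a power level in dBW back to watts."""
    return 10 ** (power_dbw / 10)

print(round(watts_to_dbw(2000), 1))  # 2.0 kW is about 33.0 dBW, the UK EME allowance above
print(round(dbw_to_watts(33)))       # 33 dBW is roughly 1995 W
```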
TheInternational Telecommunication Union(ITU) governs the allocation of communications frequencies worldwide, with participation by each nation's communications regulation authority. National communications regulators have some liberty to restrict access to thesebandplanfrequencies or to award additional allocations as long as radio services in other countries do not suffer interference. In some countries, specificemission typesare restricted to certain parts of the radio spectrum, and in most other countries,International Amateur Radio Union(IARU) member societies adopt voluntary plans to ensure the most effective use of spectrum.
In a few cases, a national telecommunication agency may also allow hams to use frequencies outside of the internationally allocated amateur radio bands. InTrinidad and Tobago, hams are allowed to use a repeater which is located on 148.800 MHz. This repeater is used and maintained by theNational Emergency Management Agency(NEMA), but may be used by radio amateurs in times of emergency or during normal times to test their capability and conduct emergency drills. This repeater can also be used by non-ham NEMA staff andREACTmembers. In Australia and New Zealand, ham operators are authorized to use one of the UHF TV channels. In the U.S., amateur radio operators providing essential communication needs in connection with the immediate safety of human life and immediate protection of property when normal communication systems are not available may use any frequency including those of other radio services such as police and fire and in cases of disaster in Alaska may use the statewide emergency frequency of 5.1675 MHz with restrictions upon emissions.[69]
Similarly, amateurs in the United States may apply to be registered with theMilitary Auxiliary Radio System(MARS). Once approved and trained, these amateurs also operate on US government military frequencies to provide contingency communications and morale message traffic support to the military services.
Amateurs use a variety of voice, text, image, and data communication modes over the radio. Generally new modes can be tested in the amateur radio service, although national regulations may require disclosure of a new mode to permit radio licensing authorities to monitor the transmissions.Encryption, for example, is not generally permitted in the Amateur Radio service except for the special purpose of satellite vehicle control uplinks. The following is a partial list of the modes of communication used, where the mode includes bothmodulationtypes and operating protocols.
In former times, most amateur digital modes were transmitted by inserting audio into the microphone input of a radio and using an analog scheme, such asamplitude modulation(AM),frequency modulation(FM), orsingle-sideband modulation(SSB). Beginning in 2017, increased use of several digital modes, particularlyFT8, became popular within the amateur radio community.[70]
The following "modes" use no one specific modulation scheme but rather are classified by the activity of the communication.
|
https://en.wikipedia.org/wiki/Amateur_radio
|
Inabstract algebra, thefree monoidon asetis themonoidwhose elements are all thefinite sequences(or strings) of zero or more elements from that set, withstring concatenationas the monoid operation and with the unique sequence of zero elements, often called theempty stringand denoted by ε or λ, as theidentity element. The free monoid on a setAis usually denotedA∗. Thefree semigrouponAis the subsemigroupofA∗containing all elements except the empty string. It is usually denotedA+.[1][2]
More generally, an abstract monoid (or semigroup)Sis described asfreeif it isisomorphicto the free monoid (or semigroup) on some set.[3]
As the name implies, free monoids and semigroups are those objects which satisfy the usualuniversal propertydefiningfree objects, in the respectivecategoriesof monoids and semigroups. It follows that every monoid (or semigroup) arises as a homomorphic image of a free monoid (or semigroup). The study of semigroups as images of free semigroups is called combinatorial semigroup theory.
Free monoids (and monoids in general) areassociative, by definition; that is, they are written without any parenthesis to show grouping or order of operation. The non-associative equivalent is thefree magma.
The monoid (N0,+) ofnatural numbers(including zero) under addition is a free monoid on a singleton free generator, in this case, the natural number 1.
According to the formal definition, this monoid consists of all sequences like "1", "1+1", "1+1+1", "1+1+1+1", and so on, including the empty sequence.
Mapping each such sequence to its evaluation result[4]and the empty sequence to zero establishes an isomorphism from the set of such sequences toN0.
This isomorphism is compatible with "+", that is, for any two sequencessandt, ifsis mapped (i.e. evaluated) to a numbermandtton, then their concatenations+tis mapped to the summ+n.
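The correspondence can be sketched in a few lines of Python, representing each sequence as a word over the one-letter alphabet {"1"} and evaluation as counting the generators; this is an illustration, not part of the formal definition.

```python
# Words over {"1"} form the free monoid on a single generator; evaluating a
# word to the number of generators it contains sends concatenation to
# addition on the natural numbers, and the empty word to zero.
def evaluate(word):
    return word.count("1")

s, t = "11", "111"                    # stand-ins for "1+1" and "1+1+1"
assert evaluate(s + t) == evaluate(s) + evaluate(t)
assert evaluate("") == 0
```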
Informal languagetheory, usually a finite set of "symbols" A (sometimes called thealphabet) is considered. A finite sequence of symbols is called a "word overA", and the free monoidA∗is called the "Kleene starofA".
Thus, the abstract study of formal languages can be thought of as the study of subsets of finitely generated free monoids.
For example, assuming an alphabetA= {a,b,c}, its Kleene starA∗contains all concatenations ofa,b, andc: the empty string ε,a,ab,ba,caa,cccbabbc, and so on.
IfAis any set, theword lengthfunction onA∗is the uniquemonoid homomorphismfromA∗to (N0,+) that maps each element ofAto 1. A free monoid is thus agraded monoid.[5](A graded monoidMis a monoid that can be written asM=M0⊕M1⊕M2⊕ ⋯. EachMnis a grade; the grading here is just the length of the string, that is,Mncontains the strings of lengthn. The ⊕ symbol here can be taken to mean "set union"; it is used instead of the symbol ∪ because, in general, set unions might not be monoids, and so a distinct symbol is used. By convention, gradations are always written with the ⊕ symbol.)
There are deep connections between the theory ofsemigroupsand that ofautomata. For example, every formal language has asyntactic monoidthat recognizes that language. For the case of aregular language, that monoid is isomorphic to thetransition monoidassociated to thesemiautomatonof somedeterministic finite automatonthat recognizes that language. The regular languages over an alphabet A are the closure of the finite subsets of A*, the free monoid over A, under union, product, and generation of submonoid.[6]
For the case ofconcurrent computation, that is, systems withlocks,mutexesorthread joins, the computation can be described withhistory monoidsandtrace monoids. Roughly speaking, elements of the monoid can commute, (e.g. different threads can execute in any order), but only up to a lock or mutex, which prevent further commutation (e.g. serialize thread access to some object).
We define a pair of words inA∗of the formuvandvuasconjugate: the conjugates of a word are thus itscircular shifts.[7]Two words are conjugate in this sense if they areconjugate in the sense of group theoryas elements of thefree groupgenerated byA.[8]
A free monoid isequidivisible: if the equationmn=pqholds, then there exists anssuch that eitherm=ps,sn=qorms=p,n=sq.[9]This result is also known asLevi's lemma.[10]
A monoid is free if and only if it isgraded(in the strong sense that only the identity has gradation 0) and equidivisible.[9]
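A small Python sketch of Levi's lemma (the helper name levi_overlap is illustrative): given two factorizations of the same word, it exhibits the word s required by equidivisibility.

```python
def levi_overlap(m, n, p, q):
    """Given words with m + n == p + q, return the word s from Levi's lemma."""
    assert m + n == p + q
    if len(m) >= len(p):
        s = m[len(p):]                 # case m = ps and sn = q
        assert m == p + s and s + n == q
    else:
        s = p[len(m):]                 # case ms = p and n = sq
        assert m + s == p and n == s + q
    return s

print(levi_overlap("abc", "de", "ab", "cde"))   # prints "c"
```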
The members of a setAare called thefree generatorsforA∗andA+. The superscript * is then commonly understood to be theKleene star. More generally, ifSis an abstract free monoid (semigroup), then a set of elements which maps onto the set of single-letter words under an isomorphism to a monoidA∗(semigroupA+) is called aset of free generatorsforS.
Each free monoid (or semigroup)Shas exactly one set of free generators, thecardinalityof which is called therankofS.
Two free monoids or semigroups are isomorphic if and only if they have the same rank. In fact,everyset of generatorsfor a free monoid or semigroupScontains the free generators, since a free generator has word length 1 and hence can only be generated by itself. It follows that a free semigroup or monoid is finitely generated if and only if it has finite rank.
AsubmonoidNofA∗isstableifu,v,ux,xvinNtogether implyxinN.[11]A submonoid ofA∗is stable if and only if it is free.[12]For example, using the set ofbits{ "0", "1" } asA, the setNof all bit strings containing an even number of "1"s is a stable submonoid because ifucontains an even number of "1"s, anduxas well, thenxmust contain an even number of "1"s, too. WhileNcannot be freely generated by any set of single bits, itcanbe freely generated by the set of bit strings { "0", "11", "101", "1001", "10001", ... } – the set of strings of the form "10n1" for some nonnegative integern(along with the string "0").
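The unique factorization behind this example can be sketched in Python; the helper factor_even_ones is illustrative and assumes its input contains an even number of '1's.

```python
def factor_even_ones(s):
    """Factor a bit string with an even number of '1's into the free
    generators "0", "11", "101", "1001", ... (strings of the form 1 0^n 1)."""
    assert s.count("1") % 2 == 0
    factors, i = [], 0
    while i < len(s):
        if s[i] == "0":
            factors.append("0")
            i += 1
        else:
            j = s.index("1", i + 1)    # find the matching second '1'
            factors.append(s[i:j + 1])
            i = j + 1
    return factors

print(factor_even_ones("011010010"))   # ['0', '11', '0', '1001', '0']
```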
A set of free generators for a free monoidPis referred to as abasisforP: a set of wordsCis acodeifC* is a free monoid andCis a basis.[3]A setXof words inA∗is aprefix, or has theprefix property, if it does not contain a proper(string) prefixof any of its elements. Every prefix inA+is a code, indeed aprefix code.[3][13]
A submonoidNofA∗isright unitaryifx,xyinNimpliesyinN. A submonoid is generated by a prefix if and only if it is right unitary.[14]
A factorization of a free monoid is a sequence of subsets of words with the property that every word in the free monoid can be written as a concatenation of elements drawn from the subsets. TheChen–Fox–Lyndon theoremstates that theLyndon wordsfurnish a factorization. More generally,Hall wordsprovide a factorization; the Lyndon words are a special case of the Hall words.
The intersection of free submonoids of a free monoidA∗is again free.[15][16]IfSis a subset of a free monoidA* then the intersection of all free submonoids ofA* containingSis well-defined, sinceA* itself is free, and containsS; it is a free monoid and called thefree hullofS. A basis for this intersection is a code.
Thedefect theorem[15][16][17]states that ifXis finite andCis the basis of the free hull ofX, then eitherXis a code andC=X, or |C| ≤ |X| − 1.
Amonoid morphismffrom a free monoidB∗to a monoidMis a map such thatf(xy) =f(x)⋅f(y) for wordsx,yandf(ε) = ι, where ε and ι denote the identity elements ofB∗andM, respectively. The morphismfis determined by its values on the letters ofBand conversely any map fromBtoMextends to a morphism. A morphism isnon-erasing[18]orcontinuous[19]if no letter ofBmaps to ι andtrivialif every letter ofBmaps to ι.[20]
A morphismffrom a free monoidB∗to a free monoidA∗istotalif every letter ofAoccurs in some word in the image off;cyclic[20]orperiodic[21]if the image offis contained in {w}∗for some wordwofA∗. A morphismfisk-uniformif the length |f(a)| is constant and equal tokfor allainA.[22][23]A 1-uniform morphism isstrictly alphabetic[19]or acoding.[24]
A morphismffrom a free monoidB∗to a free monoidA∗issimplifiableif there is an alphabetCof cardinality less than that ofBsuch that the morphismffactors throughC∗, that is, it is the composition of a morphism fromB∗toC∗and a morphism from that toA∗; otherwisefiselementary. The morphismfis called acodeif the image of the alphabetBunderfis a code. Every elementary morphism is a code.[25]
ForLa subset ofB∗, a finite subsetTofLis atest setforLif morphismsfandgonB∗agree onLif and only if they agree onT. TheEhrenfeucht conjectureis that any subsetLhas a test set:[26]it has been proved[27]independently by Albert and Lawrence; McNaughton; and Guba. The proofs rely onHilbert's basis theorem.[28]
The computational embodiment of a monoid morphism is amapfollowed by afold. In this setting, the free monoid on a setAcorresponds tolistsof elements fromAwith concatenation as the binary operation. A monoid homomorphism from the free monoid to any other monoid (M,•) is a functionfsuch thatf(x1x2 ⋯xn) =f(x1) •f(x2) • ⋯ •f(xn) andf(ε) =e,
whereeis the identity onM. Computationally, every such homomorphism corresponds to amapoperation applyingfto all the elements of a list, followed by afoldoperation which combines the results using the binary operator •. Thiscomputational paradigm(which can be generalized to non-associative binary operators) has inspired theMapReducesoftware framework.[citation needed]
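A minimal Python sketch of this map-then-fold reading of a monoid homomorphism (the helper monoid_hom is illustrative):

```python
from functools import reduce

def monoid_hom(f, op, identity, word):
    """Extend a letter map f to the monoid homomorphism on words (lists):
    map f over the letters, then fold the results with the monoid operation."""
    return reduce(op, map(f, word), identity)

# The word-length homomorphism into (N0, +): every letter is sent to 1.
assert monoid_hom(lambda a: 1, lambda x, y: x + y, 0, list("abcab")) == 5

# A homomorphism into strings under concatenation: double every letter.
assert monoid_hom(lambda a: a + a, lambda x, y: x + y, "", list("abc")) == "aabbcc"
```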
AnendomorphismofA∗is a morphism fromA∗to itself.[29]Theidentity mapIis an endomorphism ofA∗, and the endomorphisms form amonoidundercomposition of functions.
An endomorphismfisprolongableif there is a letterasuch thatf(a) =asfor a non-empty strings.[30]
The operation ofstring projectionis an endomorphism. That is, given a lettera∈ Σ and a strings∈ Σ∗, the string projectionpa(s) removes every occurrence ofafroms; it is formally defined bypa(ε) = ε,pa(sa) =pa(s), andpa(sb) =pa(s)bfor any letterb≠a.
Note that string projection is well-defined even if the rank of the monoid is infinite, as the above recursive definition works for all strings of finite length. String projection is amorphismin the category of free monoids, so thatpa(Σ∗) = (Σ ∖ {a})∗,
wherepa(Σ∗) is understood to be the free monoid of all finite strings that don't contain the lettera. Projection commutes with the operation of string concatenation, so thatpa(st) =pa(s)pa(t) for all stringssandt. There are many right inverses to string projection, and thus it is asplit epimorphism.
The identity morphism ispε, defined aspε(s) =sfor all stringss, andpε(ε) = ε.
String projection is commutative, as clearlypa(pb(s)) =pb(pa(s)) for all lettersa,band all stringss.
For free monoids of finite rank, this follows from the fact that free monoids of the same rank are isomorphic, as projection reduces the rank of the monoid by one.
String projection isidempotent, aspa(pa(s)) =pa(s)
for all stringss. Thus, projection is an idempotent, commutative operation, and so it forms a boundedsemilatticeor a commutativeband.
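These three properties are easy to check on examples; the following Python sketch (the helper project is illustrative) exercises the morphism, commutativity, and idempotence identities.

```python
def project(a, s):
    """String projection p_a: delete every occurrence of the letter a from s."""
    return "".join(ch for ch in s if ch != a)

s, t = "banana", "and"
assert project("a", s + t) == project("a", s) + project("a", t)        # morphism
assert project("a", project("b", s)) == project("b", project("a", s))  # commutative
assert project("a", project("a", s)) == project("a", s)                # idempotent
```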
Given a setA, thefreecommutative monoidonAis the set of all finitemultisetswith elements drawn fromA, with the monoid operation being multiset sum and the monoid unit being the empty multiset.
For example, ifA= {a,b,c}, elements of the free commutative monoid onAare multisets such as {a}, {a,b}, {a,a,b}, and {a,b,b,b,c,c,c,c}; only the multiplicity of each letter matters, not the order.
Thefundamental theorem of arithmeticstates that the monoid of positive integers under multiplication is a free commutative monoid on an infinite set of generators, theprime numbers.
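A Python sketch of this correspondence (the helper prime_multiset is illustrative): factoring an integer yields a multiset of primes, and multiplication of integers becomes multiset sum.

```python
from collections import Counter

def prime_multiset(n):
    """Return the multiset (Counter) of prime factors of an integer n >= 1."""
    factors, d = Counter(), 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

# Multiplying positive integers corresponds to taking the multiset sum of
# their prime factors: the free commutative monoid on the primes.
assert prime_multiset(12) + prime_multiset(45) == prime_multiset(12 * 45)
```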
Thefree commutative semigroupis the subset of the free commutative monoid that contains all multisets with elements drawn fromAexcept the empty multiset.
Thefree partially commutative monoid, ortrace monoid, is a generalization that encompasses both the free and free commutative monoids as instances. This generalization finds applications incombinatoricsand in the study ofparallelismincomputer science.
|
https://en.wikipedia.org/wiki/Free_monoid
|
Aprogramming languageis a system of notation for writingcomputer programs.[1]Programming languages are described in terms of theirsyntax(form) andsemantics(meaning), usually defined by aformal language. Languages usually provide features such as atype system,variables, and mechanisms forerror handling. Animplementationof a programming language is required in order toexecuteprograms, namely aninterpreteror acompiler. An interpreter directly executes the source code, while acompilerproduces anexecutableprogram.
Computer architecturehas strongly influenced the design of programming languages, with the most common type (imperative languages—which implement operations in a specified order) developed to perform well on the popularvon Neumann architecture. While early programming languages were closely tied to thehardware, over time they have developed moreabstractionto hide implementation details for greater simplicity.
Thousands of programming languages—often classified as imperative,functional,logic, orobject-oriented—have been developed for a wide variety of uses. Many aspects of programming language design involve tradeoffs—for example,exception handlingsimplifies error handling, but at a performance cost.Programming language theoryis the subfield ofcomputer sciencethat studies the design, implementation, analysis, characterization, and classification of programming languages.
Programming languages differ fromnatural languagesin that natural languages are used for interaction between people, while programming languages are designed to allow humans to communicate instructions to machines.[citation needed]
The termcomputer languageis sometimes used interchangeably with "programming language".[2]However, usage of these terms varies among authors.
In one usage, programming languages are described as a subset of computer languages.[3]Similarly, the term "computer language" may be used in contrast to the term "programming language" to describe languages used in computing but not considered programming languages.[citation needed]Most practical programming languages are Turing complete,[4]and as such are equivalent in what programs they can compute.
Another usage regards programming languages as theoretical constructs for programmingabstract machinesand computer languages as the subset thereof that runs on physical computers, which have finite hardware resources.[5]John C. Reynoldsemphasizes thatformal specificationlanguages are just as much programming languages as are the languages intended for execution. He also argues that textual and even graphical input formats that affect the behavior of a computer are programming languages, despite the fact they are commonly not Turing-complete, and remarks that ignorance of programming language concepts is the reason for many flaws in input formats.[6]
The first programmable computers were invented at the end of the 1940s, and with them, the first programming languages.[7]The earliest computers were programmed infirst-generation programming languages(1GLs),machine language(simple instructions that could be directly executed by the processor). This code was very difficult to debug and was notportablebetween different computer systems.[8]In order to improve the ease of programming,assembly languages(orsecond-generation programming languages—2GLs) were invented, diverging from the machine language to make programs easier to understand for humans, although they did not increase portability.[9]
Initially, hardware resources were scarce and expensive, whilehuman resourceswere cheaper. Therefore, cumbersome languages that were time-consuming to use, but were closer to the hardware for higher efficiency were favored.[10]The introduction ofhigh-level programming languages(third-generation programming languages—3GLs)—revolutionized programming. These languagesabstractedaway the details of the hardware, instead being designed to express algorithms that could be understood more easily by humans. For example, arithmetic expressions could now be written in symbolic notation and later translated into machine code that the hardware could execute.[9]In 1957,Fortran(FORmula TRANslation) was invented. Often considered the firstcompiledhigh-level programming language,[9][11]Fortran has remained in use into the twenty-first century.[12]
Around 1960, the firstmainframes—general purpose computers—were developed, although they could only be operated by professionals and the cost was extreme. The data and instructions were input bypunch cards, meaning that no input could be added while the program was running. The languages developed at this time therefore are designed for minimal interaction.[14]After the invention of themicroprocessor, computers in the 1970s became dramatically cheaper.[15]New computers also allowed more user interaction, which was supported by newer programming languages.[16]
Lisp, implemented in 1958, was the firstfunctional programminglanguage.[17]Unlike Fortran, it supportedrecursionandconditional expressions,[18]and it also introduceddynamic memory managementon aheapand automaticgarbage collection.[19]For the next decades, Lisp dominatedartificial intelligenceapplications.[20]In 1978, another functional language,ML, introducedinferred typesand polymorphicparameters.[16][21]
AfterALGOL(ALGOrithmic Language) was released in 1958 and 1960,[22]it became the standard in computing literature for describingalgorithms. Although its commercial success was limited, most popular imperative languages—includingC,Pascal,Ada,C++,Java, andC#—are directly or indirectly descended from ALGOL 60.[23][12]Among its innovations adopted by later programming languages included greater portability and the first use ofcontext-free,BNFgrammar.[24]Simula, the first language to supportobject-oriented programming(includingsubtypes,dynamic dispatch, andinheritance), also descends from ALGOL and achieved commercial success.[25]C, another ALGOL descendant, has sustained popularity into the twenty-first century. C allows access to lower-level machine operations more than other contemporary languages. Its power and efficiency, generated in part with flexiblepointeroperations, comes at the cost of making it more difficult to write correct code.[16]
Prolog, designed in 1972, was the firstlogic programminglanguage, communicating with a computer using formal logic notation.[26][27]With logic programming, the programmer specifies a desired result and allows theinterpreterto decide how to achieve it.[28][27]
During the 1980s, the invention of thepersonal computertransformed the roles for which programming languages were used.[29]New languages introduced in the 1980s included C++, asupersetof C that can compile C programs but also supportsclassesandinheritance.[30]Adaand other new languages introduced support forconcurrency.[31]The Japanese government invested heavily into the so-calledfifth-generation languagesthat added support for concurrency to logic programming constructs, but these languages were outperformed by other concurrency-supporting languages.[32][33]
Due to the rapid growth of theInternetand theWorld Wide Webin the 1990s, new programming languages were introduced to supportWeb pagesandnetworking.[34]Java, based on C++ and designed for increased portability across systems and security, enjoyed large-scale success because these features are essential for many Internet applications.[35][36]Another development was that ofdynamically typedscripting languages—Python,JavaScript,PHP, andRuby—designed to quickly produce small programs that coordinate existingapplications. Due to their integration withHTML, they have also been used for building web pages hosted onservers.[37][38]
During the 2000s, there was a slowdown in the development of new programming languages that achieved widespread popularity.[39]One innovation wasservice-oriented programming, designed to exploitdistributed systemswhose components are connected by a network. Services are similar to objects in object-oriented programming, but run on a separate process.[40]C#andF#cross-pollinated ideas between imperative and functional programming.[41]After 2010, several new languages—Rust,Go,Swift,ZigandCarbon—competed for the performance-critical software for which C had historically been used.[42]Most of the new programming languages usestatic typing, while a few new languages usedynamic typing, such asRingandJulia.[43][44]
Some of the new programming languages are classified asvisual programming languageslikeScratch,LabVIEWandPWCT. Some of these languages also mix textual and visual programming, such asBallerina.[45][46][47][48]This trend has led to the development of projects that help in building new VPLs, such asBlocklybyGoogle.[49]Many game engines, such asUnrealandUnity, have also added support for visual scripting.[50][51]
Every programming language includes fundamental elements for describing data and the operations or transformations applied to them, such as adding two numbers or selecting an item from a collection. These elements are governed by syntactic and semantic rules that define their structure and meaning, respectively.
A programming language's surface form is known as itssyntax. Most programming languages are purely textual; they use sequences of text including words, numbers, and punctuation, much like written natural languages. On the other hand, some programming languages aregraphical, using visual relationships between symbols to specify a program.
The syntax of a language describes the possible combinations of symbols that form a syntactically correct program. The meaning given to a combination of symbols is handled by semantics (eitherformalor hard-coded in areference implementation). Since most languages are textual, this article discusses textual syntax.
The programming language syntax is usually defined using a combination ofregular expressions(forlexicalstructure) andBackus–Naur form(forgrammaticalstructure). Below is a simple grammar, based onLisp:
This grammar specifies the following:
The following are examples of well-formed token sequences in this grammar: 12345, (), and (a b c232 (1)).
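A minimal Python recognizer for such a grammar is sketched below; it assumes, as the examples suggest, that an expression is either an atom (a number or a symbol) or a parenthesized list of expressions, and the tokenizer and function names are illustrative rather than part of any particular specification.

```python
import re

TOKEN = re.compile(r"\(|\)|[^\s()]+")   # parentheses, or runs of non-space characters

def parse(text):
    """Recognize an expression: an atom or a parenthesized list of expressions."""
    tokens = TOKEN.findall(text)

    def expression(i):
        if tokens[i] == "(":
            items, i = [], i + 1
            while tokens[i] != ")":
                node, i = expression(i)
                items.append(node)
            return items, i + 1
        return tokens[i], i + 1          # an atom: a number or a symbol

    node, i = expression(0)
    if i != len(tokens):
        raise SyntaxError("unexpected trailing tokens")
    return node

print(parse("12345"))            # '12345'
print(parse("()"))               # []
print(parse("(a b c232 (1))"))   # ['a', 'b', 'c232', ['1']]
```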
Not all syntactically correct programs are semantically correct. Many syntactically correct programs are nonetheless ill-formed, per the language's rules; and may (depending on the language specification and the soundness of the implementation) result in an error on translation or execution. In some cases, such programs may exhibitundefined behavior. Even when a program is well-defined within a language, it may still have a meaning that is not intended by the person who wrote it.
Usingnatural languageas an example, it may not be possible to assign a meaning to a grammatically correct sentence, or the sentence may be false: "Colorless green ideas sleep furiously" is grammatically well formed but has no generally accepted meaning, while "John is a married bachelor" is grammatically well formed but expresses a meaning that cannot be true.
The followingC languagefragment is syntactically correct, but performs operations that are not semantically defined (the operation*p >> 4has no meaning for a value having a complex type andp->imis not defined because the value ofpis thenull pointer):
If thetype declarationon the first line were omitted, the program would trigger an error on the undefined variablepduring compilation. However, the program would still be syntactically correct since type declarations provide only semantic information.
The grammar needed to specify a programming language can be classified by its position in theChomsky hierarchy. The syntax of most programming languages can be specified using a Type-2 grammar, i.e., they arecontext-free grammars.[52]Some languages, including Perl and Lisp, contain constructs that allow execution during the parsing phase. Languages that have constructs that allow the programmer to alter the behavior of the parser make syntax analysis anundecidable problem, and generally blur the distinction between parsing and execution.[53]In contrast toLisp's macro systemand Perl'sBEGINblocks, which may contain general computations, C macros are merely string replacements and do not require code execution.[54]
The termsemanticsrefers to the meaning of languages, as opposed to their form (syntax).
Static semantics defines restrictions on the structure of valid texts that are hard or impossible to express in standard syntactic formalisms.[1][failed verification]For compiled languages, static semantics essentially include those semantic rules that can be checked at compile time. Examples include checking that everyidentifieris declared before it is used (in languages that require such declarations) or that the labels on the arms of acase statementare distinct.[55]Many important restrictions of this type, like checking that identifiers are used in the appropriate context (e.g. not adding an integer to a function name), or thatsubroutinecalls have the appropriate number and type of arguments, can be enforced by defining them as rules in alogiccalled atype system. Other forms ofstatic analyseslikedata flow analysismay also be part of static semantics. Programming languages such asJavaandC#havedefinite assignment analysis, a form of data flow analysis, as part of their respective static semantics.[56]
Once data has been specified, the machine must be instructed to perform operations on the data. For example, the semantics may define thestrategyby which expressions are evaluated to values, or the manner in whichcontrol structuresconditionally executestatements. Thedynamic semantics(also known asexecution semantics) of a language defines how and when the various constructs of a language should produce a program behavior. There are many ways of defining execution semantics. Natural language is often used to specify the execution semantics of languages commonly used in practice. A significant amount of academic research goes intoformal semantics of programming languages, which allows execution semantics to be specified in a formal manner. Results from this field of research have seen limited application to programming language design and implementation outside academia.[56]
Adata typeis a set of allowable values and operations that can be performed on these values.[57]Each programming language'stype systemdefines which data types exist, the type of anexpression, and howtype equivalenceandtype compatibilityfunction in the language.[58]
According totype theory, a language is fully typed if the specification of every operation defines types of data to which the operation is applicable.[59]In contrast, an untyped language, such as mostassembly languages, allows any operation to be performed on any data, generally sequences of bits of various lengths.[59]In practice, while few languages are fully typed, most offer a degree of typing.[59]
Because different types (such asintegersandfloats) represent values differently, unexpected results will occur if one type is used when another is expected.Type checkingwill flag this error, usually atcompile time(runtime type checking is more costly).[60]Withstrong typing,type errorscan always be detected unless variables are explicitlycastto a different type.Weak typingoccurs when languages allow implicit casting—for example, to enable operations between variables of different types without the programmer making an explicit type conversion. The more cases in which thistype coercionis allowed, the fewer type errors can be detected.[61]
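As an illustration of these distinctions, the following sketch uses Python's rules (chosen here only as an example): coercion between related numeric types is implicit, while mixing unrelated types raises a type error unless an explicit cast is made.

```python
# Implicit coercion between related numeric types: the int 1 is widened to a
# float before the addition, so no type error occurs.
total = 1 + 2.5            # 3.5

# Mixing unrelated types is rejected at run time instead of being coerced,
# which reflects comparatively strong typing; an explicit cast is required.
try:
    "3" + 4
except TypeError:
    result = int("3") + 4  # explicit conversion yields 7
```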
Early programming languages often supported only built-in, numeric types such as theinteger(signed and unsigned) andfloating point(to support operations onreal numbersthat are not integers). Most programming languages support multiple sizes of floats (often calledfloatanddouble) and integers depending on the size and precision required by the programmer. Storing an integer in a type that is too small to represent it leads tointeger overflow. The most common way of representing negative numbers with signed types istwos complement, althoughones complementis also used.[62]Other common types includeBoolean—which is either true or false—andcharacter—traditionally onebyte, sufficient to represent allASCIIcharacters.[63]
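The following Python sketch (the helper twos_complement is illustrative) shows the two's-complement bit patterns of small signed values and the overflow that results when a value does not fit the chosen width.

```python
def twos_complement(value, bits=8):
    """Return the two's-complement bit pattern of a signed integer, or raise on overflow."""
    if not -(1 << (bits - 1)) <= value < (1 << (bits - 1)):
        raise OverflowError(f"{value} does not fit in {bits} signed bits")
    return format(value & ((1 << bits) - 1), f"0{bits}b")

print(twos_complement(5))      # 00000101
print(twos_complement(-5))     # 11111011
try:
    twos_complement(200)       # too large for an 8-bit signed type
except OverflowError as err:
    print("integer overflow:", err)
```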
Arraysare a data type whose elements, in many languages, must consist of a single type of fixed length. Other languages define arrays as references to data stored elsewhere and support elements of varying types.[64]Depending on the programming language, sequences of multiple characters, calledstrings, may be supported as arrays of characters or their ownprimitive type.[65]Strings may be of fixed or variable length, which enables greater flexibility at the cost of increased storage space and more complexity.[66]Other data types that may be supported includelists,[67]associative (unordered) arraysaccessed via keys,[68]recordsin which data is mapped to names in an ordered structure,[69]andtuples—similar to records but without names for data fields.[70]Pointersstore memory addresses, typically referencing locations on theheapwhere other data is stored.[71]
The simplestuser-defined typeis anordinal type, often called anenumeration, whose values can be mapped onto the set of positive integers.[72]Since the mid-1980s, most programming languages also supportabstract data types, in which the representation of the data and operations arehidden from the user, who can only access aninterface.[73]The benefits ofdata abstractioncan include increased reliability, reduced complexity, less potential forname collision, and allowing the underlyingdata structureto be changed without the client needing to alter its code.[74]
Instatic typing, all expressions have their types determined before a program executes, typically at compile-time.[59]Most widely used, statically typed programming languages require the types of variables to be specified explicitly. In some languages, types are implicit; one form of this is when the compiler caninfertypes based on context. The downside ofimplicit typingis the potential for errors to go undetected.[75]Complete type inference has traditionally been associated with functional languages such asHaskellandML.[76]
With dynamic typing, the type is not attached to the variable but only the value encoded in it. A single variable can be reused for a value of a different type. Although this provides more flexibility to the programmer, it is at the cost of lower reliability and less ability for the programming language to check for errors.[77]Some languages allow variables of aunion typeto which any type of value can be assigned, in an exception to their usual static typing rules.[78]
In computing, multiple instructions can be executed simultaneously. Many programming languages support instruction-level and subprogram-level concurrency.[79]By the twenty-first century, additional processing power on computers was increasingly coming from the use of additional processors, which requires programmers to design software that makes use of multiple processors simultaneously to achieve improved performance.[80]Interpreted languagessuch asPythonandRubydo not support the concurrent use of multiple processors.[81]Other programming languages do support managing data shared between different threads by controlling the order of execution of key instructions via the use ofsemaphores, controlling access to shared data viamonitor, or enablingmessage passingbetween threads.[82]
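A minimal Python sketch of one of these mechanisms, using a lock to control access to shared data (a semaphore or message queue could be used similarly):

```python
import threading

counter = 0
lock = threading.Lock()          # serializes access to the shared counter

def increment(times):
    global counter
    for _ in range(times):
        with lock:               # only one thread may update the counter at once
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                   # 40000; without the lock, updates could be lost
```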
Many programming languages include exception handlers, a section of code triggered byruntime errorsthat can deal with them in two main ways:[83]
Some programming languages support dedicating a block of code to run regardless of whether an exception occurs before the code is reached; this is called finalization.[84]
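The following Python sketch illustrates an exception handler together with a finalization block; the example and its function name are illustrative.

```python
def read_first_line(path):
    f = open(path)
    try:
        return f.readline()
    except OSError as err:       # the handler deals with the runtime error
        print("read failed:", err)
        return ""
    finally:                     # the finalization block runs whether or not
        f.close()                # an exception occurred
```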
There is a tradeoff between increased ability to handle exceptions and reduced performance.[85]For example, even though array index errors are common,[86]C does not check them for performance reasons.[85]Although programmers can write code to catch user-defined exceptions, this can clutter a program. Standard libraries in some languages, such as C, use their return values to indicate an exception.[87]Some languages and their compilers have the option of turning on and off error handling capability, either temporarily or permanently.[88]
One of the most important influences on programming language design has beencomputer architecture.Imperative languages, the most commonly used type, were designed to perform well onvon Neumann architecture, the most common computer architecture.[89]In von Neumann architecture, thememorystores both data and instructions, while theCPUthat performs instructions on data is separate, and data must be piped back and forth to the CPU. The central elements in these languages are variables,assignment, anditeration, which is more efficient thanrecursionon these machines.[90]
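The contrast can be illustrated with a short Python sketch (chosen only as an example language): the iterative form relies on a variable, assignment, and a loop, while the recursive form re-expresses the same computation through function calls.

```python
# Iterative form: a variable, assignment, and a loop map directly onto the
# fetch-and-execute cycle of a von Neumann machine.
def total_iterative(values):
    acc = 0
    for v in values:
        acc += v
    return acc

# Recursive form: each call adds a stack frame, which is typically costlier
# on the same hardware (and deep inputs may exhaust the recursion limit).
def total_recursive(values):
    if not values:
        return 0
    return values[0] + total_recursive(values[1:])

assert total_iterative(range(10)) == total_recursive(list(range(10))) == 45
```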
Many programming languages have been designed from scratch, altered to meet new needs, and combined with other languages. Many have eventually fallen into disuse.[citation needed]The birth of programming languages in the 1950s was stimulated by the desire to make a universal programming language suitable for all machines and uses, avoiding the need to write code for different computers.[91]By the early 1960s, the idea of a universal language was rejected due to the differing requirements of the variety of purposes for which code was written.[92]
Desirable qualities of programming languages include readability, writability, and reliability.[93]These features can reduce the cost of training programmers in a language, the amount of time needed to write and maintain programs in the language, the cost of compiling the code, and increase runtime performance.[94]
Programming language design often involves tradeoffs.[104]For example, features to improve reliability typically come at the cost of performance.[105]Increased expressivity due to a large number of operators makes writing code easier but comes at the cost of readability.[105]
Natural-language programminghas been proposed as a way to eliminate the need for a specialized language for programming. However, this goal remains distant and its benefits are open to debate.Edsger W. Dijkstratook the position that the use of a formal language is essential to prevent the introduction of meaningless constructs.[106]Alan Perliswas similarly dismissive of the idea.[107]
The specification of a programming language is an artifact that the languageusersand theimplementorscan use to agree upon whether a piece ofsource codeis a validprogramin that language, and if so what its behavior shall be.
A programming language specification can take several forms, including the following:
An implementation of a programming language is the conversion of a program intomachine codethat can be executed by the hardware. The machine code then can be executed with the help of theoperating system.[111]The most common form of implementation inproduction codeis acompiler, which translates the source code via an intermediate-level language into machine code, known as anexecutable. Once the program is compiled, it will run more quickly than with other implementation methods.[112]Some compilers are able to provide furtheroptimizationto reduce memory or computation usage when the executable runs, at the cost of increased compilation time.[113]
Another implementation method is to run the program with aninterpreter, which translates each line of software into machine code just before it executes. Although it can make debugging easier, the downside of interpretation is that it runs 10 to 100 times slower than a compiled executable.[114]Hybrid interpretation methods provide some of the benefits of compilation and some of the benefits of interpretation via partial compilation. One form this takes isjust-in-time compilation, in which the software is compiled ahead of time into an intermediate language, and then into machine code immediately before execution.[115]
Although most of the most commonly used programming languages have fully open specifications and implementations, many programming languages exist only as proprietary programming languages with the implementation available only from a single vendor, which may claim that such a proprietary language is their intellectual property. Proprietary programming languages are commonlydomain-specific languagesor internalscripting languagesfor a single product; some proprietary languages are used only internally within a vendor, while others are available to external users.[citation needed]
Some programming languages exist on the border between proprietary and open; for example,Oracle Corporationasserts proprietary rights to some aspects of theJava programming language,[116]andMicrosoft'sC#programming language, which has open implementations of most parts of the system, also hasCommon Language Runtime(CLR) as a closed environment.[117]
Many proprietary languages are widely used, in spite of their proprietary nature; examples includeMATLAB,VBScript, andWolfram Language. Some languages may make the transition from closed to open; for example,Erlangwas originally Ericsson's internal programming language.[118]
Open source programming languagesare particularly helpful foropen scienceapplications, enhancing the capacity forreplicationand code sharing.[119]
Thousands of different programming languages have been created, mainly in the computing field.[120]Individual software projects commonly use five programming languages or more.[121]
Programming languages differ from most other forms of human expression in that they require a greater degree of precision and completeness. When using a natural language to communicate with other people, human authors and speakers can be ambiguous and make small errors, and still expect their intent to be understood. However, figuratively speaking, computers "do exactly what they are told to do", and cannot "understand" what code the programmer intended to write. The combination of the language definition, a program, and the program's inputs must fully specify the external behavior that occurs when the program is executed, within the domain of control of that program. On the other hand, ideas about an algorithm can be communicated to humans without the precision required for execution by usingpseudocode, which interleaves natural language with code written in a programming language.
A programming language provides a structured mechanism for defining pieces of data, and the operations or transformations that may be carried out automatically on that data. Aprogrammeruses theabstractionspresent in the language to represent the concepts involved in a computation. These concepts are represented as a collection of the simplest elements available (calledprimitives).[122]Programmingis the process by which programmers combine these primitives to compose new programs, or adapt existing ones to new uses or a changing environment.
Programs for a computer might beexecutedin abatch processwithout any human interaction, or a user might typecommandsin aninteractive sessionof aninterpreter. In this case the "commands" are simply programs, whose execution is chained together. When a language can run its commands through an interpreter (such as aUnix shellor othercommand-line interface), without compiling, it is called ascripting language.[123]
Determining which is the most widely used programming language is difficult since the definition of usage varies by context. One language may occupy the greater number of programmer hours, a different one has more lines of code, and a third may consume the most CPU time. Some languages are very popular for particular kinds of applications. For example,COBOLis still strong in the corporate data center, often on largemainframes;[124][125]Fortranin scientific and engineering applications;Adain aerospace, transportation, military, real-time, and embedded applications; andCin embedded applications and operating systems. Other languages are regularly used to write many different kinds of applications.
Various methods of measuring language popularity, each subject to a different bias over what is measured, have been proposed:
Combining and averaging information from various internet sites, stackify.com reported the ten most popular programming languages (in descending order by overall popularity):Java,C,C++,Python,C#,JavaScript,VB .NET,R,PHP, andMATLAB.[129]
As of June 2024, the top five programming languages as measured by the TIOBE index are Python, C++, C, Java and C#. TIOBE provides a list of the top 100 programming languages according to popularity and updates this list every month.[130]
Adialectof a programming language or adata exchange languageis a (relatively small) variation or extension of the language that does not change its intrinsic nature. With languages such asSchemeandForth, standards may be considered insufficient, inadequate, or illegitimate by implementors, so often they will deviate from the standard, making a newdialect. In other cases, a dialect is created for use in adomain-specific language, often a subset. In theLispworld, most languages that use basicS-expressionsyntax and Lisp-like semantics are considered Lisp dialects, although they vary wildly as do, say,RacketandClojure. As it is common for one language to have several dialects, it can become quite difficult for an inexperienced programmer to find the right documentation. TheBASIClanguage hasmany dialects.
Programming languages are often placed into four main categories: imperative, functional, logic, and object-oriented.[131]
Althoughmarkup languagesare not programming languages, some have extensions that support limited programming. Additionally, there are special-purpose languages that are not easily compared to other programming languages.[135]
|
https://en.wikipedia.org/wiki/Dialecting
|
In mathematics, Sperner's lemma is a combinatorial result on colorings of triangulations, analogous to the Brouwer fixed point theorem, which is equivalent to it.[1]It states that every Sperner coloring (described below) of a triangulation of an n-dimensional simplex contains a cell whose vertices all have different colors.
The initial result of this kind was proved by Emanuel Sperner, in relation to proofs of invariance of domain. Sperner colorings have been used for effective computation of fixed points and in root-finding algorithms, and are applied in fair division (cake cutting) algorithms.
According to the SovietMathematical Encyclopaedia(ed.I.M. Vinogradov), a related 1929 theorem (ofKnaster,BorsukandMazurkiewicz) had also become known as theSperner lemma– this point is discussed in the English translation (ed. M. Hazewinkel). It is now commonly known as theKnaster–Kuratowski–Mazurkiewicz lemma.
In one dimension, Sperner's lemma can be regarded as a discrete version of the intermediate value theorem. In this case, it essentially says that if a discrete function takes only the values 0 and 1, begins at the value 0 and ends at the value 1, then it must switch values an odd number of times.
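As a minimal illustration of the one-dimensional statement, the Python sketch below (an invented example, not from the original text) counts the value switches of a 0/1 sequence that begins at 0 and ends at 1 and checks that the count is odd.

```python
# One-dimensional Sperner: a 0/1 sequence that starts at 0 and ends at 1
# must switch values an odd number of times.

def count_switches(values):
    return sum(1 for a, b in zip(values, values[1:]) if a != b)

seq = [0, 0, 1, 1, 0, 1, 1]     # begins at 0, ends at 1
assert seq[0] == 0 and seq[-1] == 1
switches = count_switches(seq)  # here: 3
assert switches % 2 == 1        # odd, as the lemma predicts
print(switches)
```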
The two-dimensional case is the one referred to most frequently. It is stated as follows:
Subdivide a triangle ABC arbitrarily into a triangulation consisting of smaller triangles meeting edge to edge. Then a Sperner coloring of the triangulation is defined as an assignment of three colors to the vertices of the triangulation such that each of the three corners A, B and C receives a distinct color (say 1, 2 and 3 respectively) and every vertex lying on an edge of ABC is colored only with one of the two colors of the ends of that edge (for example, each vertex on AC must receive color 1 or 3); vertices in the interior may be colored arbitrarily.
Then every Sperner coloring of every triangulation has at least one "rainbow triangle", a smaller triangle in the triangulation that has its vertices colored with all three different colors. More precisely, there must be an odd number of rainbow triangles.
In the general case the lemma refers to an n-dimensional simplex A = A_1A_2…A_{n+1}.
Consider any triangulation T, a disjoint division of A into smaller n-dimensional simplices, again meeting face-to-face. Denote the coloring function as f : S → {1, 2, …, n + 1},
where S is the set of vertices of T. A coloring function defines a Sperner coloring when the vertices of T located on any k-dimensional subface of the large simplex
A_{i_1}A_{i_2}…A_{i_{k+1}}
are colored only with the colors
i_1, i_2, …, i_{k+1}.
Then every Sperner coloring of every triangulation of the n-dimensional simplex has an odd number of instances of a rainbow simplex, meaning a simplex whose vertices are colored with all n + 1 colors. In particular, there must be at least one rainbow simplex.
We shall first address the two-dimensional case. Consider a graph G built from the triangulation T as follows: the vertices of G are the members of T plus the area outside the triangle, and two vertices are connected with an edge if their corresponding areas share a common border whose two endpoints are colored 1 and 2.
Note that on the interval AB there is an odd number of borders colored 1-2 (simply because A is colored 1 and B is colored 2; as we move along AB, there must be an odd number of color changes in order to get different colors at the beginning and at the end). On the intervals BC and CA, there are no borders colored 1-2 at all. Therefore, the vertex of G corresponding to the outer area has an odd degree. But it is known (the handshaking lemma) that in a finite graph there is an even number of vertices with odd degree. Therefore, the remaining graph, excluding the outer area, has an odd number of vertices with odd degree corresponding to members of T.
It can be easily seen that the only possible degrees of a triangle from T are 0, 1, or 2, and that degree 1 corresponds to a triangle colored with the three colors 1, 2, and 3.
Thus we have obtained a slightly stronger conclusion, which says that in a triangulation T there is an odd number (and at least one) of full-colored triangles.
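The parity conclusion can be checked by brute force on a small instance. The sketch below is a hypothetical setup (not taken from the article): it subdivides triangle ABC into four smaller triangles via the edge midpoints and verifies that every admissible Sperner coloring of the midpoints produces an odd number of rainbow triangles.

```python
# Brute-force check of the two-dimensional Sperner lemma on the
# subdivision of triangle ABC into four triangles by its edge midpoints.
from itertools import product

A, B, C, MAB, MBC, MCA = "A", "B", "C", "MAB", "MBC", "MCA"
triangles = [(A, MAB, MCA), (B, MAB, MBC), (C, MBC, MCA), (MAB, MBC, MCA)]

# Sperner condition: corners get colors 1, 2, 3; an edge midpoint may only
# use the two colors of that edge's endpoints.
for mab, mbc, mca in product((1, 2), (2, 3), (1, 3)):
    color = {A: 1, B: 2, C: 3, MAB: mab, MBC: mbc, MCA: mca}
    rainbow = sum(1 for t in triangles
                  if {color[v] for v in t} == {1, 2, 3})
    assert rainbow % 2 == 1, (mab, mbc, mca)
print("every admissible coloring yields an odd number of rainbow triangles")
```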
The multidimensional case can be proved by induction on the dimension of the simplex. We apply the same reasoning as in the two-dimensional case to conclude that in an n-dimensional triangulation there is an odd number of full-colored simplices.
Here is an elaboration of the proof given previously, for a reader new tograph theory.
This diagram numbers the colors of the vertices of the example given previously. The small triangles whose vertices all have different numbers are shaded in the graph. Each small triangle becomes a node in the new graph derived from the triangulation. The small letters identify the areas, eight inside the figure, and area i designates the space outside of it.
As described previously, those nodes that share an edge whose endpoints are numbered 1 and 2 are joined in the derived graph. For example, node d shares an edge with the outer area i, and its vertices all have different numbers, so it is also shaded. Node b is not shaded because two of its vertices have the same number, but it is joined to the outer area.
One could add a new full-numbered triangle, say by inserting a node numbered 3 into the edge between 1 and 1 of node a, and joining that node to the other vertex of a. Doing so would have to create a pair of new nodes, like the situation with nodes f and g.
Andrew McLennan and Rabee Tourky presented a different proof, using thevolume of a simplex. It proceeds in one step, with no induction.[2][3]
Suppose there is a d-dimensional simplex of side length N, and it is triangulated into sub-simplices of side length 1. There is a function that, given any vertex of the triangulation, returns its color. The coloring is guaranteed to satisfy Sperner's boundary condition. How many times do we have to call the function in order to find a rainbow simplex? Obviously, we can go over all the triangulation vertices, whose number is O(N^d), which is polynomial in N when the dimension is fixed. But can it be done in time O(poly(log N)), which is polynomial in the binary representation of N?
This problem was first studied by Christos Papadimitriou. He introduced a complexity class called PPAD, which contains this as well as related problems (such as finding a Brouwer fixed point). He proved that finding a Sperner simplex is PPAD-complete even for d = 3. Some 15 years later, Chen and Deng proved PPAD-completeness even for d = 2.[4]It is believed that PPAD-hard problems cannot be solved in time O(poly(log N)).
Suppose that each vertex of the triangulation may be labeled with multiple colors, so that the coloring function is f : S → 2^{[n+1]}.
For every sub-simplex, the set of labelings on its vertices is a set-family over the set of colors [n + 1]. This set-family can be seen as a hypergraph.
If, for every vertex v on a face of the simplex, the colors in f(v) are a subset of the set of colors on the face endpoints, then there exists a sub-simplex with a balanced labeling – a labeling in which the corresponding hypergraph admits a perfect fractional matching. To illustrate, here are some balanced labeling examples for n = 2:
This was proved byShapleyin 1973.[5]It is a combinatorial analogue of theKKMS lemma.
Suppose that we have a d-dimensional polytope P with n vertices. P is triangulated, and each vertex of the triangulation is labeled with a label from {1, …, n}. Every main vertex i is labeled i. A sub-simplex is called fully-labeled if it is d-dimensional and each of its d + 1 vertices has a different label. If every vertex in a face F of P is labeled with one of the labels on the endpoints of F, then there are at least n − d fully-labeled simplices. Some special cases are:
The general statement was conjectured byAtanassovin 1996, who proved it for the cased= 2.[6]The proof of the general case was first given by de Loera, Peterson, andSuin 2002.[7]They provide two proofs: the first is non-constructive and uses the notion ofpebble sets; the second is constructive and is based on arguments of following paths ingraphs.
Meunier[8]extended the theorem from polytopes to polytopal bodies, which need not be convex or simply-connected. In particular, if P is a polytope, then the set of its faces is a polytopal body. In every Sperner labeling of a polytopal body with vertices v_1, …, v_n, there are at least:
fully-labeled simplices such that any pair of these simplices receives two different labelings. The degree deg_{B(P)}(v_i) is the number of edges of B(P) to which v_i belongs. Since the degree is at least d, the lower bound is at least n − d. But it can be larger. For example, for the cyclic polytope in 4 dimensions with n vertices, the lower bound is:
Musin[9]further extended the theorem tod-dimensionalpiecewise-linear manifolds, with or without a boundary.
Asada, Frick, Pisharody, Polevy, Stoner, Tsang and Wellner[10]further extended the theorem topseudomanifoldswith boundary, and improved the lower bound on the number of facets with pairwise-distinct labels.
Suppose that, instead of a simplex triangulated into sub-simplices, we have ann-dimensional cube partitioned into smallern-dimensional cubes.
Harold W. Kuhn[11]proved the following lemma. Suppose the cube [0, M]^n, for some integer M, is partitioned into M^n unit cubes. Suppose each vertex of the partition is labeled with a label from {1, …, n + 1}, such that for every vertex v: (1) if v_i = 0 then the label on v is at most i; (2) if v_i = M then the label on v is not i. Then there exists a unit cube with all the labels {1, …, n + 1} (some of them more than once). The special case n = 2 is: suppose a square is partitioned into sub-squares, and each vertex is labeled with a label from {1, 2, 3}. The left edge is labeled with 1 (= at most 1); the bottom edge is labeled with 1 or 2 (= at most 2); the top edge is labeled with 1 or 3 (= not 2); and the right edge is labeled with 2 or 3 (= not 1). Then there is a square labeled with 1, 2, 3.
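Under the assumption M = 2, the two-dimensional special case can be verified exhaustively; the Python sketch below is an illustrative test harness (not Kuhn's proof) that enumerates every admissible labeling of the 3 × 3 grid of vertices and confirms that some unit square always carries all three labels.

```python
# Exhaustive check of Kuhn's cube lemma for n = 2, M = 2: every admissible
# labeling of the 3 x 3 vertex grid has a unit square whose four vertices
# carry all three labels {1, 2, 3}.
from itertools import product

M = 2

def allowed(x, y):
    labels = {1, 2, 3}
    if x == 0: labels &= {1}      # left edge: label at most 1
    if y == 0: labels &= {1, 2}   # bottom edge: label at most 2
    if x == M: labels -= {1}      # right edge: label not 1
    if y == M: labels -= {2}      # top edge: label not 2
    return sorted(labels)

verts = [(x, y) for x in range(M + 1) for y in range(M + 1)]
for choice in product(*(allowed(x, y) for x, y in verts)):
    lab = dict(zip(verts, choice))
    assert any({lab[(x, y)], lab[(x + 1, y)],
                lab[(x, y + 1)], lab[(x + 1, y + 1)]} == {1, 2, 3}
               for x in range(M) for y in range(M))
print("Kuhn's lemma holds for every admissible labeling with M = 2")
```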
Another variant, related to the Poincaré–Miranda theorem,[12]is as follows. Suppose the cube [0, M]^n is partitioned into M^n unit cubes. Suppose each vertex is labeled with a binary vector of length n, such that for every vertex v: (1) if v_i = 0 then coordinate i of the label on v is 0; (2) if v_i = M then coordinate i of the label on v is 1; (3) if two vertices are neighbors, then their labels differ by at most one coordinate. Then there exists a unit cube in which all 2^n labels are different. In two dimensions, another way to formulate this theorem is:[13]in any labeling that satisfies conditions (1) and (2), there is at least one cell in which the sum of labels is 0 [a 1-dimensional cell with (1,1) and (−1,−1) labels, or a 2-dimensional cell with all four different labels].
Wolsey[14]strengthened these two results by proving that the number of completely-labeled cubes is odd.
Musin[13]extended these results to generalquadrangulations.
Suppose that, instead of a single labeling, we have n different Sperner labelings. We consider pairs (simplex, permutation) such that the label of each vertex of the simplex is chosen from a different labeling (so for each simplex, there are n! different pairs). Then there are at least n! fully labeled pairs. This was proved by Ravindra Bapat[15]for any triangulation. A simpler proof, which only works for specific triangulations, was presented later by Su.[16]
Another way to state this lemma is as follows. Suppose there are n people, each of whom produces a different Sperner labeling of the same triangulation. Then there exists a simplex, and a matching of the people to its vertices, such that each vertex is labeled by its owner differently (one person labels its vertex by 1, another person labels its vertex by 2, etc.). Moreover, there are at least n! such matchings. This can be used to find an envy-free cake-cutting with connected pieces.
Asada, Frick, Pisharody, Polevy, Stoner, Tsang and Wellner[10]extended this theorem topseudomanifoldswith boundary.
More generally, suppose we have m different Sperner labelings, where m may be different from n. Then:[17]: Thm 2.1
Both versions reduce to Sperner's lemma when m = 1, or when all m labelings are identical.
See[18]for similar generalizations.
Brown and Cairns[19]strengthened Sperner's lemma by considering theorientationof simplices. Each sub-simplex has an orientation that can be either +1 or -1 (if it is fully-labeled), or 0 (if it is not fully-labeled). They proved that the sum of all orientations of simplices is +1. In particular, this implies that there is an odd number of fully-labeled simplices.
As an example for n = 3, suppose a triangle is triangulated and labeled with {1, 2, 3}. Consider the cyclic sequence of labels on the boundary of the triangle. Define the degree of the labeling as the number of switches from 1 to 2, minus the number of switches from 2 to 1. Note that the degree is the same if we count switches from 2 to 3 minus 3 to 2, or from 3 to 1 minus 1 to 3.
Musin proved that the number of fully labeled triangles is at least the degree of the labeling.[20]In particular, if the degree is nonzero, then there exists at least one fully labeled triangle.
If a labeling satisfies the Sperner condition, then its degree is exactly 1: there are 1-2 and 2-1 switches only in the side between vertices 1 and 2, and the number of 1-2 switches must be one more than the number of 2-1 switches (when walking from vertex 1 to vertex 2). Therefore, the original Sperner lemma follows from Musin's theorem.
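A short sketch of the degree computation just described, for a boundary read cyclically; the label sequence is an invented example of a Sperner-compliant boundary, so its degree comes out as 1.

```python
# Degree of a cyclic boundary labeling: the number of 1->2 switches minus
# the number of 2->1 switches along the boundary.

def degree(boundary):
    pairs = zip(boundary, boundary[1:] + boundary[:1])  # consecutive cyclic pairs
    ones_twos = twos_ones = 0
    for a, b in pairs:
        ones_twos += (a, b) == (1, 2)
        twos_ones += (a, b) == (2, 1)
    return ones_twos - twos_ones

# Boundary of a Sperner-colored triangle, read around once:
# corner 1, side 1-2, corner 2, side 2-3, corner 3, side 3-1.
labels = [1, 2, 1, 2, 2, 3, 3, 1]
print(degree(labels))   # 1, as expected for a Sperner boundary
```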
There is a similar lemma about finite and infinitetreesandcycles.[21]
Mirzakhani and Vondrak[22]study a weaker variant of a Sperner labeling, in which the only requirement is that labeliis not used on the face opposite to vertexi. They call itSperner-admissible labeling. They show that there are Sperner-admissible labelings in which every cell contains at most 4 labels. They also prove an optimal lower bound on the number of cells that must have at least two different labels in each Sperner-admissible labeling. They also prove that, for any Sperner-admissible partition of the regular simplex, the total area of the boundary between the parts is minimized by theVoronoi partition.
Sperner colorings have been used for effective computation offixed points. A Sperner coloring can be constructed such that fully labeled simplices correspond to fixed points of a given function. By making a triangulation smaller and smaller, one can show that the limit of the fully labeled simplices is exactly the fixed point. Hence, the technique provides a way to approximate fixed points.
A related application is the numerical detection ofperiodic orbitsandsymbolic dynamics.[23]Sperner's lemma can also be used inroot-finding algorithmsandfair divisionalgorithms; seeSimmons–Su protocols.
Sperner's lemma is one of the key ingredients of the proof ofMonsky's theorem, that a square cannot be cut into an odd number ofequal-area triangles.[24]
Sperner's lemma can be used to find acompetitive equilibriumin anexchange economy, although there are more efficient ways to find it.[25]: 67
Fifty years after first publishing it, Sperner presented a survey on the development, influence and applications of his combinatorial lemma.[26]
There are several fixed-point theorems which come in three equivalent variants: an algebraic topology variant, a combinatorial variant and a set-covering variant. Each variant can be proved separately using totally different arguments, but each variant can also be reduced to the other variants in its row. Additionally, each result in the top row can be deduced from the one below it in the same column.[27]
|
https://en.wikipedia.org/wiki/Sperner%27s_lemma
|
A personal transporter (also powered transporter,[1]electric rideable, personal light electric vehicle, personal mobility device, etc.) is any of a class of compact, mostly recent (21st century), motorised micromobility vehicles for transporting an individual at speeds that do not normally exceed 25 km/h (16 mph). They include electric skateboards, kick scooters, self-balancing unicycles and Segways, as well as gasoline-fueled motorised scooters or skateboards, typically using two-stroke engines of less than 49 cc (3.0 cu in) displacement.[2][3]Many newer versions use recent advances in vehicle battery and motor-control technologies. They are growing in popularity, and legislators are in the process of determining how these devices should be classified, regulated and accommodated during a period of rapid innovation.
Generally excluded from this legal category areelectric bicycles(that are considered to be a type of bicycle);electric motorbikes and scooters(that are treated as a type ofmotorcycleormoped); and powered mobility aids with 3 or 4 wheels on which the rider sits (which fall within regulations covering poweredmobility scooters).[4]
The first personal transporter was theAutoped, a stand-up scooter with a gasoline engine made from 1915 to 1922. Engine-powered scooters and skateboards reappeared in the 1970s and the 1980s.TwikeandSinclair C5were 1980s enclosed hybridvelomobilesthat also used pedal power.
With the rapid improvements in lithium batteries in the late 1990s and early 2000s, a range of new types of personal transporters appeared, and began to spread into use in urban settings for both recreation and practical transportation.
Dean Kamenapplied for his first patent for a 'human transporter', the Segway PT, in 1994.[5]This was followed by other patent applications prior to its product launch in late 2001 and first deliveries to customers early in 2002.[6][7][8]
Trevor Blackwell demonstrated a self-balancing unicycle based on the control mechanism of a Segway PT in 2004,[9][better source needed]for which he published open source designs (see Eunicycle). Focus Designs released the first commercially available self-balancing unicycle (which had a seat) in 2008,[10]and in 2010 Shane Chen, an American businessman and founder of Inventist, filed a patent for the more familiar and compact seatless device,[11]which his company, Inventist, launched in 2011.[12]
Chen then went on to file a patent for aself-balancing scooterin February 2013,[13]and launched aKickstarterfund-raising campaign in May 2013[14]with multiple companies, mainly in China releasing similar products. 500,000 units from 10 suppliers were recalled from the US market alone in July 2016.[15][16]
Louie Finkle of California is credited[by whom?]with creating the first commercial electric skateboards, offering his first wireless electric skateboard in 1997[17][18]and he filed for a patent in April 1999,[19]though it was not until 2004 that electric motors and batteries had sufficienttorqueand efficiency to power boards effectively.[17][20]In 2012 ZBoard raised nearly 30 times their target for a balance controlled electric skateboard on Kickstarter,[21]which was well received at theConsumer Electronics Showin Las Vegas in January 2013.[22]
In December 2016 The Verge magazine suggested that 2017 would be an "important year" for personal electric vehicles of all sizes.[23]On 14 August 2018, a unicycle manufactured by InMotion caught fire in a British flat; about a week later, InMotion issued a statement discouraging customers from buying parallel imports.[24][25]From 1 July 2019 onwards, Singapore has enforced the fire safety standard known as "UL 2272"[26]by banning the sale of non-certified products[27][28]and by publishing a list of legal products.[29]
The terminology for these devices is not yet stable (as of 2017) as the media and legislators discuss a rapidly emerging potential class of motor vehicle and its relationship to laws relating to other transport devices, including electric bicycles and mobility aids such as mobility scooters.[23][3]Commonly used terms for these new devices include:
Media: rideable,[30][31]electric rideable,[23][32]electric personal transporter, personal electric vehicle,[33]personal transporter,[34]portable electric vehicle,[35]portable personal vehicle.[36]
Legislative: personal mobility device (Singapore,[37]Australia – Victoria Transport Policy Unit[3]), personal e-mobility device (Underwriters Laboratory),[38]electrically motorized board (California, United States),[39]personal light electric vehicles (European Union),[40]electric personal assistive mobility device (Washington state, United States),[41]powered transporters (UK).[2]
Other languages: engins de déplacement personnel (French, lit. 'personal movement devices'),[42][43]средства индивидуальной мобильности (Russian, lit. 'means of individual mobility').[44]
The earliest example of a motorized scooter, or standing scooter with an internal combustion engine, was the 1915Autoped, made in the US until 1919 and in Germany until 1922.
An electric standing scooter has a small platform with two or more wheels, is driven by an electric motor, and folds for portability.
An electric skateboard is an electrically powered skateboard controlled by the rider shifting their weight and in some cases also a hand-held throttle.
The self-balancing scooter is a category of personal transporter which includes all self-balancing powered portable devices withtwo parallel wheels; these include the Segway PT, the Segway miniPRO and self-balancing hoverboards.
An electric unicycle is a single-rider electrically poweredunicyclethat balances itself automatically using computer-controlledaccelerometers,gyroscopes, and amagnetometer.[45]
TheOnewheelhas elements of an electric skateboard (it is powered) and a self-balancing unicycle (it has one wheel).[46]
TheHonda UNI-CUBand its predecessor theHonda U3-Xare concept seated devices that are fully stable that can travel sideways as well as in the forwards/backwards axis.
Most devices are powered by rechargeable lithium-ion vehicle batteries, and often 18650-size LiFePO4 batteries, controlled by complex battery management systems. Lithium polymer batteries are being tested for higher performance.[47]
Many devices now contain one, or sometimes two, batteries in the 101 to 160 Wh (360 to 580 kJ) range, which fall within the sizes that can be carried on an airline.[48][49]Airlines may restrict carrying some devices due to earlier product defects.[50]As a rule of thumb, every 100 Wh of capacity provides roughly 6–7 miles of range.[51]
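A trivial sketch of the rule of thumb just quoted (roughly 6–7 miles per 100 Wh); actual range varies widely with rider weight, speed, terrain and temperature.

```python
# Rough range estimate from battery capacity, using the 6-7 miles per
# 100 Wh rule of thumb quoted above. Purely illustrative.

def estimated_range_miles(capacity_wh, miles_per_100wh=(6, 7)):
    low, high = miles_per_100wh
    return capacity_wh / 100 * low, capacity_wh / 100 * high

print(estimated_range_miles(160))   # approximately (9.6, 11.2) miles
```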
These batteries, which have a good energy density (energy-to-mass ratio), provide the range, torque and operational life required,[52]unlike the previously available lead–acid, NiMH and NiCad technologies.
Many of these devices use brushless DC electric motors with permanent magnets attached to the moving hub, which turns around a fixed armature; these motors offer high efficiency, good speed–torque characteristics and low weight. The motor is often built into the wheel itself, eliminating gears and drive belts.[53]Many devices have a motor in the 250–500 watt range, which provides good performance for an adult rider on the flat and on an incline, with sportier models using motors in excess of 1,500 watts.[54]
Brushless DC motors, which often have regenerative braking, also need complex motor controllers.[55]
In early 2019, according to Secretary Chan, the Government was conducting a "consultation research" (顧問研究).[56]This does not mean that personal transporters are legal. The Transport Department issued a 2015 statement that, under the Road Traffic Ordinance, a personal transporter is classified as a motor vehicle, since it is mechanically propelled.[57]
Registration and a licence are required before any motor vehicle is used on the roads, including private roads. However, since the construction and operation of these motor-driven devices could pose a danger to the users themselves and other road users, they are not considered appropriate for use on roads, and hence they cannot be registered and licensed.[58][59]
According to police statistics, there were 9 complaints, 1 arrest and 1 accident between 5 July and 19 November 2019.[60]
In 2006, the Segway PT was approved for use onsidewalksand other pedestrian designated locations, and on roads without sidewalks, with obstructed sidewalks or sidewalks that lackcurb cuts. The user must be over 16 years old. No license is required. The maximum allowed speed is 13.5 km/h (8.4 mph), enforced by electronic restriction put in place by the importer.[61]
In a court, Segway PT was classified as a motorcycle, owing to the power output;[62]however, there is no report of registration. Segway Japan, an authorized dealer, sells Segways only to corporations to use in facilities.[63]
In Mecca they were banned after a video of a pilgrim riding a hoverboard during hajj was posted on social media.[64]
In December 2016 the Land Transport Authority started a 6-month trial where devices were allowed on trains and buses at all times.[65]
Personal transporters are not allowed on public roads.[66]A bill in early 2020 bans all personal transporters on sidewalks/footpaths, and requires shops to give notices regarding this ban.[67]Since sometime in 2019, riding personal transporters in HDB common areas can result in a fine of up to S$5,000; the fine also applies to bicycles and motorized bicycles.[68]
TheEuropean Committee for Standardization(CEN) has been in the process of defining a standard for personal transporters, referred to as 'personal light electric vehicle', including both self-balancing vehicles and standing vehicles with maximum speeds of up to 25 km/h (16 mph) and is expected to complete its work by the end of 2017.[69][70]In the meantime some countries have allowed personal transporters to be used on public roads with certain conditions.
TheEuropean Committee for Electrotechnical Standardization(CENELEC) has adopted the IEC standards as European Standards:
– EN IEC 63281-2-1:2024 -E-Transporters - Part 2-1: Safety requirements and test methods for personal e-Transporters
– EN IEC 63281-1:2023 -E-Transporters - Part 1: Terminology and classification
which together provide relevant terminology and specify safety requirements and test methods for personal e-transporters (PeTs). These European and international standards are applicable to electrically powered personal e-transporters (PeTs) used in private and public areas, where the speed control and/or the steering control is electric/electronic.[71]
A law revision by the Government of Åland concerning "small electrically powered vehicles" means the Segway PT and all other mainly one person electrical vehicles have been classified as bicycles since 14 March 2012.
The Segway i2, at 63 cm wide, is narrower than the 80 cm (31 in) width limit and has a low enough maximum speed to come under the laws relating to electric bicycles; it therefore has to use cycle lanes and paths where available, otherwise street lanes. The Segway x2, with its bigger wheels, is 84 cm wide and is therefore an electric vehicle that needs a license and insurance. Neither type may use sidewalks (lengthwise) or pedestrian zones (unless an exemption is stated).
In Belgium the law was recently adjusted to allow electrically motorized devices onto public roads (Art. 2.15.2).[72]Devices with a maximum speed of 18 km/h (11 mph) can ride on the cycle path. One can also use these devices on sidewalks at walking pace. Devices with a higher maximum speed fall under the existing rules for motorised vehicles. Insurance and protective wear are required in all cases.[42][better source needed]
Use of a Segway PT is allowed within city limits wherever pedestrians and bicycles are allowed, i.e., sidewalks, bicycle paths, parks, etc. Segways can be rented for city tours in cities ofZagreb,SplitandDubrovnik.
Until February 2016, the legal status of the Segway was controversial and unclear. At least since the autumn of 2010, the Ministry of Transport enforced the interpretation that a rider on a Segway is considered a pedestrian (with possible reference to the legal definition of a pedestrian, which mentions "persons on skis, rollerskates or other similar sport equipment", and with the stated rationale that the device is quite ineligible to fulfil the requirements for vehicles). The central Prague district Praha 1 and the city of Prague, supported by some transport experts including the academic Petr Moos, strongly opposed this interpretation. The ministry was preparing a legal change which would mention the Segway PT and skateboards explicitly in the definition of a pedestrian (and which should also implicitly cover unicycles and roller shoes). The city of Prague proposed to add personal transporters to the act as an entirely new and special category of road traffic vehicles/participants.
The amendment act 48/2016 Sb., in force since 20 February 2016, defines a new term "osobní technický prostředek" (= personal technical device/medium) for "personal transporter with selfbalancing device" and "other similar devices"; however, the text of the act uses the term "osobní přepravník" ("personal transporter") in that sense instead. The actual regulation is similar to that for users of skis and rollerskates, i.e. they fall under the rules for pedestrians and, in addition, they can use cyclist lanes and cyclist paths. Compared to rollerskates, PTs have their speed limited to "the speed of walking" on walkways. A municipality can restrict their use by municipal decree, but such a restriction needs to be marked by road signs. Since 21 March 2016, a new ordinance of the Ministry of Transport, 84/2016 Sb., which introduced several new road signs, is in force:[73]
Kick scooters are explicitly considered as bicycles by law. Personal transporters which are not "self-balancing" are not treated specifically.
Segways are used by municipal police corps in several cities such as Prague, Plzeň, Olomouc, Karlovy Vary, Znojmo and Slaný. Since 2014, a Segway ambulance has been used by the private rescue service Trans Hospital.
Owners and operators of rental Segway transporters are associated in the "Asociace Segway ČR", which had 9 members in August 2014; all of their rental shops are in the centre of Prague. In October 2012, this association prescribed rules for its members which contain a list of prohibited hazardous frequented localities.[74]Some other operators are not associated and do not respect the rules. The Metro daily newspaper, in a May 2015 article, presented an estimate that there were about 300 Segways on Prague's streets.[75]However, since November 2016, Segways have been prohibited in the broader centre of Prague.
Massive usage of Segways, as well as restrictions, are still limited to the area of the broader centre ofPrague.
On 15 September 2014, Praha 1 placed in Kampa park the first Czech road signs prohibiting the entrance of Segways. The sign consisted of the message "No entrance for pedestrians" with an additional text sign "JEN ZAŘÍZENÍ SEGWAY" (only Segway devices). These signs were criticized by the media and by the Ministry of Transport as confusing and incomprehensible.
In 2015 or earlier, Praha 1 also prohibited Segways from the passage of the Richter House between Michalská street and the Little Square in the Old Town. Unofficial marking on the floor was used for this prohibition.[76]
In July 2015,Praha 2prohibited Segways in the area ofVyšehradFortress. A round sign with the text "SEGWAY" inside was used.[76][77]
Since 15 August 2015, the director general of the National Library has prohibited Segway riding in the area of the Clementinum in Prague Old Town; however, Segways are allowed to be led by hand.[78]Similarly, Segways were prohibited in the area of the Tyrš House at Malá Strana, the main building of the Czech Sokol organization.
On the grounds of the new legal definitions and authorization, on 19 July 2016 the Prague Council approved a decree (in force since 3 August 2016) under which Segways (strictly speaking, all "personal transporters" as defined by law) are forbidden in the whole Prague Conservation Area (Old Town, New Town, Hradčany, Malá Strana, Josefov, Vyšehrad) as well as in a broad centre of the city: the whole district of Prague 7 (Holešovice and part of Bubeneč including Stromovka Park), a big part of Prague 4 (Nusle, Podolí, Braník, Krč, Michle), Karlín, parts of Žižkov and Vinohrady, etc.[79][80]However, the restriction only became effective once the prohibition road signs were installed. According to the marking project by TSK (the Prague road management organization), 610 zone signs were to be installed at 250 places, at a cost of 4 million CZK. Implementation of the marking was to begin after the official comment procedure, in the second half of November 2016.[81]The official information campaign "Segway No Way" had already started in August.[82]On 24 November 2016, the Magistrate gave its decision about the signage; the first such sign was installed on 25 November 2016 and the remainder over the next two weeks.[83]
The Segway PT is classified as a moped (knallert). As such vehicles must be fitted with lights, license plates and mechanical brakes, the Segway is effectively banned from public roads.[84]A trial in which the Segway was classified as a bicycle was announced, running from 1 June 2010 to 1 April 2011; the trial was extended to 1 December 2011, and later to the end of 2014.[85]
In September 2015 authorities in Finland recommended that personal transporters should be made legal for use on roads, making a distinction between devices with a maximum speed of 15 km/h (9.3 mph) which would be treated as pedestrians and ones with a maximum speed of 25 km/h (16 mph) which would be treated as bicycles.[86]
Segway PTs are classified as low-power mopeds and therefore require license plates, effectively banning the use on public roads. On 31 March 2015, The Ministry of Transport and Communications of Finland started progress to propose changes to law to allow Segways under 25 km/h on sidewalks and reclassifying them as bicycles. Like bicycles, Segways would be required to includesafety reflectorsand a bell to alert pedestrians and the driver is required to wear a bicycle helmet.[87]
In 2017, 284 people were injured and 5 killed in personal transporter accidents.[88]
Since 2019, France has had specific regulations for personal transporters.
Previously, Segway PTs, also called "gyropodes", were sometimes, but not always, treated as pedestrians and subject to the same rules and laws. Nonetheless, Segways which do not have type certification to be driven as a motor vehicle do not belong to any class of vehicle defined by the traffic code; for this reason, they had an unclear legal status.[89]
Riders must go with the direction of traffic.[90]
In Paris, motorized scooter riders can be fined for riding on sidewalks (135 euros) or parking antisocially (35 euros).[91]
In 2019 France introduced a change to the Code de la route specific to personal transporters, with rules depending on the speed the device can reach.
This new law
In Germany self-balancing hoverboards are not allowed on public streets.[92]
As of June 2017, it is not legal to ride solowheels on public roads (including sidewalks, parks, forest tracks, etc.) in Germany. Because a solowheel is considered a type of motor vehicle, the rider would need a test certificate from the Technical Inspection Agency (Technischer Überwachungsverein) to obtain insurance, and would additionally have to pay taxes according to the certificate. However, as the Inspection Agency has no valid classification for it, no certificate can be obtained. Hence, riding a solowheel on a public road means riding without a certificate, without insurance and while evading taxes, which can carry severe penalties (up to one year in prison[93]) if the rider is caught by the police. In contrast, the Segway, as a two-wheeled vehicle with a handlebar, has a classification that allows a certificate, and thus the compulsory insurance, to be obtained.
The Segway PT i2 is generally allowed on bicycle paths and public roads within city limits since 25 July 2009.[94]Outside city limits, the Segway may not be used onfederal motorways,federal highways,state roads, anddistrict roads. Bicycle lanes must be used if present. Riding a Segway on sidewalks and inpedestrian zonesfor city tours requires a special permit. The Segway is classified as an "electronic mobility aid", a new class of vehicle defined specifically for the Segway PT. Segways used on public roads must be equipped withfront and rear lighting,reflectors, a bell, and aninsurance plate.
The Központi Közlekedési Főfelügyelet (Central Traffic Authority Board) does not consider Segways to be vehicles, and treats skateboarders and people moving luggage trolleys as pedestrians. Segway riders may use sidewalks and must follow the rules for pedestrians.[95]
Segway PTs are permitted in most public places. They are permitted in certain areas on bicycle paths aroundDublinandCork.[citation needed]
Use of a Segway PT is allowed within city limits wherever pedestrians or bicycles are allowed, i.e., sidewalks, bicycle paths, parks, etc.[96]
Segway PTs are legal onbicycletrails and roads. They are the equivalent to electric bicycles and obey the same rules and laws.
In the Netherlands the use of self-balancing hoverboards is illegal on all public roads, it is only allowed on private property. The main reason given is that the vehicle is motorized but has no steering wheel and no place to sit. Therefore, the vehicle does not fall in any category allowed on public roads.[97]
In The Netherlands, any motorised skateboard is not permitted on public roads, including those driven by an electric motor.[98]
In April 2008, the Dutch Government announced that it would ease the ban it had imposed in January 2007 that made it illegal to use a Segway PT on public roads in the Netherlands.[99]Until recently[when?], a tolerance policy was in place due to the inability of the authorities to classify the Segway as a vehicle.[100]However, certain handicapped people, primarily heart and lung patients, are allowed to use the Segway, but only on the pavement. From 1 July 2008, anyone over the age of 16 is permitted to use a Segway on Dutch roads but users need to buy custom insurance.[101]Amsterdam police officers are testing the Segway. In Rotterdam, the Segway has been used regularly by police officers and city watches.
Because of the top speed of 20 km/h, the Segway was classified as a moped in Norway. Prior to 2014, there were requirements for registration, insurance, age limit, drivers licenses and helmets to operate a Segway in the country. Therefore, Segways were not originally able to be used legally on public or private roads or on private property in Norway.[102][103]Segways became legal in Norway on 1 July 2014 on all public roads with speed limits 20 km/h or less, sidewalks and bicycle lanes for ages 16 and older without requiring registration or insurance.[104]
From 20 May 2021, regulations on the movement of personal transport devices and electric scooters will apply.[105]They are included in Art. 33-33d of the Road Traffic Law. The driver of the personal transport device is obliged to use thecycle pathif it is designated for the direction in which it is moving or intends to turn. The driver of the personal transport device, when using the path for bicycles and pedestrians, is obliged to exercise particular caution and give way to pedestrians. He may use the footpath or road where there is no cycle path. If he uses them, he is obliged to drive at a speed close to that of a pedestrian, exercise particular caution, give way to a pedestrian and not obstruct his movement.[106]
Segway PTs are legal on public paths from age 18 (and below, when accompanied by adults) as an equivalent to pedestrian traffic[107]and are used by local police forces[108]and by the Polícia Marítima (a Navy unit) for beach patrolling. They are also used (rented) by tour operators across the country, and by shopping security guards.
It was unlawful to use a Segway PT on any public road or pavement in Sweden until 18 December 2008 when the Segway was re-classified as acykel klass II(class 2 bicycle).[109][110]On 1 October 2010 the Segway and similar one person electrical vehicles were re-classified as bicycles.[citation needed]
As of 1 September 2022, it is no longer permitted to park electric scooters on footpaths and cycle paths or to ride them on footpaths and pavements.[111]
In Switzerland, devices with a maximum speed of 25 km/h (16 mph) may be used from age 14 with a licence, and from age 16 without a licence.[112]
The Segway PT is classified as a moped with usage of all bicycle circulation areas.[113]Only the PT i2 and x2 (SE) have been approved for use in Switzerland, not the NineBot Elite or mini Pro. Every self-balancing vehicle must be fully redundant. The PT may be used on roads provided that it is equipped with a Swiss Road Kit and a license plate. The Swiss Road Kit has front and back lighting, a battery source, and a license plate holder. Use on sidewalks and in pedestrian zones is prohibited. An exception is made for handicapped individuals, who must obtain in advance a special authorization from the Swiss Federal Roads Office. The Segway PT i180 may also be registered for use on specific request; however, the PT i180 must be equipped with a left/right turn indicator system before it may be admitted for road use.[citation needed]
In England and Wales, use of these devices on a sidewalk is banned under Section 72 of the Highway Act 1835.[114]With reference to use on the carriageway, such a device falls into the category of 'motor vehicle' (defined as 'a mechanically propelled vehicle, intended or adapted for use on roads' by section 136 of the Road Traffic Regulation Act 1984) (see[115]) and as such is covered by the Road Vehicles (Construction & Use) Regulations 1986 and hence by approval through European Community Whole Vehicle Type Approval.[116]The government has been petitioned to allow these devices on the road,[117]and trials are currently being carried out in a restricted number of towns allowing the use of rental (but not privately owned) electric scooters.[118]While in opposition in 2008, the Conservatives and Liberal Democrats lobbied the Labour Government to change the law to allow Segways to use public cycle lanes.[119]In July 2010, a man was charged under the Highway Act 1835 in Barnsley for riding his Segway on the pavement, and was prosecuted and fined £75 in January 2011.[120][121][122]His conviction was upheld by the High Court on appeal.[123]
InScotland, it is illegal to ride on public pavements (sidewalks) under the Roads Act, 1984.[114]
InTorontomotorized vehicles are not allowed on sidewalks, except for mobility scooters for people who need them.[124]
Restrictions on motorized vehicle use are set by provinces individually. In Alberta, Segway PTs cannot legally be driven on public roads, including sidewalks abutting public roads. Segways cannot legally be driven on city-owned bicycle paths in Calgary.[citation needed]Segways are allowed on private land with the landowner's permission. In British Columbia, Segways cannot legally be operated on B.C. roads or on sidewalks because they cannot be licensed or insured as a vehicle in B.C.[125]In Ontario, the Ministry of Transportation started a pilot program allowing Segways to be used by people 14 years or older with a disability, Canada Post door-to-door delivery personnel, and police officers. It was originally planned to end on 19 October 2011, but was extended by two years, and then extended again by an additional five years (to 19 October 2018), due to limited participation. Prior to the end of the pilot program, the Ministry of Transportation will assess the data and information gathered from the pilot and decide whether to allow Segways and how to legislate them.[126]
InCalifornia, as of 1 January 2016 'electrically motorized boards' can be used by those over 16 years old at speeds of up to 15 miles per hour (24 km/h) on streets where the speed limit is under 35 miles per hour (56 km/h) as long as they wear a helmet and comply withdrive/drug laws. Boards must bespeed limitedto 20 miles per hour (32 km/h), be designed for the transport of one person and have a power of less than 1000watts. Use of these devices on the sidewalk is left to cities and counties to decide. Having monitored this new law for 5 years,California Highway Patrolwill submit a final report to the legislature in 2021.[39]University of California, Los Angelesincluded Hoverboards in a general restriction on the use of bicycles, scooters and skateboards using walkways and hallways in November 2015.[127]
InNew York City, self-balancing hoverboards are banned under existing legislation; however, community advocates are working with lawmakers to legalize their use[128][129]but there is no current explanation from the lawmakers relating to electric skateboards.[130]
The Segway PT has been banned from use on sidewalks and in public transportation in a fewmunicipalitiesand the company has challenged bans and sought exemption from sidewalk restrictions in over 30 states.[citation needed]Advocacy groups for pedestrians and the blind in the US have been critical of Segway PT use: America Walks[131]and theAmerican Council of the Blindoppose allowing people, even those with disabilities, to drive the Segway PT on sidewalks and have actively lobbied against any such legislation.[132]Today, Segways are allowed on sidewalks in most states, though local municipalities may forbid them. Many states also allow them on bicycle lanes or on roads with speed limits of up to 25 mph (40 km/h).[133]
In 2011, the U.S. government Department of Justice—amending regulations that implement title II of theAmericans with Disabilities Act(ADA)—ruled that the Segway is an "other power-driven mobility device" and its use must be permitted unless the covered entity can demonstrate that users cannot operate the class of devices in accordance with legitimate safety requirements.[134]
A fact sheet published by the US Justice Department states: "People with mobility, circulatory, respiratory, or neurological disabilities use many kinds of devices for mobility. Some use walkers, canes, crutches, or braces. Some use manual or power wheelchairs or electric scooters. In addition, advances in technology have given rise to new devices, such as Segways that some people with disabilities use as mobility devices, including many veterans injured while serving in the military. And more advanced devices will inevitably be invented, providing more mobility options for people with disabilities." There is some allowance in only some very specific circumstances where usage would be considered unsafe.[135]Semi-ambulatory Americans have previously benefitted from Segway use, even in New York City.[136]Segs4Vetsprovides Segway PTs to permanently injured military veterans.[137]
San Franciscobanned the Segway PT from sidewalks over safety concerns in 2002.[138]The District of Columbia categorizes Segways as a "personal mobility device" which means Segway users follow D.C.'s bicycle laws, which do not require Segway users to wear helmets and other protective gear. Users are not allowed to wear headphones with the exception of hearing aids or other devices that only require the use of one ear.[139][140]
In Mexico there is no regulation that limits Segway use in public spaces.[141]
The authorities stated in late 2015 that self-balancing hoverboards must not be ridden on the carriageway or sidewalk in the state ofNew South Walessince they are categorised as motor vehicles but don't comply with any existing vehicle class. They did also say that "our road safety experts in the Centre for Road Safety are currently working with their counterparts across the country on national laws and safety standards for these personal electric transport devices, so we can figure out how and where people can use them safely".[142][143]Other states in Australia have yet to make a clear decision or announcement on legality and enforcement, and are relying on existing laws in place.[144]They are free to use on private property.[144]
In Australia laws are determined at the state & territory level, each differing in their adoption of theAustralian Road Rules. It is generally illegal to use Segway PTs in public places and on roads throughout Australia.
In theAustralian Capital Territory, use of Segways is illegal on roads and other public places, but, as of June 2012[update], was permitted around Canberra'sLake Burley Griffinand other tourist attractions, subject to training, safety equipment and speed limit requirements.[145][146]
In New South Wales, the Segway has been confirmed by theRoads & Traffic Authorityas being illegal on both roads and footpaths. "In simple terms, riders are way too exposed to mix with general traffic on a road and too fast, heavy and consequently dangerous to other users on footpaths or cycle paths."[147]Although this does not render them totally illegal (they may still, for example, be used on private property), their uses are limited enough that they are not sold to the general public. As of 2024, all forms of personal transporter are illegal for personal use in public areas such as roads, footpaths, parks, bike paths, shared paths etc.[148][149]
InQueensland, the use of the Segway became legal on 1 August 2013. Queensland transport MinisterScott Emersonnoted that it makes sense for Segways to be allowed on public paths across Queensland, given users wear helmets.
InWestern Australia, the law enables Electric Personal Transporters (EPT) (Segways) to be used as part of a supervised commercial tour, being run by an operator that holds the appropriate approvals. You may use an EPT on private property. Tour operators should approach the Local Authority where they wish to operate the tour. Local authorities have ultimate responsibility for approving tour operators within their respective areas.[150][151]
In New Zealand the Segway PT is classed as amobility device, in the same category as a mobility scooter or electric wheelchair. Mobility Devices must be ridden on footpaths where possible, at a speed that does not endanger others, and give way to pedestrians.[152]This ruling might not be consistently applied: in 2011, police inTaupōhad to stop using Segways because there is no separate vehicle classification that applies to them, requiring their registration as roadworthy in the same manner as cars.[153]
|
https://en.wikipedia.org/wiki/Personal_transporter
|
In mathematics, a homothety (or homothecy, or homogeneous dilation) is a transformation of an affine space determined by a point S called its center and a nonzero number k called its ratio, which sends a point X to a point X′ by the rule that the vector from S to X′ is k times the vector from S to X.[1]
Using position vectors: x′ = s + k(x − s).
In case of S = O (the origin): x′ = kx,
which is a uniform scaling and shows the meaning of special choices for k: for k = 1 one obtains the identity mapping, and for k = −1 one obtains the point reflection at the center S.
For the ratio 1/k one gets the inverse mapping defined by k.
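A minimal sketch of the mapping above in coordinates, assuming 2-dimensional points; it applies x′ = s + k(x − s) and checks that the homothety with the same center and ratio 1/k inverts it.

```python
# Homothety with center s and ratio k: x -> s + k*(x - s), applied
# coordinate-wise; the map with ratio 1/k and the same center inverts it.

def homothety(s, k, x):
    return tuple(si + k * (xi - si) for si, xi in zip(s, x))

S, k = (1.0, 2.0), -2.0
X = (3.0, 5.0)
X1 = homothety(S, k, X)               # (-3.0, -4.0)
assert homothety(S, 1 / k, X1) == X   # the inverse mapping recovers X
print(X1)
```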
In Euclidean geometry homotheties are the similarities that fix a point and either preserve (if k > 0) or reverse (if k < 0) the direction of all vectors. Together with the translations, all homotheties of an affine (or Euclidean) space form a group, the group of dilations or homothety-translations. These are precisely the affine transformations with the property that the image of every line g is a line parallel to g.
In projective geometry, a homothetic transformation is a similarity transformation (i.e., fixes a given elliptic involution) that leaves the line at infinity pointwise invariant.[2]
In Euclidean geometry, a homothety of ratio k multiplies distances between points by |k|, areas by k^2 and volumes by |k|^3. Here k is the ratio of magnification or dilation factor or scale factor or similitude ratio. Such a transformation can be called an enlargement if the scale factor exceeds 1. The above-mentioned fixed point S is called the homothetic center or center of similarity or center of similitude.
The term, coined by the French mathematician Michel Chasles, is derived from two Greek elements: the prefix homo- (ὁμο, 'similar') and thesis (θέσις, 'position'). It describes the relationship between two figures of the same shape and orientation. For example, two Russian dolls looking in the same direction can be considered homothetic.
Homotheties are used to scale the contents of computer screens; for example, smartphones, notebooks, and laptops.
The following properties hold in any dimension.
A homothety has the following properties: the image of a line is a line parallel to it, and the distance between two points is multiplied by |k|{\displaystyle |k|}, so that the ratio of any two line segments is preserved.
Both properties show: a homothety is a similarity transformation.
Derivation of the properties:In order to make calculations easy it is assumed that the centerS{\displaystyle S}is the origin:x→kx{\displaystyle \mathbf {x} \to k\mathbf {x} }. A lineg{\displaystyle g}with parametric representationx=p+tv{\displaystyle \mathbf {x} =\mathbf {p} +t\mathbf {v} }is mapped onto the point setg′{\displaystyle g'}with equationx=k(p+tv)=kp+tkv{\displaystyle \mathbf {x} =k(\mathbf {p} +t\mathbf {v} )=k\mathbf {p} +tk\mathbf {v} }, which is a line parallel tog{\displaystyle g}.
The distance of two pointsP:p,Q:q{\displaystyle P:\mathbf {p} ,\;Q:\mathbf {q} }is|p−q|{\displaystyle |\mathbf {p} -\mathbf {q} |}and|kp−kq|=|k||p−q|{\displaystyle |k\mathbf {p} -k\mathbf {q} |=|k||\mathbf {p} -\mathbf {q} |}the distance between their images. Hence, theratio(quotient) of two line segments remains unchanged.
In case of S≠O{\displaystyle S\neq O} the calculation is analogous but a little more extensive.
Consequences: A triangle is mapped onto a similar one. The homothetic image of a circle is a circle. The image of an ellipse is a similar one, i.e. the ratio of the two axes is unchanged.
If for a homothety with center S{\displaystyle S} the image Q1{\displaystyle Q_{1}} of a point P1{\displaystyle P_{1}} is given (see diagram), then the image Q2{\displaystyle Q_{2}} of a second point P2{\displaystyle P_{2}}, which does not lie on line SP1{\displaystyle SP_{1}}, can be constructed graphically using the intercept theorem: Q2{\displaystyle Q_{2}} is the common point of the line SP2¯{\displaystyle {\overline {SP_{2}}}} and the line through Q1{\displaystyle Q_{1}} parallel to P1P2¯{\displaystyle {\overline {P_{1}P_{2}}}}. The image of a point collinear with P1,Q1{\displaystyle P_{1},Q_{1}} can be determined using P2,Q2{\displaystyle P_{2},Q_{2}}.
Before computers became ubiquitous, scalings of drawings were done by using apantograph, a tool similar to acompass.
Construction and geometrical background:
Because of|SQ0|/|SP0|=|Q0Q|/|PP0|{\displaystyle |SQ_{0}|/|SP_{0}|=|Q_{0}Q|/|PP_{0}|}(see diagram) one gets from theintercept theoremthat the pointsS,P,Q{\displaystyle S,P,Q}are collinear (lie on a line) and equation|SQ|=k|SP|{\displaystyle |SQ|=k|SP|}holds. That shows: the mappingP→Q{\displaystyle P\to Q}is a homothety with centerS{\displaystyle S}and ratiok{\displaystyle k}.
Derivation:
For the composition σ2σ1{\displaystyle \sigma _{2}\sigma _{1}} of the two homotheties σ1,σ2{\displaystyle \sigma _{1},\sigma _{2}} with centers S1,S2{\displaystyle S_{1},S_{2}} with
{\displaystyle \sigma _{1}:\mathbf {x} \to \mathbf {s} _{1}+k_{1}(\mathbf {x} -\mathbf {s} _{1}),\qquad \sigma _{2}:\mathbf {x} \to \mathbf {s} _{2}+k_{2}(\mathbf {x} -\mathbf {s} _{2}),}
one gets by calculation for the image of point X:x{\displaystyle X:\mathbf {x} }:
{\displaystyle (\sigma _{2}\sigma _{1})(\mathbf {x} )=k_{1}k_{2}\mathbf {x} +k_{2}(1-k_{1})\mathbf {s} _{1}+(1-k_{2})\mathbf {s} _{2}.}
Hence, the composition is a translation in case of k1k2=1{\displaystyle k_{1}k_{2}=1} (with translation vector parallel to line S1S2{\displaystyle S_{1}S_{2}}). In case of k1k2≠1{\displaystyle k_{1}k_{2}\neq 1}, the point
{\displaystyle S_{3}:\ \mathbf {s} _{3}={\frac {k_{2}(1-k_{1})\mathbf {s} _{1}+(1-k_{2})\mathbf {s} _{2}}{1-k_{1}k_{2}}}}
is a fixpoint (is not moved) and the composition
is a homothety with center S3{\displaystyle S_{3}} and ratio k1k2{\displaystyle k_{1}k_{2}}. S3{\displaystyle S_{3}} lies on line S1S2¯{\displaystyle {\overline {S_{1}S_{2}}}}.
Derivation:
The composition of the homothety σ:x→s+k(x−s),k≠1,{\displaystyle \sigma :\mathbf {x} \to \mathbf {s} +k(\mathbf {x} -\mathbf {s} ),\ k\neq 1,} and the translation τ:x→x+v{\displaystyle \tau :\mathbf {x} \to \mathbf {x} +\mathbf {v} } is
{\displaystyle (\tau \sigma )(\mathbf {x} )=k\mathbf {x} +(1-k)\mathbf {s} +\mathbf {v} =k\mathbf {x} +(1-k)\left(\mathbf {s} +{\frac {\mathbf {v} }{1-k}}\right),}
which is a homothety with center s′=s+v1−k{\displaystyle \mathbf {s} '=\mathbf {s} +{\frac {\mathbf {v} }{1-k}}} and ratio k{\displaystyle k}.
The homothety σ:x→s+k(x−s){\displaystyle \sigma :\mathbf {x} \to \mathbf {s} +k(\mathbf {x} -\mathbf {s} )} with center S=(u,v){\displaystyle S=(u,v)} can be written as the composition of a homothety with center O{\displaystyle O} and a translation:
{\displaystyle \sigma :\mathbf {x} \to k\mathbf {x} +(1-k)\mathbf {s} .}
Hence σ{\displaystyle \sigma } can be represented in homogeneous coordinates by the matrix:
{\displaystyle {\begin{pmatrix}k&0&(1-k)u\\0&k&(1-k)v\\0&0&1\end{pmatrix}}}
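A minimal Python sketch of this matrix form (NumPy assumed; the helper name homothety_matrix and the sample values are chosen here for illustration) checks that multiplying a point in homogeneous coordinates by the matrix reproduces s + k(x − s):

import numpy as np

def homothety_matrix(u, v, k):
    # Homogeneous 3x3 matrix of x -> s + k(x - s) with s = (u, v):
    # scale about the origin by k, then translate by (1 - k) * s.
    return np.array([[k,   0.0, (1.0 - k) * u],
                     [0.0, k,   (1.0 - k) * v],
                     [0.0, 0.0, 1.0]])

M = homothety_matrix(u=1.0, v=1.0, k=2.0)
x = np.array([3.0, 2.0, 1.0])   # the point (3, 2) in homogeneous coordinates
print(M @ x)                    # [5. 3. 1.], i.e. s + k*(x - s) = (5, 3)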
A pure homothetylinear transformationis alsoconformalbecause it is composed of translation and uniform scale.
|
https://en.wikipedia.org/wiki/Homothetic_transformation
|
In statistics, shrinkage is the reduction in the effects of sampling variation. In regression analysis, a fitted relationship appears to perform less well on a new data set than on the data set used for fitting.[1]In particular the value of the coefficient of determination 'shrinks'. This idea is complementary to overfitting and, separately, to the standard adjustment made in the coefficient of determination to compensate for the effects of further sampling, such as controlling for the potential of new explanatory terms improving the model by chance: that is, the adjustment formula itself provides "shrinkage", although the shrinkage it yields is artificial.
Ashrinkage estimatoris anestimatorthat, either explicitly or implicitly, incorporates the effects of shrinkage. In loose terms this means that a naive or raw estimate is improved by combining it with other information. The term relates to the notion that the improved estimate is made closer to the value supplied by the 'other information' than the raw estimate. In this sense, shrinkage is used toregularizeill-posedinferenceproblems.
Shrinkage is implicit inBayesian inferenceand penalized likelihood inference, and explicit inJames–Stein-type inference. In contrast, simple types ofmaximum-likelihoodandleast-squares estimationprocedures do not include shrinkage effects, although they can be used within shrinkage estimation schemes.
Many standard estimators can beimproved, in terms ofmean squared error(MSE), by shrinking them towards zero (or any other finite constant value). In other words, the improvement in the estimate from the corresponding reduction in the width of the confidence interval can outweigh the worsening of the estimate introduced by biasing the estimate towards zero (seebias-variance tradeoff).
Assume that the expected value of the raw estimate is not zero and consider other estimators obtained by multiplying the raw estimate by a certain parameter. A value for this parameter can be specified so as to minimize the MSE of the new estimate. For this value of the parameter, the new estimate will have a smaller MSE than the raw one, and thus it has been improved. An effect here may be to convert anunbiasedraw estimate to an improved biased one.
An example arises in the estimation of the populationvariancebysample variance. For a sample size ofn, the use of a divisorn−1 in the usual formula (Bessel's correction) gives an unbiased estimator, while other divisors have lower MSE, at the expense of bias. The optimal choice of divisor (weighting of shrinkage) depends on theexcess kurtosisof the population, as discussed atmean squared error: variance, but one can always do better (in terms of MSE) than the unbiased estimator; for the normal distribution a divisor ofn+1 gives one which has the minimum mean squared error.
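The effect can be checked by simulation. The following Python sketch (NumPy assumed; sample size, variance, and trial count are arbitrary illustrative choices) draws many normal samples and compares the mean squared error of the divisors n − 1, n, and n + 1; for normal data the n + 1 divisor comes out smallest:

import numpy as np

rng = np.random.default_rng(0)
sigma2, n, trials = 1.0, 10, 200_000
x = rng.normal(0.0, np.sqrt(sigma2), size=(trials, n))
ss = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)   # sum of squared deviations

for divisor in (n - 1, n, n + 1):   # unbiased, maximum-likelihood, minimum-MSE (normal case)
    estimate = ss / divisor
    print(divisor, np.mean((estimate - sigma2) ** 2))   # MSE is smallest for n + 1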
Types ofregressionthat involve shrinkage estimates includeridge regression, where coefficients derived from a regular least squares regression are brought closer to zero by multiplying by a constant (theshrinkage factor), andlasso regression, where coefficients are brought closer to zero by adding or subtracting a constant.
The use of shrinkage estimators in the context of regression analysis, where there may be a large number of explanatory variables, has been described by Copas.[2]Here the values of the estimated regression coefficients are shrunk towards zero with the effect of reducing the mean square error of predicted values from the model when applied to new data. A later paper by Copas[3]applies shrinkage in a context where the problem is to predict a binary response on the basis of binary explanatory variables.
Hausser and Strimmer "develop a James-Stein-type shrinkage estimator, resulting in a procedure that is highly efficient statistically as well as computationally. Despite its simplicity, it outperforms eight other entropy estimation procedures across a diverse range of sampling scenarios and data-generating models, even in cases of severe undersampling. ... [The] method is fully analytic and hence computationally inexpensive. Moreover, [the] procedure simultaneously provides estimates of the entropy and of the cell frequencies. The proposed shrinkage estimators of entropy and mutual information, as well as all other investigated entropy estimators, have been implemented in R (R Development Core Team, 2008). A corresponding R package 'entropy' was deposited in the R archive CRAN under the GNU General Public License."[4][5]
|
https://en.wikipedia.org/wiki/Shrinkage_estimator
|
The following is a list of products, services, and apps provided byGoogle. Active, soon-to-be discontinued, and discontinued products, services, tools, hardware, and other applications are broken out into designated sections.
Applications that are no longer in development and scheduled to be discontinued in the future:
Google has retired many offerings, either because of obsolescence, integration into other Google products, or lack of interest.[21]Google's discontinued offerings are colloquially referred to as Google Graveyard.[22][23]
|
https://en.wikipedia.org/wiki/Titan_M
|
Code wordmay refer to:
|
https://en.wikipedia.org/wiki/Codeword
|
Password notification emailor password recovery email is a commonpassword recoverytechnique used bywebsites. If a user forgets theirpassword, a password recoveryemailis sent which contains enough information for the user to access theiraccountagain. This method of password retrieval relies on the assumption that only the legitimate owner of the account has access to the inbox for that particular email address.
The process is often initiated by the user clicking on a forgotten password link on the website where, after entering theirusernameor email address, the password notification email would be automatically sent to the inbox of the account holder. This email may contain a temporary password or aURLthat can be followed to enter a new password for that account. The new password or the URL often contain a randomly generatedstringof text that can only be obtained by reading that particular email.[1]
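The token-in-URL variant of this flow can be sketched in Python with the standard library's secrets module. Everything here is a simplified, hypothetical illustration: the function names, the example.com base URL, and the in-memory store are invented for the sketch, and a real site would persist only a hash of the token alongside proper expiry and single-use handling:

import secrets
from datetime import datetime, timedelta, timezone

reset_tokens = {}   # hypothetical in-memory store; a real site would persist a hash of the token

def create_reset_link(username, base_url="https://example.com/reset"):
    token = secrets.token_urlsafe(32)   # long random string, infeasible to guess
    reset_tokens[token] = {"user": username,
                           "expires": datetime.now(timezone.utc) + timedelta(hours=1)}
    return f"{base_url}?token={token}"

def redeem(token):
    entry = reset_tokens.pop(token, None)   # single use: remove on first redemption
    if entry is None or entry["expires"] < datetime.now(timezone.utc):
        return None                          # unknown, already used, or expired link
    return entry["user"]

print(create_reset_link("alice"))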
Another method used is to send all or part of the original password in the email. Sending only a few characters of the password can help the user to remember their original password without having to reveal the whole password to them.
The main issue is that the contents of the password notification email can be easily discovered by anyone with access to the inbox of the account owner.[2]This could be as a result of shoulder surfing or if the inbox itself is not password protected. The contents could then be used to compromise the security of the account. The user would therefore have the responsibility of either securely deleting the email or ensuring that its contents are not revealed to anyone else. A partial solution to this problem is to cause any links contained within the email to expire after a period of time, making the email useless if it is not used quickly after it is sent.
Any method that sends part of the original password means that the password is stored in plain text and leaves the password open to an attack from hackers.[3]This is why it is typical for newer sites to create a new password or generate a one-time token instead. If the site gets hacked, the passwords contained within could be used to access other accounts belonging to the user, if that user had chosen to use the same password for two or more accounts. Additionally, emails are often not secure. Unless an email had been encrypted prior to being sent, its contents could be read by anyone who eavesdrops on the email.
|
https://en.wikipedia.org/wiki/Password_notification_e-mail
|
Semantic unificationis the process of unifying lexically different concept representations that are judged to have the same semantic content (i.e., meaning). In business processes, the conceptual semantic unification is defined as "the mapping of two expressions onto an expression in an exchange format which is equivalent to the given expression".[1]
Semantic unification has since been applied to the fields of business processes and workflow management. In the early 1990s Charles Petri[full citation needed] at Stanford University[full citation needed] introduced the term "semantic unification" for business models; later references can be found in[2] and the notion was later formalized in Fawsy Bendeck's dissertation.[3] Petri introduced the term "pragmatic semantic unification" to refer to the approaches in which the results are tested against a running application using the semantic mappings.[4]In this pragmatic approach, the accuracy of the mapping is not as important as its usability.
In general, semantic unification as used in business processes is employed to find a common unified concept that matches two lexicalized expressions into the same interpretation.[citation needed]
|
https://en.wikipedia.org/wiki/Semantic_unification
|
TheHH-suiteis anopen-source softwarepackage for sensitiveproteinsequence searching. It contains programs that can search for similar protein sequences in protein sequence databases. Sequence searches are a standard tool in modern biology with which the function of unknown proteins can be inferred from the functions of proteins with similar sequences.HHsearchandHHblitsare two main programs in the package and the entry point to its search function, the latter being a faster iteration.[2][3]HHpredis an online server forprotein structure predictionthat uses homology information from HH-suite.[4]
The HH-suite searches for sequences usinghidden Markov models(HMMs). The name comes from the fact that it performs HMM-HMM alignments. Among the most popular methods for protein sequence matching, the programs have been cited more than 5000 times total according toGoogle Scholar.[5]
Proteins are central players in all of life's processes. Understanding them is central to understanding molecular processes in cells. This is particularly important in order to understand the origin of diseases. But for a large fraction of the approximately 20 000 human proteins the structures and functions remain unknown. Many proteins have been investigated in model organisms such as many bacteria, baker's yeast, fruit flies, zebra fish or mice, for which experiments can be often done more easily than with human cells. To predict the function, structure, or other properties of a protein for which only its sequence of amino acids is known, the protein sequence is compared to the sequences of other proteins in public databases. If a protein with sufficiently similar sequence is found, the two proteins are likely to be evolutionarily related ("homologous"). In that case, they are likely to share similar structures and functions. Therefore, if a protein with a sufficiently similar sequence and with known functions and/or structure can be found by the sequence search, the unknown protein's functions, structure, and domain composition can be predicted. Such predictions greatly facilitate the determination of the function or structure by targeted validation experiments.
Sequence searches are frequently performed by biologists to infer the function of an unknown protein from its sequence. For this purpose, the protein's sequence is compared to the sequences of other proteins in public databases and its function is deduced from those of the most similar sequences. Often, no sequences with annotated functions can be found in such a search. In this case, more sensitive methods are required to identify more remotely related proteins orprotein families. From these relationships, hypotheses about the protein's functions,structure, anddomain compositioncan be inferred. HHsearch performs searches with a protein sequence through databases. The HHpred server and the HH-suite software package offer many popular, regularly updated databases, such as theProtein Data Bank, as well as theInterPro,Pfam,COG, andSCOPdatabases.
Modern sensitive methods for protein search utilize sequence profiles. They may be used to compare a sequence to a profile, or in more advanced cases such as HH-suite, to match among profiles.[2][6][7][8]Profiles and alignments are themselves derived from matches, using for examplePSI-BLASTor HHblits. Aposition-specific scoring matrix(PSSM) profile contains for each position in the query sequence the similarity score for the 20 amino acids. The profiles are derived frommultiple sequence alignments(MSAs), in which related proteins are written together (aligned), such that the frequencies of amino acids in each position can be interpreted as probabilities for amino acids in new related proteins, and be used to derive the "similarity scores". Because profiles contain much more information than a single sequence (e.g. the position-specific degree of conservation), profile-profile comparison methods are much more powerful than sequence-sequence comparison methods likeBLASTor profile-sequence comparison methods like PSI-BLAST.[6]
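To make the idea of a profile concrete, the following Python sketch computes per-column amino-acid frequencies from a toy alignment; this is only the raw ingredient of a PSSM or profile HMM, not the HH-suite's actual implementation, and the sequences, function name, and pseudocount are invented for illustration:

from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def column_frequencies(msa, pseudocount=1.0):
    # Per-column amino-acid frequencies of an aligned set of sequences
    # (gaps '-' ignored), the raw material of a sequence profile / PSSM.
    profile = []
    for column in zip(*msa):
        counts = Counter(c for c in column if c in AMINO_ACIDS)
        total = sum(counts.values()) + pseudocount * len(AMINO_ACIDS)
        profile.append({aa: (counts[aa] + pseudocount) / total for aa in AMINO_ACIDS})
    return profile

msa = ["ACDE", "ACDE", "ACEE", "A-DE"]   # toy alignment, for illustration only
print(round(column_frequencies(msa)[2]["D"], 3))   # D dominates the third column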
HHpred and HHsearch represent query and database proteins byprofile hidden Markov models(HMMs), an extension of PSSM sequence profiles that also records position-specific amino acid insertion and deletion frequencies. HHsearch searches a database of HMMs with a query HMM. Before starting the search through the actual database of HMMs, HHsearch/HHpred builds amultiple sequence alignmentof sequences related to the query sequence/MSA using the HHblits program. From this alignment, a profile HMM is calculated. The databases contain HMMs that are precalculated in the same fashion using PSI-BLAST. The output of HHpred and HHsearch is a ranked list of database matches (including E-values and probabilities for a true relationship) and the pairwise query-database sequence alignments.
HHblits, a part of the HH-suite since 2001, builds high-quality multiple sequence alignments (MSAs) starting from a single query sequence or an MSA. As in PSI-BLAST, it works iteratively, repeatedly constructing new query profiles by adding the results found in the previous round. It matches against pre-built HMM databases derived from protein sequence databases, in which each HMM represents a "cluster" of related proteins. In the case of HHblits, such matches are done on the level of HMM-HMM profiles, which grants additional sensitivity. Its prefiltering reduces the tens of millions of HMMs to be matched to a few thousand, thus speeding up the slow HMM-HMM comparison process.[3]
The HH-suite comes with a number of pre-built profile HMMs that can be searched using HHblits and HHsearch, among them a clustered version of theUniProtdatabase, of theProtein Data Bankof proteins with known structures, ofPfamprotein family alignments, ofSCOPstructural protein domains, and many more.[9]
Applications of HHpred and HHsearch include protein structure prediction, complex structure prediction, function prediction, domain prediction, domain boundary prediction, and evolutionary classification of proteins.[10]
HHsearch is often used forhomology modeling, that is, to build a model of the structure of a query protein for which only the sequence is known: For that purpose, a database of proteins with known structures such as theprotein data bankis searched for "template" proteins similar to the query protein. If such a template protein is found, the structure of the protein of interest can be predicted based on a pairwisesequence alignmentof the query with the template protein sequence. For example, a search through the PDB database of proteins with solved 3D structure takes a few minutes. If a significant match with a protein of known structure (a "template") is found in the PDB database, HHpred allows the user to build a homology model using theMODELLERsoftware, starting from the pairwise query-template alignment.
HHpred servers have been ranked among the best servers duringCASP7, 8, and 9, for blind protein structure prediction experiments. In CASP9, HHpredA, B, and C were ranked 1st, 2nd, and 3rd out of 81 participating automatic structure prediction servers in template-based modeling[11]and 6th, 7th, 8th on all 147 targets, while being much faster than the best 20 servers.[12]InCASP8, HHpred was ranked 7th on all targets and 2nd on the subset of single domain proteins, while still being more than 50 times faster than the top-ranked servers.[4]
In addition to HHsearch and HHblits, the HH-suite contains programs and perl scripts for format conversion, filtering of MSAs, generation of profile HMMs, the addition of secondary structure predictions to MSAs, the extraction of alignments from program output, and the generation of customized databases.
The HMM-HMM alignment algorithm of HHblits and HHsearch was significantly accelerated usingvector instructionsin version 3 of the HH-suite.[13]
|
https://en.wikipedia.org/wiki/HH-suite
|
Inprobability theoryandstatistics,varianceis theexpected valueof thesquared deviation from the meanof arandom variable. Thestandard deviation(SD) is obtained as the square root of the variance. Variance is a measure ofdispersion, meaning it is a measure of how far a set of numbers is spread out from their average value. It is the secondcentral momentof adistribution, and thecovarianceof the random variable with itself, and it is often represented byσ2{\displaystyle \sigma ^{2}},s2{\displaystyle s^{2}},Var(X){\displaystyle \operatorname {Var} (X)},V(X){\displaystyle V(X)}, orV(X){\displaystyle \mathbb {V} (X)}.[1]
An advantage of variance as a measure of dispersion is that it is more amenable to algebraic manipulation than other measures of dispersion such as theexpected absolute deviation; for example, the variance of a sum of uncorrelated random variables is equal to the sum of their variances. A disadvantage of the variance for practical applications is that, unlike the standard deviation, its units differ from the random variable, which is why the standard deviation is more commonly reported as a measure of dispersion once the calculation is finished. Another disadvantage is that the variance is not finite for many distributions.
There are two distinct concepts that are both called "variance". One, as discussed above, is part of a theoreticalprobability distributionand is defined by an equation. The other variance is a characteristic of a set of observations. When variance is calculated from observations, those observations are typically measured from a real-world system. If all possible observations of the system are present, then the calculated variance is called the population variance. Normally, however, only a subset is available, and the variance calculated from this is called the sample variance. The variance calculated from a sample is considered an estimate of the full population variance. There are multiple ways to calculate an estimate of the population variance, as discussed in the section below.
The two kinds of variance are closely related. To see how, consider that a theoretical probability distribution can be used as a generator of hypothetical observations. If an infinite number of observations are generated using a distribution, then the sample variance calculated from that infinite set will match the value calculated using the distribution's equation for variance. Variance has a central role in statistics, where some ideas that use it includedescriptive statistics,statistical inference,hypothesis testing,goodness of fit, andMonte Carlo sampling.
The variance of a random variableX{\displaystyle X}is theexpected valueof thesquared deviation from the meanofX{\displaystyle X},μ=E[X]{\displaystyle \mu =\operatorname {E} [X]}:Var(X)=E[(X−μ)2].{\displaystyle \operatorname {Var} (X)=\operatorname {E} \left[(X-\mu )^{2}\right].}This definition encompasses random variables that are generated by processes that arediscrete,continuous,neither, or mixed. The variance can also be thought of as thecovarianceof a random variable with itself:
Var(X)=Cov(X,X).{\displaystyle \operatorname {Var} (X)=\operatorname {Cov} (X,X).}The variance is also equivalent to the secondcumulantof a probability distribution that generatesX{\displaystyle X}. The variance is typically designated asVar(X){\displaystyle \operatorname {Var} (X)}, or sometimes asV(X){\displaystyle V(X)}orV(X){\displaystyle \mathbb {V} (X)}, or symbolically asσX2{\displaystyle \sigma _{X}^{2}}or simplyσ2{\displaystyle \sigma ^{2}}(pronounced "sigmasquared"). The expression for the variance can be expanded as follows:Var(X)=E[(X−E[X])2]=E[X2−2XE[X]+E[X]2]=E[X2]−2E[X]E[X]+E[X]2=E[X2]−2E[X]2+E[X]2=E[X2]−E[X]2{\displaystyle {\begin{aligned}\operatorname {Var} (X)&=\operatorname {E} \left[{\left(X-\operatorname {E} [X]\right)}^{2}\right]\\[4pt]&=\operatorname {E} \left[X^{2}-2X\operatorname {E} [X]+\operatorname {E} [X]^{2}\right]\\[4pt]&=\operatorname {E} \left[X^{2}\right]-2\operatorname {E} [X]\operatorname {E} [X]+\operatorname {E} [X]^{2}\\[4pt]&=\operatorname {E} \left[X^{2}\right]-2\operatorname {E} [X]^{2}+\operatorname {E} [X]^{2}\\[4pt]&=\operatorname {E} \left[X^{2}\right]-\operatorname {E} [X]^{2}\end{aligned}}}
In other words, the variance ofXis equal to the mean of the square ofXminus the square of the mean ofX. This equation should not be used for computations usingfloating-point arithmetic, because it suffers fromcatastrophic cancellationif the two components of the equation are similar in magnitude. For other numerically stable alternatives, seealgorithms for calculating variance.
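A standard numerically stable alternative is Welford's one-pass algorithm, sketched below in plain Python (the function name and the example values, with a large common offset that would break the naive formula in low precision, are chosen for illustration):

def welford_variance(data):
    # Numerically stable one-pass (Welford) mean and variance, avoiding the
    # catastrophic cancellation of the E[X^2] - E[X]^2 form.
    n, mean, m2 = 0, 0.0, 0.0
    for x in data:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
    return mean, m2 / n   # population variance; use m2 / (n - 1) for the sample version

print(welford_variance([1e9 + 4, 1e9 + 7, 1e9 + 13, 1e9 + 16]))   # variance 22.5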
If the generator of random variableX{\displaystyle X}isdiscretewithprobability mass functionx1↦p1,x2↦p2,…,xn↦pn{\displaystyle x_{1}\mapsto p_{1},x_{2}\mapsto p_{2},\ldots ,x_{n}\mapsto p_{n}}, then
Var(X)=∑i=1npi⋅(xi−μ)2,{\displaystyle \operatorname {Var} (X)=\sum _{i=1}^{n}p_{i}\cdot {\left(x_{i}-\mu \right)}^{2},}
whereμ{\displaystyle \mu }is the expected value. That is,
μ=∑i=1npixi.{\displaystyle \mu =\sum _{i=1}^{n}p_{i}x_{i}.}
(When such a discreteweighted varianceis specified by weights whose sum is not 1, then one divides by the sum of the weights.)
The variance of a collection ofn{\displaystyle n}equally likely values can be written as
Var(X)=1n∑i=1n(xi−μ)2{\displaystyle \operatorname {Var} (X)={\frac {1}{n}}\sum _{i=1}^{n}(x_{i}-\mu )^{2}}
whereμ{\displaystyle \mu }is the average value. That is,
μ=1n∑i=1nxi.{\displaystyle \mu ={\frac {1}{n}}\sum _{i=1}^{n}x_{i}.}
The variance of a set ofn{\displaystyle n}equally likely values can be equivalently expressed, without directly referring to the mean, in terms of squared deviations of all pairwise squared distances of points from each other:[2]
Var(X)=1n2∑i=1n∑j=1n12(xi−xj)2=1n2∑i∑j>i(xi−xj)2.{\displaystyle \operatorname {Var} (X)={\frac {1}{n^{2}}}\sum _{i=1}^{n}\sum _{j=1}^{n}{\frac {1}{2}}{\left(x_{i}-x_{j}\right)}^{2}={\frac {1}{n^{2}}}\sum _{i}\sum _{j>i}{\left(x_{i}-x_{j}\right)}^{2}.}
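The equivalence of the mean-based and pairwise formulas can be verified numerically; the short Python sketch below (NumPy assumed, with an arbitrary illustrative data set) computes both:

import itertools
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
n = len(x)

direct = np.mean((x - x.mean()) ** 2)
pairwise = sum((a - b) ** 2 for a, b in itertools.combinations(x, 2)) / n**2

print(direct, pairwise)   # both equal 4.0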
If the random variableX{\displaystyle X}has aprobability density functionf(x){\displaystyle f(x)}, andF(x){\displaystyle F(x)}is the correspondingcumulative distribution function, then
Var(X)=σ2=∫R(x−μ)2f(x)dx=∫Rx2f(x)dx−2μ∫Rxf(x)dx+μ2∫Rf(x)dx=∫Rx2dF(x)−2μ∫RxdF(x)+μ2∫RdF(x)=∫Rx2dF(x)−2μ⋅μ+μ2⋅1=∫Rx2dF(x)−μ2,{\displaystyle {\begin{aligned}\operatorname {Var} (X)=\sigma ^{2}&=\int _{\mathbb {R} }{\left(x-\mu \right)}^{2}f(x)\,dx\\[4pt]&=\int _{\mathbb {R} }x^{2}f(x)\,dx-2\mu \int _{\mathbb {R} }xf(x)\,dx+\mu ^{2}\int _{\mathbb {R} }f(x)\,dx\\[4pt]&=\int _{\mathbb {R} }x^{2}\,dF(x)-2\mu \int _{\mathbb {R} }x\,dF(x)+\mu ^{2}\int _{\mathbb {R} }\,dF(x)\\[4pt]&=\int _{\mathbb {R} }x^{2}\,dF(x)-2\mu \cdot \mu +\mu ^{2}\cdot 1\\[4pt]&=\int _{\mathbb {R} }x^{2}\,dF(x)-\mu ^{2},\end{aligned}}}
or equivalently,
Var(X)=∫Rx2f(x)dx−μ2,{\displaystyle \operatorname {Var} (X)=\int _{\mathbb {R} }x^{2}f(x)\,dx-\mu ^{2},}
whereμ{\displaystyle \mu }is the expected value ofX{\displaystyle X}given by
μ=∫Rxf(x)dx=∫RxdF(x).{\displaystyle \mu =\int _{\mathbb {R} }xf(x)\,dx=\int _{\mathbb {R} }x\,dF(x).}
In these formulas, the integrals with respect todx{\displaystyle dx}anddF(x){\displaystyle dF(x)}areLebesgueandLebesgue–Stieltjesintegrals, respectively.
If the functionx2f(x){\displaystyle x^{2}f(x)}isRiemann-integrableon every finite interval[a,b]⊂R,{\displaystyle [a,b]\subset \mathbb {R} ,}then
Var(X)=∫−∞+∞x2f(x)dx−μ2,{\displaystyle \operatorname {Var} (X)=\int _{-\infty }^{+\infty }x^{2}f(x)\,dx-\mu ^{2},}
where the integral is animproper Riemann integral.
Theexponential distributionwith parameterλ> 0 is a continuous distribution whoseprobability density functionis given byf(x)=λe−λx{\displaystyle f(x)=\lambda e^{-\lambda x}}on the interval[0, ∞). Its mean can be shown to beE[X]=∫0∞xλe−λxdx=1λ.{\displaystyle \operatorname {E} [X]=\int _{0}^{\infty }x\lambda e^{-\lambda x}\,dx={\frac {1}{\lambda }}.}
Usingintegration by partsand making use of the expected value already calculated, we have:E[X2]=∫0∞x2λe−λxdx=[−x2e−λx]0∞+∫0∞2xe−λxdx=0+2λE[X]=2λ2.{\displaystyle {\begin{aligned}\operatorname {E} \left[X^{2}\right]&=\int _{0}^{\infty }x^{2}\lambda e^{-\lambda x}\,dx\\&={\left[-x^{2}e^{-\lambda x}\right]}_{0}^{\infty }+\int _{0}^{\infty }2xe^{-\lambda x}\,dx\\&=0+{\frac {2}{\lambda }}\operatorname {E} [X]\\&={\frac {2}{\lambda ^{2}}}.\end{aligned}}}
Thus, the variance ofXis given byVar(X)=E[X2]−E[X]2=2λ2−(1λ)2=1λ2.{\displaystyle \operatorname {Var} (X)=\operatorname {E} \left[X^{2}\right]-\operatorname {E} [X]^{2}={\frac {2}{\lambda ^{2}}}-\left({\frac {1}{\lambda }}\right)^{2}={\frac {1}{\lambda ^{2}}}.}
A fairsix-sided diecan be modeled as a discrete random variable,X, with outcomes 1 through 6, each with equal probability 1/6. The expected value ofXis(1+2+3+4+5+6)/6=7/2.{\displaystyle (1+2+3+4+5+6)/6=7/2.}Therefore, the variance ofXisVar(X)=∑i=1616(i−72)2=16((−5/2)2+(−3/2)2+(−1/2)2+(1/2)2+(3/2)2+(5/2)2)=3512≈2.92.{\displaystyle {\begin{aligned}\operatorname {Var} (X)&=\sum _{i=1}^{6}{\frac {1}{6}}\left(i-{\frac {7}{2}}\right)^{2}\\[5pt]&={\frac {1}{6}}\left((-5/2)^{2}+(-3/2)^{2}+(-1/2)^{2}+(1/2)^{2}+(3/2)^{2}+(5/2)^{2}\right)\\[5pt]&={\frac {35}{12}}\approx 2.92.\end{aligned}}}
The general formula for the variance of the outcome,X, of ann-sideddie isVar(X)=E(X2)−(E(X))2=1n∑i=1ni2−(1n∑i=1ni)2=(n+1)(2n+1)6−(n+12)2=n2−112.{\displaystyle {\begin{aligned}\operatorname {Var} (X)&=\operatorname {E} \left(X^{2}\right)-(\operatorname {E} (X))^{2}\\[5pt]&={\frac {1}{n}}\sum _{i=1}^{n}i^{2}-\left({\frac {1}{n}}\sum _{i=1}^{n}i\right)^{2}\\[5pt]&={\frac {(n+1)(2n+1)}{6}}-\left({\frac {n+1}{2}}\right)^{2}\\[4pt]&={\frac {n^{2}-1}{12}}.\end{aligned}}}
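Both the 35/12 result and the general (n² − 1)/12 formula can be reproduced exactly with rational arithmetic; a minimal Python sketch (function name chosen for illustration):

from fractions import Fraction

def die_variance(n):
    # Variance of a fair n-sided die: E[X^2] - E[X]^2 = (n^2 - 1) / 12.
    outcomes = range(1, n + 1)
    mean = Fraction(sum(outcomes), n)
    return Fraction(sum(i * i for i in outcomes), n) - mean**2

print(die_variance(6))                               # 35/12
print(die_variance(6) == Fraction(6**2 - 1, 12))     # True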
The following table lists the variance for some commonly used probability distributions.
Variance is non-negative because the squares are positive or zero:Var(X)≥0.{\displaystyle \operatorname {Var} (X)\geq 0.}
The variance of a constant is zero.Var(a)=0.{\displaystyle \operatorname {Var} (a)=0.}
Conversely, if the variance of a random variable is 0, then it isalmost surelya constant. That is, it always has the same value:Var(X)=0⟺∃a:P(X=a)=1.{\displaystyle \operatorname {Var} (X)=0\iff \exists a:P(X=a)=1.}
If a distribution does not have a finite expected value, as is the case for theCauchy distribution, then the variance cannot be finite either. However, some distributions may not have a finite variance, despite their expected value being finite. An example is aPareto distributionwhoseindexk{\displaystyle k}satisfies1<k≤2.{\displaystyle 1<k\leq 2.}
The general formula for variance decomposition or thelaw of total varianceis: IfX{\displaystyle X}andY{\displaystyle Y}are two random variables, and the variance ofX{\displaystyle X}exists, then
Var[X]=E(Var[X∣Y])+Var(E[X∣Y]).{\displaystyle \operatorname {Var} [X]=\operatorname {E} (\operatorname {Var} [X\mid Y])+\operatorname {Var} (\operatorname {E} [X\mid Y]).}
Theconditional expectationE(X∣Y){\displaystyle \operatorname {E} (X\mid Y)}ofX{\displaystyle X}givenY{\displaystyle Y}, and theconditional varianceVar(X∣Y){\displaystyle \operatorname {Var} (X\mid Y)}may be understood as follows. Given any particular valueyof the random variableY, there is a conditional expectationE(X∣Y=y){\displaystyle \operatorname {E} (X\mid Y=y)}given the eventY=y. This quantity depends on the particular valuey; it is a functiong(y)=E(X∣Y=y){\displaystyle g(y)=\operatorname {E} (X\mid Y=y)}. That same function evaluated at the random variableYis the conditional expectationE(X∣Y)=g(Y).{\displaystyle \operatorname {E} (X\mid Y)=g(Y).}
In particular, ifY{\displaystyle Y}is a discrete random variable assuming possible valuesy1,y2,y3…{\displaystyle y_{1},y_{2},y_{3}\ldots }with corresponding probabilitiesp1,p2,p3…,{\displaystyle p_{1},p_{2},p_{3}\ldots ,}, then in the formula for total variance, the first term on the right-hand side becomes
E(Var[X∣Y])=∑ipiσi2,{\displaystyle \operatorname {E} (\operatorname {Var} [X\mid Y])=\sum _{i}p_{i}\sigma _{i}^{2},}
whereσi2=Var[X∣Y=yi]{\displaystyle \sigma _{i}^{2}=\operatorname {Var} [X\mid Y=y_{i}]}. Similarly, the second term on the right-hand side becomes
Var(E[X∣Y])=∑ipiμi2−(∑ipiμi)2=∑ipiμi2−μ2,{\displaystyle \operatorname {Var} (\operatorname {E} [X\mid Y])=\sum _{i}p_{i}\mu _{i}^{2}-\left(\sum _{i}p_{i}\mu _{i}\right)^{2}=\sum _{i}p_{i}\mu _{i}^{2}-\mu ^{2},}
whereμi=E[X∣Y=yi]{\displaystyle \mu _{i}=\operatorname {E} [X\mid Y=y_{i}]}andμ=∑ipiμi{\displaystyle \mu =\sum _{i}p_{i}\mu _{i}}. Thus the total variance is given by
Var[X]=∑ipiσi2+(∑ipiμi2−μ2).{\displaystyle \operatorname {Var} [X]=\sum _{i}p_{i}\sigma _{i}^{2}+\left(\sum _{i}p_{i}\mu _{i}^{2}-\mu ^{2}\right).}
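The decomposition can be checked numerically for a simple mixture. In the Python sketch below (NumPy assumed; the probabilities, means, and standard deviations are arbitrary illustrative choices), Y takes three values and X is conditionally normal given Y; the exact within-plus-between sum agrees with a Monte Carlo estimate of Var(X):

import numpy as np

rng = np.random.default_rng(1)

p = np.array([0.2, 0.5, 0.3])        # P(Y = y_i)
mu = np.array([0.0, 2.0, 5.0])       # E[X | Y = y_i]
sigma = np.array([1.0, 0.5, 2.0])    # sd(X | Y = y_i)

within = np.sum(p * sigma**2)                       # E[Var(X | Y)]
between = np.sum(p * mu**2) - np.sum(p * mu) ** 2   # Var(E[X | Y])
print(within + between)                             # 4.775

y = rng.choice(3, size=1_000_000, p=p)              # Monte Carlo check of Var(X)
x = rng.normal(mu[y], sigma[y])
print(x.var())                                      # close to 4.775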
A similar formula is applied inanalysis of variance, where the corresponding formula is
MStotal=MSbetween+MSwithin;{\displaystyle {\mathit {MS}}_{\text{total}}={\mathit {MS}}_{\text{between}}+{\mathit {MS}}_{\text{within}};}
hereMS{\displaystyle {\mathit {MS}}}refers to the Mean of the Squares. Inlinear regressionanalysis the corresponding formula is
MStotal=MSregression+MSresidual.{\displaystyle {\mathit {MS}}_{\text{total}}={\mathit {MS}}_{\text{regression}}+{\mathit {MS}}_{\text{residual}}.}
This can also be derived from the additivity of variances, since the total (observed) score is the sum of the predicted score and the error score, where the latter two are uncorrelated.
Similar decompositions are possible for the sum of squared deviations (sum of squares,SS{\displaystyle {\mathit {SS}}}):SStotal=SSbetween+SSwithin,{\displaystyle {\mathit {SS}}_{\text{total}}={\mathit {SS}}_{\text{between}}+{\mathit {SS}}_{\text{within}},}SStotal=SSregression+SSresidual.{\displaystyle {\mathit {SS}}_{\text{total}}={\mathit {SS}}_{\text{regression}}+{\mathit {SS}}_{\text{residual}}.}
The population variance for a non-negative random variable can be expressed in terms of thecumulative distribution functionFusing
2∫0∞u(1−F(u))du−[∫0∞(1−F(u))du]2.{\displaystyle 2\int _{0}^{\infty }u(1-F(u))\,du-{\left[\int _{0}^{\infty }(1-F(u))\,du\right]}^{2}.}
This expression can be used to calculate the variance in situations where the CDF, but not thedensity, can be conveniently expressed.
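For instance, for the exponential distribution the survival function 1 − F(u) = e^(−λu) is simple while the formula above can be evaluated by numerical integration. A minimal Python sketch (assuming SciPy is available; λ = 2 is an arbitrary illustrative choice) recovers the known variance 1/λ²:

import numpy as np
from scipy import integrate

lam = 2.0                                  # Exp(lambda); variance should be 1 / lambda**2 = 0.25
survival = lambda u: np.exp(-lam * u)      # 1 - F(u) for the exponential distribution

m1, _ = integrate.quad(survival, 0, np.inf)                   # equals E[X]
m2, _ = integrate.quad(lambda u: u * survival(u), 0, np.inf)
print(2 * m2 - m1**2)                      # 0.25, computed from the CDF alone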
The secondmomentof a random variable attains the minimum value when taken around the first moment (i.e., mean) of the random variable, i.e.argminmE((X−m)2)=E(X){\displaystyle \mathrm {argmin} _{m}\,\mathrm {E} \left(\left(X-m\right)^{2}\right)=\mathrm {E} (X)}. Conversely, if a continuous functionφ{\displaystyle \varphi }satisfiesargminmE(φ(X−m))=E(X){\displaystyle \mathrm {argmin} _{m}\,\mathrm {E} (\varphi (X-m))=\mathrm {E} (X)}for all random variablesX, then it is necessarily of the formφ(x)=ax2+b{\displaystyle \varphi (x)=ax^{2}+b}, wherea> 0. This also holds in the multidimensional case.[3]
Unlike theexpected absolute deviation, the variance of a variable has units that are the square of the units of the variable itself. For example, a variable measured in meters will have a variance measured in meters squared. For this reason, describing data sets via theirstandard deviationorroot mean square deviationis often preferred over using the variance. In the dice example the standard deviation is√2.9≈ 1.7, slightly larger than the expected absolute deviation of 1.5.
The standard deviation and the expected absolute deviation can both be used as an indicator of the "spread" of a distribution. The standard deviation is more amenable to algebraic manipulation than the expected absolute deviation, and, together with variance and its generalizationcovariance, is used frequently in theoretical statistics; however the expected absolute deviation tends to be morerobustas it is less sensitive tooutliersarising frommeasurement anomaliesor an undulyheavy-tailed distribution.
Variance isinvariantwith respect to changes in alocation parameter. That is, if a constant is added to all values of the variable, the variance is unchanged:Var(X+a)=Var(X).{\displaystyle \operatorname {Var} (X+a)=\operatorname {Var} (X).}
If all values are scaled by a constant, the variance isscaledby the square of that constant:Var(aX)=a2Var(X).{\displaystyle \operatorname {Var} (aX)=a^{2}\operatorname {Var} (X).}
The variance of a sum of two random variables is given byVar(aX+bY)=a2Var(X)+b2Var(Y)+2abCov(X,Y)Var(aX−bY)=a2Var(X)+b2Var(Y)−2abCov(X,Y){\displaystyle {\begin{aligned}\operatorname {Var} (aX+bY)&=a^{2}\operatorname {Var} (X)+b^{2}\operatorname {Var} (Y)+2ab\,\operatorname {Cov} (X,Y)\\[1ex]\operatorname {Var} (aX-bY)&=a^{2}\operatorname {Var} (X)+b^{2}\operatorname {Var} (Y)-2ab\,\operatorname {Cov} (X,Y)\end{aligned}}}
whereCov(X,Y){\displaystyle \operatorname {Cov} (X,Y)}is thecovariance.
In general, for the sum ofN{\displaystyle N}random variables{X1,…,XN}{\displaystyle \{X_{1},\dots ,X_{N}\}}, the variance becomes:Var(∑i=1NXi)=∑i,j=1NCov(Xi,Xj)=∑i=1NVar(Xi)+∑i,j=1,i≠jNCov(Xi,Xj),{\displaystyle \operatorname {Var} \left(\sum _{i=1}^{N}X_{i}\right)=\sum _{i,j=1}^{N}\operatorname {Cov} (X_{i},X_{j})=\sum _{i=1}^{N}\operatorname {Var} (X_{i})+\sum _{i,j=1,i\neq j}^{N}\operatorname {Cov} (X_{i},X_{j}),}see also generalBienaymé's identity.
These results lead to the variance of alinear combinationas:
Var(∑i=1NaiXi)=∑i,j=1NaiajCov(Xi,Xj)=∑i=1Nai2Var(Xi)+∑i≠jaiajCov(Xi,Xj)=∑i=1Nai2Var(Xi)+2∑1≤i<j≤NaiajCov(Xi,Xj).{\displaystyle {\begin{aligned}\operatorname {Var} \left(\sum _{i=1}^{N}a_{i}X_{i}\right)&=\sum _{i,j=1}^{N}a_{i}a_{j}\operatorname {Cov} (X_{i},X_{j})\\&=\sum _{i=1}^{N}a_{i}^{2}\operatorname {Var} (X_{i})+\sum _{i\neq j}a_{i}a_{j}\operatorname {Cov} (X_{i},X_{j})\\&=\sum _{i=1}^{N}a_{i}^{2}\operatorname {Var} (X_{i})+2\sum _{1\leq i<j\leq N}a_{i}a_{j}\operatorname {Cov} (X_{i},X_{j}).\end{aligned}}}
If the random variablesX1,…,XN{\displaystyle X_{1},\dots ,X_{N}}are such thatCov(Xi,Xj)=0,∀(i≠j),{\displaystyle \operatorname {Cov} (X_{i},X_{j})=0\ ,\ \forall \ (i\neq j),}then they are said to beuncorrelated. It follows immediately from the expression given earlier that if the random variablesX1,…,XN{\displaystyle X_{1},\dots ,X_{N}}are uncorrelated, then the variance of their sum is equal to the sum of their variances, or, expressed symbolically:
Var(∑i=1NXi)=∑i=1NVar(Xi).{\displaystyle \operatorname {Var} \left(\sum _{i=1}^{N}X_{i}\right)=\sum _{i=1}^{N}\operatorname {Var} (X_{i}).}
Since independent random variables are always uncorrelated (seeCovariance § Uncorrelatedness and independence), the equation above holds in particular when the random variablesX1,…,Xn{\displaystyle X_{1},\dots ,X_{n}}are independent. Thus, independence is sufficient but not necessary for the variance of the sum to equal the sum of the variances.
DefineX{\displaystyle X}as a column vector ofn{\displaystyle n}random variablesX1,…,Xn{\displaystyle X_{1},\ldots ,X_{n}}, andc{\displaystyle c}as a column vector ofn{\displaystyle n}scalarsc1,…,cn{\displaystyle c_{1},\ldots ,c_{n}}. Therefore,cTX{\displaystyle c^{\mathsf {T}}X}is alinear combinationof these random variables, wherecT{\displaystyle c^{\mathsf {T}}}denotes thetransposeofc{\displaystyle c}. Also letΣ{\displaystyle \Sigma }be thecovariance matrixofX{\displaystyle X}. The variance ofcTX{\displaystyle c^{\mathsf {T}}X}is then given by:[4]
Var(cTX)=cTΣc.{\displaystyle \operatorname {Var} \left(c^{\mathsf {T}}X\right)=c^{\mathsf {T}}\Sigma c.}
This implies that the variance of the mean can be written as (with a column vector of ones)
Var(x¯)=Var(1n1′X)=1n21′Σ1.{\displaystyle \operatorname {Var} \left({\bar {x}}\right)=\operatorname {Var} \left({\frac {1}{n}}1'X\right)={\frac {1}{n^{2}}}1'\Sigma 1.}
One reason for the use of the variance in preference to other measures of dispersion is that the variance of the sum (or the difference) ofuncorrelatedrandom variables is the sum of their variances:
Var(∑i=1nXi)=∑i=1nVar(Xi).{\displaystyle \operatorname {Var} \left(\sum _{i=1}^{n}X_{i}\right)=\sum _{i=1}^{n}\operatorname {Var} (X_{i}).}
This statement is called theBienayméformula[5]and was discovered in 1853.[6][7]It is often made with the stronger condition that the variables areindependent, but being uncorrelated suffices. So if all the variables have the same variance σ2, then, since division bynis a linear transformation, this formula immediately implies that the variance of their mean is
Var(X¯)=Var(1n∑i=1nXi)=1n2∑i=1nVar(Xi)=1n2nσ2=σ2n.{\displaystyle \operatorname {Var} \left({\overline {X}}\right)=\operatorname {Var} \left({\frac {1}{n}}\sum _{i=1}^{n}X_{i}\right)={\frac {1}{n^{2}}}\sum _{i=1}^{n}\operatorname {Var} \left(X_{i}\right)={\frac {1}{n^{2}}}n\sigma ^{2}={\frac {\sigma ^{2}}{n}}.}
That is, the variance of the mean decreases whennincreases. This formula for the variance of the mean is used in the definition of thestandard errorof the sample mean, which is used in thecentral limit theorem.
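A quick simulation illustrates the σ²/n behaviour; in the Python sketch below (NumPy assumed; the variance, sample size, and trial count are illustrative), the empirical variance of sample means is close to σ²/n:

import numpy as np

rng = np.random.default_rng(2)
sigma2, n, trials = 4.0, 25, 100_000

means = rng.normal(0.0, np.sqrt(sigma2), size=(trials, n)).mean(axis=1)
print(means.var(), sigma2 / n)   # both close to 0.16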
To prove the initial statement, it suffices to show that
Var(X+Y)=Var(X)+Var(Y).{\displaystyle \operatorname {Var} (X+Y)=\operatorname {Var} (X)+\operatorname {Var} (Y).}
The general result then follows by induction. Starting with the definition,
Var(X+Y)=E[(X+Y)2]−(E[X+Y])2=E[X2+2XY+Y2]−(E[X]+E[Y])2.{\displaystyle {\begin{aligned}\operatorname {Var} (X+Y)&=\operatorname {E} \left[(X+Y)^{2}\right]-(\operatorname {E} [X+Y])^{2}\\[5pt]&=\operatorname {E} \left[X^{2}+2XY+Y^{2}\right]-(\operatorname {E} [X]+\operatorname {E} [Y])^{2}.\end{aligned}}}
Using the linearity of theexpectation operatorand the assumption of independence (or uncorrelatedness) ofXandY, this further simplifies as follows:
Var(X+Y)=E[X2]+2E[XY]+E[Y2]−(E[X]2+2E[X]E[Y]+E[Y]2)=E[X2]+E[Y2]−E[X]2−E[Y]2=Var(X)+Var(Y).{\displaystyle {\begin{aligned}\operatorname {Var} (X+Y)&=\operatorname {E} {\left[X^{2}\right]}+2\operatorname {E} [XY]+\operatorname {E} {\left[Y^{2}\right]}-\left(\operatorname {E} [X]^{2}+2\operatorname {E} [X]\operatorname {E} [Y]+\operatorname {E} [Y]^{2}\right)\\[5pt]&=\operatorname {E} \left[X^{2}\right]+\operatorname {E} \left[Y^{2}\right]-\operatorname {E} [X]^{2}-\operatorname {E} [Y]^{2}\\[5pt]&=\operatorname {Var} (X)+\operatorname {Var} (Y).\end{aligned}}}
In general, the variance of the sum ofnvariables is the sum of theircovariances:
Var(∑i=1nXi)=∑i=1n∑j=1nCov(Xi,Xj)=∑i=1nVar(Xi)+2∑1≤i<j≤nCov(Xi,Xj).{\displaystyle \operatorname {Var} \left(\sum _{i=1}^{n}X_{i}\right)=\sum _{i=1}^{n}\sum _{j=1}^{n}\operatorname {Cov} \left(X_{i},X_{j}\right)=\sum _{i=1}^{n}\operatorname {Var} \left(X_{i}\right)+2\sum _{1\leq i<j\leq n}\operatorname {Cov} \left(X_{i},X_{j}\right).}
(Note: The second equality comes from the fact thatCov(Xi,Xi) = Var(Xi).)
Here,Cov(⋅,⋅){\displaystyle \operatorname {Cov} (\cdot ,\cdot )}is thecovariance, which is zero for independent random variables (if it exists). The formula states that the variance of a sum is equal to the sum of all elements in the covariance matrix of the components. The next expression states equivalently that the variance of the sum is the sum of the diagonal of covariance matrix plus two times the sum of its upper triangular elements (or its lower triangular elements); this emphasizes that the covariance matrix is symmetric. This formula is used in the theory ofCronbach's alphainclassical test theory.
So, if the variables have equal varianceσ2and the averagecorrelationof distinct variables isρ, then the variance of their mean is
Var(X¯)=σ2n+n−1nρσ2.{\displaystyle \operatorname {Var} \left({\overline {X}}\right)={\frac {\sigma ^{2}}{n}}+{\frac {n-1}{n}}\rho \sigma ^{2}.}
This implies that the variance of the mean increases with the average of the correlations. In other words, additional correlated observations are not as effective as additional independent observations at reducing theuncertainty of the mean. Moreover, if the variables have unit variance, for example if they are standardized, then this simplifies to
Var(X¯)=1n+n−1nρ.{\displaystyle \operatorname {Var} \left({\overline {X}}\right)={\frac {1}{n}}+{\frac {n-1}{n}}\rho .}
This formula is used in theSpearman–Brown prediction formulaof classical test theory. This converges toρifngoes to infinity, provided that the average correlation remains constant or converges too. So for the variance of the mean of standardized variables with equal correlations or converging average correlation we have
limn→∞Var(X¯)=ρ.{\displaystyle \lim _{n\to \infty }\operatorname {Var} \left({\overline {X}}\right)=\rho .}
Therefore, the variance of the mean of a large number of standardized variables is approximately equal to their average correlation. This makes clear that the sample mean of correlated variables does not generally converge to the population mean, even though thelaw of large numbersstates that the sample mean will converge for independent variables.
There are cases when a sample is taken without knowing, in advance, how many observations will be acceptable according to some criterion. In such cases, the sample sizeNis a random variable whose variation adds to the variation ofX, such that,[8]Var(∑i=1NXi)=E[N]Var(X)+Var(N)(E[X])2{\displaystyle \operatorname {Var} \left(\sum _{i=1}^{N}X_{i}\right)=\operatorname {E} \left[N\right]\operatorname {Var} (X)+\operatorname {Var} (N)(\operatorname {E} \left[X\right])^{2}}which follows from thelaw of total variance.
IfNhas aPoisson distribution, thenE[N]=Var(N){\displaystyle \operatorname {E} [N]=\operatorname {Var} (N)}with estimatorn=N. So, the estimator ofVar(∑i=1nXi){\displaystyle \operatorname {Var} \left(\sum _{i=1}^{n}X_{i}\right)}becomesnSx2+nX¯2{\displaystyle n{S_{x}}^{2}+n{\bar {X}}^{2}}, givingSE(X¯)=Sx2+X¯2n{\displaystyle \operatorname {SE} ({\bar {X}})={\sqrt {\frac {{S_{x}}^{2}+{\bar {X}}^{2}}{n}}}}(seestandard error of the sample mean).
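The formula for a random number of observations can also be checked by simulation. The Python sketch below (NumPy assumed; λ, the mean, and the variance of X are illustrative choices) sums a Poisson-distributed number of normal variables and compares the empirical variance with E[N] Var(X) + Var(N) E[X]²:

import numpy as np

rng = np.random.default_rng(3)
lam, mu_x, var_x, trials = 20.0, 3.0, 4.0, 200_000

totals = np.array([rng.normal(mu_x, np.sqrt(var_x), size=rng.poisson(lam)).sum()
                   for _ in range(trials)])

# Law of total variance: E[N] Var(X) + Var(N) E[X]^2, with E[N] = Var(N) = lambda.
print(totals.var(), lam * var_x + lam * mu_x**2)   # both close to 260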
The scaling property and the Bienaymé formula, along with the property of thecovarianceCov(aX,bY) =abCov(X,Y)jointly imply that
Var(aX±bY)=a2Var(X)+b2Var(Y)±2abCov(X,Y).{\displaystyle \operatorname {Var} (aX\pm bY)=a^{2}\operatorname {Var} (X)+b^{2}\operatorname {Var} (Y)\pm 2ab\,\operatorname {Cov} (X,Y).}
This implies that in a weighted sum of variables, the variable with the largest weight will have a disproportionally large weight in the variance of the total. For example, ifXandYare uncorrelated and the weight ofXis two times the weight ofY, then the weight of the variance ofXwill be four times the weight of the variance ofY.
The expression above can be extended to a weighted sum of multiple variables:
Var(∑i=1naiXi)=∑i=1nai2Var(Xi)+2∑1≤i<j≤naiajCov(Xi,Xj){\displaystyle \operatorname {Var} \left(\sum _{i=1}^{n}a_{i}X_{i}\right)=\sum _{i=1}^{n}a_{i}^{2}\operatorname {Var} (X_{i})+2\sum _{1\leq i<j\leq n}a_{i}a_{j}\operatorname {Cov} (X_{i},X_{j})}
If two variables X and Y areindependent, the variance of their product is given by[9]Var(XY)=[E(X)]2Var(Y)+[E(Y)]2Var(X)+Var(X)Var(Y).{\displaystyle \operatorname {Var} (XY)=[\operatorname {E} (X)]^{2}\operatorname {Var} (Y)+[\operatorname {E} (Y)]^{2}\operatorname {Var} (X)+\operatorname {Var} (X)\operatorname {Var} (Y).}
Equivalently, using the basic properties of expectation, it is given by
Var(XY)=E(X2)E(Y2)−[E(X)]2[E(Y)]2.{\displaystyle \operatorname {Var} (XY)=\operatorname {E} \left(X^{2}\right)\operatorname {E} \left(Y^{2}\right)-[\operatorname {E} (X)]^{2}[\operatorname {E} (Y)]^{2}.}
In general, if two variables are statistically dependent, then the variance of their product is given by:Var(XY)=E[X2Y2]−[E(XY)]2=Cov(X2,Y2)+E(X2)E(Y2)−[E(XY)]2=Cov(X2,Y2)+(Var(X)+[E(X)]2)(Var(Y)+[E(Y)]2)−[Cov(X,Y)+E(X)E(Y)]2{\displaystyle {\begin{aligned}\operatorname {Var} (XY)={}&\operatorname {E} \left[X^{2}Y^{2}\right]-[\operatorname {E} (XY)]^{2}\\[5pt]={}&\operatorname {Cov} \left(X^{2},Y^{2}\right)+\operatorname {E} (X^{2})\operatorname {E} \left(Y^{2}\right)-[\operatorname {E} (XY)]^{2}\\[5pt]={}&\operatorname {Cov} \left(X^{2},Y^{2}\right)+\left(\operatorname {Var} (X)+[\operatorname {E} (X)]^{2}\right)\left(\operatorname {Var} (Y)+[\operatorname {E} (Y)]^{2}\right)\\[5pt]&-[\operatorname {Cov} (X,Y)+\operatorname {E} (X)\operatorname {E} (Y)]^{2}\end{aligned}}}
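The independent-product formula can be verified by simulation; the Python sketch below (NumPy assumed; the chosen means and variances are illustrative) compares the empirical variance of XY with the closed-form value:

import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(2.0, 1.0, size=1_000_000)   # E[X] = 2, Var(X) = 1
y = rng.normal(3.0, 2.0, size=1_000_000)   # E[Y] = 3, Var(Y) = 4

formula = 2.0**2 * 4.0 + 3.0**2 * 1.0 + 1.0 * 4.0   # [E X]^2 Var Y + [E Y]^2 Var X + Var X Var Y = 29
print((x * y).var(), formula)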
Thedelta methoduses second-orderTaylor expansionsto approximate the variance of a function of one or more random variables: seeTaylor expansions for the moments of functions of random variables. For example, the approximate variance of a function of one variable is given by
Var[f(X)]≈(f′(E[X]))2Var[X]{\displaystyle \operatorname {Var} \left[f(X)\right]\approx \left(f'(\operatorname {E} \left[X\right])\right)^{2}\operatorname {Var} \left[X\right]}
provided thatfis twice differentiable and that the mean and variance ofXare finite.
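As an illustration of the delta method, the Python sketch below (NumPy assumed; f(X) = log X and the chosen mean and variance are illustrative) compares the simulated variance of f(X) with the first-order approximation f′(μ)² Var[X]:

import numpy as np

rng = np.random.default_rng(5)
mu, sigma2 = 10.0, 0.04
x = rng.normal(mu, np.sqrt(sigma2), size=1_000_000)

# f(X) = log(X): the delta method gives Var[f(X)] ~ f'(mu)^2 Var[X] = sigma2 / mu**2.
print(np.log(x).var(), sigma2 / mu**2)   # both close to 4.0e-4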
Real-world observations such as the measurements of yesterday's rain throughout the day typically cannot be complete sets of all possible observations that could be made. As such, the variance calculated from the finite set will in general not match the variance that would have been calculated from the full population of possible observations. This means that oneestimatesthe mean and variance from a limited set of observations by using anestimatorequation. The estimator is a function of thesampleofnobservationsdrawn without observational bias from the wholepopulationof potential observations. In this example, the sample would be the set of actual measurements of yesterday's rainfall from available rain gauges within the geography of interest.
The simplest estimators for population mean and population variance are simply the mean and variance of the sample, thesample meanand(uncorrected) sample variance– these areconsistent estimators(they converge to the value of the whole population as the number of samples increases) but can be improved. Most simply, the sample variance is computed as the sum ofsquared deviationsabout the (sample) mean, divided bynas the number of samples.However, using values other thannimproves the estimator in various ways. Four common values for the denominator aren,n− 1,n+ 1, andn− 1.5:nis the simplest (the variance of the sample),n− 1 eliminates bias,[10]n+ 1 minimizesmean squared errorfor the normal distribution,[11]andn− 1.5 mostly eliminates bias inunbiased estimation of standard deviationfor the normal distribution.[12]
Firstly, if the true population mean is unknown, then the sample variance (which uses the sample mean in place of the true mean) is abiased estimator: it underestimates the variance by a factor of (n− 1) /n; correcting this factor, resulting in the sum of squared deviations about the sample mean divided byn-1 instead ofn, is calledBessel's correction.[10]The resulting estimator is unbiased and is called the(corrected) sample varianceorunbiased sample variance. If the mean is determined in some other way than from the same samples used to estimate the variance, then this bias does not arise, and the variance can safely be estimated as that of the samples about the (independently known) mean.
Secondly, the sample variance does not generally minimizemean squared errorbetween sample variance and population variance. Correcting for bias often makes this worse: one can always choose a scale factor that performs better than the corrected sample variance, though the optimal scale factor depends on theexcess kurtosisof the population (seemean squared error: variance) and introduces bias. This always consists of scaling down the unbiased estimator (dividing by a number larger thann− 1) and is a simple example of ashrinkage estimator: one "shrinks" the unbiased estimator towards zero. For the normal distribution, dividing byn+ 1 (instead ofn− 1 orn) minimizes mean squared error.[11]The resulting estimator is biased, however, and is known as thebiased sample variance.
In general, thepopulation varianceof afinitepopulationof sizeNwith valuesxiis given byσ2=1N∑i=1N(xi−μ)2=1N∑i=1N(xi2−2μxi+μ2)=(1N∑i=1Nxi2)−2μ(1N∑i=1Nxi)+μ2=E[xi2]−μ2{\displaystyle {\begin{aligned}\sigma ^{2}&={\frac {1}{N}}\sum _{i=1}^{N}{\left(x_{i}-\mu \right)}^{2}={\frac {1}{N}}\sum _{i=1}^{N}\left(x_{i}^{2}-2\mu x_{i}+\mu ^{2}\right)\\[5pt]&=\left({\frac {1}{N}}\sum _{i=1}^{N}x_{i}^{2}\right)-2\mu \left({\frac {1}{N}}\sum _{i=1}^{N}x_{i}\right)+\mu ^{2}\\[5pt]&=\operatorname {E} [x_{i}^{2}]-\mu ^{2}\end{aligned}}}
where the population mean isμ=E[xi]=1N∑i=1Nxi{\textstyle \mu =\operatorname {E} [x_{i}]={\frac {1}{N}}\sum _{i=1}^{N}x_{i}}andE[xi2]=(1N∑i=1Nxi2){\textstyle \operatorname {E} [x_{i}^{2}]=\left({\frac {1}{N}}\sum _{i=1}^{N}x_{i}^{2}\right)}, whereE{\textstyle \operatorname {E} }is theexpectation valueoperator.
The population variance can also be computed using[13]
σ2=1N2∑i<j(xi−xj)2=12N2∑i,j=1N(xi−xj)2.{\displaystyle \sigma ^{2}={\frac {1}{N^{2}}}\sum _{i<j}\left(x_{i}-x_{j}\right)^{2}={\frac {1}{2N^{2}}}\sum _{i,j=1}^{N}\left(x_{i}-x_{j}\right)^{2}.}
(The right side has duplicate terms in the sum while the middle side has only unique terms to sum.) This is true because12N2∑i,j=1N(xi−xj)2=12N2∑i,j=1N(xi2−2xixj+xj2)=12N∑j=1N(1N∑i=1Nxi2)−(1N∑i=1Nxi)(1N∑j=1Nxj)+12N∑i=1N(1N∑j=1Nxj2)=12(σ2+μ2)−μ2+12(σ2+μ2)=σ2.{\displaystyle {\begin{aligned}&{\frac {1}{2N^{2}}}\sum _{i,j=1}^{N}{\left(x_{i}-x_{j}\right)}^{2}\\[5pt]={}&{\frac {1}{2N^{2}}}\sum _{i,j=1}^{N}\left(x_{i}^{2}-2x_{i}x_{j}+x_{j}^{2}\right)\\[5pt]={}&{\frac {1}{2N}}\sum _{j=1}^{N}\left({\frac {1}{N}}\sum _{i=1}^{N}x_{i}^{2}\right)-\left({\frac {1}{N}}\sum _{i=1}^{N}x_{i}\right)\left({\frac {1}{N}}\sum _{j=1}^{N}x_{j}\right)+{\frac {1}{2N}}\sum _{i=1}^{N}\left({\frac {1}{N}}\sum _{j=1}^{N}x_{j}^{2}\right)\\[5pt]={}&{\frac {1}{2}}\left(\sigma ^{2}+\mu ^{2}\right)-\mu ^{2}+{\frac {1}{2}}\left(\sigma ^{2}+\mu ^{2}\right)\\[5pt]={}&\sigma ^{2}.\end{aligned}}}
The population variance matches the variance of the generating probability distribution. In this sense, the concept of population can be extended to continuous random variables with infinite populations.
In many practical situations, the true variance of a population is not knowna prioriand must be computed somehow. When dealing with extremely large populations, it is not possible to count every object in the population, so the computation must be performed on asampleof the population.[14]This is generally referred to assample varianceorempirical variance. Sample variance can also be applied to the estimation of the variance of a continuous distribution from a sample of that distribution.
We take asample with replacementofnvaluesY1, ...,Ynfrom the population of sizeN, wheren<N, and estimate the variance on the basis of this sample.[15]Directly taking the variance of the sample data gives the average of thesquared deviations:[16]
S~Y2=1n∑i=1n(Yi−Y¯)2=(1n∑i=1nYi2)−Y¯2=1n2∑i,j:i<j(Yi−Yj)2.{\displaystyle {\tilde {S}}_{Y}^{2}={\frac {1}{n}}\sum _{i=1}^{n}\left(Y_{i}-{\overline {Y}}\right)^{2}=\left({\frac {1}{n}}\sum _{i=1}^{n}Y_{i}^{2}\right)-{\overline {Y}}^{2}={\frac {1}{n^{2}}}\sum _{i,j\,:\,i<j}\left(Y_{i}-Y_{j}\right)^{2}.}
(See the sectionPopulation variancefor the derivation of this formula.) Here,Y¯{\displaystyle {\overline {Y}}}denotes thesample mean:Y¯=1n∑i=1nYi.{\displaystyle {\overline {Y}}={\frac {1}{n}}\sum _{i=1}^{n}Y_{i}.}
Since theYiare selected randomly, bothY¯{\displaystyle {\overline {Y}}}andS~Y2{\displaystyle {\tilde {S}}_{Y}^{2}}arerandom variables. Their expected values can be evaluated by averaging over the ensemble of all possible samples{Yi}of sizenfrom the population. ForS~Y2{\displaystyle {\tilde {S}}_{Y}^{2}}this gives:E[S~Y2]=E[1n∑i=1n(Yi−1n∑j=1nYj)2]=1n∑i=1nE[Yi2−2nYi∑j=1nYj+1n2∑j=1nYj∑k=1nYk]=1n∑i=1n(E[Yi2]−2n(∑j≠iE[YiYj]+E[Yi2])+1n2∑j=1n∑k≠jnE[YjYk]+1n2∑j=1nE[Yj2])=1n∑i=1n(n−2nE[Yi2]−2n∑j≠iE[YiYj]+1n2∑j=1n∑k≠jnE[YjYk]+1n2∑j=1nE[Yj2])=1n∑i=1n[n−2n(σ2+μ2)−2n(n−1)μ2+1n2n(n−1)μ2+1n(σ2+μ2)]=n−1nσ2.{\displaystyle {\begin{aligned}\operatorname {E} [{\tilde {S}}_{Y}^{2}]&=\operatorname {E} \left[{\frac {1}{n}}\sum _{i=1}^{n}{\left(Y_{i}-{\frac {1}{n}}\sum _{j=1}^{n}Y_{j}\right)}^{2}\right]\\[5pt]&={\frac {1}{n}}\sum _{i=1}^{n}\operatorname {E} \left[Y_{i}^{2}-{\frac {2}{n}}Y_{i}\sum _{j=1}^{n}Y_{j}+{\frac {1}{n^{2}}}\sum _{j=1}^{n}Y_{j}\sum _{k=1}^{n}Y_{k}\right]\\[5pt]&={\frac {1}{n}}\sum _{i=1}^{n}\left(\operatorname {E} \left[Y_{i}^{2}\right]-{\frac {2}{n}}\left(\sum _{j\neq i}\operatorname {E} \left[Y_{i}Y_{j}\right]+\operatorname {E} \left[Y_{i}^{2}\right]\right)+{\frac {1}{n^{2}}}\sum _{j=1}^{n}\sum _{k\neq j}^{n}\operatorname {E} \left[Y_{j}Y_{k}\right]+{\frac {1}{n^{2}}}\sum _{j=1}^{n}\operatorname {E} \left[Y_{j}^{2}\right]\right)\\[5pt]&={\frac {1}{n}}\sum _{i=1}^{n}\left({\frac {n-2}{n}}\operatorname {E} \left[Y_{i}^{2}\right]-{\frac {2}{n}}\sum _{j\neq i}\operatorname {E} \left[Y_{i}Y_{j}\right]+{\frac {1}{n^{2}}}\sum _{j=1}^{n}\sum _{k\neq j}^{n}\operatorname {E} \left[Y_{j}Y_{k}\right]+{\frac {1}{n^{2}}}\sum _{j=1}^{n}\operatorname {E} \left[Y_{j}^{2}\right]\right)\\[5pt]&={\frac {1}{n}}\sum _{i=1}^{n}\left[{\frac {n-2}{n}}\left(\sigma ^{2}+\mu ^{2}\right)-{\frac {2}{n}}(n-1)\mu ^{2}+{\frac {1}{n^{2}}}n(n-1)\mu ^{2}+{\frac {1}{n}}\left(\sigma ^{2}+\mu ^{2}\right)\right]\\[5pt]&={\frac {n-1}{n}}\sigma ^{2}.\end{aligned}}}
Here{\textstyle \sigma ^{2}=\operatorname {E} [Y_{i}^{2}]-\mu ^{2}}is the population variance derived in the previous section, and{\textstyle \operatorname {E} [Y_{i}Y_{j}]=\operatorname {E} [Y_{i}]\operatorname {E} [Y_{j}]=\mu ^{2}}holds because{\textstyle Y_{i}}and{\textstyle Y_{j}}are independent (the sample is drawn with replacement).
HenceS~Y2{\textstyle {\tilde {S}}_{Y}^{2}}gives an estimate of the population varianceσ2{\textstyle \sigma ^{2}}that is biased by a factor ofn−1n{\textstyle {\frac {n-1}{n}}}because the expectation value ofS~Y2{\textstyle {\tilde {S}}_{Y}^{2}}is smaller than the population variance (true variance) by that factor. For this reason,S~Y2{\textstyle {\tilde {S}}_{Y}^{2}}is referred to as thebiased sample variance.
Correcting for this bias yields theunbiased sample variance, denotedS2{\displaystyle S^{2}}:
{\displaystyle S^{2}={\frac {n}{n-1}}{\tilde {S}}_{Y}^{2}={\frac {n}{n-1}}\left[{\frac {1}{n}}\sum _{i=1}^{n}\left(Y_{i}-{\overline {Y}}\right)^{2}\right]={\frac {1}{n-1}}\sum _{i=1}^{n}\left(Y_{i}-{\overline {Y}}\right)^{2}}
Either estimator may be simply referred to as thesample variancewhen the version can be determined by context. The same proof is also applicable for samples taken from a continuous probability distribution.
The use of the divisorn− 1is calledBessel's correction, and it is also used insample covarianceand thesample standard deviation(the square root of variance). The square root is aconcave functionand thus introduces negative bias (byJensen's inequality), which depends on the distribution, so the corrected sample standard deviation (using Bessel's correction) is still biased. Theunbiased estimation of standard deviationis a technically involved problem, though for the normal distribution using the divisorn− 1.5yields an almost unbiased estimator.
The unbiased sample variance is aU-statisticfor the functionf(y1,y2) = (y1−y2)2/2, meaning that it is obtained by averaging a 2-sample statistic over 2-element subsets of the population.
For the set of numbers {10, 15, 30, 45, 57, 52, 63, 72, 81, 93, 102, 105}: if this set is the whole data population for some measurement, then its variance is the population variance 932.743, the sum of the squared deviations about the mean of the set divided by 12, the number of members. If the set is instead a sample drawn from a larger population, then the unbiased sample variance is 1017.538, the same sum of squared deviations divided by 11 instead of 12. The function VAR.S inMicrosoft Excelgives the unbiased sample variance, while VAR.P gives the population variance.
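These figures can be checked with a few lines of Python (a minimal sketch using the standard statistics module; the data set is the one above):

```python
import statistics

data = [10, 15, 30, 45, 57, 52, 63, 72, 81, 93, 102, 105]

# Population variance: sum of squared deviations divided by N = 12
pop_var = statistics.pvariance(data)       # ≈ 932.743

# Unbiased sample variance: same sum divided by n - 1 = 11
sample_var = statistics.variance(data)     # ≈ 1017.538

print(round(pop_var, 3), round(sample_var, 3))
```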
Being a function ofrandom variables, the sample variance is itself a random variable, and it is natural to study its distribution. In the case thatYiare independent observations from anormal distribution,Cochran's theoremshows that theunbiased sample varianceS2follows a scaledchi-squared distribution(see also:asymptotic propertiesand anelementary proof):[17]{\displaystyle (n-1){\frac {S^{2}}{\sigma ^{2}}}\sim \chi _{n-1}^{2}}
whereσ2is thepopulation variance. As a direct consequence, it follows that{\displaystyle \operatorname {E} \left(S^{2}\right)=\operatorname {E} \left({\frac {\sigma ^{2}}{n-1}}\chi _{n-1}^{2}\right)=\sigma ^{2},}
and[18]
{\displaystyle \operatorname {Var} \left[S^{2}\right]=\operatorname {Var} \left({\frac {\sigma ^{2}}{n-1}}\chi _{n-1}^{2}\right)={\frac {\sigma ^{4}}{{\left(n-1\right)}^{2}}}\operatorname {Var} \left(\chi _{n-1}^{2}\right)={\frac {2\sigma ^{4}}{n-1}}.}
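As a rough numerical illustration of these two results (a sketch, not part of the article; the sample size, σ and the number of replications are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma, reps = 10, 2.0, 200_000

# Draw many samples of size n from N(0, sigma^2) and compute the unbiased S^2 of each.
samples = rng.normal(0.0, sigma, size=(reps, n))
s2 = samples.var(axis=1, ddof=1)      # ddof=1 gives the n - 1 denominator

print(s2.mean())                      # ≈ sigma^2 = 4
print(s2.var())                       # ≈ 2*sigma^4/(n-1) = 32/9 ≈ 3.56
```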
IfYiare independent and identically distributed, but not necessarily normally distributed, then[19]
{\displaystyle \operatorname {E} \left[S^{2}\right]=\sigma ^{2},\quad \operatorname {Var} \left[S^{2}\right]={\frac {\sigma ^{4}}{n}}\left(\kappa -1+{\frac {2}{n-1}}\right)={\frac {1}{n}}\left(\mu _{4}-{\frac {n-3}{n-1}}\sigma ^{4}\right),}
whereκis thekurtosisof the distribution andμ4is the fourthcentral moment.
If the conditions of thelaw of large numbershold for the squared observations,S2is aconsistent estimatorofσ2. One can see indeed that the variance of the estimator tends asymptotically to zero. An asymptotically equivalent formula was given in Kenney and Keeping (1951:164), Rose and Smith (2002:264), and Weisstein (n.d.).[20][21][22]
Samuelson's inequalityis a result that states bounds on the values that individual observations in a sample can take, given that the sample mean and (biased) variance have been calculated.[23]Values must lie within the limits{\displaystyle {\bar {y}}\pm \sigma _{Y}(n-1)^{1/2}.}
It has been shown[24]that for a sample {yi} of positive real numbers,
{\displaystyle \sigma _{y}^{2}\leq 2y_{\max }(A-H),}
whereymaxis the maximum of the sample,Ais the arithmetic mean,His theharmonic meanof the sample andσy2{\displaystyle \sigma _{y}^{2}}is the (biased) variance of the sample.
This bound has been improved, and it is known that variance is bounded by
{\displaystyle {\begin{aligned}\sigma _{y}^{2}&\leq {\frac {y_{\max }(A-H)(y_{\max }-A)}{y_{\max }-H}},\\[1ex]\sigma _{y}^{2}&\geq {\frac {y_{\min }(A-H)(A-y_{\min })}{H-y_{\min }}},\end{aligned}}}
whereyminis the minimum of the sample.[25]
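A quick numerical check of these bounds (a sketch with an arbitrarily chosen sample of positive reals):

```python
import statistics

y = [1.0, 2.0, 4.0, 8.0]                 # any sample of positive real numbers
A = statistics.mean(y)                   # arithmetic mean
H = statistics.harmonic_mean(y)          # harmonic mean
var_b = statistics.pvariance(y)          # biased sample variance (divisor n)
ymax, ymin = max(y), min(y)

upper = ymax * (A - H) * (ymax - A) / (ymax - H)
lower = ymin * (A - H) * (A - ymin) / (H - ymin)
print(lower <= var_b <= upper)           # True for this sample
```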
TheF-test of equality of variancesand thechi square testsare adequate when the sample is normally distributed. Non-normality makes testing for the equality of two or more variances more difficult.
Several nonparametric tests have been proposed: these include the Barton–David–Ansari–Freund–Siegel–Tukey test, theCapon test, theMood test, theKlotz testand theSukhatme test. The Sukhatme test applies to two variances and requires that bothmediansbe known and equal to zero. The Mood, Klotz, Capon and Barton–David–Ansari–Freund–Siegel–Tukey tests also apply to two variances. They allow the median to be unknown but do require that the two medians are equal.
TheLehmann testis a parametric test of two variances; several variants of this test are known. Other tests of the equality of variances include theBox test, theBox–Anderson testand theMoses test.
Resampling methods, which include thebootstrapand thejackknife, may be used to test the equality of variances.
The variance of a probability distribution is analogous to themoment of inertiainclassical mechanicsof a corresponding mass distribution along a line, with respect to rotation about its center of mass.[26]It is because of this analogy that such things as the variance are calledmomentsofprobability distributions.[26]The covariance matrix is related to themoment of inertia tensorfor multivariate distributions. The moment of inertia of a cloud ofnpoints with a covariance matrix of{\displaystyle \Sigma }is given by[citation needed]{\displaystyle I=n\left(\mathbf {1} _{3\times 3}\operatorname {tr} (\Sigma )-\Sigma \right).}
This difference between moment of inertia in physics and in statistics is clear for points that are gathered along a line. Suppose many points are close to thexaxis and distributed along it. The covariance matrix might look like{\displaystyle \Sigma ={\begin{bmatrix}10&0&0\\0&0.1&0\\0&0&0.1\end{bmatrix}}.}
That is, there is the most variance in thexdirection. Physicists would consider this to have a low momentaboutthexaxis so the moment-of-inertia tensor is{\displaystyle I=n{\begin{bmatrix}0.2&0&0\\0&10.1&0\\0&0&10.1\end{bmatrix}}.}
Thesemivarianceis calculated in the same manner as the variance but only those observations that fall below the mean are included in the calculation:{\displaystyle {\text{Semivariance}}={\frac {1}{n}}\sum _{i:x_{i}<\mu }{\left(x_{i}-\mu \right)}^{2}}It is also described as a specific measure in different fields of application. For skewed distributions, the semivariance can provide additional information that a variance does not.[27]
For inequalities associated with the semivariance, seeChebyshev's inequality § Semivariances.
The termvariancewas first introduced byRonald Fisherin his 1918 paperThe Correlation Between Relatives on the Supposition of Mendelian Inheritance:[28]
The great body of available statistics show us that the deviations of ahuman measurementfrom its mean follow very closely theNormal Law of Errors, and, therefore, that the variability may be uniformly measured by thestandard deviationcorresponding to thesquare rootof themean square error. When there are two independent causes of variability capable of producing in an otherwise uniform population distributions with standard deviationsσ1{\displaystyle \sigma _{1}}andσ2{\displaystyle \sigma _{2}}, it is found that the distribution, when both causes act together, has a standard deviationσ12+σ22{\displaystyle {\sqrt {\sigma _{1}^{2}+\sigma _{2}^{2}}}}. It is therefore desirable in analysing the causes of variability to deal with the square of the standard deviation as the measure of variability. We shall term this quantity the Variance...
Ifx{\displaystyle x}is a scalarcomplex-valued random variable, with values inC,{\displaystyle \mathbb {C} ,}then its variance isE[(x−μ)(x−μ)∗],{\displaystyle \operatorname {E} \left[(x-\mu )(x-\mu )^{*}\right],}wherex∗{\displaystyle x^{*}}is thecomplex conjugateofx.{\displaystyle x.}This variance is a real scalar.
IfX{\displaystyle X}is avector-valued random variable, with values inRn,{\displaystyle \mathbb {R} ^{n},}and thought of as a column vector, then a natural generalization of variance isE[(X−μ)(X−μ)T],{\displaystyle \operatorname {E} \left[(X-\mu ){(X-\mu )}^{\mathsf {T}}\right],}whereμ=E(X){\displaystyle \mu =\operatorname {E} (X)}andXT{\displaystyle X^{\mathsf {T}}}is the transpose ofX, and so is a row vector. The result is apositive semi-definite square matrix, commonly referred to as thevariance-covariance matrix(or simply as thecovariance matrix).
IfX{\displaystyle X}is a vector- and complex-valued random variable, with values inCn,{\displaystyle \mathbb {C} ^{n},}then thecovariance matrix isE[(X−μ)(X−μ)†],{\displaystyle \operatorname {E} \left[(X-\mu ){(X-\mu )}^{\dagger }\right],}whereX†{\displaystyle X^{\dagger }}is theconjugate transposeofX.{\displaystyle X.}[citation needed]This matrix is also positive semi-definite and square.
Another generalization of variance for vector-valued random variablesX{\displaystyle X}, which results in a scalar value rather than in a matrix, is thegeneralized variancedet(C){\displaystyle \det(C)}, thedeterminantof the covariance matrix. The generalized variance can be shown to be related to the multidimensional scatter of points around their mean.[29]
A different generalization is obtained by considering the equation for the scalar variance,{\displaystyle \operatorname {Var} (X)=\operatorname {E} \left[(X-\mu )^{2}\right]}, and reinterpreting{\displaystyle (X-\mu )^{2}}as the squaredEuclidean distancebetween the random variable and its mean, or, simply as the scalar product of the vector{\displaystyle X-\mu }with itself. This results in{\displaystyle \operatorname {E} \left[(X-\mu )^{\mathsf {T}}(X-\mu )\right]=\operatorname {tr} (C),}which is thetraceof the covariance matrix.
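A small simulation illustrating this identity (a sketch; the covariance matrix and mean are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
cov = np.array([[2.0, 0.3], [0.3, 1.0]])      # chosen covariance matrix C
mu = np.array([1.0, -2.0])

X = rng.multivariate_normal(mu, cov, size=500_000)
sq_dist = ((X - mu) ** 2).sum(axis=1)         # squared Euclidean distance to the mean

print(sq_dist.mean())                         # ≈ trace(C) = 3.0
print(np.trace(cov))
```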
|
https://en.wikipedia.org/wiki/Variance#Properties
|
In cryptography,security levelis a measure of the strength that acryptographic primitive— such as acipherorhash function— achieves. Security level is usually expressed as a number of "bitsof security" (alsosecurity strength),[1]wheren-bit security means that the attacker would have to perform 2^n operations to break it,[2]but other methods have been proposed that more closely model the costs for an attacker.[3]This allows for convenient comparison between algorithms and is useful when combining multiple primitives in ahybrid cryptosystem, so there is no clear weakest link. For example,AES-128 (key size128 bits) is designed to offer a 128-bit security level, which is considered roughly equivalent toRSAwith a 3072-bit key.
In this context,security claimortarget security levelis the security level that a primitive was initially designed to achieve, although "security level" is also sometimes used in those contexts. When attacks are found that have lower cost than the security claim, the primitive is consideredbroken.[4][5]
Symmetric algorithms usually have a strictly defined security claim. Forsymmetric ciphers, it is typically equal to thekey sizeof the cipher — equivalent to thecomplexityof abrute-force attack.[5][6]Cryptographic hash functionswith output size ofnbits usually have acollision resistancesecurity level ofn/2 and apreimage resistancelevel ofn. This is because the generalbirthday attackcan always find collisions in 2^(n/2) steps.[7]For example,SHA-256offers 128-bit collision resistance and 256-bit preimage resistance.
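The birthday bound is easy to observe on a deliberately weakened hash. The sketch below (not from the article) truncates SHA-256 to 32 bits, so a collision is expected after on the order of 2^16 evaluations rather than 2^128:

```python
import hashlib

def h32(msg: bytes) -> bytes:
    """SHA-256 truncated to 32 bits: a toy hash with roughly 16-bit collision resistance."""
    return hashlib.sha256(msg).digest()[:4]

seen = {}
i = 0
while True:
    digest = h32(i.to_bytes(8, "big"))
    if digest in seen:
        print(f"collision after {i + 1} evaluations: inputs {seen[digest]} and {i}")
        break
    seen[digest] = i
    i += 1
```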
However, there are some exceptions to this. Phelix and Helix are 256-bit ciphers offering a 128-bit security level.[5][8]The SHAKE variants ofSHA-3are also different: for a 256-bit output size, SHAKE128 provides a 128-bit security level for both collision and preimage resistance.[9]
The design of most asymmetric algorithms (i.e.public-key cryptography) relies on neatmathematical problemsthat are efficient to compute in one direction, but inefficient to reverse by the attacker. However, attacks against current public-key systems are always faster thanbrute-force searchof the key space. Their security level isn't set at design time, but represents acomputational hardness assumption, which is adjusted to match the best currently known attack.[6]
Various recommendations have been published that estimate the security level of asymmetric algorithms, which differ slightly due to different methodologies.
The following table gives examples of typical security levels for types of algorithms, as found in section 5.6.1.1 of the US NIST SP 800-57 Recommendation for Key Management.[16]: Table 2
Under NIST recommendation, a key of a given security level should only be transported under protection using an algorithm of equivalent or higher security level.[14]
The security level is given for the cost of breaking one target, not the amortized cost for a group of targets. It takes 2^128 operations to find an AES-128 key, yet the same number of amortized operations is required for any numbermof keys. On the other hand, breakingmECC keys using the rho method requires only about √mtimes the base cost.[15][17]
A cryptographic primitive is considered broken when an attack is found to have less than its advertised level of security. However, not all such attacks are practical: most currently demonstrated attacks take fewer than 2^40 operations, which translates to a few hours on an average PC. The costliest demonstrated attack on hash functions is the 2^61.2 attack on SHA-1, which took 2 months on 900GTX 970GPUs, and cost US$75,000 (although the researchers estimate only $11,000 was needed to find a collision).[18]
Aumasson draws the line between practical and impractical attacks at 2^80 operations. He proposes a new terminology:[19]
|
https://en.wikipedia.org/wiki/Cryptographic_strength
|
Inalgebraic geometry, theNéron model(orNéron minimal model, orminimal model)
for anabelian varietyAKdefined over the field of fractionsKof a Dedekind domainRis the "push-forward" ofAKfrom Spec(K) to Spec(R), in other words the "best possible" group schemeARdefined overRcorresponding toAK.
They were introduced byAndré Néron(1961,1964) for abelian varieties over the quotient field of a Dedekind domainRwith perfect residue fields, andRaynaud (1966)extended this construction to semiabelian varieties over all Dedekind domains.
Suppose thatRis aDedekind domainwith field of fractionsK, and suppose thatAKis a smooth separated scheme overK(such as an abelian variety). Then aNéron modelofAKis defined to be asmoothseparatedschemeARoverRwith generic fiberAKthat is universal in the following sense: for every smooth separated schemeXoverR, anyK-morphism from the generic fiberXKtoAKextends uniquely to anR-morphism fromXtoAR(the Néron mapping property).
In particular, the canonical mapAR(R)→AK(K){\displaystyle A_{R}(R)\to A_{K}(K)}is an isomorphism. If a Néron model exists then it is unique up to unique isomorphism.
In terms of sheaves, any schemeAover Spec(K) represents a sheaf on the category of schemes smooth over Spec(K) with the smooth Grothendieck topology, and this has a pushforward by the injection map from Spec(K) to Spec(R), which is a sheaf over Spec(R). If this pushforward is representable by a scheme, then this scheme is the Néron model ofA.
In general the schemeAKneed not have any Néron model.
For abelian varietiesAKNéron models exist and are unique (up to unique isomorphism) and are commutative quasi-projectivegroup schemesoverR. The fiber of a Néron model over aclosed pointof Spec(R) is a smooth commutativealgebraic group, but need not be an abelian variety: for example, it may be disconnected or a torus. Néron models exist as well for certain commutative groups other than abelian varieties such as tori, but these are only locally of finite type. Néron models do not exist for the additive group.
The Néron model of an elliptic curveAKoverKcan be constructed as follows. First form the minimal model overRin the sense of algebraic (or arithmetic) surfaces. This is a regular proper surface overRbut is not in general smooth overRor a group scheme overR. Its subscheme of smooth points overRis the Néron model, which is a smooth group scheme overRbut not necessarily proper overR. The fibers in general may have several irreducible components, and to form the Néron model one discards all multiple components, all points where two components intersect, and all singular points of the components.
Tate's algorithmcalculates thespecial fiberof the Néron model of an elliptic curve, or more precisely the fibers of the minimal surface containing the Néron model.
|
https://en.wikipedia.org/wiki/N%C3%A9ron_minimal_model
|
Ininformation theory, asoft-decision decoderis a kind ofdecoding method– a class ofalgorithmused to decode data that has been encoded with anerror correcting code. Whereas ahard-decision decoderoperates on data that take on a fixed set of possible values (typically 0 or 1 in a binary code), the inputs to a soft-decision decoder may take on a whole range of values in-between. This extra information indicates the reliability of each input data point, and is used to form better estimates of the original data. Therefore, a soft-decision decoder will typically perform better in the presence of corrupted data than its hard-decision counterpart.[1]
Soft-decision decoders are often used inViterbi decodersandturbo codedecoders.
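A toy comparison of the two approaches for a 3-repetition code (an illustrative sketch; the mapping of bit 0/1 to +1/−1 and the received values are invented):

```python
# Decode a single bit sent three times through a noisy channel.
# Assumption of this sketch: bit 0 is transmitted as +1, bit 1 as -1.
received = [+0.9, -0.1, -0.2]   # noisy channel outputs for a transmitted bit 0

# Hard decision: quantise each sample to a bit first, then take a majority vote.
hard_bits = [0 if r > 0 else 1 for r in received]    # -> [0, 1, 1]
hard_decision = 1 if sum(hard_bits) >= 2 else 0      # -> 1 (wrong)

# Soft decision: keep the real values as reliabilities and sum them.
soft_decision = 0 if sum(received) > 0 else 1        # +0.6 > 0 -> 0 (correct)

print(hard_decision, soft_decision)
```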
|
https://en.wikipedia.org/wiki/Soft-decision_decoder
|
Inlinear algebra, the order-rKrylov subspacegenerated by ann-by-nmatrixAand a vectorbof dimensionnis thelinear subspacespannedby theimagesofbunder the firstrpowers ofA(starting from{\displaystyle A^{0}=I}), that is,[1][2]{\displaystyle {\mathcal {K}}_{r}(A,b)=\operatorname {span} \,\{b,Ab,A^{2}b,\ldots ,A^{r-1}b\}.}
The concept is named after Russian applied mathematician and naval engineerAlexei Krylov, who published a paper about the concept in 1931.[3]
Krylov subspaces are used in algorithms for finding approximate solutions to high-dimensionallinear algebra problems.[2]Manylinear dynamical systemtests incontrol theory, especially those related tocontrollabilityandobservability, involve checking the rank of the Krylov subspace. These tests are equivalent to finding the span of theGramiansassociated with the system/output maps so the uncontrollable and unobservable subspaces are simply the orthogonal complement to the Krylov subspace.[4]
Moderniterative methodssuch asArnoldi iterationcan be used for finding one (or a few) eigenvalues of largesparse matricesor solving large systems of linear equations. They try to avoid matrix-matrix operations, but rather multiply vectors by the matrix and work with the resulting vectors. Starting with a vectorb{\displaystyle b}, one computesAb{\displaystyle Ab}, then one multiplies that vector byA{\displaystyle A}to findA2b{\displaystyle A^{2}b}and so on. All algorithms that work this way are referred to as Krylov subspace methods; they are among the most successful methods currently available in numerical linear algebra. These methods can be used in situations where there is an algorithm to compute the matrix-vector multiplication without there being an explicit representation ofA{\displaystyle A}, giving rise toMatrix-free methods.
Because the vectors usually soon become almostlinearly dependentdue to the properties ofpower iteration, methods relying on Krylov subspace frequently involve someorthogonalizationscheme, such asLanczos iterationforHermitian matricesorArnoldi iterationfor more general matrices.
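A compact sketch of Arnoldi iteration in this spirit (assuming a dense NumPy matrix for brevity; real Krylov solvers only require a matrix-vector product):

```python
import numpy as np

def arnoldi(A, b, r):
    """Build an orthonormal basis Q of the order-r Krylov subspace K_r(A, b)
    and the (r+1) x r upper Hessenberg matrix H with A @ Q[:, :r] = Q @ H."""
    n = len(b)
    Q = np.zeros((n, r + 1))
    H = np.zeros((r + 1, r))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(r):
        v = A @ Q[:, j]                 # one matrix-vector product per step
        for i in range(j + 1):          # modified Gram-Schmidt orthogonalization
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if H[j + 1, j] < 1e-12:         # breakdown: the Krylov subspace is invariant
            return Q[:, :j + 1], H[:j + 2, :j + 1]
        Q[:, j + 1] = v / H[j + 1, j]
    return Q, H

# Example: the columns of Q span {b, Ab, A^2 b, ...}
A = np.array([[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 4.0]])
b = np.array([1.0, 0.0, 0.0])
Q, H = arnoldi(A, b, 2)
print(np.allclose(A @ Q[:, :2], Q @ H))   # the Arnoldi relation holds
```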
The best known Krylov subspace methods are theConjugate gradient,IDR(s)(Induced dimension reduction),GMRES(generalized minimum residual),BiCGSTAB(biconjugate gradient stabilized),QMR(quasi minimal residual),TFQMR(transpose-free QMR) andMINRES(minimal residual method).
|
https://en.wikipedia.org/wiki/Krylov_subspace
|
Ininformation science, anontologyencompasses a representation, formal naming, and definitions of the categories, properties, and relations between the concepts, data, or entities that pertain to one, many, or alldomains of discourse. More simply, an ontology is a way of showing the properties of a subject area and how they are related, by defining a set of terms and relational expressions that represent the entities in that subject area. The field which studies ontologies so conceived is sometimes referred to asapplied ontology.[1]
Everyacademic disciplineor field, in creating its terminology, thereby lays the groundwork for an ontology. Each uses ontological assumptions to frame explicit theories, research and applications. Improved ontologies may improve problem solving within that domain,interoperabilityof data systems, and discoverability of data. Translating research papers within every field is a problem made easier when experts from different countries maintain acontrolled vocabularyofjargonbetween each of their languages.[2]For instance, thedefinition and ontology of economicsis a primary concern inMarxist economics,[3]but also in othersubfields of economics.[4]An example of economics relying on information science occurs in cases where a simulation or model is intended to enable economic decisions, such as determining whatcapital assetsare at risk and by how much (seerisk management).
What ontologies in bothinformation scienceandphilosophyhave in common is the attempt to represent entities, including both objects and events, with all their interdependent properties and relations, according to a system of categories. In both fields, there is considerable work on problems ofontology engineering(e.g.,QuineandKripkein philosophy,SowaandGuarinoin information science),[5]and debates concerning to what extentnormativeontology is possible (e.g.,foundationalismandcoherentismin philosophy,BFOandCycin artificial intelligence).
Applied ontologyis considered by some as a successor to prior work in philosophy. However, many current efforts are more concerned with establishingcontrolled vocabulariesof narrow domains than with philosophicalfirst principles, or with questions such as the mode of existence offixed essencesor whether enduring objects (e.g.,perdurantismandendurantism) may be ontologically more primary thanprocesses.Artificial intelligencecontinues to pay considerable attention toapplied ontologyin subfields likenatural language processingwithinmachine translationandknowledge representation, but ontology editors are now often used in a range of fields, including biomedical informatics[6]and industry.[7]Such efforts often use ontology editing tools such asProtégé.[8]
Ontologyis a branch ofphilosophyand intersects areas such asmetaphysics,epistemology, andphilosophy of language, as it considers how knowledge, language, and perception relate to the nature of reality.Metaphysicsdeals with questions like "what exists?" and "what is the nature of reality?". One of five traditional branches of philosophy, metaphysics is concerned with exploring existence through properties, entities and relations such as those betweenparticularsanduniversals,intrinsic and extrinsic properties, oressenceandexistence. Metaphysics has been an ongoing topic of discussion since recorded history.
Thecompoundwordontologycombinesonto-, from theGreekὄν,on(gen.ὄντος,ontos), i.e. "being; that which is", which is thepresentparticipleof theverbεἰμί,eimí, i.e. "to be, I am", and-λογία,-logia, i.e. "logical discourse", seeclassical compoundsfor this type of word formation.[9][10]
While theetymologyis Greek, the oldest extant record of the word itself, theNeo-Latinformontologia, appeared in 1606 in the workOgdoas ScholasticabyJacob Lorhard(Lorhardus) and in 1613 in theLexicon philosophicumbyRudolf Göckel(Goclenius).[11]
The first occurrence in English ofontologyas recorded by theOED(Oxford English Dictionary, online edition, 2008) came inArcheologia Philosophica NovaorNew Principles of PhilosophybyGideon Harvey.
Since the mid-1970s, researchers in the field ofartificial intelligence(AI) have recognized thatknowledge engineeringis the key to building large and powerful AI systems[citation needed]. AI researchers argued that they could create new ontologies ascomputational modelsthat enable certain kinds ofautomated reasoning, an approach that was onlymarginally successful. In the 1980s, the AI community began to use the termontologyto refer to both a theory of a modeled world and a component ofknowledge-based systems. In particular, David Powers introduced the wordontologyto AI to refer to real-world or robotic grounding,[12][13]publishing literature reviews in 1990 that emphasized grounded ontology, in association with the call for papers for an AAAI Summer Symposium on Machine Learning of Natural Language and Ontology, with an expanded version published in SIGART Bulletin and included as a preface to the proceedings.[14]Some researchers, drawing inspiration from philosophical ontologies, viewed computational ontology as a kind of applied philosophy.[15]
In 1993, the widely cited web page and paper "Toward Principles for the Design of Ontologies Used for Knowledge Sharing" byTom Gruber[16]usedontologyas a technical term incomputer scienceclosely related to earlier idea ofsemantic networksandtaxonomies. Gruber introduced the term asa specification of a conceptualization:
An ontology is a description (like a formal specification of a program) of the concepts and relationships that can formally exist for an agent or a community of agents. This definition is consistent with the usage of ontology as set of concept definitions, but more general. And it is a different sense of the word than its use in philosophy.[17]
Attempting to distance ontologies from taxonomies and similar efforts inknowledge modelingthat rely onclassesandinheritance, Gruber stated (1993):
Ontologies are often equated with taxonomic hierarchies of classes, class definitions, and the subsumption relation, but ontologies need not be limited to these forms. Ontologies are also not limited toconservative definitions, that is, definitions in the traditional logic sense that only introduce terminology and do not add any knowledge about the world (Enderton, 1972). To specify a conceptualization, one needs to state axioms thatdoconstrain the possible interpretations for the defined terms.[16]
Recent experimental ontology frameworks have also explored resonance-based AI-human co-evolution structures, such as IAMF (Illumination AI Matrix Framework). Though not yet widely adopted in academic discourse, such models propose phased approaches to ethical harmonization and structural emergence.[18]
As refinement of Gruber's definition Feilmayr and Wöß (2016) stated: "An ontology is a formal, explicit specification of a shared conceptualization that is characterized by high semantic expressiveness required for increased complexity."[19]
Contemporary ontologies share many structural similarities, regardless of the language in which they are expressed. Most ontologies describe individuals (instances), classes (concepts), attributes and relations.
A domain ontology (or domain-specific ontology) represents concepts which belong to a realm of the world, such as biology or politics. Each domain ontology typically models domain-specific definitions of terms. For example, the wordcardhas many different meanings. An ontology about the domain ofpokerwould model the "playing card" meaning of the word, while an ontology about the domain ofcomputer hardwarewould model the "punched card" and "video card" meanings.
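As an illustration, a tiny "poker" domain ontology with classes, an individual, an attribute and a relation might be written with the rdflib library (version 6 or later) roughly as follows; the namespace, terms and labels are invented for this sketch:

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

POKER = Namespace("http://example.org/poker#")   # illustrative namespace
g = Graph()

g.add((POKER.Card, RDF.type, RDFS.Class))                  # class (concept)
g.add((POKER.Card, RDFS.label, Literal("playing card")))   # domain-specific meaning
g.add((POKER.Hand, RDF.type, RDFS.Class))

g.add((POKER.royalFlush, RDF.type, POKER.Hand))            # individual (instance)
g.add((POKER.aceOfSpades, RDF.type, POKER.Card))           # individual (instance)
g.add((POKER.aceOfSpades, POKER.rank, Literal("ace")))     # attribute
g.add((POKER.aceOfSpades, POKER.partOf, POKER.royalFlush)) # relation between individuals

print(g.serialize(format="turtle"))
```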
Since domain ontologies are written by different people, they represent concepts in very specific and unique ways, and are often incompatible within the same project. As systems that rely on domain ontologies expand, they often need to merge domain ontologies by hand-tuning each entity or using a combination of software merging and hand-tuning. This presents a challenge to the ontology designer. Different ontologies in the same domain arise due to different languages, different intended usage of the ontologies, and different perceptions of the domain (based on cultural background, education, ideology, etc.)[citation needed].
At present, merging ontologies that are not developed from a commonupper ontologyis a largely manual process and therefore time-consuming and expensive. Domain ontologies that use the same upper ontology to provide a set of basic elements with which to specify the meanings of the domain ontology entities can be merged with less effort. There are studies on generalized techniques for merging ontologies,[20]but this area of research is still ongoing, and only recently has the issue been sidestepped by having multiple domain ontologies use the same upper ontology, as in theOBO Foundry.
An upper ontology (or foundation ontology) is a model of the commonly shared relations and objects that are generally applicable across a wide range of domain ontologies. It usually employs acore glossarythat overarches the terms and associated object descriptions as they are used in various relevant domain ontologies.
Standardized upper ontologies available for use includeBFO,BORO method,Dublin Core,GFO,Cyc,SUMO,UMBEL, andDOLCE.[21][22]WordNethas been considered an upper ontology by some and has been used as a linguistic tool for learning domain ontologies.[23]
TheGellishontology is an example of a combination of an upper and a domain ontology.
A survey of ontology visualization methods is presented by Katifori et al.[24]An updated survey of ontology visualization methods and tools was published by Dudás et al.[25]The most established ontology visualization methods, namely indented tree and graph visualization, are evaluated by Fu et al.[26]A visual language for ontologies represented inOWLis specified by theVisual Notation for OWL Ontologies (VOWL).[27]
Ontology engineering (also called ontology building) is a set of tasks related to the development of ontologies for a particular domain.[28]It is a subfield ofknowledge engineeringthat studies the ontology development process, the ontology life cycle, the methods and methodologies for building ontologies, and the tools and languages that support them.[29][30]
Ontology engineering aims to make explicit the knowledge contained in software applications, and organizational procedures for a particular domain. Ontology engineering offers a direction for overcoming semantic obstacles, such as those related to the definitions of business terms and software classes. Known challenges with ontology engineering include:
Ontology editorsare applications designed to assist in the creation or manipulation of ontologies. It is common for ontology editors to use one or moreontology languages.
Aspects of ontology editors include: visual navigation possibilities within theknowledge model,inference enginesandinformation extraction; support for modules; the import and export of foreignknowledge representationlanguages forontology matching; and the support of meta-ontologies such asOWL-S,Dublin Core, etc.[31]
Ontology learning is the automatic or semi-automatic creation of ontologies, including extracting a domain's terms from natural language text. As building ontologies manually is extremely labor-intensive and time-consuming, there is great motivation to automate the process. Information extraction andtext mininghave been explored to automatically link ontologies to documents, for example in the context of the BioCreative challenges.[32]
Epistemological assumptions, which in research ask "What do you know?" or "How do you know it?", create the foundation researchers use when approaching a certain topic or area for potential research. As epistemology is directly linked to knowledge and how we come about accepting certain truths, individuals conducting academic research must understand what allows them to begin theory building. Simply put, epistemological assumptions force researchers to question how they arrive at the knowledge they have.[citation needed]
Anontology languageis aformal languageused to encode an ontology. There are a number of such languages for ontologies, both proprietary and standards-based:
The W3CLinking Open Data community projectcoordinates attempts to converge different ontologies into worldwideSemantic Web.
The development of ontologies has led to the emergence of services providing lists or directories of ontologies called ontology libraries.
The following are libraries of human-selected ontologies.
The following are both directories and search engines.
In general, ontologies can be used beneficially in several fields.
|
https://en.wikipedia.org/wiki/Ontology_(information_science)#Domain_ontology
|
AJones diagramis a type ofCartesian graphdeveloped byLoyd A. Jonesin the 1940s, where each axis represents a differentvariable. In a Jones diagram opposite directions of an axis represent different quantities, unlike in a Cartesian graph where they represent positive or negativesignsof the same quantity. The Jones diagram therefore represents four variables. Each quadrant shares the vertical axis with its horizontal neighbor, and the horizontal axis with the vertical neighbor. For example, the top left quadrant shares its vertical axis with the top right quadrant, and the horizontal axis with the bottom left quadrant. The overall system response is inquadrantI; the variables that contribute to it are in quadrants II through IV.
A common application of Jones diagrams is inphotography, specifically in displaying sensitivity to light with what are also called "tone reproductiondiagrams". These diagrams are used in the design of photographic systems (film,paper, etc.) to determine the relationship between the light a viewer would see at the time a photo was taken to the light that a viewer would see looking at the finished photograph.
The Jones diagram concept can be used for variables that depend successively on each other. Jones's original diagram used eleven quadrants[how?]to show all the elements of his photographic system.
|
https://en.wikipedia.org/wiki/Jones_diagram
|
Incryptography, ahybrid cryptosystemis one which combines the convenience of apublic-key cryptosystemwith the efficiency of asymmetric-key cryptosystem.[1]Public-key cryptosystems are convenient in that they do not require the sender and receiver to share acommon secretin order to communicate securely.[2]However, they often rely on complicated mathematical computations and are thus generally much more inefficient than comparable symmetric-key cryptosystems. In many applications, the high cost of encrypting long messages in a public-key cryptosystem can be prohibitive. This is addressed by hybrid systems by using a combination of both.[3]
A hybrid cryptosystem can be constructed using any two separate cryptosystems: a key encapsulation mechanism, which is a public-key cryptosystem, and a data encapsulation scheme, which is a symmetric-key cryptosystem.
The hybrid cryptosystem is itself a public-key system, whose public and private keys are the same as in the key encapsulation scheme.[4]
Note that for very long messages the bulk of the work in encryption/decryption is done by the more efficient symmetric-key scheme, while the inefficient public-key scheme is used only to encrypt/decrypt a short key value.[3]
All practical implementations of public key cryptography today employ the use of a hybrid system. Examples include theTLSprotocol[5]and theSSHprotocol,[6]that use a public-key mechanism for key exchange (such asDiffie-Hellman) and a symmetric-key mechanism for data encapsulation (such asAES). TheOpenPGP[7]file format and thePKCS#7[8]file format are other examples.
Hybrid Public Key Encryption (HPKE, published asRFC 9180) is a modern standard for generic hybrid encryption. HPKE is used within multiple IETF protocols, includingMLSand TLS Encrypted Client Hello.
Envelope encryption is an example of a usage of hybrid cryptosystems incloud computing. In a cloud context, hybrid cryptosystems also enable centralizedkey management.[9][10]
To encrypt a message addressed to Alice in a hybrid cryptosystem, Bob does the following: he obtains Alice's public key, generates a fresh symmetric key for the data encapsulation scheme, encrypts the message under the data encapsulation scheme using that symmetric key, encrypts the symmetric key under the key encapsulation scheme using Alice's public key, and sends both ciphertexts to Alice.
To decrypt this hybrid ciphertext, Alice does the following: she uses her private key to decrypt the symmetric key contained in the key encapsulation segment, and then uses that symmetric key to decrypt the message contained in the data encapsulation segment.
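A minimal sketch of this flow in Python, using the cryptography package with RSA-OAEP as the key encapsulation scheme and AES-GCM as the data encapsulation scheme (the library and parameter choices here are illustrative, not prescribed by the descriptions above):

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Alice's long-term key pair (key encapsulation: RSA-OAEP).
alice_private = rsa.generate_private_key(public_exponent=65537, key_size=3072)
alice_public = alice_private.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# --- Bob encrypts ---
session_key = AESGCM.generate_key(bit_length=128)          # fresh symmetric key
nonce = os.urandom(12)
data_ct = AESGCM(session_key).encrypt(nonce, b"hello Alice", None)
key_ct = alice_public.encrypt(session_key, oaep)            # wrap the session key
# Bob sends (key_ct, nonce, data_ct) to Alice.

# --- Alice decrypts ---
recovered_key = alice_private.decrypt(key_ct, oaep)
plaintext = AESGCM(recovered_key).decrypt(nonce, data_ct, None)
print(plaintext)
```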
If both the key encapsulation and data encapsulation schemes in a hybrid cryptosystem are secure againstadaptive chosen ciphertext attacks, then the hybrid scheme inherits that property as well.[4]However, it is possible to construct a hybrid scheme secure against adaptive chosen ciphertext attacks even if the key encapsulation has a slightly weakened security definition (though the security of the data encapsulation must be slightly stronger).[12]
Envelope encryption is the term used for hybrid encryption as employed by all majorcloud service providers,[9]often as part of a centralizedkey managementsystem in cloud computing.[13]
Envelope encryption gives names to the keys used in hybrid encryption: Data Encryption Keys (abbreviated DEK, and used to encrypt data) and Key Encryption Keys (abbreviated KEK, and used to encrypt the DEKs). In a cloud environment, encryption with envelope encryption involves generating a DEK locally, encrypting one's data using the DEK, and then issuing a request to wrap (encrypt) the DEK with a KEK stored in a potentially more secureservice. Then, this wrapped DEK and encrypted message constitute aciphertextfor the scheme. To decrypt a ciphertext, the wrapped DEK is unwrapped (decrypted) via a call to a service, and then the unwrapped DEK is used to decrypt the encrypted message.[10]In addition to the normal advantages of a hybrid cryptosystem, using asymmetric encryption for the KEK in a cloud context provides easier key management and separation of roles, but can be slower.[13]
In cloud systems, such asGoogle Cloud PlatformandAmazon Web Services, a key management system (KMS) can be available as a service.[13][10][14]In some cases, the key management system will store keys inhardware security modules, which are hardware systems that protect keys with hardware features like intrusion resistance.[15]This means that KEKs can also be more secure because they are stored on secure specialized hardware.[13]Envelope encryption makes centralized key management easier because a centralized key management system only needs to store KEKs, which occupy less space, and requests to the KMS only involve sending wrapped and unwrapped DEKs, which use less bandwidth than transmitting entire messages. Since one KEK can be used to encrypt many DEKs, this also allows for less storage space to be used in the KMS. This also allows for centralized auditing and access control at one point of access.[10]
|
https://en.wikipedia.org/wiki/Hybrid_encryption
|
Programming complexity(orsoftware complexity) is a term that includes software properties that affect internal interactions. Several commentators distinguish between the terms "complex" and "complicated". Complicated implies being difficult to understand, but ultimately knowable. Complex, by contrast, describes the interactions between entities. As the number of entities increases, the number of interactions between them increases exponentially, making it impossible to know and understand them all. Similarly, higher levels of complexity in software increase the risk of unintentionally interfering with interactions, thus increasing the risk of introducing defects when changing the software. In more extreme cases, it can make modifying the software virtually impossible.
The idea of linking software complexity to software maintainability has been explored extensively byProfessor Manny Lehman, who developed hisLaws of Software Evolution. He and his co-authorLes Beladyexplored numeroussoftware metricsthat could be used to measure the state of software, eventually concluding that the only practical solution is to use deterministic complexity models.[1]
The complexity of an existing program determines the complexity of changing the program. Problem complexity can be divided into two categories:[2]
Several measures of software complexity have been proposed. Many of these, although yielding a good representation of complexity, do not lend themselves to easy measurement. Some of the more commonly used metrics are
Several other metrics can be used to measure programming complexity:
Tesler's Lawis anadageinhuman–computer interactionstating that everyapplicationhas an inherent amount of complexity that cannot be removed or hidden.
Chidamber and Kemerer[4]proposed a set of programming complexity metrics widely used in measurements and academic articles: weighted methods per class, coupling between object classes, response for a class, number of children, depth of inheritance tree, and lack of cohesion of methods, described below:
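Two of these metrics, depth of inheritance tree and number of children, can be sketched for ordinary Python classes via introspection (the class hierarchy below is invented for illustration):

```python
# A rough sketch of two Chidamber-Kemerer metrics for Python classes.
class Vehicle: ...
class Car(Vehicle): ...
class Truck(Vehicle): ...
class SportsCar(Car): ...

def depth_of_inheritance_tree(cls) -> int:
    """DIT: length of the longest path from the class to the root of the hierarchy."""
    bases = [b for b in cls.__bases__ if b is not object]
    return 0 if not bases else 1 + max(depth_of_inheritance_tree(b) for b in bases)

def number_of_children(cls) -> int:
    """NOC: number of immediate subclasses."""
    return len(cls.__subclasses__())

print(depth_of_inheritance_tree(SportsCar))   # 2  (SportsCar -> Car -> Vehicle)
print(number_of_children(Vehicle))            # 2  (Car and Truck)
```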
|
https://en.wikipedia.org/wiki/Programming_complexity
|
Incomputational geometry, aDelaunay triangulationorDelone triangulationof a set of points in the plane subdivides theirconvex hull[1]into triangles whosecircumcirclesdo not contain any of the points; that is, each circumcircle has its generating points on its circumference, but all other points in the set are outside of it. This maximizes the size of the smallest angle in any of the triangles, and tends to avoidsliver triangles.
The triangulation is named afterBoris Delaunayfor his work on it from 1934.[2]
If the points all lie on a straight line, the notion of triangulation becomesdegenerateand there is no Delaunay triangulation. For four or more points on the same circle (e.g., the vertices of a rectangle) the Delaunay triangulation is not unique: each of the two possible triangulations that split thequadrangleinto two triangles satisfies the "Delaunay condition", i.e., the requirement that the circumcircles of all triangles have empty interiors.
By considering circumscribed spheres, the notion of Delaunay triangulation extends to three and higher dimensions. Generalizations are possible tometricsother thanEuclidean distance. However, in these cases a Delaunay triangulation is not guaranteed to exist or be unique.
The Delaunaytriangulationof adiscretepoint setPin general position corresponds to thedual graphof theVoronoi diagramforP.
Thecircumcentersof Delaunay triangles are the vertices of the Voronoi diagram.
In the 2D case, the Voronoi vertices are connected via edges that can be derived from adjacency relationships of the Delaunay triangles: if two triangles share an edge in the Delaunay triangulation, their circumcenters are connected by an edge in the Voronoi tessellation.
Special cases where this relationship does not hold, or is ambiguous, include cases like:
For a setPof points in the (d-dimensional)Euclidean space, aDelaunay triangulationis atriangulationDT(P)such that no point inPis inside thecircum-hypersphereof anyd-simplexinDT(P). It is known[2]that there exists a unique Delaunay triangulation forPifPis a set of points ingeneral position; that is, the affine hull ofPisd-dimensional and no set ofd+ 2points inPlie on the boundary of a ball whose interior does not intersectP.
The problem of finding the Delaunay triangulation of a set of points ind-dimensionalEuclidean spacecan be converted to the problem of finding theconvex hullof a set of points in (d+ 1)-dimensional space. This may be done by giving each pointpan extra coordinate equal to|p|2, thus turning it into a hyper-paraboloid (this is termed "lifting"); taking the bottom side of the convex hull (as the top end-cap faces upwards away from the origin, and must be discarded); and mapping back tod-dimensional space by deleting the last coordinate. As the convex hull is unique, so is the triangulation, assuming all facets of the convex hull aresimplices. Nonsimplicial facets only occur whend+ 2of the original points lie on the samed-hypersphere, i.e., the points are not in general position.[3]
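A short sketch of this lifting construction using NumPy and SciPy (the facet-normal sign convention follows Qhull as exposed by scipy.spatial.ConvexHull; scipy.spatial.Delaunay would compute the same triangulation directly):

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
pts = rng.random((20, 2))                        # 2-D input points

# Lift each point (x, y) to (x, y, x^2 + y^2) on the paraboloid.
lifted = np.column_stack([pts, (pts ** 2).sum(axis=1)])

hull = ConvexHull(lifted)
# hull.equations holds outward facet normals; downward-facing facets
# (negative z-component) project back to the Delaunay triangles.
lower = hull.equations[:, 2] < 0
triangles = hull.simplices[lower]

print(len(triangles), "Delaunay triangles")
```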
Letnbe the number of points anddthe number of dimensions.
From the above properties an important feature arises: Looking at two triangles△ABD, △BCDwith the common edgeBD(see figures), if the sum of the anglesα + γ ≤ 180°, the triangles meet the Delaunay condition.
This is an important property because it allows the use of aflippingtechnique. If two triangles do not meet the Delaunay condition, switching the common edgeBDfor the common edgeACproduces two triangles that do meet the Delaunay condition:
This operation is called aflip, and can be generalised to three and higher dimensions.[8]
Many algorithms for computing Delaunay triangulations rely on fast operations for detecting when a point is within a triangle's circumcircle and an efficient data structure for storing triangles and edges. In two dimensions, one way to detect if pointDlies in the circumcircle ofA, B, Cis to evaluate thedeterminant:[9]{\displaystyle {\begin{vmatrix}A_{x}-D_{x}&A_{y}-D_{y}&(A_{x}-D_{x})^{2}+(A_{y}-D_{y})^{2}\\B_{x}-D_{x}&B_{y}-D_{y}&(B_{x}-D_{x})^{2}+(B_{y}-D_{y})^{2}\\C_{x}-D_{x}&C_{y}-D_{y}&(C_{x}-D_{x})^{2}+(C_{y}-D_{y})^{2}\end{vmatrix}}}
WhenA, B, Care sorted in acounterclockwiseorder, this determinant is positive only ifDlies inside the circumcircle.
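A direct NumPy implementation of this test might look as follows (a sketch; robust implementations use exact or adaptive-precision arithmetic to avoid rounding errors near the circle):

```python
import numpy as np

def in_circumcircle(a, b, c, d) -> bool:
    """True if point d lies strictly inside the circumcircle of triangle (a, b, c).
    The vertices a, b, c must be given in counterclockwise order."""
    a, b, c, d = (np.asarray(p, dtype=float) for p in (a, b, c, d))
    m = np.array([
        [*(a - d), np.dot(a - d, a - d)],
        [*(b - d), np.dot(b - d, b - d)],
        [*(c - d), np.dot(c - d, c - d)],
    ])
    return np.linalg.det(m) > 0

# (0,0), (1,0), (0,1) have circumcircle centered at (0.5, 0.5) with radius ~0.707
print(in_circumcircle((0, 0), (1, 0), (0, 1), (0.5, 0.5)))   # True
print(in_circumcircle((0, 0), (1, 0), (0, 1), (2.0, 2.0)))   # False
```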
As mentioned above, if a triangle is non-Delaunay, we can flip one of its edges. This leads to a straightforward algorithm: construct any triangulation of the points, and then flip edges until no triangle is non-Delaunay. Unfortunately, this can takeΩ(n²)edge flips.[10]While this algorithm can be generalised to three and higher dimensions, its convergence is not guaranteed in these cases, as it is conditioned to the connectedness of the underlyingflip graph: this graph is connected for two-dimensional sets of points, but may be disconnected in higher dimensions.[8]
The most straightforward way of efficiently computing the Delaunay triangulation is to repeatedly add one vertex at a time, retriangulating the affected parts of the graph. When a vertexvis added, we split the triangle that containsvinto three, then we apply the flip algorithm. Done naïvely, this will takeO(n)time: we search through all the triangles to find the one that containsv, then we potentially flip away every triangle. Then the overall runtime isO(n²).
If we insert vertices in random order, it turns out (by a somewhat intricate proof) that each insertion will flip, on average, onlyO(1)triangles – although sometimes it will flip many more.[11]This still leaves the point location time to improve. We can store the history of the splits and flips performed: each triangle stores a pointer to the two or three triangles that replaced it. To find the triangle that containsv, we start at a root triangle, and follow the pointer that points to a triangle that containsv, until we find a triangle that has not yet been replaced. On average, this will also takeO(logn)time. Over all vertices, then, this takesO(nlogn)time.[12]While the technique extends to higher dimension (as proved by Edelsbrunner and Shah[13]), the runtime can be exponential in the dimension even if the final Delaunay triangulation is small.
TheBowyer–Watson algorithmprovides another approach for incremental construction. It gives an alternative to edge flipping for computing the Delaunay triangles containing a newly inserted vertex.
Unfortunately the flipping-based algorithms are generally hard to parallelize, since adding some certain point (e.g. the center point of a wagon wheel) can lead to up toO(n)consecutive flips. Blelloch et al.[14]proposed another version of incremental algorithm based on rip-and-tent, which is practical and highly parallelized with polylogarithmicspan.
Adivide and conquer algorithmfor triangulations in two dimensions was developed by Lee and Schachter and improved byGuibasandStolfi[9][15]and later by Dwyer.[16]In this algorithm, one recursively draws a line to split the vertices into two sets. The Delaunay triangulation is computed for each set, and then the two sets are merged along the splitting line. Using some clever tricks, the merge operation can be done in timeO(n), so the total running time isO(nlogn).[17]
For certain types of point sets, such as a uniform random distribution, by intelligently picking the splitting lines the expected time can be reduced toO(nlog logn)while still maintaining worst-case performance.
A divide and conquer paradigm to performing a triangulation inddimensions is presented in "DeWall: A fast divide and conquer Delaunay triangulation algorithm in Ed" by P. Cignoni, C. Montani, R. Scopigno.[18]
The divide and conquer algorithm has been shown to be the fastest DT generation technique sequentially.[19][20]
Sweephull[21]is a hybrid technique for 2D Delaunay triangulation that uses a radially propagating sweep-hull and a flipping algorithm. The sweep-hull is created sequentially by iterating over a radially-sorted set of 2D points and connecting triangles to the visible part of the convex hull, which gives a non-overlapping triangulation. One can build a convex hull in this manner so long as the order of points guarantees no point would fall within the triangle. Radial sorting should minimize the number of flips needed, since the initial triangulation is already close to Delaunay. This is then paired with a final iterative triangle-flipping step.
TheEuclidean minimum spanning treeof a set of points is a subset of the Delaunay triangulation of the same points,[22]and this can be exploited to compute it efficiently.
For modellingterrainor other objects given apoint cloud, the Delaunay triangulation gives a nice set of triangles to use as polygons in the model. In particular, the Delaunay triangulation avoids narrow triangles (as they have large circumcircles compared to their area). Seetriangulated irregular network.
Delaunay triangulations can be used to determine the density or intensity of points samplings by means of theDelaunay tessellation field estimator (DTFE).
Delaunay triangulations are often used togenerate meshesfor space-discretised solvers such as thefinite element methodand thefinite volume methodof physics simulation, because of the angle guarantee and because fast triangulation algorithms have been developed. Typically, the domain to be meshed is specified as a coarsesimplicial complex; for the mesh to be numerically stable, it must be refined, for instance by usingRuppert's algorithm.
The increasing popularity offinite element methodandboundary element methodtechniques increases the incentive to improve automatic meshing algorithms. However, all of these algorithms can create distorted and even unusable grid elements. Fortunately, several techniques exist which can take an existing mesh and improve its quality. For example, smoothing (also referred to as mesh refinement) is one such method, which repositions nodes to minimize element distortion. Thestretched grid methodallows the generation of pseudo-regular meshes that meet the Delaunay criteria easily and quickly in a one-step solution.
Constrained Delaunay triangulationhas found applications inpath planningin automated driving and topographic surveying.[23]
|
https://en.wikipedia.org/wiki/Delaunay_triangulation
|
Inmathematics, aparametric equationexpresses several quantities, such as thecoordinatesof apoint, asfunctionsof one or severalvariablescalledparameters.[1]
In the case of a single parameter, parametric equations are commonly used to express thetrajectoryof a moving point, in which case, the parameter is often, but not necessarily, time, and the point describes acurve, called aparametric curve. In the case of two parameters, the point describes asurface, called aparametric surface. In all cases, the equations are collectively called aparametric representation,[2]orparametric system,[3]orparameterization(also spelledparametrization,parametrisation) of the object.[1][4][5]
For example, the equations{\displaystyle {\begin{aligned}x&=\cos t\\y&=\sin t\end{aligned}}}form a parametric representation of theunit circle, wheretis the parameter: A point(x,y)is on the unit circleif and only ifthere is a value oftsuch that these two equations generate that point. Sometimes the parametric equations for the individualscalaroutput variables are combined into a single parametric equation invectors:
{\displaystyle (x,y)=(\cos t,\sin t).}
Parametric representations are generally nonunique (see the "Examples in two dimensions" section below), so the same quantities may be expressed by a number of different parameterizations.[1]
In addition to curves and surfaces, parametric equations can describemanifoldsandalgebraic varietiesof higherdimension, with the number of parameters being equal to the dimension of the manifold or variety, and the number of equations being equal to the dimension of the space in which the manifold or variety is considered (for curves the dimension isoneandoneparameter is used, for surfaces dimensiontwoandtwoparameters, etc.).
Parametric equations are commonly used inkinematics, where thetrajectoryof an object is represented by equations depending on time as the parameter. Because of this application, a single parameter is often labeledt; however, parameters can represent other physical quantities (such as geometric variables) or can be selected arbitrarily for convenience. Parameterizations are non-unique; more than one set of parametric equations can specify the same curve.[6]
Converting a set of parametric equations to a singleimplicit equationinvolves eliminating the variabletfrom the simultaneous equations{\displaystyle x=f(t),\ y=g(t).}This process is calledimplicitization. If one of these equations can be solved fort, the expression obtained can be substituted into the other equation to obtain an equation involvingxandyonly: Solving{\displaystyle y=g(t)}to obtain{\displaystyle t=g^{-1}(y)}and using this in{\displaystyle x=f(t)}gives the explicit equation{\displaystyle x=f(g^{-1}(y)),}while more complicated cases will give an implicit equation of the form{\displaystyle h(x,y)=0.}
If the parametrization is given byrational functions{\displaystyle x={\frac {p(t)}{r(t)}},\qquad y={\frac {q(t)}{r(t)}},}
wherep,q, andrare set-wisecoprimepolynomials, aresultantcomputation allows one to implicitize. More precisely, the implicit equation is theresultantwith respect totofxr(t) –p(t)andyr(t) –q(t).
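For example, the rational parameterization of the unit circle given later in this article can be implicitized with SymPy's resultant (a sketch; the resultant recovers the implicit equation only up to a constant factor):

```python
import sympy as sp

x, y, t = sp.symbols("x y t")

# Rational parameterization of the unit circle: x = (1 - t^2)/(1 + t^2), y = 2t/(1 + t^2)
p, q, r = 1 - t**2, 2*t, 1 + t**2

implicit = sp.resultant(x*r - p, y*r - q, t)
print(sp.factor(implicit))        # 4*(x**2 + y**2 - 1), up to a constant factor
```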
In higher dimensions (either more than two coordinates or more than one parameter), the implicitization of rational parametric equations may by done withGröbner basiscomputation; seeGröbner basis § Implicitization in higher dimension.
To take the example of the circle of radiusa, the parametric equations{\displaystyle {\begin{aligned}x&=a\cos(t)\\y&=a\sin(t)\end{aligned}}}
can be implicitized in terms ofxandyby way of thePythagorean trigonometric identity. With
{\displaystyle {\begin{aligned}{\frac {x}{a}}&=\cos(t)\\{\frac {y}{a}}&=\sin(t)\\\end{aligned}}}and{\displaystyle \cos(t)^{2}+\sin(t)^{2}=1,}we get{\displaystyle \left({\frac {x}{a}}\right)^{2}+\left({\frac {y}{a}}\right)^{2}=1,}and thus{\displaystyle x^{2}+y^{2}=a^{2},}
which is the standard equation of a circle centered at the origin.
The simplest equation for aparabola,{\displaystyle y=x^{2}}
can be (trivially) parameterized by using a free parametert, and setting{\displaystyle x=t,y=t^{2}\quad \mathrm {for} \ -\infty <t<\infty .}
More generally, any curve given by an explicit equation{\displaystyle y=f(x)}
can be (trivially) parameterized by using a free parametert, and setting{\displaystyle x=t,y=f(t)\quad \mathrm {for} \ -\infty <t<\infty .}
A more sophisticated example is the following. Consider the unit circle which is described by the ordinary (Cartesian) equation{\displaystyle x^{2}+y^{2}=1.}
This equation can be parameterized as follows:{\displaystyle (x,y)=(\cos(t),\;\sin(t))\quad \mathrm {for} \ 0\leq t<2\pi .}
With the Cartesian equation it is easier to check whether a point lies on the circle or not. With the parametric version it is easier to obtain points on a plot.
In some contexts, parametric equations involving onlyrational functions(that is fractions of twopolynomials) are preferred, if they exist. In the case of the circle, such arational parameterizationisx=1−t21+t2y=2t1+t2.{\displaystyle {\begin{aligned}x&={\frac {1-t^{2}}{1+t^{2}}}\\y&={\frac {2t}{1+t^{2}}}\,.\end{aligned}}}
With this pair of parametric equations, the point(−1, 0)is not represented by arealvalue oft, but by thelimitofxandywhenttends toinfinity.
Anellipsein canonical position (center at origin, major axis along thex-axis) with semi-axesaandbcan be represented parametrically asx=acosty=bsint.{\displaystyle {\begin{aligned}x&=a\,\cos t\\y&=b\,\sin t\,.\end{aligned}}}
An ellipse in general position can be expressed asx=Xc+acostcosφ−bsintsinφy=Yc+acostsinφ+bsintcosφ{\displaystyle {\begin{alignedat}{4}x={}&&X_{\mathrm {c} }&+a\,\cos t\,\cos \varphi {}&&-b\,\sin t\,\sin \varphi \\y={}&&Y_{\mathrm {c} }&+a\,\cos t\,\sin \varphi {}&&+b\,\sin t\,\cos \varphi \end{alignedat}}}
as the parametertvaries from0to2π. Here(Xc,Yc)is the center of the ellipse, andφis the angle between thex-axis and the major axis of the ellipse.
Both parameterizations may be maderationalby using thetangent half-angle formulaand settingtant2=u.{\textstyle \tan {\frac {t}{2}}=u\,.}
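For example, substituting the half-angle identities (a routine derivation, shown here only as a sketch) turns the canonical ellipse parametrization into a rational one:

```latex
% Sketch: with u = tan(t/2), the half-angle identities give
% cos t = (1 - u^2)/(1 + u^2) and sin t = 2u/(1 + u^2), hence
\[
  x = a\cos t = a\,\frac{1 - u^{2}}{1 + u^{2}},
  \qquad
  y = b\sin t = b\,\frac{2u}{1 + u^{2}},
\]
% a rational parametrization of the ellipse in canonical position
% (the circle case is recovered by taking a = b).
```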
ALissajous curveis similar to an ellipse, but thexandysinusoidsare not in phase. In canonical position, a Lissajous curve is given byx=acos(kxt)y=bsin(kyt){\displaystyle {\begin{aligned}x&=a\,\cos(k_{x}t)\\y&=b\,\sin(k_{y}t)\end{aligned}}}wherekxandkyare constants describing the number of lobes of the figure.
An east-west openinghyperbolacan be represented parametrically by
x=asect+hy=btant+k,{\displaystyle {\begin{aligned}x&=a\sec t+h\\y&=b\tan t+k\,,\end{aligned}}}
or,rationally
x=a1+t21−t2+hy=b2t1−t2+k.{\displaystyle {\begin{aligned}x&=a{\frac {1+t^{2}}{1-t^{2}}}+h\\y&=b{\frac {2t}{1-t^{2}}}+k\,.\end{aligned}}}
A north-south opening hyperbola can be represented parametrically as
x=btant+hy=asect+k,{\displaystyle {\begin{aligned}x&=b\tan t+h\\y&=a\sec t+k\,,\end{aligned}}}
or, rationally
x=b2t1−t2+hy=a1+t21−t2+k.{\displaystyle {\begin{aligned}x&=b{\frac {2t}{1-t^{2}}}+h\\y&=a{\frac {1+t^{2}}{1-t^{2}}}+k\,.\end{aligned}}}
In all these formulae (h, k) are the center coordinates of the hyperbola, a is the length of the semi-major axis, and b is the length of the semi-minor axis. Note that in the rational forms of these formulae, the points (−a, 0) and (0, −a), respectively, are not represented by a real value of t, but are the limit of x and y as t tends to infinity.
Ahypotrochoidis a curve traced by a point attached to a circle of radiusrrolling around the inside of a fixed circle of radiusR, where the point is at a distancedfrom the center of the interior circle.
The parametric equations for the hypotrochoids are:
x(θ)=(R−r)cosθ+dcos(R−rrθ)y(θ)=(R−r)sinθ−dsin(R−rrθ).{\displaystyle {\begin{aligned}x(\theta )&=(R-r)\cos \theta +d\cos \left({R-r \over r}\theta \right)\\y(\theta )&=(R-r)\sin \theta -d\sin \left({R-r \over r}\theta \right)\,.\end{aligned}}}
Some examples:
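For illustration, the following Python sketch traces a few such curves directly from the equations above; the parameter values (R, r, d) are arbitrary choices, not taken from any particular source.

```python
# Hedged sketch: plotting hypotrochoids from the parametric equations above.
import numpy as np
import matplotlib.pyplot as plt

def hypotrochoid(R, r, d, n=2000):
    # The curve closes after theta = 2*pi*r/gcd(R, r).
    theta = np.linspace(0.0, 2 * np.pi * r / np.gcd(int(R), int(r)), n)
    x = (R - r) * np.cos(theta) + d * np.cos((R - r) / r * theta)
    y = (R - r) * np.sin(theta) - d * np.sin((R - r) / r * theta)
    return x, y

for R, r, d in [(5, 3, 5), (6, 1, 2), (9, 4, 3)]:   # illustrative values
    x, y = hypotrochoid(R, r, d)
    plt.plot(x, y, label=f"R={R}, r={r}, d={d}")

plt.axis("equal")
plt.legend()
plt.show()
```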
Parametric equations are convenient for describingcurvesin higher-dimensional spaces. For example:
x=acos(t)y=asin(t)z=bt{\displaystyle {\begin{aligned}x&=a\cos(t)\\y&=a\sin(t)\\z&=bt\,\end{aligned}}}
describes a three-dimensional curve, thehelix, with a radius ofaand rising by2πbunits per turn. The equations are identical in theplaneto those for a circle.
Such expressions as the one above are commonly written as
r(t)=(x(t),y(t),z(t))=(acos(t),asin(t),bt),{\displaystyle {\begin{aligned}\mathbf {r} (t)&=(x(t),y(t),z(t))\\&=(a\cos(t),a\sin(t),bt)\,,\end{aligned}}}
whereris a three-dimensional vector.
Atoruswith major radiusRand minor radiusrmay be defined parametrically as
x=cos(t)(R+rcos(u)),y=sin(t)(R+rcos(u)),z=rsin(u).{\displaystyle {\begin{aligned}x&=\cos(t)\left(R+r\cos(u)\right),\\y&=\sin(t)\left(R+r\cos(u)\right),\\z&=r\sin(u)\,.\end{aligned}}}
where the two parameterstanduboth vary between0and2π.
Asuvaries from0to2πthe point on the surface moves about a short circle passing through the hole in the torus. Astvaries from0to2πthe point on the surface moves about a long circle around the hole in the torus.
The parametric equation of the line through the point(x0,y0,z0){\displaystyle \left(x_{0},y_{0},z_{0}\right)}and parallel to the vectorai^+bj^+ck^{\displaystyle a{\hat {\mathbf {i} }}+b{\hat {\mathbf {j} }}+c{\hat {\mathbf {k} }}}is[7]
x=x0+aty=y0+btz=z0+ct{\displaystyle {\begin{aligned}x&=x_{0}+at\\y&=y_{0}+bt\\z&=z_{0}+ct\end{aligned}}}
Inkinematics, objects' paths through space are commonly described as parametric curves, with each spatial coordinate depending explicitly on an independent parameter (usually time). Used in this way, the set of parametric equations for the object's coordinates collectively constitute avector-valued functionfor position. Such parametric curves can then beintegratedanddifferentiatedtermwise. Thus, if a particle's position is described parametrically asr(t)=(x(t),y(t),z(t)),{\displaystyle \mathbf {r} (t)=(x(t),y(t),z(t))\,,}
then itsvelocitycan be found asv(t)=r′(t)=(x′(t),y′(t),z′(t)),{\displaystyle {\begin{aligned}\mathbf {v} (t)&=\mathbf {r} '(t)\\&=(x'(t),y'(t),z'(t))\,,\end{aligned}}}
and itsaccelerationasa(t)=v′(t)=r″(t)=(x″(t),y″(t),z″(t)).{\displaystyle {\begin{aligned}\mathbf {a} (t)&=\mathbf {v} '(t)=\mathbf {r} ''(t)\\&=(x''(t),y''(t),z''(t))\,.\end{aligned}}}
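As a small illustration (a SymPy sketch reusing the helix from the earlier example), termwise differentiation yields velocity and acceleration:

```python
# Hedged sketch: termwise differentiation of a parametric position vector.
from sympy import symbols, cos, sin, diff, Matrix

t, a, b = symbols("t a b")

# Position of the helix example as a vector-valued function of t.
r = Matrix([a * cos(t), a * sin(t), b * t])

v = diff(r, t)       # velocity:     (-a*sin(t), a*cos(t), b)
acc = diff(r, t, 2)  # acceleration: (-a*cos(t), -a*sin(t), 0)

print(v.T)
print(acc.T)
```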
Another important use of parametric equations is in the field ofcomputer-aided design(CAD).[8]For example, consider the following three representations, all of which are commonly used to describeplanar curves.
Each representation has advantages and drawbacks for CAD applications.
The explicit representation may be very complicated, or even may not exist. Moreover, it does not behave well undergeometric transformations, and in particular underrotations. On the other hand, as a parametric equation and an implicit equation may easily be deduced from an explicit representation, when a simple explicit representation exists, it has the advantages of both other representations.
Implicit representations may make it difficult to generate points on the curve, and even to decide whether there are real points. On the other hand, they are well suited for deciding whether a given point is on a curve, or whether it is inside or outside of a closed curve.
Such decisions may be difficult with a parametric representation, but parametric representations are best suited for generating points on a curve, and for plotting it.[9]
Numerous problems ininteger geometrycan be solved using parametric equations. A classical such solution isEuclid's parametrization ofright trianglessuch that the lengths of their sidesa,band their hypotenusecarecoprime integers. Asaandbare not both even (otherwisea,bandcwould not be coprime), one may exchange them to haveaeven, and the parameterization is then
a=2mnb=m2−n2c=m2+n2,{\displaystyle {\begin{aligned}a&=2mn\\b&=m^{2}-n^{2}\\c&=m^{2}+n^{2}\,,\end{aligned}}}
where the parametersmandnare positive coprime integers that are not both odd.
By multiplyinga,bandcby an arbitrary positive integer, one gets a parametrization of all right triangles whose three sides have integer lengths.
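A short sketch of this parametrization in Python (the bound and output format are arbitrary choices for illustration):

```python
# Hedged sketch: Euclid's parametrization of primitive Pythagorean triples.
from math import gcd

def primitive_triples(limit):
    """Yield (a, b, c) with a = 2mn, b = m^2 - n^2, c = m^2 + n^2,
    for coprime m > n > 0 of opposite parity, with c <= limit."""
    for m in range(2, int(limit ** 0.5) + 1):
        for n in range(1, m):
            if (m - n) % 2 == 1 and gcd(m, n) == 1 and m * m + n * n <= limit:
                yield 2 * m * n, m * m - n * n, m * m + n * n

print(list(primitive_triples(50)))
# e.g. (4, 3, 5), (12, 5, 13), (8, 15, 17), ...
```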
Asystem ofmlinear equationsinnunknowns isunderdeterminedif it has more than one solution. This occurs when thematrixof the system and itsaugmented matrixhave the samerankrandr<n. In this case, one can selectn−runknowns as parameters and represent all solutions as a parametric equation where all unknowns are expressed aslinear combinationsof the selected ones. That is, if the unknowns arex1,…,xn,{\displaystyle x_{1},\ldots ,x_{n},}one can reorder them for expressing the solutions as[10]
x1=β1+∑j=r+1nα1,jxj⋮xr=βr+∑j=r+1nαr,jxjxr+1=xr+1⋮xn=xn.{\displaystyle {\begin{aligned}x_{1}&=\beta _{1}+\sum _{j=r+1}^{n}\alpha _{1,j}x_{j}\\\vdots \\x_{r}&=\beta _{r}+\sum _{j=r+1}^{n}\alpha _{r,j}x_{j}\\x_{r+1}&=x_{r+1}\\\vdots \\x_{n}&=x_{n}.\end{aligned}}}
Such a parametric equation is called aparametric formof the solution of the system.[10]
The standard method for computing a parametric form of the solution is to use Gaussian elimination for computing a reduced row echelon form of the augmented matrix. Then the unknowns that can be used as parameters are the ones that correspond to columns not containing any leading entry (that is, the leftmost nonzero entry in a row of the matrix), and the parametric form can be straightforwardly deduced.[10]
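A brief sketch of this procedure using SymPy; the specific system below is an arbitrary illustrative example, not one from the text:

```python
# Hedged sketch: parametric form of an underdetermined linear system via RREF.
from sympy import Matrix, symbols, linsolve

x1, x2, x3 = symbols("x1 x2 x3")

# Illustrative system: one equation in three unknowns (rank 1 < 3 unknowns).
A = Matrix([[1, 2, -1]])
b = Matrix([4])

augmented = A.row_join(b)
rref, pivots = augmented.rref()
print(rref, pivots)   # pivot in column 0 only, so x2 and x3 can serve as parameters

# linsolve expresses the pivot unknown in terms of the free ones:
print(linsolve((A, b), x1, x2, x3))   # x1 = 4 - 2*x2 + x3, with x2 and x3 free
```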
|
https://en.wikipedia.org/wiki/Parametric_equation
|
The mature minor doctrine is a rule of law found in the United States and Canada accepting that an unemancipated minor patient may possess the maturity to choose or reject a particular health care treatment, sometimes without the knowledge or agreement of parents, and should be permitted to do so.[1] It is now generally considered a form of patients' rights; formerly, the mature minor rule was largely seen as protecting health care providers from criminal and civil claims by parents of minors at least 15 years old.[2]
Jurisdictions maycodifyan age of medical consent, accept the judgment oflicensedproviders regarding an individual minor, or accept a formalcourt decisionfollowing a request that a patient be designated a mature minor, or may rely on some combination. For example, patients at least 16 may be assumed to be mature minors for this purpose,[3]patients aged 13 to 15 may be designated so by licensed providers, and pre-teen patients may be so-designated after evaluation by anagencyorcourt. The mature minor doctrine is sometimes connected with enforcingconfidentialityof minor patients from their parents.[4]
In the United States, a typical statute lists:"Who may consent [or withhold consent for] surgical or medical treatment or procedures."
By definition, a "mature minor" has been found to have thecapacityfordecisional autonomy, or the right to make decisions including whether to undergo risky but potentially life-saving medical decisions alone, without parental approval.[7]By contrast, "medical emancipation" formally releases children from some parental involvement requirements but does not necessarily grant that decision making to children themselves. Pursuant to statute, several jurisdictions grant medical emancipation to a minor who has becomepregnantor requiressexual-healthservices, thereby permitting medical treatment without parental consent and, often, confidentiality from parents. A limitedguardianshipmay be appointed to make medical decisions for the medically emancipated minor and the minor may not be permitted to refuse or even choose treatment.[8]
One significant early U.S. case,Smith v. Seibly, 72 Wn.2d 16, 431P.2d719 (1967), before theWashington Supreme Court, establishes precedent on the mature minor doctrine. The plaintiff, Albert G. Smith, an 18-year-old married father, was suffering frommyasthenia gravis, a progressive disease. Because of this, Smith expressed concern that his wife might become burdened in caring for him, for their existing child and possibly for additional children. On March 9, 1961, while still 18, Smith requested avasectomy. His doctor requiredwritten consent, which Smith provided, and the surgery was performed. Later, after reaching Washington's statutoryage of majority, then 21, the doctor was sued by Smith, who now claimed that he had been a minor and thus unable to grant surgical or medical consent. The Court rejected Smith's argument: "Thus, age, intelligence, maturity, training, experience, economic independence or lack thereof, general conduct as an adult and freedom from the control of parents are all factors to be considered in such a case [involving consent to surgery]."
The court further quoted another recently decided case,Grannum v. Berard, 70 Wn.2d 304, 307, 422P.2d812 (1967): "The mental capacity necessary to consent to a surgical operation is a question of fact to be determined from the circumstances of each individual case." The court explicitly stated that a minor may grant surgical consent even without formal emancipation.
Especially since the 1970s, older pediatric patients sought to make autonomous decisions regarding their own treatment, and sometimes sued successfully to do so.[9]The decades of accumulated evidence tended to demonstrate that children are capable of participating in medical decision-making in a meaningful way;[10][11]and legal and medical communities have demonstrated an increasing willingness to formally affirm decisions made by young people, even regarding life and death.[12]
Religious beliefs have repeatedly influenced a patient's decision to choose treatment or not. In a case in 1989 in Illinois, a 17-year-old femaleJehovah's Witnesswas permitted to refuse necessary life saving treatments.[13]
In 1990, theUnited States Congresspassed thePatient Self-Determination Act; even though key provisions apply only to patients over age 18,[14]the legislation advanced patient involvement in decision-making. TheWest Virginia Supreme Court, inBelcher v. Charleston Area Medical Center(1992) defined a "mature minor" exception toparental consent, according consideration to seven factors to be weighed regarding such a minor: age, ability, experience, education, exhibited judgment, conduct, and appreciation of relevant risks and consequences.[15][16]
The 2000s and 2010s experienced a number of outbreaks of vaccine-preventable diseases, such as the 2019–2020 measles outbreaks, which were fueled in part by vaccine hesitancy. This prompted minors to seek vaccinations over objections from their parents.[17][18] Beginning in the 2020s, during the COVID-19 pandemic, minors also began seeking out the COVID-19 vaccine over the objections of their vaccine-hesitant parents.[19] This has led to proposals and bills allowing minors to consent to be administered any approved vaccine.[20]
The Supreme Court of Canada recognized the mature minor doctrine in 2009 in A.C. v. Manitoba [2009] SCC 30; in provinces and territories lacking relevant statutes, common law is presumed to apply.[21]
Several states permit minors to legally consent to general medical treatment (routine, nonemergency care, especially when the risk of treatment is considered to be low) without parental consent or over parental objections, when the minor is at least 14 years old.[25]In addition, many other states allow minors to consent to medical procedures under a more limited set of circumstances. These include providing limited minor autonomy only in enumerated cases, such asblood donation,substance abuse,sexual and reproductive health(includingabortionandsexually transmitted infections), or for emergency medical services. Many states also exempt specific groups of minors from parental consent, such ashomeless youth,emancipated minors, minor parents, ormarried minors.[26]Further complicating matters is the interaction between state tort law, state contract law, and federal law, depending on if the clinic accepts federal funding underTitle XorMedicaid.[26]
In the United States,bodily integrityhas long been considered acommon law right; The Supreme Court in 1990 (Cruzan v. Director, Missouri Department of Health) allowed that "constitutionally protectedliberty interestin refusing unwanted medical treatment may be inferred" in theDue ProcessClauseof theFourteenth Amendment to the United States Constitution, but the Court refrained from explicitly establishing what would have been a newly enumerated right. Nevertheless, lower courts have increasingly held that competent patients have the right to refuse any treatment for themselves.[31]
In 1989, theSupreme Court of Illinoisinterpreted theSupreme Court of the United Statesto have already adopted major aspects of mature minor doctrine, concluding,
In 2016 the case of "In re Z.M." was heard in Maryland regarding a minor's right to refuse chemotherapy.[33]
In Connecticut, Cassandra C., a seventeen-year-old, was ordered by the Connecticut Supreme Court to receive treatment. The court decided that Cassandra was not mature enough to make medical decisions.[34][13]
In 2009, theSupreme Court of Canadaruling inA.C. v. Manitoba[2009] SCC 30 (CanLII) found that childrenmaymake life and death decisions about their medical treatment. In the majority opinion,JusticeRosalie Abellawrote:
A "dissenting"[35]opinion by JusticeIan Binniewould have gone further:
Analysts note that the Canadian decision merely requires that younger patients be permitted ahearing, and still allows a judge to "decide whether or not to order a medical procedure on an unwilling minor".[37]
|
https://en.wikipedia.org/wiki/Mature_minor_doctrine
|
Bi-quinary coded decimal is a numeral encoding scheme used in many abacuses and in some early computers, notably the Colossus.[2] The term bi-quinary indicates that the code comprises both a two-state (bi) and a five-state (quinary) component. The encoding resembles that used by many abacuses, with four beads indicating the five values either from 0 through 4 or from 5 through 9, and another bead indicating which of those ranges applies (which can alternatively be thought of as +5).
Several human languages, most notablyFulaandWolofalso use biquinary systems. For example, the Fula word for 6,jowi e go'o, literally meansfive [plus] one.Roman numeralsuse a symbolic, rather than positional, bi-quinary base, even thoughLatinis completely decimal.
The Korean finger counting systemChisanbopuses a bi-quinary system, where each finger represents a one and a thumb represents a five, allowing one to count from 0 to 99 with two hands.
One advantage of a bi-quinary encoding scheme on digital computers is that each valid digit must have exactly two bits set (one in the binary field and one in the quinary field), providing a built-in checksum to verify whether a digit is valid. (Stuck bits happened frequently with computers using mechanical relays.)
Several different representations of bi-quinary coded decimal have been used by different machines. The two-state component is encoded as one or twobits, and the five-state component is encoded using three to five bits. Some examples are:
TheIBM 650uses seven bits: twobibits (0 and 5) and fivequinarybits (0, 1, 2, 3, 4), with error checking.
Exactly onebibit and onequinarybit is set in a valid digit. The bi-quinary encoding of the internal workings of the machine are evident in the arrangement of its lights – thebibits form the top of a T for each digit, and thequinarybits form the vertical stem.
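A hedged sketch of this seven-bit style of code in Python (the bit ordering and tuple representation here are illustrative assumptions, not the machine's actual wiring):

```python
# Hedged sketch: IBM 650-style bi-quinary code with two bi bits (0, 5) and
# five quinary bits (0-4). Exactly one bit of each group is set in a valid digit.
def encode(digit):
    """Return (bi_bits, quinary_bits) as tuples of 0/1 for a decimal digit 0-9."""
    hi, lo = divmod(digit, 5)
    bi = tuple(1 if i == hi else 0 for i in range(2))    # selects the 0 or 5 range
    qui = tuple(1 if i == lo else 0 for i in range(5))   # selects 0..4 within the range
    return bi, qui

def is_valid(bi, qui):
    """Built-in check: exactly one bi bit and exactly one quinary bit are set."""
    return sum(bi) == 1 and sum(qui) == 1

def decode(bi, qui):
    return 5 * bi.index(1) + qui.index(1)

assert all(decode(*encode(d)) == d and is_valid(*encode(d)) for d in range(10))
print(encode(7))   # ((0, 1), (0, 0, 1, 0, 0)) -> bi bit "5" plus quinary bit "2"
```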
The Remington Rand 409 has five bits: one quinary bit (tube) for each of 1, 3, 5, and 7 – only one of these would be on at a time. The fifth (bi) bit represented 9 if none of the others were on; otherwise it added 1 to the value represented by the quinary bit that was on. The machine was sold in the two models UNIVAC 60 and UNIVAC 120.
TheUNIVAC Solid Stateuses four bits: onebibit (5), three binary codedquinarybits (4 2 1)[4][5][6][7][8][9]and oneparity check bit
TheUNIVAC LARChas four bits:[9]onebibit (5), threeJohnson counter-codedquinarybits and one parity check bit.
|
https://en.wikipedia.org/wiki/Bi-quinary_coded_decimal
|
Afalse awakeningis a vivid and convincingdreamaboutawakeningfromsleep, while the dreamer in reality continues to sleep. After a false awakening, subjects often dream they are performing their daily morning routine such as showering or eating breakfast. False awakenings, mainly those in which one dreams that they have awoken from a sleep that featured dreams, take on aspects of adouble dreamor adream within a dream. A classic example in fiction is the double false awakening of the protagonist inGogol'sPortrait(1835).
Studies have shown that false awakening is closely related to lucid dreaming, and that the two often transform into one another. The only differentiating feature between them is that the dreamer has a logical understanding of the dream in a lucid dream, while that is not the case in a false awakening.[1]
Once dreamers realize they have falsely awakened, they either wake up or begin lucid dreaming.[1]
A false awakening may occur following a dream or following alucid dream(one in which the dreamer has been aware of dreaming). Particularly, if the false awakening follows a lucid dream, the false awakening may turn into a "pre-lucid dream",[2]that is, one in which the dreamer may start to wonder if they are really awake and may or may not come to the correct conclusion. In a study byHarvardpsychologistDeirdre Barrett, 2,000 dreams from 200 subjects were examined and it was found that false awakenings and lucidity were significantly more likely to occur within the same dream or within different dreams of the same night. False awakenings often preceded lucidity as a cue, but they could also follow the realization of lucidity, often losing it in the process.[3]
Because the mind still dreams after a false awakening, there may be more than one false awakening in a single dream. Subjects may dream they wake up, eat breakfast, brush their teeth, and so on; suddenly awake again in bed (still in a dream), begin morning rituals again, awaken again, and so forth. The philosopherBertrand Russellclaimed to have experienced "about a hundred" false awakenings in succession while coming around from a general anesthetic.[4]
Giorgio Buzzi suggests that FAs may indicate the occasional reappearance of a vestigial (or otherwise anomalous) form of REM sleep in the context of disturbed or hyperaroused sleep (lucid dreaming, sleep paralysis, or situations of high anticipation). This peculiar form of REM sleep permits the replay of unaltered experiential memories, thus providing a unique opportunity to study how waking experiences interact with the hypothesized predictive model of the world. In particular, it could permit a glimpse of the protoconscious world without the distorting effect of ordinary REM sleep.[5]
In accordance with the proposed hypothesis, a high prevalence of FAs could be expected in children, whose "REM sleep machinery" might be less developed.[5]
Gibson's dream protoconsciousness theory states that false awakening is shaped by fixed patterns depicting real activities, especially the day-to-day routine. False awakening is often associated with highly realistic environmental details of familiar events, such as day-to-day activities or autobiographic and episodic moments.[5]
Certain aspects of life may be dramatized or out of place in false awakenings. Things may seem wrong: details such as a painting on a wall may be off, the dreamer may be unable to talk, or may have difficulty reading (reportedly, reading in lucid dreams is often difficult or impossible).[6] A common theme in false awakenings is visiting the bathroom, upon which the dreamer sees that their reflection in the mirror is distorted (which can be an opportunity for lucidity, but usually results in wakefulness).
Celia Greensuggested a distinction should be made between two types of false awakening:[2]
Type 1 is the more common, in which the dreamer seems to wake up, but not necessarily in realistic surroundings; that is, not in their own bedroom. A pre-lucid dream may ensue. More commonly, dreamers will believe they have awakened, and then either genuinely wake up in their own bed or "fall back asleep" in the dream.
A common false awakening is a "late for work" scenario. A person may "wake up" in a typical room, with most things looking normal, and realize they overslept and missed the start time at work or school. Clocks, if found in the dream, will show time indicating that fact. The resulting panic is often strong enough to truly awaken the dreamer (much like from anightmare).
Another common Type 1 example of false awakening can result in bedwetting. In this scenario, the dreamer has had a false awakening and while in the state of dream has performed all the traditional behaviors that precede urinating – arising from bed, walking to the bathroom, and sitting down on the toilet or walking up to a urinal. The dreamer may then urinate and suddenly wake up to find they have wet themselves.
The Type 2 false awakening seems to be considerably less common. Green characterized it as follows:
The subject appears to wake up in a realistic manner but to an atmosphere of suspense.... The dreamer's surroundings may at first appear normal, and they may gradually become aware of something uncanny in the atmosphere, and perhaps of unwanted [unusual] sounds and movements, or they may "awake" immediately to a "stressed" and "stormy" atmosphere. In either case, the end result would appear to be characterized by feelings of suspense, excitement or apprehension.[7]
Charles McCreerydraws attention to the similarity between this description and the description by the German psychopathologistKarl Jaspers(1923) of the so-called "primary delusionary experience" (a general feeling that precedes more specific delusory belief).[8]Jaspers wrote:
Patients feel uncanny and that there is something suspicious afoot. Everything gets anew meaning. The environment is somehow different—not to a gross degree—perception is unaltered in itself but there is some change which envelops everything with a subtle, pervasive and strangely uncertain light.... Something seems in the air which the patient cannot account for, a distrustful, uncomfortable, uncanny tension invades him.[9]
McCreery suggests this phenomenological similarity is not coincidental and results from the idea that both phenomena, the Type 2 false awakening and the primary delusionary experience, are phenomena of sleep.[10]He suggests that the primary delusionary experience, like other phenomena of psychosis such as hallucinations and secondary or specific delusions, represents an intrusion into waking consciousness of processes associated withstage 1 sleep. It is suggested that the reason for these intrusions is that the psychotic subject is in a state ofhyperarousal, a state that can lead to whatIan Oswaldcalled "microsleeps" in waking life.[11]
Other researchers doubt that these are clearly distinguished types, as opposed to being points on a subtle spectrum.[12]
Clinical and neurophysiological descriptions of false awakening are rare. One notable report by Takeuchi et al.[13] was considered by some experts to be a case of false awakening. It depicts a hypnagogic hallucination of an unpleasant and fearful feeling of presence in a sleep laboratory, with the perception of having risen from the bed. The polysomnography showed abundant trains of alpha rhythm on EEG (sometimes blocked by REMs mixed with slow eye movements and low muscle tone). Conversely, the two experiences of FA monitored in that study were close to regular REM sleep. Quantitative analysis likewise shows predominantly theta waves, suggesting that these two experiences are a product of a dreaming rather than a fully conscious brain.[14]
The clinical and neurophysiological characteristics of false awakening are
|
https://en.wikipedia.org/wiki/False_awakening
|
Speech recognitionis aninterdisciplinarysubfield ofcomputer scienceandcomputational linguisticsthat developsmethodologiesand technologies that enable the recognition andtranslationof spoken language into text by computers. It is also known asautomatic speech recognition(ASR),computer speech recognitionorspeech-to-text(STT). It incorporates knowledge and research in thecomputer science,linguisticsandcomputer engineeringfields. The reverse process isspeech synthesis.
Some speech recognition systems require "training" (also called "enrollment") where an individual speaker reads text or isolatedvocabularyinto the system. The system analyzes the person's specific voice and uses it to fine-tune the recognition of that person's speech, resulting in increased accuracy. Systems that do not use training are called "speaker-independent"[1]systems. Systems that use training are called "speaker dependent".
Speech recognition applications includevoice user interfacessuch as voice dialing (e.g. "call home"), call routing (e.g. "I would like to make a collect call"),domoticappliance control, search key words (e.g. find a podcast where particular words were spoken), simple data entry (e.g., entering a credit card number), preparation of structured documents (e.g. a radiology report), determining speaker characteristics,[2]speech-to-text processing (e.g.,word processorsoremails), andaircraft(usually termeddirect voice input). Automaticpronunciation assessmentis used in education such as for spoken language learning.
The termvoice recognition[3][4][5]orspeaker identification[6][7][8]refers to identifying the speaker, rather than what they are saying.Recognizing the speakercan simplify the task oftranslating speechin systems that have been trained on a specific person's voice or it can be used toauthenticateor verify the identity of a speaker as part of a security process.
From the technology perspective, speech recognition has a long history with several waves of major innovations. Most recently, the field has benefited from advances indeep learningandbig data. The advances are evidenced not only by the surge of academic papers published in the field, but more importantly by the worldwide industry adoption of a variety of deep learning methods in designing and deploying speech recognition systems.
The key areas of growth were: vocabulary size, speaker independence, and processing speed.
Raj Reddywas the first person to take on continuous speech recognition as a graduate student atStanford Universityin the late 1960s. Previous systems required users to pause after each word. Reddy's system issued spoken commands for playingchess.
Around this time Soviet researchers invented thedynamic time warping(DTW) algorithm and used it to create a recognizer capable of operating on a 200-word vocabulary.[15]DTW processed speech by dividing it into short frames, e.g. 10ms segments, and processing each frame as a single unit. Although DTW would be superseded by later algorithms, the technique carried on. Achieving speaker independence remained unsolved at this time period.
During the late 1960sLeonard Baumdeveloped the mathematics ofMarkov chainsat theInstitute for Defense Analysis. A decade later, at CMU, Raj Reddy's studentsJames BakerandJanet M. Bakerbegan using thehidden Markov model(HMM) for speech recognition.[20]James Baker had learned about HMMs from a summer job at the Institute of Defense Analysis during his undergraduate education.[21]The use of HMMs allowed researchers to combine different sources of knowledge, such as acoustics, language, and syntax, in a unified probabilistic model.
The 1980s also saw the introduction of then-gramlanguage model.
Much of the progress in the field is owed to the rapidly increasing capabilities of computers. At the end of the DARPA program in 1976, the best computer available to researchers was the PDP-10 with 4 MB of RAM.[28] It could take up to 100 minutes to decode just 30 seconds of speech.[29]
Two practical products were:
By this point, the vocabulary of the typical commercial speech recognition system was larger than the average human vocabulary.[28]Raj Reddy's former student,Xuedong Huang, developed theSphinx-IIsystem at CMU. The Sphinx-II system was the first to do speaker-independent, large vocabulary, continuous speech recognition and it had the best performance in DARPA's 1992 evaluation. Handling continuous speech with a large vocabulary was a major milestone in the history of speech recognition. Huang went on to found thespeech recognition group at Microsoftin 1993. Raj Reddy's studentKai-Fu Leejoined Apple where, in 1992, he helped develop a speech interface prototype for the Apple computer known as Casper.
Lernout & Hauspie, a Belgium-based speech recognition company, acquired several other companies, including Kurzweil Applied Intelligence in 1997 and Dragon Systems in 2000. The L&H speech technology was used in theWindows XPoperating system. L&H was an industry leader until an accounting scandal brought an end to the company in 2001. The speech technology from L&H was bought by ScanSoft which becameNuancein 2005.Appleoriginally licensed software from Nuance to provide speech recognition capability to its digital assistantSiri.[34]
In the 2000s DARPA sponsored two speech recognition programs: Effective Affordable Reusable Speech-to-Text (EARS) in 2002 andGlobal Autonomous Language Exploitation(GALE). Four teams participated in the EARS program:IBM, a team led byBBNwithLIMSIandUniv. of Pittsburgh,Cambridge University, and a team composed ofICSI,SRIandUniversity of Washington. EARS funded the collection of the Switchboard telephonespeech corpuscontaining 260 hours of recorded conversations from over 500 speakers.[35]The GALE program focused onArabicandMandarinbroadcast news speech.Google's first effort at speech recognition came in 2007 after hiring some researchers from Nuance.[36]The first product wasGOOG-411, a telephone based directory service. The recordings from GOOG-411 produced valuable data that helped Google improve their recognition systems.Google Voice Searchis now supported in over 30 languages.
In the United States, theNational Security Agencyhas made use of a type of speech recognition forkeyword spottingsince at least 2006.[37]This technology allows analysts to search through large volumes of recorded conversations and isolate mentions of keywords. Recordings can be indexed and analysts can run queries over the database to find conversations of interest. Some government research programs focused on intelligence applications of speech recognition, e.g. DARPA's EARS's program andIARPA'sBabel program.
In the early 2000s, speech recognition was still dominated by traditional approaches such ashidden Markov modelscombined with feedforwardartificial neural networks.[38]Today, however, many aspects of speech recognition have been taken over by adeep learningmethod calledLong short-term memory(LSTM), arecurrent neural networkpublished bySepp Hochreiter&Jürgen Schmidhuberin 1997.[39]LSTM RNNs avoid thevanishing gradient problemand can learn "Very Deep Learning" tasks[40]that require memories of events that happened thousands of discrete time steps ago, which is important for speech.
Around 2007, LSTM trained by Connectionist Temporal Classification (CTC)[41]started to outperform traditional speech recognition in certain applications.[42]In 2015, Google's speech recognition reportedly experienced a dramatic performance jump of 49% through CTC-trained LSTM, which is now available throughGoogle Voiceto all smartphone users.[43]Transformers, a type of neural network based solely on "attention", have been widely adopted in computer vision[44][45]and language modeling,[46][47]sparking the interest of adapting such models to new domains, including speech recognition.[48][49][50]Some recent papers reported superior performance levels using transformer models for speech recognition, but these models usually require large scale training datasets to reach high performance levels.
The use of deep feedforward (non-recurrent) networks foracoustic modelingwas introduced during the later part of 2009 byGeoffrey Hintonand his students at the University of Toronto and by Li Deng[51]and colleagues at Microsoft Research, initially in the collaborative work between Microsoft and the University of Toronto which was subsequently expanded to include IBM and Google (hence "The shared views of four research groups" subtitle in their 2012 review paper).[52][53][54]A Microsoft research executive called this innovation "the most dramatic change in accuracy since 1979".[55]In contrast to the steady incremental improvements of the past few decades, the application of deep learning decreased word error rate by 30%.[55]This innovation was quickly adopted across the field. Researchers have begun to use deep learning techniques for language modeling as well.
In the long history of speech recognition, both shallow form and deep form (e.g. recurrent nets) of artificial neural networks had been explored for many years during 1980s, 1990s and a few years into the 2000s.[56][57][58]But these methods never won over the non-uniform internal-handcraftingGaussian mixture model/hidden Markov model(GMM-HMM) technology based on generative models of speech trained discriminatively.[59]A number of key difficulties had been methodologically analyzed in the 1990s, including gradient diminishing[60]and weak temporal correlation structure in the neural predictive models.[61][62]All these difficulties were in addition to the lack of big training data and big computing power in these early days. Most speech recognition researchers who understood such barriers hence subsequently moved away from neural nets to pursue generative modeling approaches until the recent resurgence of deep learning starting around 2009–2010 that had overcome all these difficulties. Hinton et al. and Deng et al. reviewed part of this recent history about how their collaboration with each other and then with colleagues across four groups (University of Toronto, Microsoft, Google, and IBM) ignited a renaissance of applications of deep feedforward neural networks for speech recognition.[53][54][63][64]
By the early 2010s, speech recognition, also called voice recognition,[65][66][67] was clearly differentiated from speaker recognition, and speaker independence was considered a major breakthrough. Until then, systems required a "training" period. A 1987 ad for a doll had carried the tagline "Finally, the doll that understands you." – despite the fact that it was described as "which children could train to respond to their voice".[12]
In 2017, Microsoft researchers reached a historic human-parity milestone of transcribing conversational telephony speech on the widely benchmarked Switchboard task. Multiple deep learning models were used to optimize speech recognition accuracy. The speech recognition word error rate was reported to be as low as that of four professional human transcribers working together on the same benchmark, an effort funded by the IBM Watson speech team on the same task.[68]
Bothacoustic modelingandlanguage modelingare important parts of modern statistically based speech recognition algorithms. Hidden Markov models (HMMs) are widely used in many systems. Language modeling is also used in many other natural language processing applications such asdocument classificationorstatistical machine translation.
Modern general-purpose speech recognition systems are based on hidden Markov models. These are statistical models that output a sequence of symbols or quantities. HMMs are used in speech recognition because a speech signal can be viewed as a piecewise stationary signal or a short-time stationary signal. In a short time scale (e.g., 10 milliseconds), speech can be approximated as astationary process. Speech can be thought of as aMarkov modelfor many stochastic purposes.
Another reason why HMMs are popular is that they can be trained automatically and are simple and computationally feasible to use. In speech recognition, the hidden Markov model would output a sequence ofn-dimensional real-valued vectors (withnbeing a small integer, such as 10), outputting one of these every 10 milliseconds. The vectors would consist ofcepstralcoefficients, which are obtained by taking aFourier transformof a short time window of speech and decorrelating the spectrum using acosine transform, then taking the first (most significant) coefficients. The hidden Markov model will tend to have in each state a statistical distribution that is a mixture of diagonal covariance Gaussians, which will give a likelihood for each observed vector. Each word, or (for more general speech recognition systems), eachphoneme, will have a different output distribution; a hidden Markov model for a sequence of words or phonemes is made by concatenating the individual trained hidden Markov models for the separate words and phonemes.
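A rough sketch of the feature extraction described above (plain cepstral coefficients from windowed frames; this simplified version omits the mel filterbank and other refinements used in practice, and the frame sizes are illustrative defaults):

```python
# Hedged sketch: frame a signal, take an FFT per frame, and decorrelate the
# log spectrum with a cosine transform to obtain cepstral-style coefficients.
import numpy as np
from scipy.fft import rfft, dct

def cepstral_features(signal, sample_rate, frame_ms=25, hop_ms=10, n_coeffs=13):
    frame = int(sample_rate * frame_ms / 1000)
    hop = int(sample_rate * hop_ms / 1000)
    window = np.hamming(frame)
    feats = []
    for start in range(0, len(signal) - frame + 1, hop):
        spectrum = np.abs(rfft(signal[start:start + frame] * window))
        log_spectrum = np.log(spectrum + 1e-10)            # avoid log(0)
        feats.append(dct(log_spectrum, norm="ortho")[:n_coeffs])
    return np.array(feats)                                 # shape: (n_frames, n_coeffs)

# Example on a synthetic tone; real systems would use recorded speech.
sr = 16000
t = np.arange(sr) / sr
print(cepstral_features(np.sin(2 * np.pi * 440 * t), sr).shape)
```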
Described above are the core elements of the most common, HMM-based approach to speech recognition. Modern speech recognition systems use various combinations of a number of standard techniques in order to improve results over the basic approach described above. A typical large-vocabulary system would needcontext dependencyfor thephonemes(so that phonemes with different left and right context would have different realizations as HMM states); it would usecepstral normalizationto normalize for a different speaker and recording conditions; for further speaker normalization, it might use vocal tract length normalization (VTLN) for male-female normalization andmaximum likelihood linear regression(MLLR) for more general speaker adaptation. The features would have so-calleddeltaanddelta-delta coefficientsto capture speech dynamics and in addition, might useheteroscedastic linear discriminant analysis(HLDA); or might skip the delta and delta-delta coefficients and usesplicingand anLDA-based projection followed perhaps byheteroscedasticlinear discriminant analysis or aglobal semi-tied co variancetransform (also known asmaximum likelihood linear transform, or MLLT). Many systems use so-called discriminative training techniques that dispense with a purely statistical approach to HMM parameter estimation and instead optimize some classification-related measure of the training data. Examples are maximummutual information(MMI), minimum classification error (MCE), and minimum phone error (MPE).
Decoding of the speech (the term for what happens when the system is presented with a new utterance and must compute the most likely source sentence) would probably use theViterbi algorithmto find the best path, and here there is a choice between dynamically creating a combination hidden Markov model, which includes both the acoustic and language model information and combining it statically beforehand (thefinite state transducer, or FST, approach).
A possible improvement to decoding is to keep a set of good candidates instead of just keeping the best candidate, and to use a better scoring function (rescoring) to rate these good candidates so that we may pick the best one according to this refined score. The set of candidates can be kept either as a list (the N-best list approach) or as a subset of the models (a lattice). Rescoring is usually done by trying to minimize the Bayes risk[69] (or an approximation thereof): instead of taking the source sentence with maximal probability, we try to take the sentence that minimizes the expectation of a given loss function with regard to all possible transcriptions (i.e., we take the sentence that minimizes the average distance to other possible sentences weighted by their estimated probability). The loss function is usually the Levenshtein distance, though it can be a different distance for specific tasks; the set of possible transcriptions is, of course, pruned to maintain tractability. Efficient algorithms have been devised to rescore lattices represented as weighted finite state transducers with edit distances represented themselves as a finite state transducer verifying certain assumptions.[70]
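An illustrative sketch of this rescoring idea over an N-best list (the hypotheses and probabilities below are invented for illustration; real systems operate on lattices with weighted finite state transducers):

```python
# Hedged sketch: minimum-Bayes-risk selection from an N-best list, using
# word-level Levenshtein (edit) distance as the loss function.
def edit_distance(a, b):
    """Levenshtein distance between two word sequences."""
    prev = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        cur = [i]
        for j, wb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (wa != wb)))    # substitution
        prev = cur
    return prev[-1]

def mbr_pick(nbest):
    """nbest: list of (hypothesis_words, probability).
    Return the hypothesis minimizing the expected edit distance."""
    return min(nbest, key=lambda h: sum(p * edit_distance(h[0], other)
                                        for other, p in nbest))[0]

nbest = [("recognize speech".split(), 0.5),
         ("wreck a nice beach".split(), 0.3),
         ("recognized speech".split(), 0.2)]
print(mbr_pick(nbest))   # picks the hypothesis closest on average to the others
```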
Dynamic time warping is an approach that was historically used for speech recognition but has now largely been displaced by the more successful HMM-based approach.
Dynamic time warping is an algorithm for measuring similarity between two sequences that may vary in time or speed. For instance, similarities in walking patterns would be detected, even if in one video the person was walking slowly and in another walking more quickly, or even if there were accelerations and decelerations during the course of one observation. DTW has been applied to video, audio, and graphics – indeed, any data that can be turned into a linear representation can be analyzed with DTW.
A well-known application has been automatic speech recognition, to cope with different speaking speeds. In general, it is a method that allows a computer to find an optimal match between two given sequences (e.g., time series) with certain restrictions. That is, the sequences are "warped" non-linearly to match each other. This sequence alignment method is often used in the context of hidden Markov models.
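A minimal sketch of the DTW recurrence (scalar frames compared with absolute difference; a real ASR front end would compare feature vectors):

```python
# Hedged sketch: classic dynamic time warping between two sequences.
import numpy as np

def dtw_distance(a, b):
    """Minimum cumulative cost of aligning sequences a and b non-linearly."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # step in a only
                                 D[i, j - 1],      # step in b only
                                 D[i - 1, j - 1])  # step in both
    return D[n, m]

# The same "shape" spoken at different speeds still aligns with low cost.
slow = [0, 0, 1, 2, 3, 3, 2, 1, 0, 0]
fast = [0, 1, 2, 3, 2, 1, 0]
print(dtw_distance(slow, fast))   # small cost despite the differing lengths
```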
Neural networks emerged as an attractive acoustic modeling approach in ASR in the late 1980s. Since then, neural networks have been used in many aspects of speech recognition such as phoneme classification,[71]phoneme classification through multi-objective evolutionary algorithms,[72]isolated word recognition,[73]audiovisual speech recognition, audiovisual speaker recognition and speaker adaptation.
Neural networksmake fewer explicit assumptions about feature statistical properties than HMMs and have several qualities making them more attractive recognition models for speech recognition. When used to estimate the probabilities of a speech feature segment, neural networks allow discriminative training in a natural and efficient manner. However, in spite of their effectiveness in classifying short-time units such as individual phonemes and isolated words,[74]early neural networks were rarely successful for continuous recognition tasks because of their limited ability to model temporal dependencies.
One approach to this limitation was to use neural networks as a pre-processing, feature transformation or dimensionality reduction,[75]step prior to HMM based recognition. However, more recently, LSTM and related recurrent neural networks (RNNs),[39][43][76][77]Time Delay Neural Networks(TDNN's),[78]and transformers[48][49][50]have demonstrated improved performance in this area.
Deep neural networks and denoisingautoencoders[79]are also under investigation. A deep feedforward neural network (DNN) is anartificial neural networkwith multiple hidden layers of units between the input and output layers.[53]Similar to shallow neural networks, DNNs can model complex non-linear relationships. DNN architectures generate compositional models, where extra layers enable composition of features from lower layers, giving a huge learning capacity and thus the potential of modeling complex patterns of speech data.[80]
A success of DNNs in large vocabulary speech recognition occurred in 2010 by industrial researchers, in collaboration with academic researchers, where large output layers of the DNN based on context dependent HMM states constructed by decision trees were adopted.[81][82][83]See comprehensive reviews of this development and of the state of the art as of October 2014 in the recent Springer book from Microsoft Research.[84]See also the related background of automatic speech recognition and the impact of various machine learning paradigms, notably includingdeep learning, in
recent overview articles.[85][86]
One fundamental principle ofdeep learningis to do away with hand-craftedfeature engineeringand to use raw features. This principle was first explored successfully in the architecture of deep autoencoder on the "raw" spectrogram or linear filter-bank features,[87]showing its superiority over the Mel-Cepstral features which contain a few stages of fixed transformation from spectrograms.
The true "raw" features of speech, waveforms, have more recently been shown to produce excellent larger-scale speech recognition results.[88]
Since 2014, there has been much research interest in "end-to-end" ASR. Traditional phonetic-based (i.e., all HMM-based model) approaches required separate components and training for the pronunciation, acoustic, and language model. End-to-end models jointly learn all the components of the speech recognizer. This is valuable since it simplifies both the training process and the deployment process. For example, an n-gram language model is required for all HMM-based systems, and a typical n-gram language model often takes several gigabytes of memory, making it impractical to deploy on mobile devices.[89] Consequently, modern commercial ASR systems from Google and Apple (as of 2017[update]) are deployed in the cloud and require a network connection, as opposed to running locally on the device.
The first attempt at end-to-end ASR was withConnectionist Temporal Classification(CTC)-based systems introduced byAlex GravesofGoogle DeepMindand Navdeep Jaitly of theUniversity of Torontoin 2014.[90]The model consisted ofrecurrent neural networksand a CTC layer. Jointly, the RNN-CTC model learns the pronunciation and acoustic model together, however it is incapable of learning the language due toconditional independenceassumptions similar to a HMM. Consequently, CTC models can directly learn to map speech acoustics to English characters, but the models make many common spelling mistakes and must rely on a separate language model to clean up the transcripts. Later,Baiduexpanded on the work with extremely large datasets and demonstrated some commercial success in Chinese Mandarin and English.[91]In 2016,University of OxfordpresentedLipNet,[92]the first end-to-end sentence-level lipreading model, using spatiotemporal convolutions coupled with an RNN-CTC architecture, surpassing human-level performance in a restricted grammar dataset.[93]A large-scale CNN-RNN-CTC architecture was presented in 2018 byGoogle DeepMindachieving 6 times better performance than human experts.[94]In 2019,Nvidialaunched two CNN-CTC ASR models, Jasper and QuarzNet, with an overall performance WER of 3%.[95][96]Similar to other deep learning applications,transfer learninganddomain adaptationare important strategies for reusing and extending the capabilities of deep learning models, particularly due to the high costs of training models from scratch, and the small size of available corpus in many languages and/or specific domains.[97][98][99]
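A toy sketch of the CTC output convention used by such systems (greedy decoding: take the most probable symbol per frame, collapse repeats, then drop the blank; the per-frame probabilities and tiny alphabet below are invented for illustration and say nothing about any particular model):

```python
# Hedged sketch: greedy CTC decoding - argmax per frame, collapse repeated
# symbols, then remove the blank symbol.
import numpy as np

BLANK = "-"
ALPHABET = [BLANK, "a", "c", "t"]

def ctc_greedy_decode(frame_probs):
    best = [ALPHABET[i] for i in np.argmax(frame_probs, axis=1)]
    collapsed = [s for i, s in enumerate(best) if i == 0 or s != best[i - 1]]
    return "".join(s for s in collapsed if s != BLANK)

# Rows are frames; columns are P(blank), P(a), P(c), P(t) - invented numbers.
frame_probs = np.array([
    [0.1, 0.1, 0.7, 0.1],   # c
    [0.1, 0.1, 0.6, 0.2],   # c (repeat, collapsed)
    [0.2, 0.6, 0.1, 0.1],   # a
    [0.7, 0.1, 0.1, 0.1],   # blank (separator)
    [0.1, 0.1, 0.1, 0.7],   # t
])
print(ctc_greedy_decode(frame_probs))   # "cat"
```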
An alternative approach to CTC-based models are attention-based models. Attention-based ASR models were introduced simultaneously by Chan et al. ofCarnegie Mellon UniversityandGoogle Brainand Bahdanau et al. of theUniversity of Montrealin 2016.[100][101]The model named "Listen, Attend and Spell" (LAS), literally "listens" to the acoustic signal, pays "attention" to different parts of the signal and "spells" out the transcript one character at a time. Unlike CTC-based models, attention-based models do not have conditional-independence assumptions and can learn all the components of a speech recognizer including the pronunciation, acoustic and language model directly. This means, during deployment, there is no need to carry around a language model making it very practical for applications with limited memory. By the end of 2016, the attention-based models have seen considerable success including outperforming the CTC models (with or without an external language model).[102]Various extensions have been proposed since the original LAS model. Latent Sequence Decompositions (LSD) was proposed byCarnegie Mellon University,MITandGoogle Brainto directly emit sub-word units which are more natural than English characters;[103]University of OxfordandGoogle DeepMindextended LAS to "Watch, Listen, Attend and Spell" (WLAS) to handle lip reading surpassing human-level performance.[104]
Typically a manual control input, for example by means of a finger control on the steering-wheel, enables the speech recognition system and this is signaled to the driver by an audio prompt. Following the audio prompt, the system has a "listening window" during which it may accept a speech input for recognition.[citation needed]
Simple voice commands may be used to initiate phone calls, select radio stations or play music from a compatible smartphone, MP3 player or music-loaded flash drive. Voice recognition capabilities vary between car make and model. Some of the most recent[when?]car models offer natural-language speech recognition in place of a fixed set of commands, allowing the driver to use full sentences and common phrases. With such systems there is, therefore, no need for the user to memorize a set of fixed command words.[citation needed]
Automaticpronunciationassessment is the use of speech recognition to verify the correctness of pronounced speech,[105]as distinguished from manual assessment by an instructor or proctor.[106]Also called speech verification, pronunciation evaluation, and pronunciation scoring, the main application of this technology is computer-aided pronunciation teaching (CAPT) when combined withcomputer-aided instructionforcomputer-assisted language learning(CALL), speechremediation, oraccent reduction. Pronunciation assessment does not determine unknown speech (as indictationorautomatic transcription) but instead, knowing the expected word(s) in advance, it attempts to verify the correctness of the learner's pronunciation and ideally theirintelligibilityto listeners,[107][108]sometimes along with often inconsequentialprosodysuch asintonation,pitch,tempo,rhythm, andstress.[109]Pronunciation assessment is also used inreading tutoring, for example in products such asMicrosoft Teams[110]and from Amira Learning.[111]Automatic pronunciation assessment can also be used to help diagnose and treatspeech disorderssuch asapraxia.[112]
Assessing authentic listener intelligibility is essential for avoiding inaccuracies fromaccentbias, especially in high-stakes assessments;[113][114][115]from words with multiple correct pronunciations;[116]and from phoneme coding errors in machine-readable pronunciation dictionaries.[117]In 2022, researchers found that some newer speech to text systems, based onend-to-end reinforcement learningto map audio signals directly into words, produce word and phrase confidence scores very closely correlated with genuine listener intelligibility.[118]In theCommon European Framework of Reference for Languages(CEFR) assessment criteria for "overall phonological control", intelligibility outweighs formally correct pronunciation at all levels.[119]
In thehealth caresector, speech recognition can be implemented in front-end or back-end of the medical documentation process. Front-end speech recognition is where the provider dictates into a speech-recognition engine, the recognized words are displayed as they are spoken, and the dictator is responsible for editing and signing off on the document. Back-end or deferred speech recognition is where the provider dictates into adigital dictationsystem, the voice is routed through a speech-recognition machine and the recognized draft document is routed along with the original voice file to the editor, where the draft is edited and report finalized. Deferred speech recognition is widely used in the industry currently.
One of the major issues relating to the use of speech recognition in healthcare is that theAmerican Recovery and Reinvestment Act of 2009(ARRA) provides for substantial financial benefits to physicians who utilize an EMR according to "Meaningful Use" standards. These standards require that a substantial amount of data be maintained by the EMR (now more commonly referred to as anElectronic Health Recordor EHR). The use of speech recognition is more naturally suited to the generation of narrative text, as part of a radiology/pathology interpretation, progress note or discharge summary: the ergonomic gains of using speech recognition to enter structured discrete data (e.g., numeric values or codes from a list or acontrolled vocabulary) are relatively minimal for people who are sighted and who can operate a keyboard and mouse.
A more significant issue is that most EHRs have not been expressly tailored to take advantage of voice-recognition capabilities. A large part of the clinician's interaction with the EHR involves navigation through the user interface using menus, and tab/button clicks, and is heavily dependent on keyboard and mouse: voice-based navigation provides only modest ergonomic benefits. By contrast, many highly customized systems for radiology or pathology dictation implement voice "macros", where the use of certain phrases – e.g., "normal report", will automatically fill in a large number of default values and/or generate boilerplate, which will vary with the type of the exam – e.g., a chest X-ray vs. a gastrointestinal contrast series for a radiology system.
Prolonged use of speech recognition software in conjunction withword processorshas shown benefits to short-term-memory restrengthening inbrain AVMpatients who have been treated withresection. Further research needs to be conducted to determine cognitive benefits for individuals whose AVMs have been treated using radiologic techniques.[citation needed]
Substantial efforts have been devoted in the last decade to the test and evaluation of speech recognition infighter aircraft. Of particular note have been the US program in speech recognition for theAdvanced Fighter Technology Integration (AFTI)/F-16aircraft (F-16 VISTA), the program in France forMirageaircraft, and other programs in the UK dealing with a variety of aircraft platforms. In these programs, speech recognizers have been operated successfully in fighter aircraft, with applications including setting radio frequencies, commanding an autopilot system, setting steer-point coordinates and weapons release parameters, and controlling flight display.
Working with Swedish pilots flying in theJAS-39Gripen cockpit, Englund (2004) found recognition deteriorated with increasingg-loads. The report also concluded that adaptation greatly improved the results in all cases and that the introduction of models for breathing was shown to improve recognition scores significantly. Contrary to what might have been expected, no effects of the broken English of the speakers were found. It was evident that spontaneous speech caused problems for the recognizer, as might have been expected. A restricted vocabulary, and above all, a proper syntax, could thus be expected to improve recognition accuracy substantially.[120]
TheEurofighter Typhoon, currently in service with the UKRAF, employs a speaker-dependent system, requiring each pilot to create a template. The system is not used for any safety-critical or weapon-critical tasks, such as weapon release or lowering of the undercarriage, but is used for a wide range of other cockpit functions. Voice commands are confirmed by visual and/or aural feedback. The system is seen as a major design feature in the reduction of pilotworkload,[121]and even allows the pilot to assign targets to his aircraft with two simple voice commands or to any of his wingmen with only five commands.[122]
Speaker-independent systems are also being developed and are under test for theF-35 Lightning II(JSF) and theAlenia Aermacchi M-346 Masterlead-in fighter trainer. These systems have produced word accuracy scores in excess of 98%.[123]
The problems of achieving high recognition accuracy under stress and noise are particularly relevant in thehelicopterenvironment as well as in the jet fighter environment. The acoustic noise problem is actually more severe in the helicopter environment, not only because of the high noise levels but also because the helicopter pilot, in general, does not wear afacemask, which would reduce acoustic noise in themicrophone. Substantial test and evaluation programs have been carried out in the past decade in speech recognition systems applications in helicopters, notably by theU.S. ArmyAvionics Research and Development Activity (AVRADA) and by the Royal Aerospace Establishment (RAE) in the UK. Work in France has included speech recognition in thePuma helicopter. There has also been much useful work inCanada. Results have been encouraging, and voice applications have included: control of communication radios, setting ofnavigationsystems, and control of an automated target handover system.
As in fighter applications, the overriding issue for voice in helicopters is the impact on pilot effectiveness. Encouraging results are reported for the AVRADA tests, although these represent only a feasibility demonstration in a test environment. Much remains to be done both in speech recognition and in overallspeech technologyin order to consistently achieve performance improvements in operational settings.
Training for air traffic controllers (ATC) represents an excellent application for speech recognition systems. Many ATC training systems currently require a person to act as a "pseudo-pilot", engaging in a voice dialog with the trainee controller, which simulates the dialog that the controller would have to conduct with pilots in a real ATC situation. Speech recognition andsynthesistechniques offer the potential to eliminate the need for a person to act as a pseudo-pilot, thus reducing training and support personnel. In theory, air traffic controller tasks are also characterized by highly structured speech as the primary output of the controller, so the difficulty of the speech recognition task should be reduced. In practice, this is rarely the case. The FAA document 7110.65 details the phrases that should be used by air traffic controllers. While this document gives fewer than 150 examples of such phrases, the number of phrases supported by one simulation vendor's speech recognition system is in excess of 500,000.
The USAF, USMC, US Army, US Navy, and FAA as well as a number of international ATC training organizations such as the Royal Australian Air Force and Civil Aviation Authorities in Italy, Brazil, and Canada are currently using ATC simulators with speech recognition from a number of different vendors.[citation needed]
ASR is now commonplace in the field oftelephonyand is becoming more widespread in the field ofcomputer gamingand simulation. In telephony systems, ASR is now being predominantly used in contact centers by integrating it withIVRsystems. Despite the high level of integration with word processing in general personal computing, in the field of document production, ASR has not seen the expected increases in use.
The improvement of mobile processor speeds has made speech recognition practical insmartphones. Speech is used mostly as a part of a user interface, for creating predefined or custom speech commands.
People with disabilities can benefit from speech recognition programs. For individuals who are Deaf or Hard of Hearing, speech recognition software is used to automatically generate closed captioning of conversations such as discussions in conference rooms, classroom lectures, and religious services.[124]
Students who are blind (seeBlindness and education) or have very low vision can benefit from using the technology to convey words and then hear the computer recite them, as well as use a computer by commanding with their voice, instead of having to look at the screen and keyboard.[125]
Students who are physically disabled or have aRepetitive strain injuryor other injuries to the upper extremities can be relieved of having to worry about handwriting, typing, or working with a scribe on school assignments by using speech-to-text programs. They can also utilize speech recognition technology to search the Internet or use a computer at home without having to physically operate a mouse and keyboard.[125]
Speech recognition can allow students with learning disabilities to become better writers. By saying the words aloud, they can increase the fluidity of their writing, and be alleviated of concerns regarding spelling, punctuation, and other mechanics of writing.[126]Also, seeLearning disability.
The use of voice recognition software, in conjunction with a digital audio recorder and a personal computer running word-processing software, has proven to be positive for restoring damaged short-term memory capacity in stroke and craniotomy patients.
Speech recognition is also very useful for people who have difficulty using their hands, ranging from mild repetitive stress injuries to involved disabilities that preclude using conventional computer input devices. In fact, people who used the keyboard a lot and developedRSIbecame an urgent early market for speech recognition.[127][128]Speech recognition is used indeaftelephony, such as voicemail to text,relay services, andcaptioned telephone. Individuals with learning disabilities who have problems with thought-to-paper communication (essentially they think of an idea but it is processed incorrectly, causing it to end up differently on paper) can possibly benefit from the software, but the technology is not bug-proof.[129]Speech-to-text can also be hard for intellectually disabled persons to adopt, because it is rare for anyone to take the time to learn the technology in order to teach it to the person with the disability.[130]
This type of technology can help those with dyslexia, but its usefulness for other disabilities remains in question. The main problem hindering its effectiveness is recognition accuracy: depending on how clearly a child says a word, the technology may think they are saying another word and input the wrong one, giving them more work to fix and more time spent correcting the wrong word.[131]
The performance of speech recognition systems is usually evaluated in terms of accuracy and speed.[136][137]Accuracy is usually rated withword error rate(WER), whereas speed is measured with thereal time factor. Other measures of accuracy includeSingle Word Error Rate(SWER) andCommand Success Rate(CSR).
Speech recognition by machine is a very complex problem, however. Vocalizations vary in terms of accent, pronunciation, articulation, roughness, nasality, pitch, volume, and speed. Speech is further distorted by background noise, echoes, and the electrical characteristics of the recording channel. Accuracy of speech recognition may vary depending on the following factors:[138][citation needed]
With discontinuous speech, full sentences separated by silence are used, so the speech becomes easier to recognize, much as with isolated speech. With continuous speech, naturally spoken sentences are used, so the speech becomes harder to recognize than either isolated or discontinuous speech.
Constraints are often represented by a grammar.
Speech recognition is a multi-leveled pattern recognition task: higher levels (e.g., known word pronunciations or legal word sequences) can compensate for errors or uncertainties at a lower level.
For telephone speech the sampling rate is 8000 samples per second, and features are computed every 10 ms, with one 10 ms section called a frame.
Four-step neural network approaches can be explained starting from how sound is represented. Sound is produced by the vibration of air (or some other medium); our ears register it, while machines register it with receivers. A basic sound creates a wave with two characteristics:amplitude(how strong it is) andfrequency(how often it vibrates per second).
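As a minimal sketch of the last two points (it assumes NumPy is available and uses a synthetic 440 Hz tone in place of real speech), the following slices an 8000-samples-per-second signal into 10 ms frames and computes one energy feature per frame:

```python
import numpy as np

SAMPLE_RATE = 8000          # telephone-quality speech: 8000 samples per second
FRAME_MS = 10               # one frame = 10 ms
FRAME_LEN = SAMPLE_RATE * FRAME_MS // 1000   # 80 samples per frame

# Hypothetical test signal: one second of a 440 Hz tone standing in for speech.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
signal = 0.5 * np.sin(2 * np.pi * 440 * t)   # amplitude 0.5, frequency 440 Hz

# Split into non-overlapping 10 ms frames and compute a per-frame energy feature.
n_frames = len(signal) // FRAME_LEN
frames = signal[: n_frames * FRAME_LEN].reshape(n_frames, FRAME_LEN)
frame_energy = (frames ** 2).mean(axis=1)    # one feature value per frame

print(f"{n_frames} frames of {FRAME_LEN} samples; mean energy {frame_energy.mean():.4f}")
```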
Accuracy can be computed with the help of the word error rate (WER). The word error rate can be calculated by aligning the recognized word sequence with the reference word sequence using dynamic string alignment. A complication in computing the word error rate is that the recognized word sequence and the reference word sequence can have different lengths.
The formula to compute the word error rate (WER) is:
WER = (s + d + i) / n
where s is the number of substitutions, d is the number of deletions, i is the number of insertions, and n is the number of words in the reference.
When computing accuracy, the word recognition rate (WRR) is also used. The formula is:
WRR = 1 − WER = (n − s − d − i) / n = (h − i) / n
where h = n − (s + d) is the number of correctly recognized words.
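As a concrete illustration of these formulas (a minimal sketch assuming whitespace-tokenised transcripts, not a production scorer), the following aligns a recognized word sequence against a reference with dynamic programming and derives WER and WRR from the substitution, deletion, and insertion counts:

```python
def wer_wrr(reference, hypothesis):
    """Align hypothesis against reference by dynamic programming (edit distance)
    and return (WER, WRR, substitutions, deletions, insertions)."""
    ref, hyp = reference.split(), hypothesis.split()
    n, m = len(ref), len(hyp)

    # dist[i][j] = minimum edit cost to align ref[:i] with hyp[:j]
    dist = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dist[i][0] = i                      # deleting i reference words
    for j in range(1, m + 1):
        dist[0][j] = j                      # inserting j hypothesis words
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = dist[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dist[i][j] = min(sub, dist[i - 1][j] + 1, dist[i][j - 1] + 1)

    # Backtrace through the table to count substitutions, deletions, insertions.
    s = d = ins = 0
    i, j = n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and dist[i][j] == dist[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]):
            s += ref[i - 1] != hyp[j - 1]
            i, j = i - 1, j - 1
        elif i > 0 and dist[i][j] == dist[i - 1][j] + 1:
            d, i = d + 1, i - 1
        else:
            ins, j = ins + 1, j - 1

    wer = (s + d + ins) / n
    h = n - s - d                           # correctly recognized words
    wrr = (h - ins) / n                     # equivalently 1 - WER
    return wer, wrr, s, d, ins

print(wer_wrr("the cat sat on the mat", "the cat sat mat today"))
```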
Speech recognition can become a means of attack, theft, or accidental operation. For example, activation words like "Alexa" spoken in an audio or video broadcast can cause devices in homes and offices to start listening for input inappropriately, or possibly take an unwanted action.[140]Voice-controlled devices are also accessible to visitors to the building, or even those outside the building if they can be heard inside. Attackers may be able to gain access to personal information, like calendar, address book contents, private messages, and documents. They may also be able to impersonate the user to send messages or make online purchases.
Two attacks have been demonstrated that use artificial sounds. One transmits ultrasound and attempts to send commands without nearby people noticing.[141]The other adds small, inaudible distortions to other speech or music that are specially crafted to confuse the specific speech recognition system into recognizing music as speech, or to make what sounds like one command to a human sound like a different command to the system.[142]
Popular speech recognition conferences held each year or two include SpeechTEK and SpeechTEK Europe,ICASSP,Interspeech/Eurospeech, and the IEEE ASRU. Conferences in the field ofnatural language processing, such asACL,NAACL, EMNLP, and HLT, are beginning to include papers onspeech processing. Important journals include theIEEETransactions on Speech and Audio Processing (later renamedIEEETransactions on Audio, Speech and Language Processing and since Sept 2014 renamedIEEE/ACM Transactions on Audio, Speech and Language Processing—after merging with an ACM publication), Computer Speech and Language, and Speech Communication.
Books like "Fundamentals of Speech Recognition" byLawrence Rabinercan be useful to acquire basic knowledge but may not be fully up to date (1993). Another good source can be "Statistical Methods for Speech Recognition" byFrederick Jelinekand "Spoken Language Processing (2001)" byXuedong Huangetc., "Computer Speech", byManfred R. Schroeder, second edition published in 2004, and "Speech Processing: A Dynamic and Optimization-Oriented Approach" published in 2003 by Li Deng and Doug O'Shaughnessey. The updated textbookSpeech and Language Processing(2008) byJurafskyand Martin presents the basics and the state of the art for ASR.Speaker recognitionalso uses the same features, most of the same front-end processing, and classification techniques as is done in speech recognition. A comprehensive textbook, "Fundamentals of Speaker Recognition" is an in depth source for up to date details on the theory and practice.[143]A good insight into the techniques used in the best modern systems can be gained by paying attention to government sponsored evaluations such as those organised byDARPA(the largest speech recognition-related project ongoing as of 2007 is the GALE project, which involves both speech recognition and translation components).
A good and accessible introduction to speech recognition technology and its history is provided by the general audience book "The Voice in the Machine. Building Computers That Understand Speech" byRoberto Pieraccini(2012).
The most recent book on speech recognition isAutomatic Speech Recognition: A Deep Learning Approach(Publisher: Springer) written by Microsoft researchers D. Yu and L. Deng and published near the end of 2014, with highly mathematically oriented technical detail on how deep learning methods are derived and implemented in modern speech recognition systems based on DNNs and related deep learning methods.[84]A related book, published earlier in 2014, "Deep Learning: Methods and Applications" by L. Deng and D. Yu provides a less technical but more methodology-focused overview of DNN-based speech recognition during 2009–2014, placed within the more general context of deep learning applications including not only speech recognition but also image recognition, natural language processing, information retrieval, multimodal processing, and multitask learning.[80]
In terms of freely available resources,Carnegie Mellon University'sSphinxtoolkit is one place to start to both learn about speech recognition and to start experimenting. Another resource (free but copyrighted) is theHTKbook (and the accompanying HTK toolkit). For more recent and state-of-the-art techniques, theKalditoolkit can be used.[144]In 2017Mozillalaunched the open source project calledCommon Voice[145]to gather a large database of voices to help build the free speech recognition project DeepSpeech (available free atGitHub),[146]using Google's open source platformTensorFlow.[147]When Mozilla redirected funding away from the project in 2020, it was forked by its original developers as Coqui STT,[148]using the same open-source license.[149][150]
GoogleGboardsupports speech recognition on allAndroidapplications. It can be activated through themicrophoneicon.[151]Speech recognition can be activated inMicrosoft Windowsoperating systems by pressing Windows logo key + Ctrl + S.[152]
Commercial cloud-based speech recognition APIs are broadly available.
For more software resources, seeList of speech recognition software.
|
https://en.wikipedia.org/wiki/Speech_recognition
|
Infunctional analysis, anF-spaceis avector spaceX{\displaystyle X}over therealorcomplexnumbers together with ametricd:X×X→R{\displaystyle d:X\times X\to \mathbb {R} }such that: scalar multiplication is continuous with respect to d and the standard metric on the scalars; addition is continuous with respect to d; the metric is translation-invariant, i.e. d(x + a, y + a) = d(x, y) for all x, y, a in X; and the metric space (X, d) is complete.
The operationx↦‖x‖:=d(0,x){\displaystyle x\mapsto \|x\|:=d(0,x)}is called anF-norm, although in general an F-norm is not required to be homogeneous. Bytranslation-invariance, the metric is recoverable from the F-norm. Thus, a real or complex F-space is equivalently a real or complex vector space equipped with a complete F-norm.
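As a short worked step (standard, and implicit in the translation-invariance property rather than spelled out here): since d(x + a, y + a) = d(x, y) for all x, y, a in X, choosing a = −x gives d(x, y) = d(0, y − x) = ‖y − x‖, which is exactly how the metric is recovered from the F-norm.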
Some authors use the termFréchet spacerather thanF-space, but usually the term "Fréchet space" is reserved forlocally convexF-spaces.
Some other authors use the term "F-space" as a synonym of "Fréchet space", by which they mean a locally convex complete metrizabletopological vector space.
The metric may or may not necessarily be part of the structure on an F-space; many authors only require that such a space bemetrizablein a manner that satisfies the above properties.
AllBanach spacesandFréchet spacesare F-spaces. In particular, a Banach space is an F-space with an additional requirement thatd(ax,0)=|a|d(x,0).{\displaystyle d(ax,0)=|a|d(x,0).}[1]
TheLpspacescan be made into F-spaces for allp≥0{\displaystyle p\geq 0}and forp≥1{\displaystyle p\geq 1}they can be made into locally convex and thus Fréchet spaces and even Banach spaces.
L12[0,1]{\displaystyle L^{\frac {1}{2}}[0,\,1]}is an F-space. It admits no continuous seminorms and no continuous linear functionals — it has trivialdual space.
LetWp(D){\displaystyle W_{p}(\mathbb {D} )}be the space of all complex valuedTaylor seriesf(z)=∑n≥0anzn{\displaystyle f(z)=\sum _{n\geq 0}a_{n}z^{n}}on the unit discD{\displaystyle \mathbb {D} }such that∑n|an|p<∞{\displaystyle \sum _{n}\left|a_{n}\right|^{p}<\infty }then for0<p<1,{\displaystyle 0<p<1,}Wp(D){\displaystyle W_{p}(\mathbb {D} )}are F-spaces under thep-norm:‖f‖p=∑n|an|p(0<p<1).{\displaystyle \|f\|_{p}=\sum _{n}\left|a_{n}\right|^{p}\qquad (0<p<1).}
In fact,Wp{\displaystyle W_{p}}is aquasi-Banach algebra. Moreover, for anyζ{\displaystyle \zeta }with|ζ|≤1{\displaystyle |\zeta |\leq 1}the mapf↦f(ζ){\displaystyle f\mapsto f(\zeta )}is a bounded linear (multiplicative functional) onWp(D).{\displaystyle W_{p}(\mathbb {D} ).}
Theorem[2][3](Klee (1952))—Letd{\displaystyle d}beany[note 1]metric on a vector spaceX{\displaystyle X}such that the topologyτ{\displaystyle \tau }induced byd{\displaystyle d}onX{\displaystyle X}makes(X,τ){\displaystyle (X,\tau )}into a topological vector space. If(X,d){\displaystyle (X,d)}is a complete metric space then(X,τ){\displaystyle (X,\tau )}is acomplete topological vector space.
Theopen mapping theoremimplies that ifτandτ2{\displaystyle \tau {\text{ and }}\tau _{2}}are topologies onX{\displaystyle X}that make both(X,τ){\displaystyle (X,\tau )}and(X,τ2){\displaystyle \left(X,\tau _{2}\right)}intocompletemetrizable topological vector spaces(for example, Banach orFréchet spaces) and if one topology isfiner or coarserthan the other then they must be equal (that is, ifτ⊆τ2orτ2⊆τthenτ=τ2{\displaystyle \tau \subseteq \tau _{2}{\text{ or }}\tau _{2}\subseteq \tau {\text{ then }}\tau =\tau _{2}}).[4]
|
https://en.wikipedia.org/wiki/F-space
|
Web log analysis software(also called aweb log analyzer) is a kind ofweb analyticssoftware that parses aserver log filefrom aweb server, and based on the values contained in the log file, derives indicators about when, how, and by whom a web server is visited. Reports are usually generated immediately, but data extracted from the log files can alternatively be stored in a database, allowing various reports to be generated on demand.
Features supported by log analysis packages may include "hit filters", which use pattern matching to examine selected log data.[citation needed]
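As an illustrative sketch (not modelled on any particular product; it assumes logs in the Common Log Format and uses a made-up "hit filter" pattern), the core of such a tool is to parse each log line into fields, apply the hit filter, and aggregate indicators about when, how, and by whom the server was visited:

```python
import re
from collections import Counter

# Common Log Format: host ident authuser [date] "request" status bytes
LOG_PATTERN = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<date>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<size>\S+)'
)

# Hypothetical "hit filter": skip static assets so only page views are counted.
HIT_FILTER = re.compile(r'\.(png|jpg|gif|css|js)$')

def summarize(log_lines):
    pages, visitors = Counter(), Counter()
    for line in log_lines:
        m = LOG_PATTERN.match(line)
        if not m or HIT_FILTER.search(m['path']):
            continue                       # skip unparsable lines and filtered hits
        pages[m['path']] += 1              # what was visited, and how often
        visitors[m['host']] += 1           # by whom (per-host hit counts)
    return pages, visitors

sample = ['203.0.113.7 - - [10/Oct/2023:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 2326']
print(summarize(sample))
```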
|
https://en.wikipedia.org/wiki/Web_log_analysis_software
|
AnInternet filterissoftwarethat restricts or controls the content an Internet user is capable of accessing, especially when utilized to restrict material delivered over theInternetvia theWeb,Email, or other means. Such restrictions can be applied at various levels: a government can attempt to apply them nationwide (seeInternet censorship), or they can, for example, be applied by anInternet service providerto its clients, by an employer to its personnel,by a schoolto its students, by a library to its visitors, by a parent to a child's computer, or by anindividual user to their own computers. The motive is often to prevent access to content which the computer's owner(s) or other authorities may consider objectionable. When imposed without the consent of the user, content control can be characterised as a form of internet censorship. Some filter software includes time control functions that empower parents to set the amount of time that a child may spend accessing the Internet or playing games or other computer activities.
The term "content control" is used on occasion byCNN,[1]Playboymagazine,[2]theSan Francisco Chronicle,[3]andThe New York Times.[4]However, several other terms, including "content filtering software", "web content filter", "filtering proxy servers", "secure web gateways", "censorware", "content security and control", "web filteringsoftware", "content-censoring software", and "content-blockingsoftware", are often used. "Nannyware" has also been used in both product marketing and by the media. Industry research companyGartneruses"secure web gateway"(SWG) to describe the market segment.[5]
Companies that make products that selectivelyblockWeb sites do not refer to these products as censorware, and prefer terms such as "Internet filter" or "URL Filter"; in the specialized case of software specifically designed to allow parents to monitor and restrict the access of their children, "parental control software" is also used. Some products log all sites that a user accesses and rates them based on content type for reporting to an "accountability partner" of the person's choosing, and the termaccountability softwareis used. Internet filters, parental control software, and/or accountability software may also be combined into one product.
Those critical of such software, however, use the term "censorware" freely: consider the Censorware Project, for example.[6]The use of the term "censorware" in editorials criticizing makers of such software is widespread and covers many different varieties and applications:Xeni Jardinused the term in a 9 March 2006 editorial inThe New York Times,when discussing the use of American-made filtering software to suppress content in China; in the same month a high school student used the term to discuss the deployment of such software in his school district.[7][8]
In general, outside of editorial pages as described above, traditional newspapers do not use the term "censorware" in their reporting, preferring instead to use less overtly controversial terms such as "content filter", "content control", or "web filtering";The New York TimesandThe Wall Street Journalboth appear to follow this practice. On the other hand, Web-based newspapers such asCNETuse the term in both editorial and journalistic contexts, for example "Windows Live to Get Censorware."[9]
Filters can be implemented in many different ways: by software on a personal computer, via network infrastructure such asproxy servers,DNSservers, orfirewallsthat provide Internet access. No solution provides complete coverage, so most companies deploy a mix of technologies to achieve the proper content control in line with their policies.
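As a minimal sketch of one of these mechanisms (DNS-level filtering; the blocklist entries and sinkhole address are hypothetical, and a real deployment would run inside an actual resolver rather than a lookup table):

```python
# A toy DNS "sinkhole": blocked domains resolve to a non-routable address,
# so clients never reach the real server.
BLOCKED_DOMAINS = {"ads.example", "tracker.example"}   # hypothetical blocklist
SINKHOLE = "0.0.0.0"

def resolve(hostname: str, upstream_lookup) -> str:
    """Return a sinkhole address for blocked domains, else defer to a real resolver."""
    parts = hostname.lower().rstrip(".").split(".")
    # Check the domain itself and every parent domain against the blocklist.
    for i in range(len(parts)):
        if ".".join(parts[i:]) in BLOCKED_DOMAINS:
            return SINKHOLE
    return upstream_lookup(hostname)

print(resolve("www.ads.example", lambda h: "198.51.100.10"))  # -> 0.0.0.0
```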
The Internet does not intrinsically provide content blocking, and therefore it carries much content that is considered unsuitable for children, including material that is certified as suitable for adults only, e.g. 18-rated games and movies.
Internet service providers(ISPs) that block material containingpornography, or controversial religious, political, or news-related content en route are often utilized by parents who do not permit their children to access content not conforming totheir personal beliefs. Content filtering software can, however, also be used to blockmalwareand other content that is or contains hostile, intrusive, or annoying material includingadware,spam,computer viruses,worms,trojan horses, andspyware.
Most content control software is marketed to organizations or parents. It is, however, also marketed on occasion to facilitate self-censorship, for example by people struggling with addictions toonline pornography, gambling, chat rooms, etc. Self-censorship software may also be utilised by some in order to avoid viewing content they consider immoral, inappropriate, or simply distracting. A number ofaccountability softwareproducts are marketed asself-censorshiporaccountability software. These are often promoted by religious media and atreligious gatherings.[17]
A filter that is overly zealous at filtering content, or that mislabels content not intended to be censored, can result in over-blocking, or over-censoring. Over-blocking can filter out material that should be acceptable under the filtering policy in effect; for example, health-related information may unintentionally be filtered along withporn-related material because of theScunthorpe problem. Filter administrators may prefer to err on the side of caution by accepting over-blocking to prevent any risk of access to sites that they determine to be undesirable. Content-control software was mentioned as blocking access to Beaver College before its name change toArcadia University.[18]Another example was the filtering ofHorniman Museum.[19]As well, over-blocking may encourage users to bypass the filter entirely.
Whenever new information is uploaded to the Internet, filters can under block, or under-censor, content if the parties responsible for maintaining the filters do not update them quickly and accurately, and a blacklisting rather than a whitelisting filtering policy is in place.[20]
Many[21]would not be satisfied with government filtering viewpoints on moral or political issues, agreeing that this could become support forpropaganda. Many[22]would also find it unacceptable that an ISP, whether by law or by the ISP's own choice, should deploy such software without allowing the users to disable the filtering for their own connections. In the United States, theFirst Amendment to the United States Constitutionhas been cited in calls to criminalise forced internet censorship. (Seesection below)
In 1998, a United States federal district court in Virginia ruled (Loudoun v. Board of Trustees of the Loudoun County Library) that the imposition of mandatory filtering in a public library violates the First Amendment.[23]
In 1996 the US Congress passed theCommunications Decency Act, banning indecency on the Internet. Civil liberties groups challenged the law under the First Amendment, and in 1997 theSupreme Courtruled in their favor.[24]Part of the civil liberties argument, especially from groups like theElectronic Frontier Foundation,[25]was that parents who wanted to block sites could use their own content-filtering software, making government involvement unnecessary.[26]
In the late 1990s, groups such as the Censorware Project began reverse-engineering the content-control software and decrypting the blacklists to determine what kind of sites the software blocked. This led to legal action alleging violation of the "Cyber Patrol"license agreement.[27]They discovered that such tools routinely blocked unobjectionable sites while also failing to block intended targets.
Some content-control software companies responded by claiming that their filtering criteria were backed by intensive manual checking. The companies' opponents argued, on the other hand, that performing the necessary checking would require resources greater than the companies possessed and that therefore their claims were not valid.[28]
TheMotion Picture Associationsuccessfully obtained a UK ruling enforcing ISPs to use content-control software to preventcopyright infringementby their subscribers.[29]
Many types of content-control software have been shown to block sites based on the religious and political leanings of the company owners. Examples include blocking several religious sites[30][31](including the Web site of the Vatican), many political sites, and homosexuality-related sites.[32]X-Stopwas shown to block sites such as theQuakerweb site, theNational Journal of Sexual Orientation Law,The Heritage Foundation, and parts ofThe Ethical Spectacle.[33]CYBERsitter blocks out sites likeNational Organization for Women.[34]Nancy Willard, an academic researcher and attorney, pointed out that many U.S. public schools and libraries use the same filtering software that many Christian organizations use.[35]Cyber Patrol, a product developed by The Anti-Defamation League and Mattel's The Learning Company,[36]has been found to block not only political sites it deems to be engaging in 'hate speech' but also human rights web sites, such as Amnesty International's web page about Israel and gay-rights web sites, such as glaad.org.[37]
Content labeling may be considered another form of content-control software. In 1994, theInternet Content Rating Association(ICRA) — now part of theFamily Online Safety Institute— developed a content rating system for online content providers. Using an online questionnaire a webmaster describes the nature of their web content. A small file is generated that contains a condensed, computer readable digest of this description that can then be used by content filtering software to block or allow that site.
ICRA labels come in a variety of formats.[38]These include the World Wide Web Consortium'sResource Description Framework(RDF) as well asPlatform for Internet Content Selection(PICS) labels used byMicrosoft'sInternet ExplorerContent Advisor.[39]
ICRA labels are an example of self-labeling. Similarly, in 2006 theAssociation of Sites Advocating Child Protection (ASACP)initiated the Restricted to Adults self-labeling initiative. ASACP members were concerned that various forms of legislation being proposed in theUnited Stateswere going to have the effect of forcing adult companies to label their content.[40]The RTA label, unlike ICRA labels, does not require a webmaster to fill out a questionnaire or sign up to use. Like ICRA the RTA label is free. Both labels are recognized by awide variety of content-control software.
TheVoluntary Content Rating(VCR) system was devised bySolid Oak Softwarefor theirCYBERsitterfiltering software, as an alternative to the PICS system, which some critics deemed too complex. It employsHTMLmetadatatags embedded within web page documents to specify the type of content contained in the document. Only two levels are specified,matureandadult, making the specification extremely simple.
The Australian Internet Safety Advisory Body has information about "practical advice on Internet safety, parental control and filters for the protection of children, students and families" that also includes public libraries.[41]
NetAlert, the software made available free of charge by the Australian government, was allegedly cracked by a 16-year-old student, Tom Wood, less than a week after its release in August 2007. Wood supposedly bypassed the $84 million filter in about half an hour to highlight problems with the government's approach to Internet content filtering.[42]
The Australian Government has introduced legislation that requires ISPs to "restrict access to age restricted content (commercial MA15+ content and R18+ content) either hosted in Australia or provided from Australia" that was due to commence from 20 January 2008, known asCleanfeed.[43]
Cleanfeed is a proposed mandatory ISP level content filtration system. It was proposed by theBeazleyledAustralian Labor Partyopposition in a 2006 press release, with the intention of protecting children who were vulnerable due to claimed parental computer illiteracy. It was announced on 31 December 2007 as a policy to be implemented by theRuddALP government, and initial tests inTasmaniahave produced a 2008 report. Cleanfeed is funded in the current budget, and is moving towards an Expression of Interest for live testing with ISPs in 2008. Public opposition and criticism have emerged, led by theEFAand gaining irregular mainstream media attention, with a majority of Australians reportedly "strongly against" its implementation.[44]Criticisms include its expense, inaccuracy (it will be impossible to ensure only illegal sites are blocked) and the fact that it will be compulsory, which can be seen as an intrusion on free speech rights.[44]Another major criticism point has been that although the filter is claimed to stop certain materials, the underground rings dealing in such materials will not be affected. The filter might also provide a false sense of security for parents, who might supervise children less while using the Internet, achieving the exact opposite effect.[original research?]Cleanfeed is a responsibility ofSenator Conroy'sportfolio.
InDenmarkit is stated policy that it will "prevent inappropriate Internet sites from being accessed from children's libraries across Denmark".[45]"'It is important that every library in the country has the opportunity to protect children against pornographic material when they are using library computers. It is a main priority for me as Culture Minister to make sure children can surf the net safely at libraries,' states Brian Mikkelsen in a press-release of the Danish Ministry of Culture."[46]
Many libraries in the UK such as theBritish Library[47]andlocal authoritypublic libraries[48]apply filters to Internet access. According to research conducted by the Radical Librarians Collective, at least 98% of public libraries apply filters; including categories such as "LGBT interest", "abortion" and "questionable".[49]Some public libraries blockPayday loanwebsites[50]
The use of Internet filters or content-control software varies widely in public libraries in the United States, since Internet use policies are established by the local library board. Many libraries adopted Internet filters after Congress conditioned the receipt of universal service discounts on the use of Internet filters through theChildren's Internet Protection Act(CIPA). Other libraries do not install content control software, believing that acceptable use policies and educational efforts address the issue of children accessingage-inappropriatecontent while preserving adult users' right to freely access information. Some libraries use Internet filters on computers used by children only. Some libraries that employ content-control software allow the software to be deactivated on a case-by-case basis on application to a librarian; libraries that are subject to CIPA are required to have a policy that allows adults to request that the filter be disabled without having to explain the reason for their request.
Many legal scholars believe that a number of legal cases, in particularReno v. American Civil Liberties Union, established that the use of content-control software in libraries is a violation of the First Amendment.[51]The Children's Internet Protection Act [CIPA] and the June 2003 caseUnited States v. American Library Associationfound CIPA constitutional as a condition placed on the receipt of federal funding, stating that First Amendment concerns were dispelled by the law's provision that allowed adult library users to have the filtering software disabled, without having to explain the reasons for their request. The plurality decision left open a future "as-applied" Constitutional challenge, however.
In November 2006, a lawsuit was filed against the North Central Regional Library District (NCRL) in Washington State for its policy of refusing to disable restrictions upon requests of adult patrons, but CIPA was not challenged in that matter.[52]In May 2010, the Washington State Supreme Court provided an opinion after it was asked to certify a question referred by the United States District Court for the Eastern District of Washington: "Whether a public library, consistent with Article I, § 5 of the Washington Constitution, may filter Internet access for all patrons without disabling Web sites containing constitutionally-protected speech upon the request of an adult library patron." The Washington State Supreme Court ruled that NCRL's internet filtering policy did not violate Article I, Section 5 of the Washington State Constitution. The Court said: "It appears to us that NCRL's filtering policy is reasonable and accords with its mission and these policies and is viewpoint neutral. It appears that no article I, section 5 content-based violation exists in this case. NCRL's essential mission is to promote reading and lifelong learning. As NCRL maintains, it is reasonable to impose restrictions on Internet access in order to maintain an environment that is conducive to study and contemplative thought." The case returned to federal court.
In March 2007, Virginia passed a law similar to CIPA that requires public libraries receiving state funds to use content-control software. Like CIPA, the law requires libraries to disable filters for an adult library user when requested to do so by the user.[53]
Content filtering in general can "be bypassed entirely by tech-savvy individuals." Blocking content on a device "[will not]…guarantee that users won't eventually be able to find a way around the filter."[54]Content providers may changeURLsorIP addressesto circumvent filtering. Individuals with technical expertise may use a different method by employing multiple domains or URLs that direct to a shared IP address where restricted content is present. This strategy doesn't circumventIP packet filtering, but it can evadeDNS poisoningandweb proxies. Additionally, perpetrators may use mirrored websites that avoid filters.[55]
Some software may be bypassed successfully by using alternative protocols such asFTPortelnetorHTTPS, conducting searches in a different language, using aproxy serveror a circumventor such asPsiphon. Cached web pages returned by Google or other searches can also bypass some controls. Web syndication services may provide alternate paths for content. Some of the more poorly designed programs can be shut down by killing their processes: for example, inMicrosoft Windowsthrough the WindowsTask Manager, or inMac OS Xusing Force Quit orActivity Monitor. Numerous workarounds and counters to workarounds from content-control software creators exist.Googleservices are often blocked by filters, but these may most often be bypassed by usinghttps://in place ofhttp://since content filtering software is not able to interpret content under secure connections (in this case SSL).[needs update]
An encryptedVPNcan be used as means of bypassing content control software, especially if the content control software is installed on an Internet gateway or firewall. Other ways to bypass a content control filter include translation sites andestablishing a remote connectionwith an uncensored device.[56]
Some ISPs offerparental controloptions. Some offer security software which includes parental controls.Mac OS X v10.4offers parental controls for several applications (Mail,Finder,iChat,Safari&Dictionary). Microsoft'sWindows Vistaoperating system also includes content-control software.
Content filtering technology exists in two major forms:application gatewayorpacket inspection. For HTTP access the application gateway is called aweb-proxyor just a proxy. Such web-proxies can inspect both the initial request and the returned web page using arbitrarily complex rules and will not return any part of the page to the requester until a decision is made. In addition they can make substitutions in whole or for any part of the returned result. Packet inspection filters do not initially interfere with the connection to the server but inspect the data in the connection as it goes past; at some point the filter may decide that the connection is to be filtered, and it will then disconnect it by injecting a TCP reset or similar faked packet. The two techniques can be used together: the packet filter monitors a link until it sees an HTTP connection starting to an IP address that has content that needs filtering, then redirects the connection to the web-proxy, which can perform detailed filtering on the website without having to pass through all unfiltered connections. This combination is quite popular because it can significantly reduce the cost of the system.
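A schematic sketch of the web-proxy decision path just described (the category lists and keyword rules are hypothetical placeholders; a real gateway would also handle substitutions within pages, HTTPS, and per-user policy):

```python
BLOCKED_CATEGORIES = {"gambling", "adult"}                  # hypothetical policy
DOMAIN_CATEGORIES = {"casino.example": "gambling"}          # hypothetical ratings
BLOCKED_KEYWORDS = ("free bets", "xxx")                     # hypothetical content rules

def check_request(host: str) -> bool:
    """First stage: decide from the request alone whether to refuse it."""
    return DOMAIN_CATEGORIES.get(host) in BLOCKED_CATEGORIES

def check_response(page_text: str) -> bool:
    """Second stage: inspect the returned page before releasing it to the client.
    Nothing is sent to the requester until this decision is made."""
    text = page_text.lower()
    return any(keyword in text for keyword in BLOCKED_KEYWORDS)

def proxy_fetch(host: str, fetch) -> str:
    if check_request(host):
        return "HTTP/1.1 403 Forbidden (blocked by policy)"
    page = fetch(host)                     # forward the request upstream
    if check_response(page):
        return "HTTP/1.1 403 Forbidden (content filtered)"
    return page                            # release the page to the requester

print(proxy_fetch("casino.example", lambda h: "<html>free bets!</html>"))
```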
There are constraints to IP level packet-filtering, as it may result in rendering all web content associated with a particular IP address inaccessible. This may result in the unintentional blocking of legitimate sites that share the same IP address or domain. For instance, university websites commonly employ multiple domains under oneIP address. Moreover, IP level packet-filtering can be circumvented by using a distinct IP address for certain content while still being linked to the same domain or server.[57]
Gateway-based content control software may be more difficult to bypass than desktop software as the user does not have physical access to the filtering device. However, many of the techniques in theBypassing filterssection still work.
|
https://en.wikipedia.org/wiki/Internet_filter
|
Incomputational complexity theory,Yao's principle(also calledYao's minimax principleorYao's lemma) relates the performance ofrandomized algorithmsto deterministic (non-random) algorithms. It states that, for certain classes of algorithms, and certain measures of the performance of the algorithms, the following two quantities are equal:
Yao's principle is often used to prove limitations on the performance of randomized algorithms, by finding a probability distribution on inputs that is difficult for deterministic algorithms, and inferring that randomized algorithms have the same limitation on their worst case performance.[1]
This principle is named afterAndrew Yao, who first proposed it in a 1977 paper.[2]It is closely related to theminimax theoremin the theory ofzero-sum games, and to theduality theory of linear programs.
Consider an arbitrary real valued cost measurec(A,x){\displaystyle c(A,x)}of an algorithmA{\displaystyle A}on an inputx{\displaystyle x}, such as its running time, for which we want to study theexpected valueover randomized algorithms and random inputs. Consider, also, afinite setA{\displaystyle {\mathcal {A}}}of deterministic algorithms (made finite, for instance, by limiting the algorithms to a specific input size), and a finite setX{\displaystyle {\mathcal {X}}}of inputs to these algorithms. LetR{\displaystyle {\mathcal {R}}}denote the class of randomized algorithms obtained from probability distributions over the deterministic behaviors inA{\displaystyle {\mathcal {A}}}, and letD{\displaystyle {\mathcal {D}}}denote the class of probability distributions on inputs inX{\displaystyle {\mathcal {X}}}. Then, Yao's principle states that:[1]
maxD∈DminA∈AEx∼D[c(A,x)]=minR∈Rmaxx∈XE[c(R,x)].{\displaystyle \max _{D\in {\mathcal {D}}}\min _{A\in {\mathcal {A}}}\mathbb {E} _{x\sim D}[c(A,x)]=\min _{R\in {\mathcal {R}}}\max _{x\in {\mathcal {X}}}\mathbb {E} [c(R,x)].}
Here,E{\displaystyle \mathbb {E} }is notation for the expected value, andx∼D{\displaystyle x\sim D}means thatx{\displaystyle x}is a random variable distributed according toD{\displaystyle D}. Finiteness ofA{\displaystyle {\mathcal {A}}}andX{\displaystyle {\mathcal {X}}}allowsD{\displaystyle {\mathcal {D}}}andR{\displaystyle {\mathcal {R}}}to be interpreted assimplicesofprobability vectors,[3]whosecompactnessimplies that the minima and maxima in these formulas exist.[4]
Another version of Yao's principle weakens it from an equality to an inequality, but at the same time generalizes it by relaxing the requirement that the algorithms and inputs come from a finite set. The direction of the inequality allows it to be used when a specific input distribution has been shown to be hard for deterministic algorithms, converting it into alower boundon the cost of all randomized algorithms. In this version, for every inputdistributionD∈D{\displaystyle D\in {\mathcal {D}}},and for every randomizedalgorithmR{\displaystyle R}inR{\displaystyle {\mathcal {R}}},[1]minA∈AEx∼D[c(A,x)]≤maxx∈XE[c(R,x)].{\displaystyle \min _{A\in {\mathcal {A}}}\mathbb {E} _{x\sim D}[c(A,x)]\leq \max _{x\in {\mathcal {X}}}\mathbb {E} [c(R,x)].}That is, the best possible deterministic performance against distributionD{\displaystyle D}is alower boundfor the performance of each randomized algorithmR{\displaystyle R}against its worst-case input. This version of Yao's principle can be proven through the chain of inequalitiesminA∈AEx∼D[c(A,x)]≤Ex∼D[c(R,x)]≤maxx∈XE[c(R,x)],{\displaystyle \min _{A\in {\mathcal {A}}}\mathbb {E} _{x\sim D}[c(A,x)]\leq \mathbb {E} _{x\sim D}[c(R,x)]\leq \max _{x\in {\mathcal {X}}}\mathbb {E} [c(R,x)],}each of which can be shown using onlylinearity of expectationand the principle thatmin≤E≤max{\displaystyle \min \leq \mathbb {E} \leq \max }for all distributions. By avoiding maximization and minimization overD{\displaystyle {\mathcal {D}}}andR{\displaystyle {\mathcal {R}}}, this version of Yao's principle can apply in some cases whereX{\displaystyle {\mathcal {X}}}orA{\displaystyle {\mathcal {A}}}are not finite.[5]Although this direction of inequality is the direction needed for proving lower bounds on randomized algorithms, the equality version of Yao's principle, when it is available, can also be useful in these proofs. The equality of the principle implies that there is no loss of generality in using the principle to prove lower bounds: whatever the actual best randomized algorithm might be, there is some input distribution through which a matching lower bound on its complexity can be proven.[6]
When the costc{\displaystyle c}denotes the running time of an algorithm, Yao's principle states that the best possible running time of a deterministic algorithm, on a hard input distribution, gives a lower bound for theexpected timeof anyLas Vegas algorithmon its worst-case input. Here, a Las Vegas algorithm is a randomized algorithm whose runtime may vary, but for which the result is always correct.[7][8]For example, this form of Yao's principle has been used to prove the optimality of certainMonte Carlo tree searchalgorithms for the exact evaluation ofgame trees.[8]
The time complexity ofcomparison-based sortingandselection algorithmsis often studied using the number of comparisons between pairs of data elements as a proxy for the total time. When these problems are considered over a fixed set of elements, their inputs can be expressed aspermutationsand a deterministic algorithm can be expressed as adecision tree. In this way both the inputs and the algorithms form finite sets as Yao's principle requires. Asymmetrizationargument identifies the hardest input distributions: they are therandom permutations, the distributions onn{\displaystyle n}distinct elements for which allpermutationsare equally likely. This is because, if any other distribution were hardest, averaging it with all permutations of the same hard distribution would be equally hard, and would produce the distribution for a random permutation. Yao's principle extends lower bounds for the average case number of comparisons made by deterministic algorithms, for random permutations, to the worst case analysis of randomized comparison algorithms.[2]
An example given by Yao is the analysis of algorithms for finding thek{\displaystyle k}th largest of a given set ofn{\displaystyle n}values, the selection problem.[2]Subsequent to Yao's work, Walter Cunto andIan Munroshowed that, for random permutations, any deterministic algorithm must perform at leastn+min(k,n−k)−O(1){\displaystyle n+\min(k,n-k)-O(1)}expected comparisons.[9]By Yao's principle, the same number of comparisons must be made by randomized algorithms on their worst-case input.[10]TheFloyd–Rivest algorithmcomes withinO(nlogn){\displaystyle O({\sqrt {n\log n}})}comparisons of this bound.[11]
Another of the original applications by Yao of his principle was to theevasiveness of graph properties, the number of tests of the adjacency of pairs of vertices needed to determine whether a graph has a given property, when the only access to the graph is through such tests.[2]Richard M. Karpconjectured that every randomized algorithm for every nontrivial monotone graph property (a property that remains true for every subgraph of a graph with the property) requires a quadratic number of tests, but only weaker bounds have been proven.[12]
As Yao stated, for graph properties that are true of the empty graph but false for some other graph onn{\displaystyle n}vertices with only a bounded numbers{\displaystyle s}of edges, a randomized algorithm must probe a quadratic number of pairs of vertices. For instance, for the property of being aplanar graph,s=9{\displaystyle s=9}because the 9-edgeutility graphis non-planar. More precisely, Yao states that for these properties, at least(12−p)1s(n2){\displaystyle \left({\tfrac {1}{2}}-p\right){\tfrac {1}{s}}{\tbinom {n}{2}}}tests are needed for a randomized algorithm to have probability at mostp{\displaystyle p}of making a mistake. Yao also used this method to show that quadratically many queries are needed for the properties of containing a giventreeorcliqueas a subgraph, of containing aperfect matching, and of containing aHamiltonian cycle, for small enough constant error probabilities.[2]
Inblack-box optimization, the problem is to determine the minimum or maximum value of a function, from a given class of functions, accessible only through calls to the function on arguments from some finite domain. In this case, the cost to be optimized is the number of calls. Yao's principle has been described as "the only method available for proving lower bounds for all randomized search heuristics for selected classes of problems".[13]Results that can be proven in this way include the following:
Incommunication complexity, an algorithm describes acommunication protocolbetween two or more parties, and its cost may be the number of bits or messages transmitted between the parties. In this case, Yao's principle describes an equality between theaverage-case complexityof deterministic communication protocols, on an input distribution that is the worst case for the problem, and the expected communication complexity of randomized protocols on their worst-case inputs.[6][14]
An example described byAvi Wigderson(based on a paper by Manu Viola) is the communication complexity for two parties, each holdingn{\displaystyle n}-bit input values, to determine which value is larger. For deterministic communication protocols, nothing better thann{\displaystyle n}bits of communication is possible, easily achieved by one party sending their whole input to the other. However, parties with a shared source of randomness and a fixed error probability can exchange 1-bithash functionsofprefixesof the input to perform a noisybinary searchfor the first position where their inputs differ, achievingO(logn){\displaystyle O(\log n)}bits of communication. This is within a constant factor of optimal, as can be shown via Yao's principle with an input distribution that chooses the position of the first difference uniformly at random, and then chooses random strings for the shared prefix up to that position and the rest of the inputs after that position.[6][15]
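The toy sketch below illustrates only the binary-search idea (it uses truncated SHA-256 fingerprints of prefixes where the real protocol exchanges 1-bit shared-randomness hashes, and it ignores the error analysis):

```python
import hashlib

def fingerprint(bits: str) -> str:
    # Stand-in for the short randomized hash exchanged in the real protocol.
    return hashlib.sha256(bits.encode()).hexdigest()[:8]

def first_difference(a: str, b: str) -> int:
    """Binary search for the length of the longest common prefix of two n-bit
    strings, comparing only fingerprints of prefixes (O(log n) rounds)."""
    lo, hi = 0, len(a)          # prefixes of length lo agree; length > hi cannot
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if fingerprint(a[:mid]) == fingerprint(b[:mid]):
            lo = mid            # prefixes of length mid (almost certainly) agree
        else:
            hi = mid - 1        # they differ somewhere in the first mid bits
    return lo                   # index of the first differing bit

x, y = "1011010011", "1011011011"
pos = first_difference(x, y)
# The larger value is decided by the single bit at the first differing position.
print(pos, "y larger" if y[pos] > x[pos] else "x larger")
```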
Yao's principle has also been applied to thecompetitive ratioofonline algorithms. An online algorithm must respond to a sequence of requests, without knowledge of future requests, incurring some cost or profit per request depending on its choices. The competitive ratio is the ratio of its cost or profit to the value that could be achieved by anoffline algorithmwith access to knowledge of all future requests, for a worst-case request sequence that causes this ratio to be as far from one as possible. Here, one must be careful to formulate the ratio with the algorithm's performance in the numerator and the optimal performance of an offline algorithm in the denominator, so that the cost measure can be formulated as an expected value rather than as thereciprocalof an expected value.[5]
An example given byBorodin & El-Yaniv (2005)concernspage replacement algorithms, which respond to requests forpagesof computer memory by using acacheofk{\displaystyle k}pages, for a given parameterk{\displaystyle k}. If a request matches a cached page, it costs nothing; otherwise one of the cached pages must be replaced by the requested page, at a cost of onepage fault. A difficult distribution of request sequences for this model can be generated by choosing each request uniformly at random from a pool ofk+1{\displaystyle k+1}pages. Any deterministic online algorith hasnk+1{\displaystyle {\tfrac {n}{k+1}}}expected page faults, overn{\displaystyle n}requests. Instead, an offline algorithm can divide the request sequence into phases within which onlyk{\displaystyle k}pages are used, incurring only one fault at the start of a phase to replace the one page that is unused within the phase. As an instance of thecoupon collector's problem, the expected requests per phase is(k+1)Hk{\displaystyle (k+1)H_{k}}, whereHk=1+12+⋯+1k{\displaystyle H_{k}=1+{\tfrac {1}{2}}+\cdots +{\tfrac {1}{k}}}is thek{\displaystyle k}thharmonic number. Byrenewal theory, the offline algorithm incursn(k+1)Hk+o(n){\displaystyle {\tfrac {n}{(k+1)H_{k}}}+o(n)}page faults with high probability, so the competitive ratio of any deterministic algorithm against this input distribution is at leastHk{\displaystyle H_{k}}. By Yao's principle,Hk{\displaystyle H_{k}}also lower bounds the competitive ratio of any randomized page replacement algorithm against a request sequence chosen by anoblivious adversaryto be a worst case for the algorithm but without knowledge of the algorithm's random choices.[16]
For online problems in a general class related to theski rental problem, Seiden has proposed a cookbook method for deriving optimally hard input distributions, based on certain parameters of the problem.[17]
Yao's principle may be interpreted ingame theoreticterms, via a two-playerzero-sum gamein which one player,Alice, selects a deterministic algorithm, the other player, Bob, selects an input, and the payoff is the cost of the selected algorithm on the selected input. Any randomized algorithmR{\displaystyle R}may be interpreted as a randomized choice among deterministic algorithms, and thus as amixed strategyfor Alice. Similarly, a non-random algorithm may be thought of as apure strategyfor Alice. In any two-player zero-sum game, if one player chooses a mixed strategy, then the other player has an optimal pure strategy against it. By theminimax theoremofJohn von Neumann, there exists a game valuec{\displaystyle c}, and mixed strategies for each player, such that the players can guarantee expected valuec{\displaystyle c}or better by playing those strategies, and such that the optimal pure strategy against either mixed strategy produces expected value exactlyc{\displaystyle c}. Thus, the minimax mixed strategy for Alice, set against the best opposing pure strategy for Bob, produces the same expected game valuec{\displaystyle c}as the minimax mixed strategy for Bob, set against the best opposing pure strategy for Alice. This equality of expected game values, for the game described above, is Yao's principle in its form as an equality.[5]Yao's 1977 paper, originally formulating Yao's principle, proved it in this way.[2]
The optimal mixed strategy for Alice (a randomized algorithm) and the optimal mixed strategy for Bob (a hard input distribution) may each be computed using a linear program that has one player's probabilities as its variables, with a constraint on the game value for each choice of the other player. The two linear programs obtained in this way for each player aredual linear programs, whose equality is an instance of linear programming duality.[3]However, although linear programs may be solved inpolynomial time, the numbers of variables and constraints in these linear programs (numbers of possible algorithms and inputs) are typically too large to list explicitly. Therefore, formulating and solving these programs to find these optimal strategies is often impractical.[13][14]
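A small sketch of these two dual programs for a toy 3-algorithm, 3-input cost matrix (the matrix values are made up for illustration, and SciPy is assumed to be available; real instances are normally far too large to enumerate this way):

```python
import numpy as np
from scipy.optimize import linprog

# C[a][x] = cost of deterministic algorithm a on input x (hypothetical values).
C = np.array([[1.0, 3.0, 2.0],
              [2.0, 1.0, 3.0],
              [3.0, 2.0, 1.0]])
n_alg, n_inp = C.shape

# Bob's program: input distribution p maximizing v with sum_x p_x C[a,x] >= v
# for every algorithm a.  linprog minimizes, so minimize -v.
res_inputs = linprog(
    c=np.append(np.zeros(n_inp), -1.0),                    # variables (p, v)
    A_ub=np.hstack([-C, np.ones((n_alg, 1))]),             # v - sum_x p_x C[a,x] <= 0
    b_ub=np.zeros(n_alg),
    A_eq=[np.append(np.ones(n_inp), 0.0)], b_eq=[1.0],     # probabilities sum to 1
    bounds=[(0, None)] * n_inp + [(None, None)],
)

# Alice's program: algorithm distribution q minimizing w with
# sum_a q_a C[a,x] <= w for every input x.
res_algs = linprog(
    c=np.append(np.zeros(n_alg), 1.0),                     # variables (q, w)
    A_ub=np.hstack([C.T, -np.ones((n_inp, 1))]),           # sum_a q_a C[a,x] - w <= 0
    b_ub=np.zeros(n_inp),
    A_eq=[np.append(np.ones(n_alg), 0.0)], b_eq=[1.0],
    bounds=[(0, None)] * n_alg + [(None, None)],
)

# By linear programming duality (Yao's principle), both optimal values coincide.
print("hard-distribution value:", -res_inputs.fun)
print("best-randomized value:  ", res_algs.fun)
```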
ForMonte Carlo algorithms, algorithms that use a fixed amount of computational resources but that may produce an erroneous result, a form of Yao's principle applies to the probability of an error, the error rate of an algorithm. Choosing the hardest possible input distribution, and the algorithm that achieves the lowest error rate against that distribution, gives the same error rate as choosing an optimal algorithm and its worst case input distribution. However, the hard input distributions found in this way are not robust to changes in the parameters used when applying this principle. If an input distribution requires high complexity to achieve a certain error rate, it may nevertheless have unexpectedly low complexity for a different error rate. Ben-David and Blais show that, forBoolean functionsunder many natural measures of computational complexity, there exists an input distribution that is simultaneously hard for all error rates.[18]
Variants of Yao's principle have also been considered forquantum computing. In place of randomized algorithms, one may consider quantum algorithms that have a good probability of computing the correct value for every input (probability at least23{\displaystyle {\tfrac {2}{3}}}); this condition together withpolynomial timedefines the complexity classBQP. It does not make sense to ask for deterministic quantum algorithms, but instead one may consider algorithms that, for a given input distribution, have probability 1 of computing a correct answer, either in aweaksense that the inputs for which this is true have probability≥23{\displaystyle \geq {\tfrac {2}{3}}}, or in astrongsense in which, in addition, the algorithm must have probability 0 or 1 of generating any particular answer on the remaining inputs. For any Boolean function, the minimum complexity of a quantum algorithm that is correct with probability≥23{\displaystyle \geq {\tfrac {2}{3}}}against its worst-case input is less than or equal to the minimum complexity that can be attained, for a hard input distribution, by the best weak or strong quantum algorithm against that distribution. The weak form of this inequality is within a constant factor of being an equality, but the strong form is not.[19]
|
https://en.wikipedia.org/wiki/Yao%27s_principle
|
Percy Edwin Ludgate(2 August 1883 – 16 October 1922) was anIrishamateur scientist who designed the secondanalytical engine(general-purposeTuring-completecomputer) in history.[1][2]
Ludgate was born on 2 August 1883 inSkibbereen,County Cork, to Michael Ludgate and Mary McMahon.[3][2]In the 1901 census, he is listed asCivil ServantNational Education (Boy Copyist) inDublin.[4]In the 1911 census, he is also in Dublin, as a Commercial Clerk (Corn Merchant).[5]He studied accountancy atRathmines College of Commerce, earning a gold medal based on the results of his final examinations in 1917.[6]At some date before or after then, he joined Kevans & Son, accountants.[3]
It seems that Ludgate worked as a clerk for an unknown corn merchant in Dublin, and pursued his interest in calculating machines at night.[6]Charles Babbagein 1843 and Ludgate in 1909 designed the only two mechanical analytical engines, preceding the electromechanical analytical engine ofLeonardo Torres Quevedoin 1920 and its few successors, and the six first-generationelectronicanalytical engines of 1949.
Working alone, Ludgate designed an analytical engine while unaware of Babbage's designs, although he later went on to write about Babbage's machine. Ludgate's engine used multiplication as its base mechanism (unlike Babbage's which used addition). It incorporated the firstmultiplier-accumulator, and was the first to exploit a multiplier-accumulator to perform division, using multiplication seeded by reciprocal, via the convergent series for (1 + x)^{-1}.
Ludgate's engine also used a mechanism similar to slide rules, but employing unique, discrete "Logarithmic Indexes" (now known asIrish logarithms),[7]as well as a novel memory system utilizing concentric cylinders, storing numbers as displacements of rods in shuttles. His design featured several other novel features, including for program control (e.g.,preemptionandsubroutines– ormicrocode, depending on one's viewpoint). The design is so dissimilar from Babbage's that it can be considered a second, unique type ofanalytical engine, which thus preceded the third (electromechanical) and fourth (electronic) types. The engine's precise mechanism is unknown, as the only written accounts which survive do not detail its workings, although he stated in 1914 that "[c]omplete descriptive drawings of the machine exist, as well as a description in manuscript" – these have never been found.[8]
Ludgate was one of just a few independent workers in the field of science and mathematics.[citation needed]His inventions were worked on outside a lab. He worked on them only part-time, often until the early hours of the morning. Many publications refer to him as an accountant, but that came only after his 1909 analytical engine paper. Little is known about his personal life, as his only known records are his scientific writings. Prior to 2016, the best source of information about Ludgate and his significance was in the work of ProfessorBrian Randell.[9]Since then, further investigation is underway atTrinity College, Dublinunder the auspices of theJohn Gabriel Byrne Computer Science Collection.[10]
Ludgate died ofpneumoniaon 19 October 1922,[3]and is buried inMount Jerome Cemeteryin Dublin.[6]
In 1960, a German patent lawyer working on behalf ofIBMsuccessfully relied on Ludgate’s 1909 paper to defeat an important 1941 patent application by the pioneering computer scientistKonrad Zuse. Had the patent been approved, Zuse would have controlled the primary intellectual property for crucial techniques that all computers now use; this would have changed his career and could well have altered the commercial trajectory of the computer industry.[11][12]
In 1991, a prize for the best final-year project in the Moderatorship incomputer sciencecourse atTrinity College, Dublin– theLudgate Prize– was instituted in his honour,[13]and in 2016 the Ludgate Hub e-business incubation centre was opened inSkibbereen, where he was born.[6]
In October 2022, a plaque from theNational Committee for Commemorative Plaques in Science and Technologywas unveiled at Ludgate's home inDrumcondraby the Provost of Trinity College,Linda Doyle. (The year of birth is listed incorrectly on the plaque.)[14][15]
Also in 2022, a podcast with Dr Chris Horn discussed Percy Ludgate,[16]and in October 2024 an appealing and accurate podcast on Percy Ludgate was created by Google's Gemini A.I.[17]
|
https://en.wikipedia.org/wiki/Percy_Ludgate
|
The 1583Throckmorton Plotwas one of a series of attempts byEnglish Roman Catholicsto deposeElizabeth I of Englandand replace her withMary, Queen of Scots, then held under house arrest in England. The alleged objective was to facilitate a Spanish invasion of England, assassinate Elizabeth, and put Mary on the English throne.
The plot is named after the key conspirator,Sir Francis Throckmorton, cousin ofBess Throckmorton,lady in waitingto Queen Elizabeth. Throckmorton was arrested in November 1583 and executed on 10 July 1584.[1]
The plot aimed to free Mary, Queen of Scots, under house arrest in England since 1568, make her queen in place of Elizabeth, and legally restoreRoman Catholicism.[2]This would be achieved by a Spanish-backed invasion of England, led by the FrenchDuke of Guise, supported by a simultaneous revolt of English Roman Catholics.[3]Guise would then marry Mary and become king.
It was typical of the amateurish and overly optimistic approach of many such attempts. Throckmorton was placed under surveillance almost as soon as he returned to England, and subsequently arrested and executed. The plot was never put into action.[4]
Francis Throckmorton (1554-1584) came from a prominent English Catholic family, his fatherJohn Throckmortonbeing a senior judge and witness toQueen Mary's will.[5]While travelling in Europe with his brother Thomas from 1580 to 1583, they visitedParisand met with Catholic exilesCharles PagetandThomas Morgan.[6]
After returning to London in 1583, Francis Throckmorton carried messages between Mary, Queen of Scots, Morgan, andBernardino de Mendoza,Philip II of Spain's ambassador in London. This correspondence was routed through the French embassy in London. Throckmorton also carried some letters written by Mary to the French ambassadorMichel de Castelnau. An agent within the French embassy atSalisbury CourtnearFleet Street, known as "Henry Fagot", notifiedFrancis Walsingham, Elizabeth'sSecretary of State.[7]
Throckmorton was taken into custody in November, along with incriminating documents, including lists of English Catholic supporters.[8]He was encoding a letter to Mary, Queen of Scots when he was arrested. After a few days, he was taken to theTower of London.[9]Another conspirator and letter carrier,George More, was also arrested and questioned, but released after making a deal with Walsingham.[10]
Shortly before his arrest, Throckmorton managed to send a casket of other documents to Mendoza; it has been suggested this was exactly what Walsingham wanted him to do. Throckmorton was a relatively minor player, whose significance was to confirm the extent of Spanish involvement in seeking to overthrow Elizabeth.[11]
Protected bydiplomatic immunity, Mendoza was expelled in January 1584.[1]He was the last Spanish ambassador to England during theElizabethan era.[12]Throckmorton was tortured with therack,[13]first on 16 November, to ensure he revealed as much information as possible. On 19 November, he confessed to giving the Spanish ambassador a list of suitable havens and ports on the English coast.[14]
Throckmorton was put on trial on 21 May 1584 and executed on 10 July.[15]His brother Thomas and many others managed to escape; some were imprisoned in the Tower of London, but Francis Throckmorton was the only one executed.[4][16]
Unsurprisingly, Mary denied any knowledge of the plot. She was able to claim that she was not the author of letters coded in cipher by her secretaries. More of these letters were rediscovered and deciphered in 2023, and seem to implicate her. In June 1583, she asked the French ambassador Michel de Castelnau to apologise to Throckmorton for not writing to him in her own hand, and observed the potential for "great danger". A few months later, as the conspiracy unravelled, she offered money from her French dowry income to the Guises to maintain their interest in her cause after the fall of theGowrie Regimein Scotland.[17]
Mary was placed under strict confinement atChartley HallinStaffordshire. A new and stricter custodianAmias Pauletwas appointed in January 1585.[18]Walsingham andLord Burghleydrew up theBond of Association, obliging all signatories to execute anyone who attempted to usurp the throne or to assassinate the Queen.[19]Mary herself was one of the signatories and it provided the basis forher executionfollowing the 1586Babington Plot.[20][21]
A servant of Mary, Queen of Scots,Jérôme Pasquier, was questioned byThomas Phelippesin September 1586. He confessed to writing a letter in cipher for Mary to send to the French ambassador Castelnau asking him to negotiate a pardon for Francis Throckmorton.[22]
Many participants in the Babington andGunpowder Plotswere related by blood or marriage to Francis Throckmorton, among themRobert CatesbyandFrancis Tresham. Bess Throckmorton (1565-1647) secretly marriedSir Walter Raleigh(1554-1618).
A ballad celebrating the discovery of the plot compared Elizabeth's escape to the survival ofShadrach, Meshach, and Abednegoin Nebuchadnezzar's fiery furnace.[23]
|
https://en.wikipedia.org/wiki/Throckmorton_Plot
|
Tyranny of the majorityrefers to a situation inmajority rulewhere the preferences and interests of the majority dominate the political landscape, potentially sidelining or repressing minority groups and using majority rule to take non-democratic actions.[1]This idea has been discussed by various thinkers, includingJohn Stuart MillinOn Liberty[2]andAlexis de TocquevilleinDemocracy in America.[3][4]
To reduce the risk of majority tyranny, modern democracies frequently have countermajoritarian institutions that restrict the ability of majorities to repress minorities and stymie political competition.[1][5]In the context of a nation,constitutionallimits on the powers of a legislative body such as abill of rightsorsupermajority clausehave been used.Separation of powersorjudicial independencemay also be implemented.[6]
Insocial choice, a tyranny-of-the-majority scenario can be formally defined as a situation where the candidate or decision preferred by a majority is greatly inferior (hence "tyranny") to the socially optimal candidate or decision according to some measure of excellence such astotal utilitarianismor theegalitarian rule.
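A minimal sketch of that formal reading, with hypothetical utilities chosen purely for illustration: the option preferred by two of three voters is worse than the alternative under both total utilitarianism and the egalitarian (worst-off) rule.

import numpy as np

# utilities[voter, option]: hypothetical values for two options, A (column 0) and B (column 1)
utilities = np.array([[10.0, 9.0],    # voter 1 slightly prefers A
                      [10.0, 9.0],    # voter 2 slightly prefers A
                      [ 0.0, 9.0]])   # voter 3 is severely harmed by A

prefers_a = int((utilities[:, 0] > utilities[:, 1]).sum())
majority_choice = 0 if prefers_a * 2 > len(utilities) else 1
utilitarian_choice = int(np.argmax(utilities.sum(axis=0)))   # maximize total utility
egalitarian_choice = int(np.argmax(utilities.min(axis=0)))   # maximize the worst-off voter's utility
print(majority_choice, utilitarian_choice, egalitarian_choice)  # prints 0 1 1: the majority picks A, both welfare measures pick B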
The origin of the term "tyranny of the majority" is commonly attributed toAlexis de Tocqueville, who used it in his bookDemocracy in America. It appears in Part 2 of the book in the title of Chapter 8, "What Moderates the Tyranny of the Majority in the United States' Absence of Administrative Centralization" (French:De ce qui tempère aux États-Unis latyrannie de la majorité[7]) and in the previous chapter in the names of sections such as "The Tyranny of the Majority" and "Effects of the Tyranny of the Majority on American National Character; the Courtier Spirit in the United States".[8]
While the specific phrase "tyranny of the majority" is frequently attributed to variousFounding Fathers of the United States, onlyJohn Adamsis known to have used it, arguing against government by a singleunicameralelected body. Writing in defense of theConstitutionin March 1788,[9]Adams referred to "a single sovereign assembly, each member…only accountable to his constituents; and the majority of members who have been of one party" as a "tyranny of the majority", attempting to highlight the need instead for "amixed government, consisting ofthree branches". Constitutional authorJames Madisonpresented a similar idea inFederalist 10, citing the destabilizing effect of "the superior force of an interested and overbearing majority" on a government, though the essay as a whole focuses on the Constitution's efforts to mitigate factionalism generally.
Later users includeEdmund Burke, who wrote in a 1790 letter that "The tyranny of a multitude is a multiplied tyranny."[10]It was further popularised byJohn Stuart Mill, influenced by Tocqueville, inOn Liberty(1859).Friedrich Nietzscheused the phrase in the first sequel toHuman, All Too Human(1879).[11]Ayn Randwrote that individual rights are not subject to a public vote, and that the political function of rights is precisely to protect minorities from oppression by majorities and "the smallest minority on earth is the individual".[12]InHerbert Marcuse's 1965 essayRepressive Tolerance, he said "tolerance is extended to policies, conditions, and modes of behavior which should not be tolerated because they are impeding, if not destroying, the chances of creating an existence without fear and misery" and that "this sort of tolerance strengthens the tyranny of the majority against which authentic liberals protested".[13]In 1994, legal scholarLani Guinierused the phrase as the title for a collection oflaw reviewarticles.[14]
A term used inClassicalandHellenistic Greecefor oppressive popular rule wasochlocracy("mob rule");tyrannymeant rule by one man—whether undesirable or not.
Herbert Spencer, in "The Right to Ignore the State" (1851), pointed out the problem with the following example:[15]
Suppose, for the sake of argument, that, struck by someMalthusian panic, a legislature duly representing public opinion were to enact that all children born during the next ten years should be drowned. Does anyone think such an enactment would be warrantable? If not, there is evidently a limit to the power of a majority.
Secession of theConfederate States of Americafrom the United States was anchored by a version ofsubsidiarity, found within the doctrines ofJohn C. Calhoun.Antebellum South Carolinautilized Calhoun's doctrines in theOld Southas public policy, adopted from his theory ofconcurrent majority. This "localism" strategy was presented as a mechanism to circumvent Calhoun's perceived tyranny of the majority in the United States. Each state presumptively held the Sovereign power to block federal laws that infringed uponstates' rights, autonomously. Calhoun's policies directly influenced Southern public policy regarding slavery, and undermined theSupremacy Clausepower granted to the federal government. The subsequent creation of theConfederate States of Americacatalyzed theAmerican Civil War.
Nineteenth-century concurrent majority theories offered logical counterbalances to the standard harms of majority tyranny recognized sinceAntiquity. Essentially, illegitimate or temporary coalitions that commanded a numerical majority could, by nature and sheer volume, disproportionately outweigh and harm any significant minority. Calhoun's contemporary doctrine was presented as a limitation within American democracy to prevent such traditional tyranny, whether actual or imagined.[16]
Federalist No. 10"The Same Subject Continued: The Union as a Safeguard Against Domestic Faction and Insurrection" (November 23, 1787):[17]
The inference to which we are brought is, that the CAUSES of faction cannot be removed, and that relief is only to be sought in the means of controlling its EFFECTS. If a faction consists of less than a majority, relief is supplied by the republican principle, which enables the majority to defeat its sinister views by regular vote. It may clog the administration, it may convulse the society; but it will be unable to execute and mask its violence under the forms of the Constitution. When a majority is included in a faction, the form of popular government, on the other hand, enables it to sacrifice to its ruling passion or interest both the public good and the rights of other citizens. To secure the public good and private rights against the danger of such a faction, and at the same time to preserve the spirit and the form of popular government, is then the great object to which our inquiries are directed...By what means is this object attainable? Evidently by one of two only. Either the existence of the same passion or interest in a majority at the same time must be prevented, or the majority, having such coexistent passion or interest, must be rendered, by their number and local situation, unable to concert and carry into effect schemes of oppression.
With respect to American democracy, Tocqueville, in his bookDemocracy in America, says:
So what is a majority taken as a whole, if not an individual who has opinions and, most often, interests contrary to another individual called the minority. Now, if you admit that an individual vested with omnipotence can abuse it against his adversaries, why would you not admit the same thing for the majority? Have men, by gathering together, changed character? By becoming stronger, have they become more patient in the face of obstacles? As for me, I cannot believe it; and the power to do everything that I refuse to any one of my fellows, I will never grant to several.[18]
So when I see the right and the ability to do everything granted to whatever power, whether called people or king, democracy or aristocracy, whether exercised in a monarchy or a republic, I say: the seed of tyranny is there and I try to go and live under other laws.[19]
When a man or a party suffers from an injustice in the United States, to whom do you want them to appeal? To public opinion? That is what forms the majority. To the legislative body? It represents the majority and blindly obeys it. To the executive power? It is named by the majority and serves it as a passive instrument. To the police? The police are nothing other than the majority under arms. To the jury? The jury is the majority vested with the right to deliver judgments. The judges themselves, in certain states, are elected by the majority. However iniquitous or unreasonable the measure that strikes you may be, you must therefore submit to it or flee. What is that if not the very soul of tyranny under the forms of liberty[20]
Robert A. Dahlargues that the tyranny of the majority is a spurious dilemma (p. 171):[21]
Critic: Are you trying to say that majority tyranny is simply an illusion? If so, that is going to be small comfort to a minority whose fundamental rights are trampled on by an abusive majority. I think you need to consider seriously two possibilities; first, that a majority will infringe on the rights of a minority, and second, that a majority may oppose democracy itself.
Advocate: Let's take up the first. The issue is sometimes presented as a paradox. If a majority is not entitled to do so, then it is thereby deprived of its rights; but if a majority is entitled to do so, then it can deprive the minority of its rights. The paradox is supposed to show that no solution can be both democratic and just. But the dilemma seems to be spurious. Of course a majority might have the power or strength to deprive a minority of its political rights. […] The question is whether a majority may rightly use its primary political rights to deprive a minority of its primary political rights. The answer is clearly no. To put it another way, logically it can't be true that the members of an association ought to govern themselves by the democratic process, and at the same time a majority of the association may properly strip a minority of its primary political rights. For, by doing so the majority would deny the minority the rights necessary to the democratic process. In effect therefore the majority would affirm that the association ought not to govern itself by the democratic process. They can't have it both ways.
Critic: Your argument may be perfectly logical. But majorities aren't always perfectly logical. They may believe in democracy to some extent and yet violate its principles. Even worse, they may not believe in democracy and yet they may cynically use the democratic process to destroy democracy. […] Without some limits, both moral and constitutional, the democratic process becomes self-contradictory, doesn't it?
Advocate: That's exactly what I've been trying to show. Of course democracy has limits. But my point is that these are built into the very nature of the process itself. If you exceed those limits, then you necessarily violate the democratic process.
Regarding recent American politics (specificallyinitiatives), Donovan et al. argue that:
One of the original concerns about direct democracy is the potential it has to allow a majority of voters to trample the rights of minorities. Many still worry that the process can be used to harm gays and lesbians as well as ethnic, linguistic, and religious minorities. … Recent scholarly research shows that the initiative process is sometimes prone to produce laws that disadvantage relatively powerless minorities … State and local ballot initiatives have been used to undo policies – such as school desegregation, protections against job and housing discrimination, and affirmative action – that minorities have secured from legislatures.[22]
The notion that, in a democracy, the greatest concern is that the majority will tyrannise and exploit diverse smaller interests, has been criticised byMancur OlsoninThe Logic of Collective Action, who argues instead that narrow and well organised minorities are more likely to assert their interests over those of the majority. Olson argues that when the benefits of political action (e.g., lobbying) are spread over fewer agents, there is a stronger individual incentive to contribute to that political activity. Narrow groups, especially those who can reward active participation to their group goals, might therefore be able to dominate or distort political process, a process studied inpublic choice theory.
Class studies
Tyranny of the majority has also been prevalent in some class studies. Rahim Baizidi uses the concept of "democratic suppression" to analyze the tyranny of the majority in economic classes. According to this, the majority of the upper and middle classes, together with a small portion of the lower class, form the majority coalition of conservative forces in the society.[23]
Anti-federalists of public choice theory point out thatvote tradingcan protect minority interests from majorities in representative democratic bodies such as legislatures.[citation needed]They continue that direct democracy, such as statewide propositions on ballots, does not offer such protections.[weasel words]
|
https://en.wikipedia.org/wiki/Tyranny_of_the_majority
|
Pleonasm(/ˈpliː.əˌnæzəm/; fromAncient Greekπλεονασμόςpleonasmós, fromπλέονpléon'to be in excess')[1][2]isredundancyin linguistic expression, such as in "black darkness," "burning fire," "the man he said,"[3]or "vibrating with motion." It is a manifestation oftautologyby traditionalrhetoricalcriteria.[4]Pleonasm may also be used for emphasis, or because the phrase has become established in a certain form. Tautology and pleonasm are not consistently differentiated in literature.[5]
Most often,pleonasmis understood to mean a word or phrase which is useless,clichéd, or repetitive, but a pleonasm can also be simply an unremarkable use ofidiom. It can aid in achieving a specific linguistic effect, be it social, poetic or literary. Pleonasm sometimes serves the same function as rhetorical repetition—it can be used to reinforce an idea, contention or question, rendering writing clearer and easier to understand. Pleonasm can serve as aredundancy check; if a word is unknown, misunderstood, misheard, or if the medium of communication is poor—a static-filled radio transmission or sloppy handwriting—pleonastic phrases can help ensure that the meaning is communicated even if some of the words are lost.[citation needed]
Some pleonastic phrases are part of a language'sidiom, liketuna fish,chain mailandsafe haveninAmerican English. They are so common that their use is unremarkable for native speakers, although in many cases the redundancy can be dropped with no loss of meaning.
When expressing possibility, English speakers often use potentially pleonastic expressions such asIt might be possibleorperhaps it's possible, where both terms (verbmightor adverbperhapsalong with the adjectivepossible) have the same meaning under certain constructions. Many speakers of English use such expressions for possibility in general, such that most instances of such expressions by those speakers are in fact pleonastic. Others, however, use this expression only to indicate a distinction betweenontologicalpossibility andepistemicpossibility, as in "Both the ontological possibility of X under current conditions and the ontological impossibility of X under current conditions are epistemically possible" (inlogicalterms, "I am not aware of any facts inconsistent with the truth of proposition X, but I am likewise not aware of any facts inconsistent with the truth of the negation of X"). The habitual use of the double construction to indicate possibilityper seis far less widespread among speakers of most[citation needed]other languages (except in Spanish; see examples); rather, almost all speakers of those languages use one term in a single expression:[dubious–discuss]
In asatellite-framedlanguage like English,verb phrasescontainingparticlesthat denote direction of motion are so frequent that even when such a particle is pleonastic, it seems natural to include it (e.g. "enter into").
Some pleonastic phrases, when used in professional or scholarly writing, may reflect a standardized usage that has evolved or a meaning familiar to specialists but not necessarily to those outside that discipline. Such examples as "null and void", "each and every" arelegal doubletsthat are part oflegally operative languagethat is often drafted into legal documents. A classic example of such usage was that by theLord Chancellorat the time (1864),Lord Westbury, in the English case ofex parteGorely,[6]when he described a phrase in an Act as "redundant and pleonastic". This type of usage may be favored in certain contexts. However, it may also be disfavored when used gratuitously to portray false erudition, obfuscate, or otherwise introduce verbiage, especially in disciplines where imprecision may introduce ambiguities (such as the natural sciences).[7]
Examples fromBaroque,Mannerist, andVictoriansources provide a counterpoint toStrunk's advocacy of concise writing:
There are various kinds of pleonasm, includingbilingual tautological expressions,syntactic pleonasm,semantic pleonasmandmorphological pleonasm:
A bilingual tautological expression is a phrase that combines words that mean the same thing in two different languages.[8]: 138An example of a bilingual tautological expression is theYiddishexpressionמים אחרונים וואַסערmayim akhroynem vaser. It literally means "water last water" and refers to "water for washing the hands after meal, grace water".[8]: 138Its first element,mayim, derives from theHebrewמים ['majim] "water". Its second element,vaser, derives from theMiddle High Germanwordvaser"water".
According toGhil'ad Zuckermann, Yiddish abounds with both bilingual tautological compounds and bilingual tautological first names.[8]: 138
The following are examples of bilingual tautological compounds in Yiddish:
The following are examples of bilingual tautological first names in Yiddish:
Examples occurring in English-language contexts include:
Syntacticpleonasm occurs when thegrammarof a language makes certainfunction wordsoptional.[citation needed]For example, consider the followingEnglishsentences:
In this construction, theconjunctionthatis optional when joining a sentence to averbphrase withknow. Both sentences are grammatically correct, but the wordthatis pleonastic in this case. By contrast, when a sentence is in spoken form and the verb involved is one of assertion, the use ofthatmakes clear that the present speaker is making an indirect rather than a direct quotation, such that he is not imputing particular words to the person he describes as having made an assertion; the demonstrative adjectivethatalso does not fit such an example. Also, some writers may use "that" for technical clarity reasons.[9]In some languages, such as French, the word is not optional and should therefore not be considered pleonastic.
The same phenomenon occurs inSpanishwith subject pronouns. Since Spanish is anull-subject language, which allows subject pronouns to be deleted when understood, the following sentences mean the same:
In this case, the pronounyo('I') is grammatically optional; both sentences mean "I love you" (however, they may not have the same tone orintention—this depends onpragmaticsrather than grammar). Such differing butsyntacticallyequivalent constructions, in many languages, may also indicate a difference inregister.
The process of deleting pronouns is calledpro-dropping, and it also happens in many other languages, such asKorean,Japanese,Hungarian,Latin,Italian,Portuguese,Swahili,Slavic languages, and theLao language.
In contrast, formal English requires an overt subject in each clause. A sentence may not need a subject to have valid meaning, but to satisfy the syntactic requirement for an explicit subject a pleonastic (ordummy pronoun) is used; only the first sentence in the following pair is acceptable English:
In this example the pleonastic "it" fills the subject function, but it contributes no meaning to the sentence. The second sentence, which omits the pleonastic it, is marked as ungrammatical although no meaning is lost by the omission.[10]Elements such as "it" or "there", serving as empty subject markers, are also called (syntactic)expletives, or dummy pronouns. Compare:
The pleonasticne(ne pléonastique), expressing uncertainty in formalFrench, works as follows:
Two more striking examples of French pleonastic construction areaujourd'huiandQu'est-ce que c'est?.
The wordaujourd'hui/au jour d'huiis translated as 'today', but originally means "on the day of today" since the now obsoletehuimeans "today". The expressionau jour d'aujourd'hui(translated as "on the day of today") is common in spoken language and demonstrates that the original construction ofaujourd'huiis lost. It is considered a pleonasm.
The phraseQu'est-ce que c'est?means 'What's that?' or 'What is it?', while literally it means "What is it that it is?".
There are examples of the pleonastic, or dummy, negative in English, such as the construction, heard in the New England region of the United States, in which the phrase "So don't I" is intended to have the same positive meaning as "So do I."[11][12]
WhenRobert Southsaid, "It is a pleonasm, a figure usual inScripture, by a multiplicity of expressions to signify one notable thing",[13]he was observing theBiblical Hebrewpoetic propensity to repeat thoughts in different words, since written Biblical Hebrew was a comparatively early form of written language and was written using oral patterning, which has many pleonasms. In particular, very many verses of thePsalmsare split into two halves, each of which says much the same thing in different words. The complex rules and forms of written language as distinct from spoken language were not as well-developed as they are today when the books making up theOld Testamentwere written.[14][15]See alsoparallelism (rhetoric).
This same pleonastic style remains very common in modern poetry and songwriting (e.g., "Anne, with her father / is out in the boat / riding the water / riding the waves / on the sea", fromPeter Gabriel's "Mercy Street").
Semantic pleonasm is a question more ofstyleandusagethan of grammar.[16]Linguists usually call thisredundancyto avoid confusion with syntactic pleonasm, a more important phenomenon fortheoretical linguistics. It usually takes one of two forms: Overlap or prolixity.
Overlap: One word's semantic component is subsumed by the other:
Prolixity: A phrase may have words which add nothing, or nothing logical or relevant, to the meaning.
An expression like "tuna fish", however, might elicit one of many possible responses, such as:
In some cases, the redundancy in meaning occurs at the syntactic level above the word, such as at the phrase level:
The redundancy of these two well-known statements is deliberate, forhumorouseffect. (SeeYogi Berra#"Yogi-isms".) But one does hear educated people say "my predictions about the future of politics" for "my predictions about politics", which are equivalent in meaning. While predictions are necessarily about the future (at least in relation to the time the prediction was made), the nature of this future can be subtle (e.g., "I predict that he died a week ago"—the prediction is about future discovery or proof of the date of death, not about the death itself). Generally "the future" is assumed, making most constructions of this sort pleonastic. The latter humorous quote above about not making predictions—byYogi Berra—is not really a pleonasm, but rather anironicplay on words.
Alternatively it could be an analogy between predict and guess.
However, "It'sdéjà vuall over again" could mean that there was earlier anotherdéjà vuof the same event or idea, which has now arisen for a third time; or that the speaker had very recently experienced adéjà vuof a different idea.
Redundancy, and "useless" or "nonsensical" words (or phrases, or morphemes), can also be inherited by one language from the influence of another and are not pleonasms in the more critical sense but actual changes in grammatical construction considered to be required for "proper" usage in the language or dialect in question.Irish English, for example, is prone to a number of constructions that non-Irish speakers find strange and sometimes directly confusing or silly:
All of these constructions originate from the application ofIrish Gaelicgrammatical rules to the English dialect spoken, in varying particular forms, throughout the island.
Seemingly "useless" additions and substitutions must be contrasted with similar constructions that are used for stress, humor, or other intentional purposes, such as:
The latter of these is a result of Yiddish influences on modern English, especiallyEast CoastUS English.
Sometimes editors and grammatical stylists will use "pleonasm" to describe simple wordiness. This phenomenon is also calledprolixityorlogorrhea. Compare:
or even:
The reader or hearer does not have to be told that loud music has a sound, and in a newspaper headline or other abbreviated prose can even be counted upon to infer that "burglary" is a proxy for "sound of the burglary" and that the music necessarily must have been loud to drown it out, unless the burglary was relatively quiet (this is not a trivial issue, as it may affect the legal culpability of the person who played the music); the word "loud" may imply that the music should have been played quietly if at all. Many are critical of the excessively abbreviated constructions of "headline-itis" or "newsspeak", so "loud [music]" and "sound of the [burglary]" in the above example should probably not be properly regarded as pleonastic or otherwise genuinely redundant, but simply as informative and clarifying.
Prolixity is also used to obfuscate, confuse, or euphemize and is not necessarily redundant or pleonastic in such constructions, though it often is. "Post-traumatic stress disorder" (shell shock) and "pre-owned vehicle" (used car) are bothtumideuphemisms but are not redundant. Redundant forms, however, are especially common in business, political, and academic language that is intended to sound impressive (or to be vague so as to make it hard to determine what is actually being promised, or otherwise misleading). For example: "This quarter, we are presently focusing with determination on an all-new, innovative integrated methodology and framework for rapid expansion of customer-oriented external programs designed and developed to bring the company's consumer-first paradigm into the marketplace as quickly as possible."
In contrast to redundancy, anoxymoronresults when two seemingly contradictory words are adjoined.
Redundancies sometimes take the form of foreign words whose meaning is repeated in the context:
These sentences use phrases which mean, respectively, "the the restaurant restaurant", "the the tar tar", "with in juice sauce" and so on. However, many times these redundancies are necessary—especially when the foreign words make up a proper noun as opposed to a common one. For example, "We went to Il Ristorante" is acceptable provided the audience can infer that it is a restaurant. (If they understand Italian and English it might, if spoken, be misinterpreted as a generic reference and not aproper noun, leading the hearer to ask "Which ristorante do you mean?"—such confusions are common in richly bilingual areas likeMontrealor theAmerican Southwestwhenmixing phrases from two languages.) But avoiding the redundancy of the Spanish phrase in the second example would only leave an awkward alternative: "La Brea pits are fascinating".
Most people find it best not to drop articles when using proper nouns made from foreign languages:
However, there are some exceptions to this, for example:
This is also similar to the treatment of definite and indefinite articles in titles of books, films, etc. where the article can—some would saymust—be present where it would otherwise be "forbidden":
Some cross-linguistic redundancies, especially in placenames, occur because a word in one language became the title of a place in another (e.g., theSahara Desert—"Sahara" is an English approximation of the word for "deserts" in Arabic). "TheLos Angeles Angels" professional baseball team is literally "the The Angels Angels". A supposed extreme example isTorpenhow HillinCumbria, where some of the elements in the name likely mean "hill".[citation needed]See theList of tautological place namesfor many more examples.
The wordtsetsemeans "fly" in theTswana language, aBantu languagespoken inBotswanaandSouth Africa. This word is the root of the English name for abiting flyfound inAfrica, thetsetse fly.
Acronyms and initialisms can also form the basis for redundancies; this is known humorously asRAS syndrome(for Redundant Acronym Syndrome syndrome). In all the examples that follow, the word after the acronym repeats a word represented in the acronym. The full redundant phrase is stated in the parentheses that follow each example:
(SeeRAS syndromefor many more examples.) The expansion of an acronym like PIN or HIV may be well known to English speakers, but the acronyms themselves have come to be treated as words, so little thought is given to what their expansion is (and "PIN" is also pronounced the same as the word "pin"; disambiguation is probably the source of "PIN number"; "SIN number" for "Social Insurance Number number" [sic] is a similar common phrase in Canada.) But redundant acronyms are more common with technical (e.g., computer) terms where well-informed speakers recognize the redundancy and consider it silly or ignorant, but mainstream users might not, since they may not be aware or certain of the full expansion of an acronym like "RAM".
Carefully constructed expressions, especially in poetry and political language, but also some general usages in everyday speech, may appear to be redundant but are not. This is most common with cognate objects (a verb's object that is cognate with the verb):
Or, a classic example from Latin:
The words need not be etymologically related, but simply conceptually, to be considered an example of cognate object:
Such constructions are not actually redundant (unlike "She slept a sleep" or "We wept tears") because the object's modifiers provide additional information. A rarer, more constructed form ispolyptoton, the stylistic repetition of the same word or words derived from the same root:
As with cognate objects, these constructions are not redundant because the repeated words or derivatives cannot be removed without removing meaning or even destroying the sentence, though in most cases they could be replaced with non-related synonyms at the cost of style (e.g., compare "The only thing we have to fear is terror".)
|
https://en.wikipedia.org/wiki/Pleonasm#Bilingual_tautological_expressions
|
Sparse principal component analysis(SPCA or sparse PCA) is a technique used in statistical analysis and, in particular, in the analysis ofmultivariatedata sets. It extends the classic method ofprincipal component analysis(PCA) for the reduction of dimensionality of data by introducing sparsity structures to the input variables.
A particular disadvantage of ordinary PCA is that the principal components are usually linear combinations of all input variables. SPCA overcomes this disadvantage by finding components that are linear combinations of just a few input variables (SPCs). This means that some of the coefficients of the linear combinations defining the SPCs, calledloadings,[note 1]are equal to zero. The number of nonzero loadings is called thecardinalityof the SPC.
Consider a datamatrix,X{\displaystyle X}, where each of thep{\displaystyle p}columns represents an input variable, and each of then{\displaystyle n}rows represents an independent sample from the data population. One assumes each column ofX{\displaystyle X}has mean zero; otherwise one can subtract the column-wise mean from each element ofX{\displaystyle X}.
LetΣ=1n−1X⊤X{\displaystyle \Sigma ={\frac {1}{n-1}}X^{\top }X}be the empiricalcovariance matrixofX{\displaystyle X}, which has dimensionp×p{\displaystyle p\times p}.
Given an integerk{\displaystyle k}with1≤k≤p{\displaystyle 1\leq k\leq p}, the sparse PCA problem can be formulated as maximizing the variance along a direction represented by vectorv∈Rp{\displaystyle v\in \mathbb {R} ^{p}}while constraining its cardinality:
\max_{v}\; v^{\top}\Sigma v \quad \text{subject to} \quad \Vert v\Vert_{2}=1,\ \Vert v\Vert_{0}\leq k. \qquad (Eq. 1)
The first constraint specifies thatvis a unit vector. In the second constraint,‖v‖0{\displaystyle \left\Vert v\right\Vert _{0}}represents theℓ0{\displaystyle \ell _{0}}pseudo-normofv, which is defined as the number of its non-zero components. So the second constraint specifies that the number of non-zero components invis less than or equal tok, which is typically an integer that is much smaller than dimensionp. The optimal value ofEq. 1is known as thek-sparse largesteigenvalue.
If one takesk=p, the problem reduces to the ordinaryPCA, and the optimal value becomes the largest eigenvalue of covariance matrixΣ.
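For small p, the k-sparse largest eigenvalue of Eq. 1 can be computed by brute force over all supports of size k, which also makes the reduction to ordinary PCA at k = p easy to check. The following is a minimal sketch of ours, not a method from the source:

from itertools import combinations
import numpy as np

def k_sparse_largest_eigenvalue(Sigma, k):
    # Scan every support of size k and take the largest eigenvalue of the principal submatrix.
    p = Sigma.shape[0]
    best = -np.inf
    for support in combinations(range(p), k):
        sub = Sigma[np.ix_(support, support)]
        best = max(best, np.linalg.eigvalsh(sub)[-1])
    return best

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 6))
Sigma = np.cov(X, rowvar=False)            # empirical covariance, matching the definition above
print(k_sparse_largest_eigenvalue(Sigma, 2))
# With k = p the problem reduces to ordinary PCA:
assert np.isclose(k_sparse_largest_eigenvalue(Sigma, 6), np.linalg.eigvalsh(Sigma)[-1])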
After finding the optimal solutionv, one deflatesΣto obtain a new matrix
\Sigma_{1} = \Sigma - (v^{\top}\Sigma v)\,vv^{\top},
and iterate this process to obtain further principal components. However, unlike PCA, sparse PCA cannot guarantee that different principal components areorthogonal. In order to achieve orthogonality, additional constraints must be enforced.
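A continuation of the brute-force sketch above (again illustrative, not the article's algorithm): extract a second sparse direction after one deflation step and observe that the two directions need not be orthogonal.

from itertools import combinations
import numpy as np

def best_k_sparse_direction(Sigma, k):
    # Exhaustive search over supports of size k; returns the maximizing unit vector of Eq. 1.
    p = Sigma.shape[0]
    best_val, best_vec = -np.inf, None
    for support in combinations(range(p), k):
        vals, vecs = np.linalg.eigh(Sigma[np.ix_(support, support)])
        if vals[-1] > best_val:
            best_val = vals[-1]
            best_vec = np.zeros(p)
            best_vec[list(support)] = vecs[:, -1]
    return best_vec

rng = np.random.default_rng(1)
Sigma = np.cov(rng.standard_normal((200, 5)), rowvar=False)
v1 = best_k_sparse_direction(Sigma, 2)
Sigma1 = Sigma - (v1 @ Sigma @ v1) * np.outer(v1, v1)    # deflation step from the text
v2 = best_k_sparse_direction(Sigma1, 2)
print("inner product of the two sparse components:", v1 @ v2)   # need not be zero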
The following equivalent definition is in matrix form.
LetV{\displaystyle V}be ap×psymmetric matrix; one can then rewrite the sparse PCA problem as
\max_{V}\;\operatorname{Tr}(\Sigma V) \quad \text{subject to} \quad \operatorname{Tr}(V)=1,\ \Vert V\Vert_{0}\leq k^{2},
\operatorname{rank}(V)=1,\ V\succeq 0. \qquad (Eq. 2)
Tris thematrix trace, and‖V‖0{\displaystyle \Vert V\Vert _{0}}denotes the number of non-zero elements in the matrixV.
The last line specifies thatVhasmatrix rankone and ispositive semidefinite.
The last line means that one hasV=vvT{\displaystyle V=vv^{T}}, soEq. 2is equivalent toEq. 1.
Moreover, the rank constraint in this formulation is actually redundant, and therefore sparse PCA can be cast as the following mixed-integer semidefinite program[1]
Because of the cardinality constraint, the maximization problem is hard to solve exactly, especially when dimensionpis high. In fact, the sparse PCA problem inEq. 1isNP-hardin the strong sense.[2]
As with most sparse problems, variable selection in SPCA is a computationally intractable non-convex NP-hard problem;[3]therefore, greedy sub-optimal algorithms are often employed to find solutions.
Note also that SPCA introduces hyperparameters that quantify how strongly large parameter values are penalized.[4]These might needtuningto achieve satisfactory performance, thereby adding to the total computational cost.
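As a concrete illustration of such a hyperparameter, the sketch below uses scikit-learn's SparsePCA, which solves an l1-penalized (lasso-style) formulation rather than the cardinality-constrained problem of Eq. 1; its alpha parameter plays exactly this penalization role and typically needs tuning.

import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
X -= X.mean(axis=0)                         # column-wise centering, as assumed above

spca = SparsePCA(n_components=3, alpha=1.0, random_state=0)   # alpha controls the sparsity penalty
scores = spca.fit_transform(X)              # n x 3 component scores
print(spca.components_)                     # sparse loadings; many entries are exactly zero
print((spca.components_ != 0).sum(axis=1))  # cardinality of each component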
Several alternative approaches (toEq. 1) have been proposed, including
The methodological and theoretical developments of sparse PCA, as well as its applications in scientific studies, have recently been reviewed in a survey paper.[13]
It has been proposed that sparse PCA can be approximated bysemidefinite programming(SDP).[7]If one drops the rank constraint and relaxes the cardinality constraint by a 1-normconvexconstraint, one gets a semidefinite programming relaxation, which can be solved efficiently in polynomial time:
\max_{V}\;\operatorname{Tr}(\Sigma V) \quad \text{subject to} \quad \operatorname{Tr}(V)=1,\ \mathbf{1}^{\top}|V|\mathbf{1}\leq k,\ V\succeq 0. \qquad (Eq. 3)
In the second constraint,1{\displaystyle \mathbf {1} }is ap×1vector of ones, and|V|is the matrix whose elements are the absolute values of the elements ofV.
The optimal solutionV{\displaystyle V}to the relaxed problemEq. 3is not guaranteed to have rank one. In that case,V{\displaystyle V}can be truncated to retain only the dominant eigenvector.
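A minimal sketch of this relaxation and truncation, assuming the CVXPY library is available (the data and parameters are ours):

import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
Sigma = np.cov(rng.standard_normal((200, 8)), rowvar=False)
p, k = Sigma.shape[0], 3

V = cp.Variable((p, p), PSD=True)                    # V is symmetric positive semidefinite
constraints = [cp.trace(V) == 1,                     # unit trace
               cp.sum(cp.abs(V)) <= k]               # 1^T |V| 1 <= k, the l1 relaxation of cardinality
problem = cp.Problem(cp.Maximize(cp.trace(Sigma @ V)), constraints)
problem.solve()

vals, vecs = np.linalg.eigh(V.value)
v_hat = vecs[:, -1]                                  # truncate to the dominant eigenvector
print("relaxation value:", problem.value)
print("approximate sparse direction:", np.round(v_hat, 3))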
While the semidefinite program does not scale beyond roughly 300 covariates, it has been shown that a second-order cone relaxation of the semidefinite relaxation is almost as tight and successfully solves problems with thousands of covariates.[14]
Suppose ordinary PCA is applied to a dataset where each input variable represents a different asset; it may generate principal components that are weighted combinations of all the assets. In contrast, sparse PCA would produce principal components that are weighted combinations of only a few input assets, so one can easily interpret their meaning. Furthermore, if one uses a trading strategy based on these principal components, fewer assets imply lower transaction costs.
Consider a dataset where each input variable corresponds to a specific gene. Sparse PCA can produce a principal component that involves only a few genes, so researchers can focus on these specific genes for further analysis.
Contemporary datasets often have the number of input variables (p{\displaystyle p}) comparable with or even much larger than the number of samples (n{\displaystyle n}). It has been shown that ifp/n{\displaystyle p/n}does not converge to zero, the classical PCA is notconsistent. In other words, if we letk=p{\displaystyle k=p}inEq. 1, then
the optimal value does not converge to the largest eigenvalue of data population when the sample sizen→∞{\displaystyle n\rightarrow \infty }, and the optimal solution does not converge to the direction of maximum variance.
But sparse PCA can retain consistency even ifp≫n.{\displaystyle p\gg n.}
Thek-sparse largest eigenvalue (the optimal value ofEq. 1) can be used to discriminate an isometric model, where every direction has the same variance, from a spiked covariance model in a high-dimensional setting.[15]Consider a hypothesis test where the null hypothesis specifies that dataX{\displaystyle X}are generated from a multivariate normal distribution with mean 0 and covariance equal to an identity matrix, and the alternative hypothesis specifies that dataX{\displaystyle X}is generated from a spiked model with signal strengthθ{\displaystyle \theta }:
H_{0}: X\sim N(0, I_{p}) \qquad \text{versus} \qquad H_{1}: X\sim N(0, I_{p}+\theta vv^{\top}),
wherev∈Rp{\displaystyle v\in \mathbb {R} ^{p}}has onlyknon-zero coordinates. The largestk-sparse eigenvalue can discriminate the two hypotheses if and only ifθ>Θ(klog(p)/n){\displaystyle \theta >\Theta ({\sqrt {k\log(p)/n}})}.
Since computing thek-sparse eigenvalue is NP-hard, one can approximate it by the optimal value of the semidefinite programming relaxation (Eq. 3). In that case, we can discriminate the two hypotheses ifθ>Θ(k2log(p)/n){\displaystyle \theta >\Theta ({\sqrt {k^{2}\log(p)/n}})}. The additionalk{\displaystyle {\sqrt {k}}}term cannot be improved by any other polynomial-time algorithm if theplanted clique conjectureholds.
|
https://en.wikipedia.org/wiki/Sparse_PCA
|
Elementary mathematics, also known asprimaryorsecondary schoolmathematics, is the study of mathematics topics that are commonly taught at the primary or secondary school levels around the world. It includes a wide range of mathematical concepts and skills, includingnumber sense,algebra,geometry,measurement, anddata analysis. These concepts and skills form the foundation for more advanced mathematical study and are essential for success in many fields and everyday life. The study of elementary mathematics is a crucial part of a student's education and lays the foundation for future academic and career success.
Number sense is an understanding of numbers and operations. In the 'Number Sense and Numeration' strand students develop an understanding of numbers by being taught various ways of representing numbers, as well as the relationships among numbers.[2]
Properties of thenatural numberssuch asdivisibilityand the distribution ofprime numbers, are studied in basicnumber theory, another part of elementary mathematics.
Elementary Focus:
'Measurement skills and concepts' or 'Spatial Sense' are directly related to the world in which students live. Many of the concepts that students are taught in this strand are also used in other subjects such as science, social studies, and physical education.[3]In the measurement strand, students learn about the measurable attributes of objects, in addition to the basic metric system.
Elementary Focus:
The measurement strand consists of multiple forms of measurement, as Marian Small states: "Measurement is the process of assigning a qualitative or quantitative description of size to an object based on a particular attribute."[4]
A formula is an entity constructed using the symbols and formation rules of a givenlogical language.[5]For example, determining thevolumeof asphererequires a significant amount ofintegral calculusor its geometrical analogue, themethod of exhaustion;[6]but, having done this once in terms of someparameter(theradiusfor example), mathematicians have produced a formula to describe the volume.
An equation is aformulaof the formA=B, whereAandBareexpressionsthat may contain one or severalvariablescalledunknowns, and "=" denotes theequalitybinary relation. Although written in the form ofproposition, an equation is not astatementthat is either true or false, but a problem consisting of finding the values, calledsolutions, that, when substituted for the unknowns, yield equal values of the expressionsAandB. For example, 2 is the uniquesolutionof theequationx+ 2 = 4, in which theunknownisx.[7]
Data is asetofvaluesofqualitativeorquantitativevariables; restated, pieces of data are individual pieces ofinformation. Data incomputing(ordata processing) is represented in astructurethat is oftentabular(represented byrowsandcolumns), atree(asetofnodeswithparent-childrenrelationship), or agraph(a set ofconnectednodes). Data is typically the result ofmeasurementsand can bevisualizedusinggraphsorimages.
Data as anabstractconceptcan be viewed as the lowest level ofabstraction, from whichinformationand thenknowledgeare derived.
Two-dimensional geometry is a branch ofmathematicsconcerned with questions of shape, size, and relative position of two-dimensional figures. Basic topics in elementary mathematics include polygons, circles, perimeter and areas.
Apolygonis a shape that is bounded by a finite chain of straightline segmentsclosing in a loop to form aclosed chainorcircuit. These segments are called itsedgesorsides, and the points where two edges meet are the polygon'svertices(singular: vertex) orcorners. The interior of the polygon is sometimes called itsbody. Ann-gonis a polygon withnsides. A polygon is a 2-dimensional example of the more generalpolytopein any number of dimensions.
Acircleis a simpleshapeoftwo-dimensional geometrythat is the set of allpointsin aplanethat are at a given distance from a given point, thecenter.The distance between any of the points and the center is called theradius. It can also be defined as the locus of a point equidistant from a fixed point.
Aperimeteris a path that surrounds atwo-dimensionalshape. The term may be used either for the path or its length - it can be thought of as the length of the outline of a shape. The perimeter of acircleorellipseis called itscircumference.
Areais thequantitythat expresses the extent of atwo-dimensionalfigure orshape. There are several well-knownformulasfor the areas of simple shapes such astriangles,rectangles, andcircles.
Two quantities are proportional if a change in one is always accompanied by a change in the other, and if the changes are always related by use of a constant multiplier. The constant is called thecoefficientof proportionality orproportionality constant.
Analytic geometryis the study ofgeometryusing acoordinate system. This contrasts withsynthetic geometry.
Usually theCartesian coordinate systemis applied to manipulateequationsforplanes,straight lines, andsquares, often in two and sometimes in three dimensions. Geometrically, one studies theEuclidean plane(2 dimensions) andEuclidean space(3 dimensions). As taught in school books, analytic geometry can be explained more simply: it is concerned with defining and representing geometrical shapes in a numerical way and extracting numerical information from shapes' numerical definitions and representations.
Transformations are ways of shifting and scaling functions using different algebraic formulas.
Anegative numberis areal numberthat isless thanzero. Such numbers are often used to represent the amount of a loss or absence. For example, adebtthat is owed may be thought of as a negative asset, or a decrease in some quantity may be thought of as a negative increase. Negative numbers are used to describe values on a scale that goes below zero, such as the Celsius andFahrenheitscales for temperature.
Exponentiation is amathematicaloperation, written asbn, involving two numbers, thebaseband theexponent(orpower)n. Whennis anatural number(i.e., a positiveinteger), exponentiation corresponds to repeatedmultiplicationof the base: that is,bnis theproductof multiplyingnbases: b^n = b × b × ⋯ × b (n factors).
Roots are the opposite of exponents. Thenth rootof anumberx(writtenxn{\displaystyle {\sqrt[{n}]{x}}}) is a numberrwhich, when raised to the powern, yieldsx. That is, r^n = x,
wherenis thedegreeof the root. A root of degree 2 is called asquare rootand a root of degree 3, acube root. Roots of higher degree are referred to by using ordinal numbers, as infourth root,twentieth root, etc.
For example:
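The square root of 4 is 2, since 2^2 = 4; the cube root of 27 is 3, since 3^3 = 27; and the fourth root of 16 is 2, since 2^4 = 16.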
Compass-and-straightedge, also known as ruler-and-compass construction, is the construction of lengths,angles, and other geometric figures using only anidealizedrulerandcompass.
The idealized ruler, known as astraightedge, is assumed to be infinite in length, and has no markings on it and only one edge. The compass is assumed to collapse when lifted from the page, so may not be directly used to transfer distances. (This is an unimportant restriction since, using a multi-step procedure, a distance can be transferred even with a collapsing compass, seecompass equivalence theorem.) More formally, the only permissible constructions are those granted by thefirst three postulatesofEuclid.
Two figures or objects are congruent if they have the sameshapeand size, or if one has the same shape and size as the mirror image of the other.[8]More formally, two sets ofpointsare calledcongruentif, and only if, one can be transformed into the other by anisometry, i.e., a combination ofrigid motions, namely atranslation, arotation, and areflection. This means that either object can be repositioned and reflected (but not resized) so as to coincide precisely with the other object. So two distinct plane figures on a piece of paper are congruent if we can cut them out and then match them up completely. Turning the paper over is permitted.
Two geometrical objects are calledsimilarif they both have the sameshape, or one has the same shape as the mirror image of the other. More precisely, one can be obtained from the other by uniformlyscaling(enlarging or shrinking), possibly with additionaltranslation,rotationandreflection. This means that either object can be rescaled, repositioned, and reflected, so as to coincide precisely with the other object. If two objects are similar, each iscongruentto the result of a uniform scaling of the other.
Solid geometrywas the traditional name for thegeometryof three-dimensionalEuclidean space.Stereometrydeals with themeasurementsofvolumesof varioussolid figures(three-dimensionalfigures) includingpyramids,cylinders,cones,truncated cones,spheres, andprisms.
Arational numberis anynumberthat can be expressed as thequotientor fractionp/qof twointegers, with thedenominatorqnot equal to zero.[9]Sinceqmay be equal to 1, every integer is a rational number. Thesetof all rational numbers is usually denoted by a boldfaceQ(orblackboard boldQ{\displaystyle \mathbb {Q} }).
Apatternis a discernible regularity in the world or in a manmade design. As such, the elements of a pattern repeat in a predictable manner. Ageometric patternis a kind of pattern formed of geometric shapes and typically repeating like awallpaper.
Arelationon asetAis a collection ofordered pairsof elements ofA. In other words, it is asubsetof theCartesian productA2=A×A. Common relations include divisibility between two numbers and inequalities.
Afunction[10]is arelationbetween asetof inputs and a set of permissible outputs with the property that each input is related to exactly one output. An example is the function that relates each real numberxto its squarex2. The output of a functionfcorresponding to an inputxis denoted byf(x) (read "fofx"). In this example, if the input is −3, then the output is 9, and we may writef(−3) = 9. The input variable(s) are sometimes referred to as the argument(s) of the function.
Theslope of a lineis a number that describes both thedirectionand thesteepnessof the line.[11]Slope is often denoted by the letterm.[12]
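For reference, the usual formula (a standard definition, not specific to this article): for a non-vertical line through two points (x1, y1) and (x2, y2),

\[
m \;=\; \frac{\Delta y}{\Delta x} \;=\; \frac{y_{2}-y_{1}}{x_{2}-x_{1}}, \qquad x_{1}\neq x_{2}.
\]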
Trigonometryis a branch ofmathematicsthat studies relationships involving lengths andanglesoftriangles. The field emerged during the 3rd century BC from applications ofgeometryto astronomical studies.[13]
In the United States, there has been considerable concern about the low level of elementary mathematics skills on the part of many students, as compared to students in other developed countries.[14]TheNo Child Left Behindprogram was one attempt to address this deficiency, requiring that all American students be tested in elementary mathematics.[15]
|
https://en.wikipedia.org/wiki/Elementary_mathematics
|
Theabsolute infinite(symbol:Ω), in context often called "absolute", is an extension of the idea ofinfinityproposed bymathematicianGeorg Cantor. Cantor linked the absolute infinite withGod,[1][2]: 175[3]: 556and believed that it had variousmathematicalproperties, including thereflection principle: every property of the absolute infinite is also held by some smaller object.[4][clarification needed]
Cantor said:
The actual infinite was distinguished by three relations: first, as it is realized in the supremeperfection, in the completely independent, extra worldly existence, in Deo, where I call it absolute infinite or simply absolute; second to the extent that it is represented in the dependent, creatural world; third as it can be conceived in abstracto in thought as a mathematical magnitude, number or order type. In the latter two relations, where it obviously reveals itself as limited and capable for further proliferation and hence familiar to the finite, I call itTransfinitumand strongly contrast it with the absolute.[5]
While using theLatinexpressionin Deo(in God), Cantor identifiesabsoluteinfinity withGod(GA 175–176, 376, 378, 386, 399). According to Cantor, Absolute Infinity is beyondmathematical comprehensionand shall be interpreted in terms ofnegative theology.[6]
Cantor also mentioned the idea in his letters toRichard Dedekind(text in square brackets not present in original):[8]
A multiplicity [he appears to mean what we now call aset] is calledwell-orderedif it fulfills the condition that every sub-multiplicity has a firstelement; such a multiplicity I call for short a "sequence".
...
Now I envisage the system of all [ordinal] numbers and denote itΩ.
...
The systemΩin its natural ordering according to magnitude is a "sequence".
Now let us adjoin 0 as an additional element to this sequence, and place it, obviously, in the first position; then we obtain a sequenceΩ′:
0, 1, 2, 3, ... ω0, ω0+1, ..., γ, ...
of which one can readily convince oneself that every number γ occurring in it is the type [i.e., order-type] of the sequence of all its preceding elements (including 0). (The sequenceΩhas this property first for ω0+1. [ω0+1 should be ω0.])
NowΩ′(and therefore alsoΩ) cannot be a consistent multiplicity. For ifΩ′were consistent, then as a well-ordered set, a numberδwould correspond to it which would be greater than all numbers of the systemΩ; the numberδ, however, also belongs to the systemΩ, because it comprises all numbers. Thusδwould be greater thanδ, which is a contradiction. Therefore:
The system Ω of all [ordinal] numbers is an inconsistent, absolutely infinite multiplicity.
The idea that the collection of all ordinal numbers cannot logically exist seemsparadoxicalto many. This is related to theBurali-Forti paradox, which implies that there can be no greatestordinal number. All of these problems can be traced back to the idea that, for every property that can be logically defined, there exists a set of all objects that have that property. However, as in Cantor's argument (above), this idea leads to difficulties.
More generally, as noted byA. W. Moore, there can be no end to the process ofsetformation, and thus no such thing as thetotality of all sets, or theset hierarchy. Any such totality would itself have to be a set, thus lying somewhere within thehierarchyand thus failing to contain every set.
A standard solution to this problem is found inZermelo set theory, which does not allow the unrestricted formation of sets from arbitrary properties. Rather, we may form the set of all objects that have a given propertyand lie in some given set(Zermelo'sAxiom of Separation). This allows for the formation of sets based on properties, in a limited sense, while (hopefully) preserving the consistency of the theory.
While this solves the logical problem, one could argue that the philosophical problem remains. It seems natural that a set of individuals ought to exist, so long as the individuals exist. Indeed,naive set theorymight be said to be based on this notion. Although Zermelo's fix allows aclassto describe arbitrary (possibly "large") entities, these predicates of themetalanguagemay have no formal existence (i.e., as a set) within the theory. For example, the class of all sets would be aproper class. This is philosophically unsatisfying to some and has motivated additional work inset theoryand other methods of formalizing the foundations of mathematics such asNew FoundationsbyWillard Van Orman Quine.
|
https://en.wikipedia.org/wiki/Absolute_infinite
|
Public Key Cryptography Standards(PKCS) are a group ofpublic-key cryptographystandards devised and published byRSA SecurityLLC, starting in the early 1990s. The company published the standards to promote the use of the cryptography techniques for which they hadpatents, such as theRSA algorithm, theSchnorr signaturealgorithm and several others. Though notindustry standards(because the company retained control over them), some of the standards have begun to move into the "standards track" processes of relevantstandards organizationsin recent years[when?], such as theIETFand thePKIXworking group.
PKCS #12 (Personal Information Exchange Syntax): this container format can contain multiple embedded objects, such as multiple certificates. Usually protected/encrypted with a password. Usable as a format for theJava KeyStoreand to establish client authentication certificates in Mozilla Firefox. Usable byApache Tomcat.
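As a brief illustration of the Java KeyStore usage mentioned above, the sketch below loads a PKCS #12 container and lists its entries. The file name client.p12 and the password are placeholders, and the standard JDK KeyStore API is assumed.

```java
import java.io.FileInputStream;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.util.Collections;

public class Pkcs12Example {
    public static void main(String[] args) throws Exception {
        char[] password = "changeit".toCharArray();      // placeholder password
        KeyStore store = KeyStore.getInstance("PKCS12"); // the PKCS #12 container format
        try (FileInputStream in = new FileInputStream("client.p12")) {
            store.load(in, password);                    // decrypts the container
        }
        // Enumerate the embedded objects (certificate and/or key entries).
        for (String alias : Collections.list(store.aliases())) {
            Certificate cert = store.getCertificate(alias);
            System.out.println(alias + " -> " + (cert != null ? cert.getType() : "key entry"));
        }
    }
}
```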
|
https://en.wikipedia.org/wiki/PKCS#1
|
Inmathematics, theVolterra integral equationsare a special type ofintegral equations.[1]They are divided into two groups referred to as the first and the second kind.
A linear Volterra equation of the first kind isf(t)=∫atK(t,s)x(s)ds{\displaystyle f(t)=\int _{a}^{t}K(t,s)\,x(s)\,ds}
wherefis a given function andxis an unknown function to be solved for. A linear Volterra equation of the second kind isx(t)=f(t)+∫atK(t,s)x(s)ds{\displaystyle x(t)=f(t)+\int _{a}^{t}K(t,s)\,x(s)\,ds}
Inoperator theory, and inFredholm theory, the corresponding operators are calledVolterra operators. A useful method to solve such equations, theAdomian decomposition method, is due toGeorge Adomian.
A linear Volterra integral equation is aconvolutionequation if the kernel depends only on the difference of its arguments,K(t,s)=K(t−s){\displaystyle K(t,s)=K(t-s)}, so that the equation takes the formx(t)=f(t)+∫atK(t−s)x(s)ds{\displaystyle x(t)=f(t)+\int _{a}^{t}K(t-s)\,x(s)\,ds}
The functionK{\displaystyle K}in the integral is called thekernel. Such equations can be analyzed and solved by means ofLaplace transformtechniques.
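As a short worked illustration of the Laplace-transform approach (standard reasoning, not quoted from the source), consider a convolution equation of the second kind with a = 0:

\[
x(t) = f(t) + \int_{0}^{t} K(t-s)\,x(s)\,ds .
\]

Taking Laplace transforms and applying the convolution theorem, with \(X=\mathcal{L}\{x\}\), \(F=\mathcal{L}\{f\}\) and \(\widehat{K}=\mathcal{L}\{K\}\),

\[
X(p) = F(p) + \widehat{K}(p)\,X(p)
\quad\Longrightarrow\quad
X(p) = \frac{F(p)}{1-\widehat{K}(p)},
\]

and \(x(t)\) is recovered by inverting the transform (wherever \(\widehat{K}(p)\neq 1\)).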
For a weakly singular kernel of the formK(t,s)=(t2−s2)−α{\displaystyle K(t,s)=(t^{2}-s^{2})^{-\alpha }}with0<α<1{\displaystyle 0<\alpha <1}, a Volterra integral equation of the first kind can conveniently be transformed into a classical Abel integral equation.
The Volterra integral equations were introduced byVito Volterraand then studied byTraian Lalescuin his 1908 thesis,Sur les équations de Volterra, written under the direction ofÉmile Picard. In 1911, Lalescu wrote the first book ever on integral equations.
Volterra integral equations find application indemographyasLotka's integral equation,[2]the study ofviscoelasticmaterials,
inactuarial sciencethrough therenewal equation,[3]and influid mechanicsto describe the flow behavior near finite-sized boundaries.[4][5]
A linear Volterra equation of the first kind can always be reduced to a linear Volterra equation of the second kind, assuming thatK(t,t)≠0{\displaystyle K(t,t)\neq 0}. Taking the derivative of the first kind Volterra equation gives us:dfdt=∫at∂K∂tx(s)ds+K(t,t)x(t){\displaystyle {df \over {dt}}=\int _{a}^{t}{\partial K \over {\partial t}}x(s)ds+K(t,t)x(t)}Dividing through byK(t,t){\displaystyle K(t,t)}yields:x(t)=1K(t,t)dfdt−∫at1K(t,t)∂K∂tx(s)ds{\displaystyle x(t)={1 \over {K(t,t)}}{df \over {dt}}-\int _{a}^{t}{1 \over {K(t,t)}}{\partial K \over {\partial t}}x(s)ds}Definingf~(t)=1K(t,t)dfdt{\textstyle {\widetilde {f}}(t)={1 \over {K(t,t)}}{df \over {dt}}}andK~(t,s)=−1K(t,t)∂K∂t{\textstyle {\widetilde {K}}(t,s)=-{1 \over {K(t,t)}}{\partial K \over {\partial t}}}completes the transformation of the first kind equation into a linear Volterra equation of the second kind.
A standard method for computing the numerical solution of a linear Volterra equation of the second kind is thetrapezoidal rule, which for equally-spaced subintervalsΔx{\displaystyle \Delta x}is given by:∫abf(x)dx≈Δx2[f(x0)+2∑i=1n−1f(xi)+f(xn)]{\displaystyle \int _{a}^{b}f(x)dx\approx {\Delta x \over {2}}\left[f(x_{0})+2\sum _{i=1}^{n-1}f(x_{i})+f(x_{n})\right]}Assuming equal spacing for the subintervals, the integral component of the Volterra equation may be approximated by:∫atK(t,s)x(s)ds≈Δs2[K(t,s0)x(s0)+2K(t,s1)x(s1)+⋯+2K(t,sn−1)x(sn−1)+K(t,sn)x(sn)]{\displaystyle \int _{a}^{t}K(t,s)x(s)ds\approx {\Delta s \over {2}}\left[K(t,s_{0})x(s_{0})+2K(t,s_{1})x(s_{1})+\cdots +2K(t,s_{n-1})x(s_{n-1})+K(t,s_{n})x(s_{n})\right]}Definingxi=x(si){\displaystyle x_{i}=x(s_{i})},fi=f(ti){\displaystyle f_{i}=f(t_{i})}, andKij=K(ti,sj){\displaystyle K_{ij}=K(t_{i},s_{j})}, we have the system of linear equations:x0=f0x1=f1+Δs2(K10x0+K11x1)x2=f2+Δs2(K20x0+2K21x1+K22x2)⋮xn=fn+Δs2(Kn0x0+2Kn1x1+⋯+2Kn,n−1xn−1+Knnxn){\displaystyle {\begin{aligned}x_{0}&=f_{0}\\x_{1}&=f_{1}+{\Delta s \over {2}}\left(K_{10}x_{0}+K_{11}x_{1}\right)\\x_{2}&=f_{2}+{\Delta s \over {2}}\left(K_{20}x_{0}+2K_{21}x_{1}+K_{22}x_{2}\right)\\&\vdots \\x_{n}&=f_{n}+{\Delta s \over {2}}\left(K_{n0}x_{0}+2K_{n1}x_{1}+\cdots +2K_{n,n-1}x_{n-1}+K_{nn}x_{n}\right)\end{aligned}}}This is equivalent to thematrixequation:x=f+Mx⟹x=(I−M)−1f{\displaystyle x=f+Mx\implies x=(I-M)^{-1}f}For well-behaved kernels, the trapezoidal rule tends to work well.
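Because the discretized system above is lower triangular, it can be solved by forward substitution. The following is a minimal sketch (illustrative, not from the source; the class and method names are made up) of the trapezoidal scheme:

```java
import java.util.function.BiFunction;
import java.util.function.DoubleUnaryOperator;

public class VolterraTrapezoid {
    /**
     * Solves x(t) = f(t) + \int_a^t K(t,s) x(s) ds on [a, b] with n equal steps,
     * using the trapezoidal discretization and forward substitution.
     */
    static double[] solve(DoubleUnaryOperator f, BiFunction<Double, Double, Double> K,
                          double a, double b, int n) {
        double h = (b - a) / n;
        double[] x = new double[n + 1];
        x[0] = f.applyAsDouble(a);                        // x_0 = f_0
        for (int i = 1; i <= n; i++) {
            double ti = a + i * h;
            double sum = K.apply(ti, a) * x[0];           // weight 1 at the first node
            for (int j = 1; j < i; j++) {
                sum += 2 * K.apply(ti, a + j * h) * x[j]; // weight 2 at interior nodes
            }
            double rhs = f.applyAsDouble(ti) + (h / 2) * sum;
            x[i] = rhs / (1 - (h / 2) * K.apply(ti, ti)); // K_ii x_i term moved to the left
        }
        return x;
    }

    public static void main(String[] args) {
        // x(t) = 1 + \int_0^t x(s) ds has the exact solution x(t) = e^t.
        double[] x = solve(t -> 1.0, (t, s) -> 1.0, 0.0, 1.0, 100);
        System.out.printf("x(1) ≈ %.6f (exact e ≈ %.6f)%n", x[100], Math.E);
    }
}
```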
One area where Volterra integral equations appear is inruin theory, the study of the risk of insolvency in actuarial science. The objective is to quantify the probability of ruinψ(u)=P[τ(u)<∞]{\displaystyle \psi (u)=\mathbb {P} [\tau (u)<\infty ]}, whereu{\displaystyle u}is the initial surplus andτ(u){\displaystyle \tau (u)}is the time of ruin. In theclassical modelof ruin theory, the net cash positionXt{\displaystyle X_{t}}is a function of the initial surplus, premium income earned at ratec{\displaystyle c}, and outgoing claimsξ{\displaystyle \xi }:Xt=u+ct−∑i=1Ntξi,t≥0{\displaystyle X_{t}=u+ct-\sum _{i=1}^{N_{t}}\xi _{i},\quad t\geq 0}whereNt{\displaystyle N_{t}}is aPoisson processfor the number of claims with intensityλ{\displaystyle \lambda }. Under these circumstances, the ruin probability may be represented by a Volterra integral equation of the form[6]:ψ(u)=λc∫u∞S(x)dx+λc∫0uψ(u−x)S(x)dx{\displaystyle \psi (u)={\lambda \over {c}}\int _{u}^{\infty }S(x)dx+{\lambda \over {c}}\int _{0}^{u}\psi (u-x)S(x)dx}whereS(⋅){\displaystyle S(\cdot )}is thesurvival functionof the claims distribution.
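As a standard textbook illustration (a classical result, not derived in the source): when the claim sizes are exponentially distributed with mean \(\mu\) and the premium rate satisfies \(c>\lambda\mu\), the ruin probability has the closed form

\[
\psi(u) = \frac{\lambda\mu}{c}\,
\exp\!\left(-\left(\frac{1}{\mu}-\frac{\lambda}{c}\right)u\right),
\qquad u \ge 0,
\]

which can be verified by substituting it, together with \(S(x)=e^{-x/\mu}\), into the Volterra equation above.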
|
https://en.wikipedia.org/wiki/Volterra_integral_equation
|
Javais ahigh-level,general-purpose,memory-safe,object-orientedprogramming language. It is intended to letprogrammerswrite once, run anywhere(WORA),[18]meaning thatcompiledJava code can run on all platforms that support Java without the need to recompile.[19]Java applications are typically compiled tobytecodethat can run on anyJava virtual machine(JVM) regardless of the underlyingcomputer architecture. Thesyntaxof Java is similar toCandC++, but has fewerlow-levelfacilities than either of them. The Java runtime provides dynamic capabilities (such asreflectionand runtime code modification) that are typically not available in traditional compiled languages.
Java gained popularity shortly after its release, and has been a popular programming language since then.[20]Java was the third most popular programming language in 2022[update]according toGitHub.[21]Although still widely popular, there has been a gradual decline in use of Java in recent years withother languages using JVMgaining popularity.[22]
Java was designed byJames GoslingatSun Microsystems. It was released in May 1995 as a core component of Sun'sJava platform. The original andreference implementationJavacompilers, virtual machines, andclass librarieswere released by Sun underproprietary licenses. As of May 2007, in compliance with the specifications of theJava Community Process, Sun hadrelicensedmost of its Java technologies under theGPL-2.0-onlylicense.Oracle, which bought Sun in 2010, offers its ownHotSpotJava Virtual Machine. However, the officialreference implementationis theOpenJDKJVM, which is open-source software used by most developers and is the default JVM for almost all Linux distributions.
Java 24is the current version as of March 2025[update]. Java 8, 11, 17, and 21 arelong-term supportversions still under maintenance.
James Gosling, Mike Sheridan, andPatrick Naughtoninitiated the Java language project in June 1991.[23]Java was originally designed for interactive television, but it was too advanced for the digital cable television industry at the time.[24]The language was initially calledOakafter anoaktree that stood outside Gosling's office. Later the project went by the nameGreenand was finally renamedJava, fromJava coffee, a type of coffee fromIndonesia.[25]Gosling designed Java with aC/C++-style syntax that system and application programmers would find familiar.[26]
Sun Microsystems released the first public implementation as Java 1.0 in 1996.[27]It promisedwrite once, run anywhere(WORA) functionality, providing no-cost run-times on popularplatforms. Fairly secure and featuring configurable security, it allowed network- and file-access restrictions. Majorweb browserssoon incorporated the ability to runJava appletswithin web pages, and Java quickly became popular. The Java 1.0 compiler was re-writtenin JavabyArthur van Hoffto comply strictly with the Java 1.0 language specification.[28]With the advent of Java 2 (released initially as J2SE 1.2 in December 1998 – 1999), new versions had multiple configurations built for different types of platforms.J2EEincluded technologies and APIs for enterprise applications typically run in server environments, while J2ME featured APIs optimized for mobile applications. The desktop version was renamed J2SE. In 2006, for marketing purposes, Sun renamed new J2 versions asJava EE,Java ME, andJava SE, respectively.
In 1997, Sun Microsystems approached theISO/IEC JTC 1standards body and later theEcma Internationalto formalize Java, but it soon withdrew from the process.[29][30][31]Java remains ade factostandard, controlled through theJava Community Process.[32]At one time, Sun made most of its Java implementations available without charge, despite theirproprietary softwarestatus. Sun generated revenue from Java through the selling of licenses for specialized products such as the Java Enterprise System.
On November 13, 2006, Sun released much of its Java virtual machine (JVM) asfree and open-source software(FOSS), under the terms of theGPL-2.0-onlylicense. On May 8, 2007, Sun finished the process, making all of its JVM's core code available underfree software/open-source distribution terms, aside from a small portion of code to which Sun did not hold the copyright.[33]
Sun's vice-president Rich Green said that Sun's ideal role with regard to Java was as anevangelist.[34]FollowingOracle Corporation's acquisition of Sun Microsystems in 2009–10, Oracle has described itself as the steward of Java technology with a relentless commitment to fostering a community of participation and transparency.[35]This did not prevent Oracle from filing a lawsuit against Google shortly after that for using Java inside theAndroid SDK(see theAndroidsection).
On April 2, 2010, James Gosling resigned fromOracle.[36]
In January 2016, Oracle announced that Java run-time environments based on JDK 9 would discontinue the browser plugin.[37]
Java software runs on most devices from laptops todata centers,game consolesto scientificsupercomputers.[38]
Oracle(and others) highly recommend uninstalling outdated and unsupported versions of Java, due to unresolved security issues in older versions.[39]
There were five primary goals in creating the Java language:[19]
As of November 2024[update], Java 8, 11, 17, and 21 are supported aslong-term support(LTS) versions, with Java 25, scheduled for release in September 2025, as the next LTS version.[40]
Oracle released the last zero-cost public update for thelegacyversionJava 8LTS in January 2019 for commercial use, although it will otherwise still support Java 8 with public updates for personal use indefinitely. Other vendors such asAdoptiumcontinue to offer free builds of OpenJDK's long-term support (LTS) versions. These builds may include additional security patches and bug fixes.[41]
Major release versions of Java, along with their release dates:
Sun has defined and supports four editions of Java targeting different application environments and segmented many of itsAPIsso that they belong to one of the platforms. The platforms are:
Theclassesin the Java APIs are organized into separate groups calledpackages. Each package contains a set of relatedinterfaces, classes, subpackages andexceptions.
Sun also provided an edition calledPersonal Javathat has been superseded by later, standards-based Java ME configuration-profile pairings.
One design goal of Java isportability, which means that programs written for the Java platform must run similarly on any combination of hardware and operating system with adequate run time support. This is achieved by compiling the Java language code to an intermediate representation calledJava bytecode, instead of directly to architecture-specificmachine code. Java bytecode instructions are analogous to machine code, but they are intended to be executed by avirtual machine(VM) written specifically for the host hardware.End-userscommonly use aJava Runtime Environment(JRE) installed on their device for standalone Java applications or a web browser forJava applets.
Standard libraries provide a generic way to access host-specific features such as graphics,threading, andnetworking.
The use of universal bytecode makes porting simple. However, the overhead ofinterpretingbytecode into machine instructions made interpreted programs almost always run more slowly than nativeexecutables.Just-in-time(JIT) compilers that compile byte-codes to machine code during runtime were introduced from an early stage. Java's HotSpot compiler is actually two compilers in one, and together withGraalVM(included in e.g. Java 11, but removed as of Java 16) it allowstiered compilation.[51]Java itself is platform-independent and is adapted to the particular platform it is to run on by aJava virtual machine(JVM), which translates theJava bytecodeinto the platform's machine language.[52]
Programs written in Java have a reputation for being slower and requiring more memory than those written inC++.[53][54]However, Java programs' execution speed improved significantly with the introduction ofjust-in-time compilationin 1997/1998 forJava 1.1,[55]the addition of language features supporting better code analysis (such as inner classes, the StringBuilder class, optional assertions, etc.), and optimizations in the Java virtual machine, such asHotSpotbecoming Sun's default JVM in 2000. With Java 1.5, the performance was improved with the addition of thejava.util.concurrentpackage, includinglock-freeimplementations of theConcurrentMapsand other multi-core collections, and it was improved further with Java 1.6.
Some platforms offer direct hardware support for Java; there are micro controllers that can run Java bytecode in hardware instead of a software Java virtual machine,[56]and someARM-based processors could have hardware support for executing Java bytecode through theirJazelleoption, though support has mostly been dropped in current implementations of ARM.
Java uses anautomatic garbage collectorto manage memory in theobject lifecycle. The programmer determines when objects are created, and the Java runtime is responsible for recovering the memory once objects are no longer in use. Once no references to an object remain, theunreachable memorybecomes eligible to be freed automatically by the garbage collector. Something similar to amemory leakmay still occur if a programmer's code holds a reference to an object that is no longer needed, typically when objects that are no longer needed are stored in containers that are still in use.[57]If methods for a non-existent object are called, anull pointerexception is thrown.[58][59]
One of the ideas behind Java's automatic memory management model is that programmers can be spared the burden of having to perform manual memory management. In some languages, memory for the creation of objects is implicitly allocated on thestackor explicitly allocated and deallocated from theheap. In the latter case, the responsibility of managing memory resides with the programmer. If the program does not deallocate an object, amemory leakoccurs.[57]If the program attempts to access or deallocate memory that has already been deallocated, the result is undefined and difficult to predict, and the program is likely to become unstable or crash. This can be partially remedied by the use ofsmart pointers, but these add overhead and complexity. Garbage collection does not preventlogical memoryleaks, i.e. those where the memory is still referenced but never used.[57]
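A minimal sketch of the container-held-reference pattern described above (all names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class LeakExample {
    // Objects added here stay reachable for as long as the class is loaded,
    // so the garbage collector can never reclaim them, even if the rest of
    // the program no longer needs them: a "logical" memory leak.
    private static final List<byte[]> CACHE = new ArrayList<>();

    static void handleRequest() {
        byte[] buffer = new byte[1024 * 1024];
        CACHE.add(buffer);   // forgetting to remove entries keeps them alive
        // ... use buffer ...
    }                        // buffer would otherwise become unreachable here,
                             // but the CACHE reference pins it in memory
}
```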
Garbage collection may happen at any time. Ideally, it will occur when a program is idle. It is guaranteed to be triggered if there is insufficient free memory on the heap to allocate a new object; this can cause a program to stall momentarily. Explicit memory management is not possible in Java.
Java does not support C/C++ stylepointer arithmetic, where object addresses can be arithmetically manipulated (e.g. by adding or subtracting an offset). This allows the garbage collector to relocate referenced objects and ensures type safety and security.
As in C++ and some other object-oriented languages, variables of Java'sprimitive data typesare either stored directly in fields (for objects) or on thestack(for methods) rather than on the heap, as is commonly true for non-primitive data types (but seeescape analysis). This was a conscious decision by Java's designers for performance reasons.
Java contains multiple types of garbage collectors. Since Java 9, HotSpot uses theGarbage First Garbage Collector(G1GC) as the default.[60]However, there are also several other garbage collectors that can be used to manage the heap, such as the Z Garbage Collector (ZGC) introduced in Java 11, and Shenandoah GC, introduced in Java 12 but unavailable in Oracle-produced OpenJDK builds. Shenandoah is instead available in third-party builds of OpenJDK, such asEclipse Temurin. For most applications in Java, G1GC is sufficient. In prior versions of Java, such as Java 8, theParallel Garbage Collectorwas used as the default garbage collector.
Having solved the memory management problem does not relieve the programmer of the burden of handling properly other kinds of resources, like network or database connections, file handles, etc., especially in the presence of exceptions.
The syntax of Java is largely influenced byC++andC. Unlike C++, which combines the syntax for structured, generic, and object-oriented programming, Java was built almost exclusively as an object-oriented language.[19]All code is written inside classes, and every data item is an object, with the exception of the primitive data types (i.e. integers, floating-point numbers,boolean values, and characters), which are not objects for performance reasons. Java reuses some popular aspects of C++ (such as theprintfmethod).
Unlike C++, Java does not supportoperator overloading[61]ormultiple inheritancefor classes, though multiple inheritance is supported forinterfaces.[62]
Java usescommentssimilar to those of C++. There are three different styles of comments: a single line style marked with two slashes (//), a multiple line style opened with/*and closed with*/, and theJavadoccommenting style opened with/**and closed with*/. The Javadoc style of commenting allows the user to run the Javadoc executable to create documentation for the program and can be read by someintegrated development environments(IDEs) such asEclipseto allow developers to access documentation within the IDE.
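For illustration, the three comment styles side by side (a minimal sketch):

```java
// A single-line comment.

/* A multi-line comment:
   it continues until the closing delimiter. */

/**
 * A Javadoc comment describing the class below; the javadoc tool can
 * extract these comments into HTML documentation.
 */
public class CommentStyles {
}
```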
The following is a simple example of a"Hello, World!" programthat writes a message to thestandard output:
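A minimal version looks like the following (the class name is conventional; a public class must be declared in a source file of the same name):

```java
public class HelloWorldApp {
    public static void main(String[] args) {
        System.out.println("Hello, World!"); // Prints the message to standard output.
    }
}
```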
Java applets were programs embedded in other applications, mainly in web pages displayed in web browsers. The Java applet API was deprecated with the release of Java 9 in 2017.[63][64]
Java servlettechnology provides Web developers with a simple, consistent mechanism for extending the functionality of a Web server and for accessing existing business systems. Servlets areserver-sideJava EE components that generate responses to requests fromclients. Most of the time, this means generatingHTMLpages in response toHTTPrequests, although there are a number of other standard servlet classes available, for example forWebSocketcommunication.
The Java servlet API has to some extent been superseded (but still used under the hood) by two standard Java technologies for web services:
Typical implementations of these APIs on Application Servers or Servlet Containers use a standard servlet for handling all interactions with theHTTPrequests and responses that delegate to the web service methods for the actual business logic.
JavaServer Pages (JSP) areserver-sideJava EE components that generate responses, typicallyHTMLpages, toHTTPrequests fromclients. JSPs embed Java code in an HTML page by using the specialdelimiters<%and%>. A JSP is compiled to a Javaservlet, a Java application in its own right, the first time it is accessed. After that, the generated servlet creates the response.[65]
Swingis a graphical user interfacelibraryfor the Java SE platform. It is possible to specify a different look and feel through thepluggable look and feelsystem of Swing. Clones ofWindows,GTK+, andMotifare supplied by Sun.Applealso provides anAqualook and feel formacOS. Where prior implementations of these looks and feels may have been considered lacking, Swing in Java SE 6 addresses this problem by using more nativeGUI widgetdrawing routines of the underlying platforms.[66]
JavaFXis asoftware platformfor creating and deliveringdesktop applications, as well asrich web applicationsthat can run across a wide variety of devices. JavaFX is intended to replaceSwingas the standardgraphical user interface(GUI) library forJava SE, but since JDK 11 JavaFX has not been in the core JDK and instead in a separate module.[67]JavaFX has support fordesktop computersandweb browsersonMicrosoft Windows,Linux, andmacOS. JavaFX does not have support for native OS look and feels.[68]
In 2004,genericswere added to the Java language, as part of J2SE 5.0. Prior to the introduction of generics, each variable declaration had to be of a specific type. For container classes, for example, this is a problem because there is no easy way to create a container that accepts only specific types of objects. Either the container operates on all subtypes of a class or interface, usuallyObject, or a different container class has to be created for each contained class. Generics allow compile-time type checking without having to create many container classes, each containing almost identical code. In addition to enabling more efficient code, certain runtime exceptions are prevented from occurring, by issuing compile-time errors. If Java prevented all runtime type errors (ClassCastExceptions) from occurring, it would betype safe.
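A minimal sketch contrasting a raw container with a generic one (illustrative, not from the source):

```java
import java.util.ArrayList;
import java.util.List;

public class GenericsExample {
    public static void main(String[] args) {
        // Without generics, a raw List accepts any Object, and type errors
        // surface only at run time as ClassCastExceptions.
        List raw = new ArrayList();
        raw.add("text");
        raw.add(42);                    // compiles, but silently mixes types

        // With generics, the element type is checked at compile time.
        List<String> strings = new ArrayList<>();
        strings.add("text");
        // strings.add(42);             // would be rejected by the compiler
        String first = strings.get(0);  // no cast needed
        System.out.println(first);
    }
}
```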
In 2016, the type system of Java was provenunsoundin that it is possible to use generics to construct classes and methods that allow assignment of an instance of one class to a variable of another unrelated class. Such code is accepted by the compiler, but fails at run time with a class cast exception.[69]
Criticisms directed at Java include the implementation of generics,[70]speed,[53]the handling of unsigned numbers,[71]the implementation of floating-point arithmetic,[72]and a history of security vulnerabilities in the primary Java VM implementationHotSpot.[73]Developers have criticized the complexity and verbosity of the Java Persistence API (JPA), a standard part of Java EE. This has led to increased adoption of higher-level abstractions like Spring Data JPA, which aims to simplify database operations and reduce boilerplate code. The growing popularity of such frameworks suggests limitations in the standard JPA implementation's ease-of-use for modern Java development.[74]
TheJava Class Libraryis thestandard library, developed to support application development in Java. It is controlled byOraclein cooperation with others through theJava Community Processprogram.[75]Companies or individuals participating in this process can influence the design and development of the APIs. This process has been a subject of controversy during the 2010s.[76]The class library contains features such as:
Javadoc is a comprehensive documentation system, created bySun Microsystems. It provides developers with an organized system for documenting their code. Javadoc comments have an extra asterisk at the beginning, i.e. the delimiters are/**and*/, whereas the normal multi-line comments in Java are delimited by/*and*/, and single-line comments start with//.[84]
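A small illustrative example (class and method names are hypothetical) of a Javadoc comment that the javadoc tool can turn into API documentation:

```java
public class Rectangle {
    /**
     * Computes the area of a rectangle.
     *
     * @param width  the rectangle's width
     * @param height the rectangle's height
     * @return the product of width and height
     */
    public static double area(double width, double height) {
        return width * height;
    }
}
```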
Oracle Corporationowns the official implementation of the Java SE platform, due to its acquisition ofSun Microsystemson January 27, 2010. This implementation is based on the original implementation of Java by Sun. The Oracle implementation is available forWindows,macOS,Linux, andSolaris. Because Java lacks any formal standardization recognized byEcma International, ISO/IEC, ANSI, or other third-party standards organizations, the Oracle implementation is thede facto standard.
The Oracle implementation is packaged into two different distributions: The Java Runtime Environment (JRE) which contains the parts of the Java SE platform required to run Java programs and is intended for end users, and theJava Development Kit(JDK), which is intended for software developers and includes development tools such as theJava compiler,Javadoc,Jar, and adebugger. Oracle has also releasedGraalVM, a high performance Java dynamic compiler and interpreter.
OpenJDKis another Java SE implementation that is licensed under the GNU GPL. The implementation started when Sun began releasing the Java source code under the GPL. As of Java SE 7, OpenJDK is the official Java reference implementation.
The goal of Java is to make all implementations of Java compatible. Historically, Sun's trademark license for usage of the Java brand insists that all implementations becompatible. This resulted in a legal dispute withMicrosoftafter Sun claimed that the Microsoft implementation did not supportJava remote method invocation(RMI) orJava Native Interface(JNI) and had added platform-specific features of their own. Sun sued in 1997, and, in 2001, won a settlement of US$20 million, as well as a court order enforcing the terms of the license from Sun.[85]As a result, Microsoft no longer ships Java withWindows.
Platform-independent Java is essential toJava EE, and an even more rigorous validation is required to certify an implementation. This environment enables portable server-side applications.
The Java programming language requires the presence of a software platform in order for compiled programs to be executed.
Oracle supplies theJava platformfor use with Java. TheAndroid SDKis an alternative software platform, used primarily for developingAndroid applicationswith its own GUI system.
The Java language is a key pillar inAndroid, anopen sourcemobile operating system. Although Android, built on theLinux kernel, is written largely in C, theAndroid SDKuses the Java language as the basis for Android applications but does not use any of its standard GUI, SE, ME or other established Java standards.[86]The bytecode language supported by the Android SDK is incompatible with Java bytecode and runs on its own virtual machine, optimized for low-memory devices such assmartphonesandtablet computers. Depending on the Android version, the bytecode is either interpreted by theDalvik virtual machineor compiled into native code by theAndroid Runtime.
Android does not provide the full Java SE standard library, although the Android SDK does include an independent implementation of a large subset of it. It supports Java 6 and some Java 7 features, offering an implementation compatible with the standard library (Apache Harmony).
The use of Java-related technology in Android led to a legal dispute between Oracle and Google. On May 7, 2012, a San Francisco jury found that if APIs could be copyrighted, then Google had infringed Oracle's copyrights by the use of Java in Android devices.[87]District JudgeWilliam Alsupruled on May 31, 2012, that APIs cannot be copyrighted,[88]but this was reversed by the United States Court of Appeals for the Federal Circuit in May 2014.[89]On May 26, 2016, the district court decided in favor of Google, ruling the copyright infringement of the Java API in Android constitutes fair use.[90]In March 2018, this ruling was overturned by the Appeals Court, which sent down the case of determining the damages to federal court in San Francisco.[91]Google filed a petition forwrit of certiorariwith theSupreme Court of the United Statesin January 2019 to challenge the two rulings that were made by the Appeals Court in Oracle's favor.[92]On April 5, 2021, the Court ruled 6–2 in Google's favor, that its use of Java APIs should be consideredfair use. However, the court refused to rule on the copyrightability of APIs, choosing instead to determine their ruling by considering Java's API copyrightable "purely for argument's sake."[93]
|
https://en.wikipedia.org/wiki/Java_(programming_language)
|
SCADA(an acronym forsupervisory control and data acquisition) is acontrol systemarchitecture comprisingcomputers, networkeddata communicationsandgraphical user interfacesforhigh-levelsupervision of machines and processes. It also covers sensors and other devices, such asprogrammable logic controllers(PLCs), which interface with process plant or machinery.
The operator interfaces, which enable monitoring and the issuing of process commands, such as controllersetpointchanges, are handled through the SCADA computer system. The subordinated operations, e.g. the real-time control logic or controller calculations, are performed by networked modules connected to the fieldsensorsandactuators.
The SCADA concept was developed to be a universal means of remote-access to a variety of local control modules, which could be from different manufacturers and allowing access through standard automationprotocols. In practice, large SCADA systems have grown to become similar todistributed control systemsin function, while using multiple means of interfacing with the plant. They can control large-scale processes spanning multiple sites, and work over large distances. It is one of the most commonly-used types ofindustrial control systems.
The key attribute of a SCADA system is its ability to perform a supervisory operation over a variety of other proprietary devices.
Level 1 contains theprogrammable logic controllers(PLCs) orremote terminal units(RTUs).
Level 2 contains the supervisory SCADA software and computing platform, to which readings and equipment status reports from level 1 are communicated as required. Data is then compiled and formatted in such a way that a control room operator using thehuman-machine interface(HMI) can make supervisory decisions to adjust or override normal RTU (PLC) controls. Data may also be fed to ahistorian, often built on a commoditydatabase management system, to allow trending and other analytical auditing.
SCADA systems typically use atag database, which contains data elements calledtagsorpoints, which relate to specific instrumentation or actuators within the process system. Data is accumulated against these unique process control equipment tag references.
A SCADA system usually consists of the following main elements:
An important part of most SCADA implementations isalarm handling. The system monitors whether certain alarm conditions are satisfied, to determine when an alarm event has occurred. Once an alarm event has been detected, one or more actions are taken (such as the activation of one or more alarm indicators, and perhaps the generation of email or text messages so that management or remote SCADA operators are informed). In many cases, a SCADA operator may have to acknowledge the alarm event; this may deactivate some alarm indicators, whereas other indicators remain active until the alarm conditions are cleared.
Alarm conditions can be explicit—for example, an alarm point is a digital status point that has either the value NORMAL or ALARM and that is calculated by a formula based on the values in other analogue and digital points—or implicit: the SCADA system might automatically monitor whether the value in an analogue point lies outside high- and low-limit values associated with that point.
Examples of alarm indicators include a siren, a pop-up box on a screen, or a coloured or flashing area on a screen (that might act in a similar way to the "fuel tank empty" light in a car); in each case, the role of the alarm indicator is to draw the operator's attention to the part of the system 'in alarm' so that appropriate action can be taken.
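As a rough sketch of the implicit alarm condition described above (all names and thresholds are hypothetical; real SCADA packages implement this within their tag databases and alarm subsystems):

```java
public class AnaloguePoint {
    final String tag;        // tag name of the process point
    final double lowLimit;   // low alarm limit
    final double highLimit;  // high alarm limit
    double value;
    boolean inAlarm;

    AnaloguePoint(String tag, double lowLimit, double highLimit) {
        this.tag = tag;
        this.lowLimit = lowLimit;
        this.highLimit = highLimit;
    }

    /** Implicit alarm condition: the value lies outside its high/low limits. */
    void update(double newValue) {
        value = newValue;
        boolean nowInAlarm = value < lowLimit || value > highLimit;
        if (nowInAlarm && !inAlarm) {
            // Alarm event detected: activate indicators, notify operators, log to a historian.
            System.out.println("ALARM " + tag + " = " + value);
        }
        inAlarm = nowInAlarm;
    }
}
```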
"Smart" RTUs, or standard PLCs, are capable of autonomously executing simple logic processes without involving the supervisory computer. They employ standardized control programming languages (such as those underIEC 61131-3, a suite of five programming languages including function block, ladder, structured text, sequence function charts and instruction list), that are frequently used to create programs which run on these RTUs and PLCs. Unlike a procedural language likeCorFORTRAN, IEC 61131-3 has minimal training requirements by virtue of resembling historic physical control arrays. This allows SCADA system engineers to perform both the design and implementation of a program to be executed on an RTU or PLC.
Aprogrammable automation controller(PAC) is a compact controller that combines the features and capabilities of a PC-based control system with that of a typical PLC. PACs are deployed in SCADA systems to provide RTU and PLC functions. In manyelectrical substationSCADA applications, "distributed RTUs" use information processors or station computers to communicate withdigital protective relays, PACs, and other devices for I/O, and communicate with the SCADA master in lieu of a traditional RTU.
Since about 1998, virtually all major PLC manufacturers have offered integrated HMI/SCADA systems, many of them using open and non-proprietary communications protocols. Numerous specialized third-party HMI/SCADA packages, offering built-in compatibility with most major PLCs, have also entered the market, allowing mechanical engineers, electrical engineers and technicians to configure HMIs themselves, without the need for a custom-made program written by a software programmer. The Remote Terminal Unit (RTU) connects to physical equipment. Typically, an RTU converts the electrical signals from the equipment to digital values. Conversely, by converting digital values back into electrical signals and sending them out, the RTU can also control the equipment.
SCADA systems have traditionally used combinations of radio and direct wired connections, althoughSONET/SDHis also frequently used for large systems such as railways and power stations. The remote management or monitoring function of a SCADA system is often referred to astelemetry. Some users want SCADA data to travel over their pre-established corporate networks or to share the network with other applications. The legacy of the early low-bandwidth protocols remains, though.
SCADA protocols are designed to be very compact. Many are designed to send information only when the master station polls the RTU. Typical legacy SCADA protocols includeModbusRTU,RP-570,Profibusand Conitel. These communication protocols, with the exception of Modbus (Modbus has been made open by Schneider Electric), are all SCADA-vendor specific but are widely adopted and used. Standard protocols areIEC 60870-5-101 or 104,IEC 61850andDNP3. These communication protocols are standardized and recognized by all major SCADA vendors. Many of these protocols now contain extensions to operate overTCP/IP. Although the use of conventional networking specifications, such asTCP/IP, blurs the line between traditional and industrial networking, they each fulfill fundamentally differing requirements.[3]Network simulationcan be used in conjunction with SCADA simulators to perform various 'what-if' analyses.
With increasing security demands (such asNorth American Electric Reliability Corporation(NERC) andcritical infrastructure protection(CIP) in the US), there is increasing use of satellite-based communication. This has the key advantages that the infrastructure can be self-contained (not using circuits from the public telephone system), can have built-in encryption, and can be engineered to the availability and reliability required by the SCADA system operator. Earlier experiences using consumer-gradeVSATwere poor. Modern carrier-class systems provide the quality of service required for SCADA.[4]
RTUs and other automatic controller devices were developed before the advent of industry wide standards for interoperability. The result is that developers and their management created a multitude of control protocols. Among the larger vendors, there was also the incentive to create their own protocol to "lock in" their customer base. Alist of automation protocolsis compiled here.
An example of efforts by vendor groups to standardize automation protocols is the OPC-UA (formerly "OLE for process control" nowOpen Platform Communications Unified Architecture).
SCADA systems have evolved through four generations as follows:[5][6][7][8]
Early SCADA system computing was done by largeminicomputers. Common network services did not exist at the time SCADA was developed. Thus SCADA systems were independent systems with no connectivity to other systems. The communication protocols used were strictly proprietary at that time. The first-generation SCADA system redundancy was achieved using a back-up mainframe system connected to all theRemote Terminal Unitsites and was used in the event of failure of the primary mainframe system.[9]Some first generation SCADA systems were developed as "turn key" operations that ran on minicomputers such as thePDP-11series.[10]
SCADA information and command processing were distributed across multiple stations which were connected through a LAN. Information was shared in near real time. Each station was responsible for a particular task, which reduced the cost as compared to First Generation SCADA. The network protocols used were still not standardized. Since these protocols were proprietary, very few people beyond the developers knew enough to determine how secure a SCADA installation was. Security of the SCADA installation was usually overlooked.
Similar to a distributed architecture, any complex SCADA can be reduced to the simplest components and connected through communication protocols. In the case of a networked design, the system may be spread across more than one LAN network called aprocess control network (PCN)and separated geographically. Several distributed architecture SCADAs running in parallel, with a single supervisor and historian, could be considered a network architecture. This allows for a more cost-effective solution in very large scale systems.
The growth of the internet has led SCADA systems to implement web technologies allowing users to view data, exchange information and control processes from anywhere in the world throughWebSocketconnections.[11][12]The early 2000s saw the proliferation of Web SCADA systems.[13][14][15]Web SCADA systems use web browsers such as Google Chrome and Mozilla Firefox as the graphical user interface (GUI) for the operator'sHMI.[16][13]This simplifies the client side installation and enables users to access the system from various platforms with web browsers such as servers, personal computers, laptops, tablets and mobile phones.[13][17]
SCADA systems that tie together decentralized facilities such as power, oil, gas pipelines, water distribution and wastewater collection systems were designed to be open, robust, and easily operated and repaired, but not necessarily secure.[18][19]The move from proprietary technologies to more standardized and open solutions together with the increased number of connections between SCADA systems, office networks and theInternethas made them more vulnerable to types ofnetwork attacksthat are relatively common incomputer security. For example,United States Computer Emergency Readiness Team (US-CERT)released a vulnerability advisory[20]warning that unauthenticated users could download sensitive configuration information includingpassword hashesfrom anInductive AutomationIgnitionsystem utilizing a standardattack typeleveraging access to theTomcatEmbedded Web server. Security researcher Jerry Brown submitted a similar advisory regarding abuffer overflowvulnerability[21]in aWonderwareInBatchClientActiveX control. Both vendors made updates available prior to public vulnerability release. Mitigation recommendations were standardpatchingpractices and requiringVPNaccess for secure connectivity. Consequently, the security of some SCADA-based systems has come into question as they are seen as potentially vulnerable tocyber attacks.[22][23][24]
In particular, security researchers are concerned about:
SCADA systems are used to control and monitor physical processes, examples of which are transmission of electricity, transportation of gas and oil in pipelines, water distribution, traffic lights, and other systems used as the basis of modern society. The security of these SCADA systems is important because compromise or destruction of these systems would impact multiple areas of society far removed from the original compromise. For example, a blackout caused by a compromised electrical SCADA system would cause financial losses to all the customers that received electricity from that source. How security will affect legacy SCADA and new deployments remains to be seen.
There are many threat vectors to a modern SCADA system. One is the threat of unauthorized access to the control software, whether it is human access or changes induced intentionally or accidentally by virus infections and other software threats residing on the control host machine. Another is the threat of packet access to the network segments hosting SCADA devices. In many cases, the control protocol lacks any form ofcryptographic security, allowing an attacker to control a SCADA device by sending commands over a network. In many cases SCADA users have assumed that having a VPN offered sufficient protection, unaware that security can be trivially bypassed with physical access to SCADA-related network jacks and switches. Industrial control vendors suggest approaching SCADA security likeInformation Securitywith adefense in depthstrategy that leverages common IT practices.[25]Apart from that, research has shown that the architecture of SCADA systems has several other vulnerabilities, including direct tampering with RTUs, communication links from RTUs to the control center, and IT software and databases in the control center.[26]The RTUs could, for instance, be targets of deception attacks injecting false data[27]ordenial-of-service attacks.
The reliable function of SCADA systems in our modern infrastructure may be crucial to public health and safety. As such, attacks on these systems may directly or indirectly threaten public health and safety. Such an attack has already occurred, carried out onMaroochy ShireCouncil's sewage control system inQueensland,Australia.[28]Shortly after a contractor installed a SCADA system in January 2000, system components began to function erratically. Pumps did not run when needed and alarms were not reported. More critically, sewage flooded a nearby park and contaminated an open surface-water drainage ditch and flowed 500 meters to a tidal canal. The SCADA system was directing sewage valves to open when the design protocol should have kept them closed. Initially this was believed to be a system bug. Monitoring of the system logs revealed the malfunctions were the result of cyber attacks. Investigators reported 46 separate instances of malicious outside interference before the culprit was identified. The attacks were made by a disgruntled ex-employee of the company that had installed the SCADA system. The ex-employee was hoping to be hired by the utility full-time to maintain the system.
In April 2008, the Commission to Assess the Threat to the United States fromElectromagnetic Pulse(EMP) Attack issued a Critical Infrastructures Report which discussed the extreme vulnerability of SCADA systems to an electromagnetic pulse (EMP) event. After testing and analysis, the Commission concluded: "SCADA systems are vulnerable to EMP insult. The large numbers and widespread reliance on such systems by all of the Nation’s critical infrastructures represent a systemic threat to their continued operation following an EMP event. Additionally, the necessity to reboot, repair, or replace large numbers of geographically widely dispersed systems will considerably impede the Nation’s recovery from such an assault."[29]
Many vendors of SCADA and control products have begun to address the risks posed by unauthorized access by developing lines of specialized industrialfirewallandVPNsolutions for TCP/IP-based SCADA networks as well as external SCADA monitoring and recording equipment.
TheInternational Society of Automation(ISA) started formalizing SCADA security requirements in 2007 with a working group, WG4. WG4 "deals specifically with unique technical requirements, measurements, and other features required to evaluate and assure security resilience and performance of industrial automation and control systems devices".[30]
The increased interest in SCADA vulnerabilities has resulted in vulnerability researchers discovering vulnerabilities in commercial SCADA software and more general offensive SCADA techniques presented to the general security community.[31]In electric and gas utility SCADA systems, the vulnerability of the large installed base of wired and wireless serial communications links is addressed in some cases by applyingbump-in-the-wiredevices that employ authentication andAdvanced Encryption Standardencryption rather than replacing all existing nodes.[32]
In June 2010, anti-virus security companyVirusBlokAdareported the first detection of malware that attacks SCADA systems (Siemens'WinCC/PCS 7 systems) running on Windows operating systems. The malware is calledStuxnetand uses fourzero-day attacksto install arootkitwhich in turn logs into the SCADA's database and steals design and control files.[33][34]The malware is also capable of changing the control system and hiding those changes. The malware was found on 14 systems, the majority of which were located in Iran.[35]
In October 2013National Geographicreleased a docudrama titledAmerican Blackoutwhich dealt with an imagined large-scale cyber attack on SCADA and the United States' electrical grid.[36]
Both large and small systems can be built using the SCADA concept. These systems can range from just tens to thousands ofcontrol loops, depending on the application. Example processes include industrial, infrastructure, and facility-based processes, as described below:
However, SCADA systems may have security vulnerabilities, so the systems should be evaluated to identify risks and solutions implemented to mitigate those risks.[37]
|
https://en.wikipedia.org/wiki/SCADA
|
Subjective video qualityisvideo qualityas experienced by humans. It is concerned with how video is perceived by a viewer (also called "observer" or "subject") and designates their opinion on a particularvideosequence. It is related to the field ofQuality of Experience. Measuring subjective video quality is necessary because objective quality assessment algorithms such asPSNRhave been shown to correlate poorly with subjective ratings. Subjective ratings may also be used as ground truth to develop new algorithms.
Subjective video quality testsarepsychophysical experimentsin which a number of viewers rate a given set of stimuli. These tests are quite expensive in terms of time (preparation and running) and human resources and must therefore be carefully designed.
In subjective video quality tests, typically,SRCs("Sources", i.e. original video sequences) are treated with various conditions (HRCsfor "Hypothetical Reference Circuits") to generatePVSs("Processed Video Sequences").[1]
The main idea of measuring subjective video quality is similar to themean opinion score(MOS) evaluation foraudio. To evaluate the subjective video quality of a video processing system, source sequences are typically selected, processed under the conditions of interest to produce the PVSs, presented to a panel of viewers for rating, and the individual ratings are then aggregated into scores such as the MOS.
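As a small illustration of how individual ratings are aggregated (a sketch assuming a 5-point Absolute Category Rating scale; not taken from the source):

```java
import java.util.List;

public class MosExample {
    /** Mean opinion score: the arithmetic mean of the ratings collected for one PVS. */
    static double mos(List<Integer> ratings) {
        return ratings.stream().mapToInt(Integer::intValue).average().orElse(Double.NaN);
    }

    /** Half-width of an approximate 95% confidence interval (normal approximation). */
    static double ci95(List<Integer> ratings) {
        double mean = mos(ratings);
        double variance = ratings.stream()
                .mapToDouble(r -> (r - mean) * (r - mean))
                .sum() / (ratings.size() - 1);
        return 1.96 * Math.sqrt(variance / ratings.size());
    }

    public static void main(String[] args) {
        // Hypothetical ratings from 15 viewers for a single processed video sequence.
        List<Integer> ratings = List.of(4, 5, 3, 4, 4, 5, 4, 3, 4, 4, 5, 4, 3, 4, 4);
        System.out.printf("MOS = %.2f ± %.2f%n", mos(ratings), ci95(ratings));
    }
}
```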
Many parameters of the viewing conditions may influence the results, such as room illumination, display type, brightness, contrast, resolution, viewing distance, and the age and educational level of viewers. It is therefore advised to report this information along with the obtained ratings.
Typically, a system should be tested with a representative number of different contents and content characteristics. For example, one may select excerpts from contents of different genres, such as action movies, news shows, and cartoons. The length of the source video depends on the purpose of the test, but typically, sequences of no less than 10 seconds are used.
The amount of motion and spatial detail should also cover a broad range. This ensures that the test contains sequences which are of different complexity.
Sources should be of pristine quality. There should be no visiblecoding artifactsor other properties that would lower the quality of the original sequence.
The design of the HRCs depends on the system under study. Typically, multiple independent variables are introduced at this stage, and they are varied with a number of levels. For example, to test the quality of avideo codec, independent variables may be the video encoding software, a target bitrate, and the target resolution of the processed sequence.
It is advised to select settings that result in ratings which cover the full quality range. In other words, assuming anAbsolute Category Ratingscale, the test should show sequences that viewers would rate from bad to excellent.
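As a sketch of how such a factorial set of HRCs might be enumerated, assuming plain Python; the codecs, bitrates, and resolutions below are placeholder values, not recommendations.

```python
# A minimal sketch: generate HRCs as a full-factorial design over hypothetical
# independent variables (codec, bitrate, resolution).
from itertools import product

codecs      = ["x264", "x265"]
bitrates    = [500, 1500, 4000]      # kbit/s, chosen to span bad..excellent
resolutions = ["1280x720", "1920x1080"]

hrcs = [
    {"codec": c, "bitrate_kbps": b, "resolution": r}
    for c, b, r in product(codecs, bitrates, resolutions)
]
print(len(hrcs), "HRCs")             # 2 x 3 x 2 = 12 conditions per SRC
```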
Viewers are also called "observers" or "subjects". A certain minimum number of viewers should be invited to a study, since a larger number of subjects increases the reliability of the experiment outcome, for example by reducing the standard deviation of averaged ratings. Furthermore, there is a risk of having to exclude subjects for unreliable behavior during rating.
The minimum number of subjects that are required for a subjective video quality study is not strictly defined. According to ITU-T, any number between 4 and 40 is possible, where 4 is the absolute minimum for statistical reasons, and inviting more than 40 subjects has no added value. In general, at least 15 observers should participate in the experiment. They should not be directly involved in picture quality evaluation as part of their work and should not be experienced assessors.[2]In other documents, it is also claimed that at minimum 10 subjects are needed to obtain meaningful averaged ratings.[3]
However, most recommendations for the number of subjects have been designed for measuring video quality encountered by a home television or PC user, where the range and diversity of distortions tend to be limited (e.g., to encoding artifacts only). Given the large ranges and diversity of impairments that may occur on videos captured with mobile devices and/or transmitted over wireless networks, generally, a larger number of human subjects may be required.
Brunnström and Barkowsky have provided calculations for estimating the minimum number of subjects necessary based on existing subjective tests.[4]They claim that in order to ensure statistically significant differences when comparing ratings, a larger number of subjects than usually recommended may be needed.
Viewers should be non-experts in the sense of not being professionals in the field of video coding or related domains. This requirement is introduced to avoid potential subject bias.[2]
Typically, viewers are screened fornormal visionor corrected-to-normal vision usingSnellen charts.Color blindnessis often tested withIshihara plates.[2]
There is an ongoing discussion in theQoEcommunity as to whether a viewer's cultural, social, or economic background has a significant impact on the obtained subjective video quality results. A systematic study involving six laboratories in four countries found no statistically significant impact of subject's language and culture / country of origin on video quality ratings.[5]
Subjective quality tests can be done in any environment. However, due to possible influence factors from heterogenous contexts, it is typically advised to perform tests in a neutral environment, such as a dedicated laboratory room. Such a room may be sound-proofed, with walls painted in neutral grey, and using properly calibrated light sources. Several recommendations specify these conditions.[6][7]Controlled environments have been shown to result in lower variability in the obtained scores.[5]
Crowdsourcinghas recently been used for subjective video quality evaluation, and more generally, in the context ofQuality of Experience.[8]Here, viewers give ratings using their own computer, at home, rather than taking part in a subjective quality test in laboratory rooms. While this method allows for obtaining more results than in traditional subjective tests at lower costs, the validity and reliability of the gathered responses must be carefully checked.[9]
Opinions of viewers are typically averaged into the mean opinion score (MOS). To this aim, the labels of categorical scales may be translated into numbers. For example, the responses "bad" to "excellent" can be mapped to the values 1 to 5, and then averaged. MOS values should always be reported with their statisticalconfidence intervalsso that the general agreement between observers can be evaluated.
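A minimal sketch of this averaging step, using made-up ACR ratings (1 = bad to 5 = excellent) for a single PVS and a t-based 95% confidence interval; SciPy is assumed only for the t quantile.

```python
# Compute a MOS and its 95% confidence interval from one PVS's ratings.
import statistics
from scipy import stats   # only used for the t quantile

ratings = [4, 5, 3, 4, 4, 5, 2, 4, 3, 4, 5, 4, 4, 3, 4]   # 15 hypothetical viewers

n    = len(ratings)
mos  = statistics.mean(ratings)
sd   = statistics.stdev(ratings)
t    = stats.t.ppf(0.975, df=n - 1)
ci95 = t * sd / n ** 0.5

print(f"MOS = {mos:.2f} +/- {ci95:.2f} (95% CI)")
```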
Often, additional measures are taken before evaluating the results. Subject screening is a process in which viewers whose ratings are considered invalid or unreliable are rejected from further analysis. Invalid ratings are hard to detect, as subjects may have rated without looking at a video, or cheat during the test. The overall reliability of a subject can be determined by various procedures, some of which are outlined in ITU-R and ITU-T recommendations.[2][7]For example, the correlation between a person's individual scores and the overall MOS, evaluated for all sequences, is a good indicator of their reliability in comparison with the remaining test participants.
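The per-subject correlation check mentioned above might look like the following sketch. The ratings, the 0.75 threshold, and the decision rule are illustrative assumptions, not values prescribed by the ITU recommendations; Python 3.10+ is assumed for statistics.correlation.

```python
# Flag subjects whose scores correlate weakly with the overall MOS.
import statistics

# ratings[subject] = scores for the same set of PVSs, in the same order
ratings = {
    "s1": [5, 4, 3, 2, 1],
    "s2": [4, 4, 3, 1, 1],
    "s3": [5, 5, 4, 2, 2],
    "s4": [1, 4, 2, 5, 3],   # deliberately erratic rater
}

n_pvs = len(next(iter(ratings.values())))
mos = [statistics.mean(r[i] for r in ratings.values()) for i in range(n_pvs)]

for subject, scores in ratings.items():
    r = statistics.correlation(scores, mos)   # Pearson's r (Python 3.10+)
    flag = "  <- candidate for exclusion" if r < 0.75 else ""
    print(f"{subject}: r = {r:.2f}{flag}")
```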
While rating stimuli, humans are subject to biases. These may lead to different and inaccurate scoring behavior and consequently result in MOS values that are not representative of the “true quality” of a stimulus. In recent years, advanced models have been proposed that aim at formally describing the rating process and subsequently recovering noisiness in subjective ratings. According to Janowski et al., subjects may have an opinion bias that generally shifts their scores, as well as a scoring imprecision that is dependent on the subject and stimulus to be rated.[10]Li et al. have proposed to differentiate betweensubject inconsistencyandcontent ambiguity.[11]
There are many ways to select proper sequences, system settings, and test methodologies. A few of them have been standardized. They are thoroughly described in several ITU-R and ITU-T recommendations, among those ITU-R BT.500[7]and ITU-T P.910.[2]While there is an overlap in certain aspects, the BT.500 recommendation has its roots in broadcasting, whereas P.910 focuses on multimedia content.
A standardized testing method usually describes the following aspects:
Another recommendation, ITU-T P.913,[6]gives researchers more freedom to conduct subjective quality tests in environments different from a typical testing laboratory, while still requiring them to report all details necessary to make such tests reproducible.
Below, some examples of standardized testing procedures are explained.
Which method to choose largely depends on the purpose of the test and possible constraints in time and other resources. Some methods may have fewer context effects (i.e. where the order of stimuli influences the results), which are unwanted test biases.[12]In ITU-T P.910, it is noted that methods such as DCR should be used for testing the fidelity of transmission, especially in high quality systems. ACR and ACR-HR are better suited for qualification tests and – due to giving absolute results – comparison of systems. The PC method has a high discriminatory power, but it requires longer test sessions.
The results of subjective quality tests, including the stimuli used, are calleddatabases. A number of subjective picture and video quality databases based on such studies have been made publicly available by research institutes. These databases – some of which have become de facto standards – are used by television, cinematic, and video engineers around the world to design and test objective quality models, since the developed models can be trained against the obtained subjective data. An overview of publicly available databases has been compiled by theVideo Quality Experts Group, and video assets have been made available in theConsumer Digital Video Library.
|
https://en.wikipedia.org/wiki/Subjective_video_quality
|
Incomputing, acore dump,[a]memory dump,crash dump,storage dump,system dump, orABEND dump[1]consists of the recorded state of the workingmemoryof acomputer programat a specific time, generally when the program hascrashedor otherwise terminated abnormally.[2]In practice, other key pieces ofprogram stateare usually dumped at the same time, including theprocessor registers, which may include theprogram counterandstack pointer, memory management information, and other processor and operating system flags and information. Asnapshot dump(orsnap dump) is a memory dump requested by thecomputer operatoror by the running program, after which the program is able to continue. Core dumps are often used to assist in diagnosing anddebuggingerrors in computer programs.
On many operating systems, afatal exceptionin a program automatically triggers a core dump. By extension, the phrase "to dump core" has come to mean in many cases, any fatal error, regardless of whether a record of the program memory exists. The term "core dump", "memory dump", or just "dump" has also become jargon to indicate any output of a large amount of raw data for further examination or other purposes.[3][4]
The name comes frommagnetic-core memory,[5][6]the principal form ofrandom-access memoryfrom the 1950s to the 1970s. The name has remained long after magnetic-core technology became obsolete.
The earliest core dumps were paper printouts[7]of the contents of memory, typically arranged in columns ofoctalorhexadecimalnumbers (a "hex dump"), sometimes accompanied by their interpretations asmachine languageinstructions, text strings, or decimal or floating-point numbers (cf.disassembler).
As memory sizes increased and post-mortem analysis utilities were developed, dumps were written to magnetic media like tape or disk.
Instead of only displaying the contents of the applicable memory, modern operating systems typically generate a file containing an image of the memory belonging to the crashed process, or the memory images of parts of theaddress spacerelated to that process, along with other information such as the values of processor registers, program counter, system flags, and other information useful in determining the root cause of the crash. These files can be viewed as text, printed, or analysed with specialised tools such as elfdump onUnixandUnix-likesystems,objdumpandkdumponLinux, IPCS (Interactive Problem Control System) on IBMz/OS,[8]DVF (Dump Viewing Facility) on IBMz/VM,[9]WinDbgon Microsoft Windows,Valgrind, or other debuggers.
In some operating systems[b]an application or operator may request a snapshot of selected storage blocks, rather than all of the storage used by the application or operating system.
Core dumps can serve as useful debugging aids in several situations. On early standalone orbatch-processingsystems, core dumps allowed a user to debug a program without monopolizing the (very expensive) computing facility for debugging; a printout could also be more convenient than debugging usingfront panelswitches and lights.
On shared computers, whether time-sharing, batch processing, or server systems, core dumps allow off-line debugging of theoperating system, so that the system can go back into operation immediately.
Core dumps allow a user to save a crash for later or off-site analysis, or comparison with other crashes. Forembedded computers, it may be impractical to support debugging on the computer itself, so analysis of a dump may take place on a different computer. Some operating systems such as early versions ofUnixdid not support attachingdebuggersto running processes, so core dumps were necessary to run a debugger on a process's memory contents.
Core dumps can be used to capture data freed duringdynamic memory allocationand may thus be used to retrieve information from a program that is no longer running. In the absence of an interactive debugger, the core dump may be used by an assiduous programmer to determine the error from direct examination.
Snap dumps are sometimes a convenient way for applications to record quick and dirty debugging output.
A core dump generally represents the complete contents of the dumped regions of the address space of the dumped process. Depending on the operating system, the dump may contain few or no data structures to aid interpretation of the memory regions. In these systems, successful interpretation requires that the program or user trying to interpret the dump understands the structure of the program's memory use.
A debugger can use asymbol table, if one exists, to help the programmer interpret dumps, identifying variables symbolically and displaying source code; if the symbol table is not available, less interpretation of the dump is possible, but there might still be enough information to determine the cause of the problem. There are also special-purpose tools calleddump analyzersto analyze dumps. One popular tool, available on many operating systems, is the GNU binutils'objdump.
On modernUnix-likeoperating systems, administrators and programmers can read core dump files using the GNU BinutilsBinary File Descriptor library(BFD), and theGNU Debugger(gdb) and objdump that use this library. This library will supply the raw data for a given address in a memory region from a core dump; it does not know anything about variables or data structures in that memory region, so the application using the library to read the core dump will have to determine the addresses of variables and determine the layout of data structures itself, for example by using the symbol table for the program undergoing debugging.
Analysts of crash dumps fromLinuxsystems can usekdumpor the Linux Kernel Crash Dump (LKCD).[10]
Core dumps can save the context (state) of a process at a given point in time for returning to it later. Systems can be made highly available by transferring core between processors, sometimes via core dump files themselves.
Core can also be dumped onto a remote host over a network (which is a security risk).[11]
Users of IBM mainframes runningz/OScan browse SVC and transaction dumps using Interactive Problem Control System (IPCS), a full screen dump reader which was originally introduced inOS/VS2 (MVS), supports user written scripts inREXXand supports point-and-shoot browsing[c]of dumps.
In older and simpler operating systems, each process had a contiguous address-space, so a dump file was sometimes simply a file with the sequence of bytes, digits,[d]characters[d]or words. On other systems a dump file contained discrete records, each containing a storage address and the associated contents. On the earliest of these machines, the dump was often written by a stand-alone dump program rather than by the application or the operating system.
TheIBSYSmonitor for theIBM 7090included a System Core-Storage Dump Program[12]that supported post-mortem and snap dumps.
On theIBM System/360, the standard operating systems wrote formatted ABEND and SNAP dumps, with the addresses, registers, storage contents, etc., all converted into printable forms. Later releases added the ability to write unformatted[e]dumps, called at that time core image dumps (also known as SVC dumps).
In modern operating systems, a process address space may contain gaps, and it may share pages with other processes or files, so more elaborate representations are used; they may also include other information about the state of the program at the time of the dump.
InUnix-likesystems, core dumps generally use the standardexecutableimage format; on modern Linux systems, for example, core files are written as ELF images.
InOS/360 and successors, a job may assign arbitrary data set names (dsnames) to the ddnamesSYSABENDandSYSUDUMPfor a formatted ABEND dump and to arbitrary ddnames for SNAP dumps, or define those ddnames as SYSOUT.[f]The Damage Assessment and Repair (DAR) facility added an automatic unformatted[h]storage dump to the datasetSYS1.DUMP[i]at the time of failure as well as a console dump requested by the operator. A job may assign an arbitrary dsname to the ddnameSYSMDUMPfor an unformatted ABEND dump, or define that ddname as SYSOUT.[j]The newer transaction dump is very similar to the older SVC dump. TheInteractive Problem Control System(IPCS), added to OS/VS2 bySelectable Unit(SU) 57[14][15]and part of every subsequentMVSrelease, can be used to interactively analyze storage dumps onDASD. IPCS understands the format and relationships of system control blocks, and can produce a formatted display for analysis. The current versions of IPCS allow inspection of active address spaces[16][k]without first taking a storage dump and of unformatted dumps on SPOOL.
Since Solaris 8, system utilitycoreadmallows the name and location of core files to be configured. Dumps of user processes are traditionally created ascore. On Linux (since versions 2.4.21 and 2.6 of theLinux kernel mainline), a different name can be specified viaprocfsusing the/proc/sys/kernel/core_patternconfiguration file; the specified name can also be a template that contains tags substituted by, for example, the executable filename, the process ID, or the reason for the dump.[17]System-wide dumps on modern Unix-like systems often appear asvmcoreorvmcore.incomplete.
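A small sketch of inspecting these settings from a Python process on Linux; it assumes a typical distribution where the soft RLIMIT_CORE defaults to 0 and must be raised before a crash will leave a core file.

```python
# Enable core dumps for the current process and show the kernel naming template.
import resource

# Raise the soft core-file size limit up to the hard limit so that a crash in
# this process can actually produce a core file.
soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
resource.setrlimit(resource.RLIMIT_CORE, (hard, hard))

# The kernel-wide naming template described above.
with open("/proc/sys/kernel/core_pattern") as f:
    print("core_pattern:", f.read().strip())

# A deliberate crash here (e.g. os.abort()) would now leave a core file that
# can be loaded into gdb alongside the interpreter binary for inspection.
```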
Systems such asMicrosoft Windows, which usefilename extensions, may use extension.dmp; for example, core dumps may be namedmemory.dmpor\Minidump\Mini051509-01.dmp.
Microsoft Windowssupports two memory dump formats, described below.
There are five types of kernel-mode dumps:[18]
Windows kernel-mode dumps are analyzed withDebugging Tools for Windows, a set that includes tools such as WinDbg and DumpChk.[20][21][22]
A user-mode memory dump, also known as aminidump,[23]is a memory dump of a single process. It contains selected data records: full or partial (filtered) process memory; a list of thethreadswith theircall stacksand state (such asregistersorTEB); information abouthandlesto the kernel objects; and a list of loaded and unloadedlibraries. A full list of options is available in theMINIDUMP_TYPEenum.[24]
TheNASAVoyager programwas probably the first craft to routinely utilize the core dump feature in the Deep Space segment. The core dump feature is a mandatory telemetry feature for the Deep Space segment as it has been proven to minimize system diagnostic costs.[citation needed]The Voyager craft uses routine core dumps to spot memory damage fromcosmic rayevents.
Space Mission core dump systems are mostly based on existing toolkits for the target CPU or subsystem. However, over the duration of a mission the core dump subsystem may be substantially modified or enhanced for the specific needs of the mission.
|
https://en.wikipedia.org/wiki/Core_dump
|
Ininformation theoryandstatistics,negentropyis used as a measure of distance to normality. The concept and phrase "negative entropy" was introduced byErwin Schrödingerin his 1944 popular-science bookWhat is Life?[1]Later,FrenchphysicistLéon Brillouinshortened the phrase tonéguentropie(negentropy).[2][3]In 1974,Albert Szent-Györgyiproposed replacing the termnegentropywithsyntropy. That term may have originated in the 1940s with the Italian mathematicianLuigi Fantappiè, who tried to construct a unified theory ofbiologyandphysics.Buckminster Fullertried to popularize this usage, butnegentropyremains common.
In a note toWhat is Life?Schrödinger explained his use of this phrase.
... if I had been catering for them [physicists] alone I should have let the discussion turn onfree energyinstead. It is the more familiar notion in this context. But this highly technical term seemed linguistically too near toenergyfor making the average reader alive to the contrast between the two things.
Ininformation theoryandstatistics, negentropy is used as a measure of distance to normality.[4][5][6]Out of alldistributionswith a given mean and variance, the normal orGaussian distributionis the one with the highestentropy. Negentropy measures the difference in entropy between a given distribution and the Gaussian distribution with the same mean and variance. Thus, negentropy is always nonnegative, is invariant by any linear invertible change of coordinates, and vanishesif and only ifthe signal is Gaussian.
Negentropy is defined as
J(p_x) = S(φ_x) − S(p_x),
where S(φ_x) is thedifferential entropyof the Gaussian density with the samemeanandvarianceas p_x and S(p_x) is the differential entropy of p_x:
S(p_x) = −∫ p_x(u) log p_x(u) du
Negentropy is used instatisticsandsignal processing. It is related to networkentropy, which is used inindependent component analysis.[7][8]
The negentropy of a distribution is equal to theKullback–Leibler divergencebetween p_x and a Gaussian distribution with the same mean and variance as p_x (seeDifferential entropy § Maximization in the normal distributionfor a proof). In particular, it is always nonnegative.
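As a small worked example under these definitions, the negentropy of the uniform distribution on [0, 1] can be computed analytically; the code below is a plain numeric check, not a general estimator.

```python
# Negentropy of U(0, 1) from the definition J(p) = S(gaussian with same variance) - S(p).
import math

var_uniform = 1 / 12                      # variance of U(0, 1)
h_uniform   = math.log(1.0)               # differential entropy of U(0, 1) = ln(b - a) = 0
h_gauss     = 0.5 * math.log(2 * math.pi * math.e * var_uniform)

negentropy = h_gauss - h_uniform
print(f"J(U(0,1)) = {negentropy:.4f} nats")   # about 0.176; 0 would mean Gaussian
```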
There is a physical quantity closely linked tofree energy(free enthalpy), with a unit of entropy and isomorphic to the negentropy known in statistics and information theory. In 1873,Willard Gibbscreated a diagram illustrating the concept of free energy corresponding tofree enthalpy. On the diagram one can see the quantity calledcapacity for entropy. This quantity is the amount by which the entropy may be increased without changing the internal energy or increasing the volume.[9]In other words, it is the difference between the maximum possible entropy, under the assumed conditions, and the actual entropy. It corresponds exactly to the definition of negentropy adopted in statistics and information theory. A similar physical quantity was introduced in 1869 byMassieufor theisothermal process[10][11][12](the two quantities differ only in sign) and later byPlanckfor theisothermal-isobaricprocess.[13]More recently, the Massieu–Planckthermodynamic potential, known also asfree entropy, has been shown to play an important role in the so-called entropic formulation ofstatistical mechanics,[14]applied among other fields in molecular biology[15]and thermodynamic non-equilibrium processes.[16]
In particular, mathematically the negentropy (the negative entropy function, in physics interpreted as free entropy) is theconvex conjugateofLogSumExp(in physics interpreted as the free energy).
In 1953,Léon Brillouinderived a general equation[17]stating that changing the value of an information bit requires at least kT ln 2 of energy. This is the same energy as the workLeó Szilárd's engine produces in the idealized case. In his book,[18]he further explored this problem, concluding that any cause of this bit-value change (measurement, decision about a yes/no question, erasure, display, etc.) will require the same amount of energy.
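A quick numeric check of this bound at an assumed room temperature of 300 K:

```python
# Evaluate kT ln 2, the minimum energy to change one bit according to Brillouin.
import math

k_B = 1.380649e-23        # Boltzmann constant, J/K (exact by definition)
T   = 300.0               # assumed room temperature, K

print(f"kT ln 2 = {k_B * T * math.log(2):.3e} J")   # about 2.87e-21 J per bit
```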
|
https://en.wikipedia.org/wiki/Negentropy
|
Classical conditioning(alsorespondent conditioningandPavlovian conditioning) is a behavioral procedure in which a biologically potentstimulus(e.g. food, a puff of air on the eye, a potential rival) is paired with a neutral stimulus (e.g. the sound of amusical triangle). The termclassical conditioningrefers to the process by which an automatic, conditioned response comes to be paired with a specific stimulus;[1]the previously neutral stimulus, in effect, comes to act as a signal.
The RussianphysiologistIvan Pavlovstudied classical conditioning with detailedexperimentswith dogs, and published the experimental results in 1897. In the study ofdigestion, Pavlov observed that the experimental dogs salivated when fed red meat.[2]Pavlovian conditioning is distinct fromoperant conditioning(instrumental conditioning), through which the strength of a voluntary behavior is modified, either by reinforcement or bypunishment. However, classical conditioning can affect operant conditioning; classically conditioned stimuli can reinforce operant responses.
Classical conditioning is a basic behavioral mechanism, and itsneural substratesare now beginning to be understood. Though it is sometimes hard to distinguish classical conditioning from other forms of associative learning (e.g. instrumental learning and humanassociative memory), a number of observations differentiate them, especially the contingencies whereby learning occurs.[3]
Together withoperant conditioning, classical conditioning became the foundation ofbehaviorism, a school ofpsychologywhich was dominant in the mid-20th century and is still an important influence on the practice ofpsychological therapyand the study of animal behavior. Classical conditioning has been applied in other areas as well. For example, it may affect the body's response topsychoactive drugs, the regulation of hunger, research on the neural basis of learning and memory, and in certain social phenomena such as thefalse consensus effect.[4]
Classical conditioning occurs when a conditioned stimulus (CS) is paired with an unconditioned stimulus (US). Usually, the conditioned stimulus is a neutral stimulus (e.g., the sound of atuning fork), the unconditioned stimulus is biologically potent (e.g., the taste of food) and the unconditioned response (UR) to the unconditioned stimulus is an unlearnedreflexresponse (e.g., salivation). After pairing is repeated the organism exhibits a conditioned response (CR) to the conditioned stimulus when the conditioned stimulus is presented alone. (A conditioned response may occur after only one pairing.) Thus, unlike the UR, the CR is acquired through experience, and it is also less permanent than the UR.[5]
Usually the conditioned response is similar to the unconditioned response, but sometimes it is quite different. For this and other reasons, most learning theorists suggest that the conditioned stimulus comes to signal or predict the unconditioned stimulus, and go on to analyse the consequences of this signal.[6]Robert A. Rescorlaprovided a clear summary of this change in thinking, and its implications, in his 1988 article "Pavlovian conditioning: It's not what you think it is".[7]Despite its widespread acceptance, Rescorla's thesis may not be defensible.[weasel words]
A false positive arising from chance (where the unconditioned stimulus has the same chance of occurring with or without the conditioned stimulus) has been shown to be unlikely to successfully condition a response. The element of contingency has been further tested and is said to have "outlived any usefulness in the analysis of conditioning."[8]
Classical conditioning differs fromoperantorinstrumentalconditioning: in classical conditioning, behaviors are modified through the association of stimuli as described above, whereas in operant conditioning behaviors are modified by the effect they produce (i.e., reward or punishment).[9]
The best-known and most thorough early work on classical conditioning was done byIvan Pavlov, althoughEdwin Twitmyerpublished some related findings a year earlier.[10]During his research on thephysiologyofdigestionin dogs, Pavlov developed a procedure that enabled him to study the digestive processes of animals over long periods of time. He redirected the animals' digestive fluids outside the body, where they could be measured.
Pavlov noticed that his dogs began tosalivatein the presence of the technician who normally fed them, rather than simply salivating in the presence of food. Pavlov called the dogs' anticipatory salivation "psychic secretion". Putting these informal observations to an experimental test, Pavlov presented a stimulus (e.g. the sound of ametronome) and then gave the dog food; after a few repetitions, the dogs started to salivate in response to the stimulus. Pavlov concluded that if a particular stimulus in the dog's surroundings was present when the dog was given food then that stimulus could become associated with food and cause salivation on its own.
In Pavlov's experiments theunconditioned stimulus (US)was the food because its effects did not depend on previous experience. The metronome's sound is originally aneutral stimulus (NS)because it does not elicit salivation in the dogs. After conditioning, the metronome's sound becomes theconditioned stimulus (CS)or conditional stimulus, because its effects depend on its association with food.[11]Likewise, the responses of the dog follow the same conditioned-versus-unconditioned arrangement. Theconditioned response (CR)is the response to the conditioned stimulus, whereas theunconditioned response (UR)corresponds to the unconditioned stimulus.
Pavlov reported many basic facts about conditioning; for example, he found that learning occurred most rapidly when the interval between the CS and the appearance of the US was relatively short.[12]
As noted earlier, it is often thought that the conditioned response is a replica of the unconditioned response, but Pavlov noted that saliva produced by the CS differs in composition from that produced by the US. In fact, the CR may be any new response to the previously neutral CS that can be clearly linked to experience with the conditional relationship of CS and US.[7][9]It was also thought that repeated pairings are necessary for conditioning to emerge, but many CRs can be learned with a single trial, especially infear conditioningandtaste aversionlearning.
Learning is fastest in forward conditioning. During forward conditioning, the onset of the CS precedes the onset of the US in order to signal that the US will follow.[13][14]: 69Two common forms of forward conditioning are delay and trace conditioning.
During simultaneous conditioning, the CS and US are presented and terminated at the same time. For example, if a person hears a bell and has air puffed into their eye at the same time, and repeated pairings of this kind lead the person to blink on hearing the bell even when the puff of air is absent, simultaneous conditioning has occurred.
Second-order or higher-order conditioning follow a two-step procedure. First a neutral stimulus ("CS1") comes to signal a US through forward conditioning. Then a second neutral stimulus ("CS2") is paired with the first (CS1) and comes to yield its own conditioned response.[14]: 66For example: A bell might be paired with food until the bell elicits salivation. If a light is then paired with the bell, then the light may come to elicit salivation as well. The bell is the CS1 and the food is the US. The light becomes the CS2 once it is paired with the CS1.
Backward conditioning occurs when a CS immediately follows a US.[13]Unlike the usual conditioning procedure, in which the CS precedes the US, the conditioned response given to the CS tends to be inhibitory. This presumably happens because the CS serves as a signal that the US has ended, rather than as a signal that the US is about to appear.[14]: 71For example, a puff of air directed at a person's eye could be followed by the sound of a buzzer.
In temporal conditioning, a US is presented at regular intervals, for instance every 10 minutes. Conditioning is said to have occurred when the CR tends to occur shortly before each US. This suggests that animals have abiological clockthat can serve as a CS. This method has also been used to study timing ability in animals (seeAnimal cognition).
For example, in temporal conditioning a US such as food is simply delivered to a hungry mouse on a regular schedule, such as every thirty seconds. After sufficient exposure, the mouse begins to salivate just before each food delivery. This is temporal conditioning: the mouse appears to have become conditioned to the passage of time.
In this procedure, the CS is paired with the US, but the US also occurs at other times. If this occurs, it is predicted that the US is likely to happen in the absence of the CS. In other words, the CS does not "predict" the US. In this case, conditioning fails and the CS does not come to elicit a CR.[15]This finding – thatpredictionrather than CS-US pairing is the key to conditioning – greatly influenced subsequent conditioning research and theory.
In the extinction procedure, the CS is presented repeatedly in the absence of a US. This is done after a CS has been conditioned by one of the methods above. When this is done, the CR frequency eventually returns to pre-training levels. However, extinction does not eliminate the effects of the prior conditioning. This is demonstrated byspontaneous recovery– when there is a sudden appearance of the (CR) after extinction occurs – and other related phenomena (see "Recovery from extinction" below). These phenomena can be explained by postulating accumulation of inhibition when a weak stimulus is presented.
During acquisition, the CS and US are paired as described above. The extent of conditioning may be tracked by test trials. In these test trials, the CS is presented alone and the CR is measured. A single CS-US pairing may suffice to yield a CR on a test, but usually a number of pairings are necessary and there is a gradual increase in the conditioned response to the CS. Repeated trials gradually increase the strength and/or frequency of the CR. The speed of conditioning depends on a number of factors, such as the nature and strength of both the CS and the US, previous experience and the animal'smotivationalstate.[6][9]The process slows down as it nears completion.[16]
If the CS is presented without the US, and this process is repeated often enough, the CS will eventually stop eliciting a CR. At this point the CR is said to be "extinguished."[6][17]
External inhibitionmay be observed if a strong or unfamiliar stimulus is presented just before, or at the same time as, the CS. This causes a reduction in the conditioned response to the CS.
Several procedures lead to the recovery of a CR that had been first conditioned and then extinguished. This illustrates that the extinction procedure does not eliminate the effect of conditioning.[9]These procedures are the following:
Stimulus generalizationis said to occur if, after a particular CS has come to elicit a CR, a similar test stimulus is found to elicit the same CR. Usually the more similar the test stimulus is to the CS the stronger the CR will be to the test stimulus.[6]Conversely, the more the test stimulus differs from the CS, the weaker the CR will be, or the more it will differ from that previously observed.
One observesstimulus discriminationwhen one stimulus ("CS1") elicits one CR and another stimulus ("CS2") elicits either another CR or no CR at all. This can be brought about by, for example, pairing CS1 with an effective US and presenting CS2 with no US.[6]
Latent inhibition refers to the observation that it takes longer for a familiar stimulus to become a CS than it does for a novel stimulus to become a CS, when the stimulus is paired with an effective US.[6]
This is one of the most common ways to measure the strength of learning in classical conditioning. A typical example of this procedure is as follows: a rat first learns to press a lever throughoperant conditioning. Then, in a series of trials, the rat is exposed to a CS, a light or a noise, followed by the US, a mild electric shock. An association between the CS and US develops, and the rat slows or stops its lever pressing when the CS comes on. The rate of pressing during the CS measures the strength of classical conditioning; that is, the slower the rat presses, the stronger the association of the CS and the US. (Slow pressing indicates a "fear" conditioned response, and it is an example of a conditioned emotional response; see section below.)
Typically, three phases of conditioning are used.
A CS (CS+) is paired with a US untilasymptoticCR levels are reached.
CS+/US trials are continued, but these are interspersed with trials on which the CS+ is paired with a second CS, (the CS-) but not with the US (i.e. CS+/CS- trials). Typically, organisms show CRs on CS+/US trials, but stop responding on CS+/CS− trials.
This form of classical conditioning involves two phases.
A CS (CS1) is paired with a US.
A compound CS (CS1+CS2) is paired with a US.
A separate test for each CS (CS1 and CS2) is performed. The blocking effect is observed in a lack of conditional response to CS2, suggesting that the first phase of training blocked the acquisition of the second CS.
Experiments on theoretical issues in conditioning have mostly been done onvertebrates, especially rats and pigeons. However, conditioning has also been studied ininvertebrates, and very important data on the neural basis of conditioning has come from experiments on the sea slug,Aplysia.[6]Most relevant experiments have used the classical conditioning procedure, althoughinstrumental (operant) conditioningexperiments have also been used, and the strength of classical conditioning is often measured through its operant effects, as inconditioned suppression(see Phenomena section above) andautoshaping.
According to Pavlov, conditioning does not involve the acquisition of any new behavior, but rather the tendency to respond in old ways to new stimuli. Thus, he theorized that the CS merely substitutes for the US in evoking thereflexresponse. This explanation is called the stimulus-substitution theory of conditioning.[14]: 84A critical problem with the stimulus-substitution theory is that the CR and UR are not always the same. Pavlov himself observed that a dog's saliva produced as a CR differed in composition from that produced as a UR.[10]The CR is sometimes even the opposite of the UR. For example: the unconditional response to an electric shock is an increase in heart rate, whereas a CS that has been paired with the electric shock elicits a decrease in heart rate. (However, it has been proposed[by whom?]that only when the UR does not involve thecentral nervous systemare the CR and the UR opposites.)
The Rescorla–Wagner (R–W) model[9][18]is a relatively simple yet powerful model of conditioning. The model predicts a number of important phenomena, but it also fails in important ways, thus leading to a number of modifications and alternative models. However, because much of the theoretical research on conditioning in the past 40 years has been instigated by this model or reactions to it, the R–W model deserves a brief description here.[19][14]: 85
The Rescorla-Wagner model argues that there is a limit to the amount of conditioning that can occur in the pairing of two stimuli. One determinant of this limit is the nature of the US. For example: pairing a bell with a juicy steak is more likely to produce salivation than pairing the bell with a piece of dry bread, and dry bread is likely to work better than a piece of cardboard. A key idea behind the R–W model is that a CS signals or predicts the US. One might say that before conditioning, the subject is surprised by the US. However, after conditioning, the subject is no longer surprised, because the CS predicts the coming of the US. (The model can be described mathematically; words like predict, surprise, and expect are only used to help explain it.) Here the workings of the model are illustrated with brief accounts of acquisition, extinction, and blocking. The model also predicts a number of other phenomena; see the main article on the model.
ΔV = αβ(λ − ΣV)
This is the Rescorla-Wagner equation. It specifies the amount of learning that will occur on a single pairing of a conditioning stimulus (CS) with an unconditioned stimulus (US). The above equation is solved repeatedly to predict the course of learning over many such trials.
In this model, the degree of learning is measured by how well the CS predicts the US, which is given by the "associative strength" of the CS. In the equation, V represents the current associative strength of the CS, and ∆V is the change in this strength that happens on a given trial. ΣV is the sum of the strengths of all stimuli present in the situation. λ is the maximum associative strength that a given US will support; its value is usually set to 1 on trials when the US is present, and 0 when the US is absent. α and β are constants related to the salience of the CS and the speed of learning for a given US. How the equation predicts various experimental results is explained in following sections. For further details, see the main article on the model.[14]: 85–89
The R–W model measures conditioning by assigning an "associative strength" to the CS and other local stimuli. Before a CS is conditioned it has an associative strength of zero. Pairing the CS and the US causes a gradual increase in the associative strength of the CS. This increase is determined by the nature of the US (e.g. its intensity).[14]: 85–89The amount of learning that happens during any single CS-US pairing depends on the difference between the total associative strengths of CS and other stimuli present in the situation (ΣV in the equation), and a maximum set by the US (λ in the equation). On the first pairing of the CS and US, this difference is large and the associative strength of the CS takes a big step up. As CS-US pairings accumulate, the US becomes more predictable, and the increase in associative strength on each trial becomes smaller and smaller. Finally, the difference between the associative strength of the CS (plus any that may accrue to other stimuli) and the maximum strength reaches zero. That is, the US is fully predicted, the associative strength of the CS stops growing, and conditioning is complete.
The associative process described by the R–W model also accounts for extinction (see "procedures" above). The extinction procedure starts with a positive associative strength of the CS, which means that the CS predicts that the US will occur. On an extinction trial the US fails to occur after the CS. As a result of this "surprising" outcome, the associative strength of the CS takes a step down. Extinction is complete when the strength of the CS reaches zero; no US is predicted, and no US occurs. However, if that same CS is presented without the US but accompanied by a well-established conditioned inhibitor (CI), that is, a stimulus that predicts the absence of a US (in R-W terms, a stimulus with a negative associate strength) then R-W predicts that the CS will not undergo extinction (its V will not decrease in size).
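A minimal simulation of acquisition followed by extinction under the Rescorla-Wagner update given above; the learning-rate product αβ = 0.2 and the trial counts are arbitrary choices made for illustration.

```python
# Simulate the Rescorla-Wagner rule for a single CS: 20 acquisition trials
# (US present, lambda = 1) followed by 20 extinction trials (US absent, lambda = 0).
alpha_beta = 0.2
V = 0.0                                   # associative strength of the CS

history = []
for trial in range(40):
    lam = 1.0 if trial < 20 else 0.0      # US present, then omitted
    V += alpha_beta * (lam - V)           # Delta-V = alpha*beta*(lambda - sum V)
    history.append(V)

print(f"after acquisition: V = {history[19]:.3f}")   # approaches 1
print(f"after extinction:  V = {history[-1]:.3f}")   # decays back toward 0
```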
The most important and novel contribution of the R–W model is its assumption that the conditioning of a CS depends not just on that CS alone, and its relationship to the US, but also on all other stimuli present in the conditioning situation. In particular, the model states that the US is predicted by the sum of the associative strengths of all stimuli present in the conditioning situation. Learning is controlled by the difference between this total associative strength and the strength supported by the US. When this sum of strengths reaches a maximum set by the US, conditioning ends as just described.[14]: 85–89
The R–W explanation of the blocking phenomenon illustrates one consequence of the assumption just stated. In blocking (see "phenomena" above), CS1 is paired with a US until conditioning is complete. Then on additional conditioning trials a second stimulus (CS2) appears together with CS1, and both are followed by the US. Finally CS2 is tested and shown to produce no response because learning about CS2 was "blocked" by the initial learning about CS1. The R–W model explains this by saying that after the initial conditioning, CS1 fully predicts the US. Since there is no difference between what is predicted and what happens, no new learning happens on the additional trials with CS1+CS2, hence CS2 later yields no response.
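The same update rule reproduces blocking, as the following sketch shows (again with arbitrary parameter choices): CS1 is trained to asymptote first, then CS1 and CS2 are reinforced in compound, and CS2 ends up with almost no associative strength because the US is already fully predicted.

```python
# Rescorla-Wagner simulation of blocking with two conditioned stimuli.
alpha_beta, lam = 0.2, 1.0
V1, V2 = 0.0, 0.0

for _ in range(30):                       # phase 1: CS1 alone with the US
    V1 += alpha_beta * (lam - V1)

for _ in range(30):                       # phase 2: compound CS1+CS2 with the US
    error = lam - (V1 + V2)               # prediction error uses the summed strength
    V1 += alpha_beta * error
    V2 += alpha_beta * error

print(f"V(CS1) = {V1:.3f}, V(CS2) = {V2:.3f}")   # CS2 stays near zero: blocked
```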
One of the main reasons for the importance of the R–W model is that it is relatively simple and makes clear predictions. Tests of these predictions have led to a number of important new findings and a considerably increased understanding of conditioning. Some new information has supported the theory, but much has not, and it is generally agreed that the theory is, at best, too simple. However, no single model seems to account for all the phenomena that experiments have produced.[9][20]Following are brief summaries of some related theoretical issues.[19]
The R–W model reduces conditioning to the association of a CS and US, and measures this with a single number, the associative strength of the CS. A number of experimental findings indicate that more is learned than this. Among these are two phenomena described earlier in this article
Latent inhibition might happen because a subject stops focusing on a CS that is seen frequently before it is paired with a US. In fact, changes in attention to the CS are at the heart of two prominent theories that try to cope with experimental results that give the R–W model difficulty. In one of these, proposed byNicholas Mackintosh,[21]the speed of conditioning depends on the amount of attention devoted to the CS, and this amount of attention depends in turn on how well the CS predicts the US. Pearce and Hall proposed a related model based on a different attentional principle.[22]Both models have been extensively tested, and neither explains all the experimental results. Consequently, various authors have attempted hybrid models that combine the two attentional processes. Pearce and Hall in 2010 integrated their attentional ideas and even suggested the possibility of incorporating the Rescorla-Wagner equation into an integrated model.[9]
As stated earlier, a key idea in conditioning is that the CS signals or predicts the US (see "zero contingency procedure" above). However, for example, the room in which conditioning takes place also "predicts" that the US may occur. Still, the room predicts with much less certainty than does the experimental CS itself, because the room is also there between experimental trials, when the US is absent. The role of such context is illustrated by the fact that the dogs in Pavlov's experiment would sometimes start salivating as they approached the experimental apparatus, before they saw or heard any CS.[16]Such so-called "context" stimuli are always present, and their influence helps to account for some otherwise puzzling experimental findings. The associative strength of context stimuli can be entered into the Rescorla-Wagner equation, and they play an important role in thecomparatorandcomputationaltheories outlined below.[9]
To find out what has been learned, we must somehow measure behavior ("performance") in a test situation. However, as students know all too well, performance in a test situation is not always a good measure of what has been learned. As for conditioning, there is evidence that subjects in a blocking experiment do learn something about the "blocked" CS, but fail to show this learning because of the way that they are usually tested.
"Comparator" theories of conditioning are "performance based", that is, they stress what is going on at the time of the test. In particular, they look at all the stimuli that are present during testing and at how the associations acquired by these stimuli may interact.[23][24]To oversimplify somewhat, comparator theories assume that during conditioning the subject acquires both CS-US and context-US associations. At the time of the test, these associations are compared, and a response to the CS occurs only if the CS-US association is stronger than the context-US association. After a CS and US are repeatedly paired in simple acquisition, the CS-US association is strong and the context-US association is relatively weak. This means that the CS elicits a strong CR. In "zero contingency" (see above), the conditioned response is weak or absent because the context-US association is about as strong as the CS-US association. Blocking and other more subtle phenomena can also be explained by comparator theories, though, again, they cannot explain everything.[9][19]
An organism's need to predict future events is central to modern theories of conditioning. Most theories use associations between stimuli to take care of these predictions. For example: In the R–W model, the associative strength of a CS tells us how strongly that CS predicts a US. A different approach to prediction is suggested by models such as that proposed by Gallistel & Gibbon (2000, 2002).[25][26]Here the response is not determined by associative strengths. Instead, the organism records the times of onset and offset of CSs and USs and uses these to calculate the probability that the US will follow the CS. A number of experiments have shown that humans and animals can learn to time events (seeAnimal cognition), and the Gallistel & Gibbon model yields very good quantitative fits to a variety of experimental data.[6][19]However, recent studies have suggested that duration-based models cannot account for some empirical findings as well as associative models.[27]
The Rescorla-Wagner model treats a stimulus as a single entity, and it represents the associative strength of a stimulus with one number, with no record of how that number was reached. As noted above, this makes it hard for the model to account for a number of experimental results. More flexibility is provided by assuming that a stimulus is internally represented by a collection of elements, each of which may change from one associative state to another. For example, the similarity of one stimulus to another may be represented by saying that the two stimuli share elements in common. These shared elements help to account for stimulus generalization and other phenomena that may depend upon generalization. Also, different elements within the same set may have different associations, and their activations and associations may change at different times and at different rates. This allows element-based models to handle some otherwise inexplicable results.
A prominent example of the element approach is the "SOP" model of Wagner.[28]The model has been elaborated in various ways since its introduction, and it can now account in principle for a very wide variety of experimental findings.[9]The model represents any given stimulus with a large collection of elements. The time of presentation of various stimuli, the state of their elements, and the interactions between the elements, all determine the course of associative processes and the behaviors observed during conditioning experiments.
The SOP account of simple conditioning exemplifies some essentials of the SOP model. To begin with, the model assumes that the CS and US are each represented by a large group of elements. Each of these stimulus elements can be in one of three states:
Of the elements that represent a single stimulus at a given moment, some may be in state A1, some in state A2, and some in state I.
When a stimulus first appears, some of its elements jump from inactivity I to primary activity A1. From the A1 state they gradually decay to A2, and finally back to I. Element activity can only change in this way; in particular, elements in A2 cannot go directly back to A1. If the elements of both the CS and the US are in the A1 state at the same time, an association is learned between the two stimuli. This means that if, at a later time, the CS is presented ahead of the US, and some CS elements enter A1, these elements will activate some US elements. However, US elements activated indirectly in this way only get boosted to the A2 state. (This can be thought of as the CS arousing a memory of the US, which will not be as strong as the real thing.) With repeated CS-US trials, more and more elements are associated, and more and more US elements go to A2 when the CS comes on. This gradually leaves fewer and fewer US elements that can enter A1 when the US itself appears. In consequence, learning slows down and approaches a limit. One might say that the US is "fully predicted" or "not surprising" because almost all of its elements can only enter A2 when the CS comes on, leaving few to form new associations.
The model can explain the findings that are accounted for by the Rescorla-Wagner model and a number of additional findings as well. For example, unlike most other models, SOP takes time into account. The rise and decay of element activation enables the model to explain time-dependent effects such as the fact that conditioning is strongest when the CS comes just before the US, and that when the CS comes after the US ("backward conditioning") the result is often an inhibitory CS. Many other more subtle phenomena are explained as well.[9]
A number of other powerful models have appeared in recent years which incorporate element representations. These often include the assumption that associations involve a network of connections between "nodes" that represent stimuli, responses, and perhaps one or more "hidden" layers of intermediate interconnections. Such models make contact with a current explosion of research onneural networks,artificial intelligenceandmachine learning.[citation needed]
Pavlov proposed that conditioning involved a connection between brain centers for conditioned and unconditioned stimuli. His physiological account of conditioning has been abandoned, but classical conditioning continues to be used to study the neural structures and functions that underlie learning and memory. Forms of classical conditioning that are used for this purpose include, among others,fear conditioning,eyeblink conditioning, and the foot contraction conditioning ofHermissenda crassicornis, a sea-slug. Both fear and eyeblink conditioning involve a neutral stimulus, frequently a tone, becoming paired with an unconditioned stimulus. In the case of eyeblink conditioning, the US is an air-puff, while in fear conditioning the US is threatening or aversive such as a foot shock.
The American neuroscientistDavid A. McCormickperformed experiments that demonstrated "...discrete regions of thecerebellumand associatedbrainstemareas contain neurons that alter their activity during conditioning – these regions are critical for the acquisition and performance of this simple learning task. It appears that other regions of the brain, including thehippocampus,amygdala, andprefrontal cortex, contribute to the conditioning process, especially when the demands of the task get more complex."[29]
Fear and eyeblink conditioning involve generally non overlapping neural circuitry, but share molecular mechanisms. Fear conditioning occurs in thebasolateral amygdala, which receivesglutaminergicinput directly from thalamic afferents, as well as indirectly from prefrontal projections. The direct projections are sufficient for delay conditioning, but in the case of trace conditioning, where the CS needs to be internally represented despite a lack of external stimulus, indirect pathways are necessary. Theanterior cingulateis one candidate for intermediate trace conditioning, but the hippocampus may also play a major role. Presynaptic activation ofprotein kinase Aand postsynaptic activation ofNMDA receptorsand its signal transduction pathway are necessary for conditioning related plasticity.CREBis also necessary for conditioning relatedplasticity, and it may induce downstream synthesis of proteins necessary for this to occur.[30]As NMDA receptors are only activated after an increase in presynapticcalcium(thereby releasing theMg2+block), they are a potential coincidence detector that could mediatespike timing dependent plasticity. STDP constrains LTP to situations where the CS predicts the US, and LTD to the reverse.[31]
Some therapies associated with classical conditioning areaversion therapy,systematic desensitizationandflooding.
Aversion therapy is a type of behavior therapy designed to make patients cease an undesirable habit by associating the habit with a strong unpleasant unconditioned stimulus.[32]: 336For example, a medication might be used to associate the taste of alcohol with stomach upset. Systematic desensitization is a treatment for phobias in which the patient is trained to relax while being exposed to progressively more anxiety-provoking stimuli (e.g. angry words). This is an example ofcounterconditioning, intended to associate the feared stimuli with a response (relaxation) that is incompatible with anxiety.[32]: 136Flooding is a form ofdesensitizationthat attempts to eliminate phobias and anxieties by repeated exposure to highly distressing stimuli until the lack of reinforcement of the anxiety response causes its extinction.[32]: 133"Flooding" usually involves actual exposure to the stimuli, whereas the term "implosion" refers to imagined exposure, but the two terms are sometimes used synonymously.
Conditioning therapies usually take less time thanhumanistictherapies.[33]
A stimulus that is present when adrugis administered or consumed may eventually evoke a conditioned physiological response that mimics the effect of the drug. This is sometimes the case withcaffeine; habitualcoffeedrinkers may find that the smell of coffee gives them a feeling of alertness. In other cases, the conditioned response is a compensatory reaction that tends to offset the effects of the drug. For example, if a drug causes the body to become less sensitive to pain, the compensatory conditioned reaction may be one that makes the user more sensitive to pain. This compensatory reaction may contribute todrug tolerance. If so, a drug user may increase the amount of drug consumed in order to feel its effects, and end up taking very large amounts of the drug. In this case a dangerous overdose reaction may occur if the CS happens to be absent, so that the conditioned compensatory effect fails to occur. For example, if the drug has always been administered in the same room, the stimuli provided by that room may produce a conditioned compensatory effect; then anoverdosereaction may happen if the drug is administered in a different location where the conditioned stimuli are absent.[34]
Signals that consistently precede food intake can become conditioned stimuli for a set of bodily responses that prepares the body for food anddigestion. These reflexive responses include the secretion ofdigestive juicesinto the stomach and the secretion of certain hormones into the blood stream, and they induce a state of hunger. An example of conditioned hunger is the "appetizer effect." Any signal that consistently precedes a meal, such as a clock indicating that it is time for dinner, can cause people to feel hungrier than before the signal. Thelateral hypothalamus(LH) is involved in the initiation of eating. Thenigrostriatal pathway, which includes thesubstantia nigra, thelateral hypothalamus, and thebasal ganglia, has been shown to be involved in hunger motivation.[citation needed]
The influence of classical conditioning can be seen in emotional responses such asphobia,disgust,nausea, anger, andsexual arousal. A common example is conditioned nausea, in which the CS is the sight or smell of a particular food that in the past has resulted in an unconditioned stomach upset. Similarly, when the CS is the sight of a dog and the US is the pain of being bitten, the result may be a conditioned fear of dogs. An example of conditioned emotional response isconditioned suppression.
As an adaptive mechanism, emotional conditioning helps shield an individual from harm or prepare it for important biological events such as sexual activity. Thus, a stimulus that has occurred before sexual interaction comes to cause sexual arousal, which prepares the individual for sexual contact. For example, sexual arousal has been conditioned in human subjects by pairing a stimulus like a picture of a jar of pennies with views of an erotic film clip. Similar experiments involving bluegouramifish anddomesticated quailhave shown that such conditioning can increase the number of offspring. These results suggest that conditioning techniques might help to increase fertility rates ininfertileindividuals andendangered species.[35]
Pavlovian-instrumental transfer is a phenomenon that occurs when a conditioned stimulus (CS, also known as a "cue") that has been associated withrewardingoraversivestimulivia classical conditioning altersmotivational salienceandoperant behavior.[36][37][38][39]In a typical experiment, a rat is presented with sound-food pairings (classical conditioning). Separately, the rat learns to press a lever to get food (operant conditioning). Test sessions now show that the rat presses the lever faster in the presence of the sound than in silence, although the sound has never been associated with lever pressing.
Pavlovian-instrumental transfer is suggested to play a role in thedifferential outcomes effect, a procedure which enhances operant discrimination by pairing stimuli with specific outcomes.[citation needed]
|
https://en.wikipedia.org/wiki/Classical_conditioning
|
Sweepis a Britishpuppetand television character popular in the United Kingdom, United States, Canada, Australia, Ireland, New Zealand and other countries.
Sweep is a grey glove puppet dog with long black ears who joinedThe Sooty Showin 1957, as a friend to fellow puppetSooty.[1]He is a dim-witted dog with a penchant forbonesandsausages.[2][3]Sweep is notable for his method of communication[4]which consists of a loud high-pitched squeak that gains its inflection from normal speech and its rhythm from the syllables in each word.
The rest of the cast, namelySooand the presenter, could understand Sweep perfectly, and would (albeit indirectly) translate for the viewer.[5][6]The sound of Sweep's voice was achieved using "something similar to asaxophonereed".[7]Versions of the puppet later sold as toys had an integralsqueakerconnected to an air bulb that was squeezed by hand.
Sweep's family first appeared on theSooty Showin an episode called "Sweep's Family". He has a mother and father; a twin brother, Swoop; two cousins, Swipe and Swap[8]and another seven brothers in the litter (all of whom look exactly like him, and wear different coloured collars to tell each other apart).
Swipe and Swap are described as Sweep's brothers in theSooty & Co.episode "Sweep's Family" and theSooty Heightsepisode "The Hounds of Music".
|
https://en.wikipedia.org/wiki/Sweep_(puppet)
|
Anecho state network(ESN)[1][2]is a type ofreservoir computerthat uses arecurrent neural networkwith a sparsely connected hidden layer (with typically 1% connectivity). The connectivity and weights of hiddenneuronsare fixed and randomly assigned. The weights of output neurons can be learned so that the network can produce or reproduce specific temporal patterns. The main interest of this network is that although its behavior is non-linear, the only weights that are modified during training are for the synapses that connect the hidden neurons to output neurons. Thus, the error function is quadratic with respect to the parameter vector, and minimizing it reduces to solving a linear system.
Alternatively, one may consider a nonparametric Bayesian formulation of the output layer, under which: (i) a prior distribution is imposed over the output weights; and (ii) the output weights are marginalized out in the context of prediction generation, given the training data. This idea has been demonstrated in[3]by using Gaussian priors, whereby a Gaussian process model with ESN-driven kernel function is obtained. Such a solution was shown to outperform ESNs with trainable (finite) sets of weights in several benchmarks.
Some publicly available efficient implementations of ESNs are aureservoir (a C++ library for various kinds of echo state networks, with Python/NumPy bindings),MATLAB, ReservoirComputing.jl (a Julia-based implementation of various types) and pyESN (for simple ESNs inPython).
The Echo State Network (ESN)[4]belongs to the Recurrent Neural Network (RNN) family and shares its architecture and supervised learning principle. Unlike feedforward neural networks, recurrent neural networks are dynamic systems rather than functions. Recurrent Neural Networks are typically used for:
For the training of RNNs a number of learning algorithms are available, such as backpropagation through time andreal-time recurrent learning. Convergence is not guaranteed due to instability and bifurcation phenomena.[4]
The main approach of the ESN is firstly to operate a random, large, fixed, recurring neural network with the input signal, which induces a nonlinear response signal in each neuron within this "reservoir" network, and secondly connect a desired output signal by a trainable linear combination of all these response signals.[2]
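A minimal sketch of this two-part scheme in C follows. The array sizes, weight matrices and function names are hypothetical; the input and reservoir weights Win and W are assumed to have been fixed at random beforehand (with the recurrent matrix typically scaled so the network has the echo state property), and only the readout Wout is assumed to have been fitted by linear regression on recorded reservoir states.

#include <math.h>
#include <stdio.h>

#define N_RES 100   /* reservoir neurons */
#define N_IN  1     /* input dimension   */
#define N_OUT 1     /* output dimension  */

/* One time step: x <- tanh(Win*u + W*x), then y <- Wout*x.
   Win and W stay fixed; only Wout would ever be trained. */
static void esn_step(double Win[N_RES][N_IN], double W[N_RES][N_RES],
                     double Wout[N_OUT][N_RES],
                     const double u[N_IN], double x[N_RES], double y[N_OUT])
{
    double xnew[N_RES];
    for (int i = 0; i < N_RES; i++) {
        double a = 0.0;
        for (int j = 0; j < N_IN; j++)  a += Win[i][j] * u[j];
        for (int j = 0; j < N_RES; j++) a += W[i][j] * x[j];
        xnew[i] = tanh(a);              /* nonlinear "echo" of the input history */
    }
    for (int i = 0; i < N_RES; i++) x[i] = xnew[i];
    for (int k = 0; k < N_OUT; k++) {   /* trainable linear readout */
        y[k] = 0.0;
        for (int j = 0; j < N_RES; j++) y[k] += Wout[k][j] * x[j];
    }
}

int main(void)
{
    static double Win[N_RES][N_IN], W[N_RES][N_RES], Wout[N_OUT][N_RES];
    double x[N_RES] = {0}, y[N_OUT], u[N_IN] = {1.0};
    /* In a real ESN, Win and W would hold fixed random values and Wout a
       fitted readout; zeros are used here only so the sketch runs. */
    esn_step(Win, W, Wout, u, x, y);
    printf("y[0] = %g\n", y[0]);
    return 0;
}

Each call advances the reservoir by one step and produces a readout from the current state; during training only the readout weights would change.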
Another feature of the ESN is the autonomous operation in prediction: if it is trained with an input that is a backshifted version of the output, then it can be used for signal generation/prediction by using the previous output as input.[4][5]
The main idea of ESNs is tied toliquid state machines, which were independently and simultaneously developed with ESNs by Wolfgang Maass.[6]Liquid state machines, ESNs, and the more recently investigated backpropagation-decorrelation learning rule for RNNs[7]are increasingly grouped together under the name Reservoir Computing.
Schiller and Steil[7]also demonstrated that in conventional training approaches for RNNs, in which all weights (not only output weights) are adapted, the dominant changes are in the output weights. In cognitive neuroscience, Peter F. Dominey analysed a related process in the modelling of sequence processing in the mammalian brain, in particular speech recognition in the human brain.[8]The basic idea also included a model of temporal input discrimination in biological neuronal networks.[9]An early clear formulation of the reservoir computing idea is due to K. Kirby, who disclosed this concept in a largely forgotten conference contribution.[10]The first formulation of the reservoir computing idea known today stems from L. Schomaker,[11]who described how a desired target output could be obtained from an RNN by learning to combine signals from a randomly configured ensemble of spiking neural oscillators.[2]
Echo state networks can be built in different ways. They can be set up with or without directly trainable input-to-output connections, with or without feedback from the output back into the reservoir, with different neuron types, different reservoir-internal connectivity patterns, and so on. The output weights can be computed by linear regression with either online or offline algorithms. In addition to least-squares solutions, margin-maximization criteria, as used in training support vector machines, can be used to determine the output weights.[12]Other variants of echo state networks seek to change the formulation to better match common models of physical systems, such as those typically defined by differential equations. Work in this direction includes echo state networks which partially include physical models,[13]hybrid echo state networks,[14]and continuous-time echo state networks.[15]
The fixed RNN acts as a random, nonlinear medium whose dynamic response, the "echo", is used as a signal base. The linear combination of this base can be trained to reconstruct the desired output by minimizing some error criteria.[2]
RNNs were rarely used in practice before the introduction of the ESN, because of the complexity involved in adjusting their connections (e.g., lack of autodifferentiation, susceptibility to vanishing/exploding gradients, etc.). RNN training algorithms were slow and often vulnerable to issues such as bifurcations.[16]Convergence could therefore not be guaranteed. ESN training, by contrast, avoids these problems and is easy to implement. In early studies, ESNs were shown to perform well on time series prediction tasks from synthetic datasets.[1][17]
Today, many of the problems that made RNNs slow and error-prone have been addressed with the advent of autodifferentiation (deep learning) libraries, as well as more stable architectures such aslong short-term memoryandGated recurrent unit; thus, the unique selling point of ESNs has been lost. In addition, RNNs have proven themselves in several practical areas, such as language processing. To cope with tasks of similar complexity, reservoir computing methods require reservoirs of excessive size.
ESNs are used in some areas, such as signal processing applications. In particular, they have been widely used as a computing principle that mixes well with non-digital computer substrates. Since ESNs do not need to modify the parameters of the RNN, they make it possible to use many different objects as their nonlinear "reservoir": for example, optical microchips, mechanical nano-oscillators, polymer mixtures, or even artificial soft limbs.[2]
|
https://en.wikipedia.org/wiki/Echo_state_network
|
Biologically InspiredCognitive Architectures(BICA) was aDARPAproject administered by theInformation Processing Technology Office(IPTO). BICA began in 2005 and was designed to create the next generation ofcognitive architecturemodels of human artificial intelligence. Its first phase (Design) ran from September 2005 to around October 2006, and was intended to generate new ideas for biological architectures that could be used to create embodied computational architectures of human intelligence.
The second phase (Implementation) of BICA was set to begin in the spring of 2007, and would have involved the actual construction of new intelligent agents that live and behave in avirtual environment. However, this phase was canceled by DARPA, reportedly because it was seen as being too ambitious.[1]
Now BICA is atransdisciplinarystudy that aims to design, characterise and implement human-level cognitive architectures. There is also the BICA Society, a scientific nonprofit organization formed to promote and facilitate this study.[2]Its website[3]hosts an extensive comparison table of various cognitive architectures.[4]
|
https://en.wikipedia.org/wiki/Biologically_inspired_cognitive_architectures
|
C(pronounced/ˈsiː/– like the letterc)[6]is ageneral-purpose programming language. It was created in the 1970s byDennis Ritchieand remains very widely used and influential. By design, C's features cleanly reflect the capabilities of the targetedCPUs. It has found lasting use inoperating systemscode (especially inkernels[7]),device drivers, andprotocol stacks, but its use inapplication softwarehas been decreasing.[8]C is commonly used on computer architectures that range from the largestsupercomputersto the smallestmicrocontrollersandembedded systems.
A successor to the programming languageB, C was originally developed atBell Labsby Ritchie between 1972 and 1973 to construct utilities running onUnix. It was applied to re-implementing the kernel of the Unix operating system.[9]During the 1980s, C gradually gained popularity. It has become one of the most widely usedprogramming languages,[10][11]with Ccompilersavailable for practically all moderncomputer architecturesandoperating systems. The bookThe C Programming Language, co-authored by the original language designer, served for many years as thede factostandard for the language.[12][1]C has been standardized since 1989 by theAmerican National Standards Institute(ANSI) and, subsequently, jointly by theInternational Organization for Standardization(ISO) and theInternational Electrotechnical Commission(IEC).
C is animperativeprocedurallanguage, supportingstructured programming,lexical variable scope, andrecursion, with astatic type system. It was designed to becompiledto providelow-levelaccess tomemoryand language constructs that map efficiently tomachine instructions, all with minimalruntime support. Despite its low-level capabilities, the language was designed to encourage cross-platform programming. Astandards-compliant C program written withportabilityin mind can be compiled for a wide variety of computer platforms and operating systems with few changes to its source code.
Since 2000, C has consistently ranked among the top four languages in theTIOBE index, a measure of the popularity of programming languages.[13]
C is animperative, procedural language in theALGOLtradition. It has a statictype system. In C, allexecutable codeis contained withinsubroutines(also called "functions", though not in the sense offunctional programming).Function parametersare passed by value, althougharraysare passed aspointers, i.e. the address of the first item in the array.Pass-by-referenceis simulated in C by explicitly passing pointers to the thing being referenced.
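As a brief illustration of these calling conventions, consider the following sketch; the function and variable names are illustrative only.

#include <stdio.h>

/* Pass-by-value: the function receives a copy; the caller's variable is unchanged. */
static void set_to_ten_by_value(int n)    { n = 10; }

/* Simulated pass-by-reference: the caller passes a pointer explicitly. */
static void set_to_ten_by_pointer(int *n) { *n = 10; }

int main(void)
{
    int a = 1, b = 1;
    set_to_ten_by_value(a);      /* a is still 1 */
    set_to_ten_by_pointer(&b);   /* b is now 10  */
    printf("a = %d, b = %d\n", a, b);
    return 0;
}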
C program source text isfree-formcode.Semicolonsterminatestatements, whilecurly bracesare used to group statements intoblocks.
The C language also exhibits the following characteristics:
While C does not include certain features found in other languages (such asobject orientationandgarbage collection), these can be implemented or emulated, often through the use of external libraries (e.g., theGLib Object Systemor theBoehm garbage collector).
Many later languages have borrowed directly or indirectly from C, includingC++,C#, Unix'sC shell,D,Go,Java,JavaScript(includingtranspilers),Julia,Limbo,LPC,Objective-C,Perl,PHP,Python,Ruby,Rust,Swift,VerilogandSystemVerilog(hardware description languages).[5]These languages have drawn many of theircontrol structuresand other basic features from C. Most of them also express highly similarsyntaxto C, and they tend to combine the recognizable expression and statementsyntax of Cwith underlying type systems,data models, and semantics that can be radically different.
The origin of C is closely tied to the development of theUnixoperating system, originally implemented inassembly languageon aPDP-7byDennis RitchieandKen Thompson, incorporating several ideas from colleagues. Eventually, they decided to port the operating system to aPDP-11. The original PDP-11 version of Unix was also developed in assembly language.[9]
Thompson wanted a programming language for developing utilities for the new platform. He first tried writing aFortrancompiler, but he soon gave up the idea and instead created a cut-down version of the recently developedsystems programming languagecalledBCPL. The official description of BCPL was not available at the time,[14]and Thompson modified the syntax to be less 'wordy' and similar to a simplifiedALGOLknown as SMALGOL.[15]He called the resultB,[9]describing it as "BCPL semantics with a lot of SMALGOL syntax".[15]Like BCPL, B had abootstrappingcompiler to facilitate porting to new machines.[15]Ultimately, few utilities were written in B because it was too slow and could not take advantage of PDP-11 features such asbyteaddressability.
In 1971 Ritchie started to improve B, to use the features of the more-powerful PDP-11. A significant addition was a character data type. He called thisNew B(NB).[15]Thompson started to use NB to write theUnixkernel, and his requirements shaped the direction of the language development.[15][16]Through to 1972, richer types were added to the NB language: NB had arrays ofintandchar. Pointers, the ability to generate pointers to other types, arrays of all types, and types to be returned from functions were all also added. Arrays within expressions became pointers. A new compiler was written, and the language was renamed C.[9]
The C compiler and some utilities made with it were included inVersion 2 Unix, which is also known asResearch Unix.[17]
AtVersion 4 Unix, released in November 1973, theUnixkernelwas extensively re-implemented in C.[9]By this time, the C language had acquired some powerful features such asstructtypes.
Thepreprocessorwas introduced around 1973 at the urging ofAlan Snyderand also in recognition of the usefulness of the file-inclusion mechanisms available in BCPL andPL/I. Its original version provided only included files and simple string replacements:#includeand#defineof parameterless macros. Soon after that, it was extended, mostly byMike Leskand then by John Reiser, to incorporate macros with arguments andconditional compilation.[9]
Unix was one of the first operating system kernels implemented in a language other thanassembly. Earlier instances include theMulticssystem (which was written inPL/I) andMaster Control Program(MCP) for theBurroughs B5000(which was written inALGOL) in 1961. In around 1977, Ritchie andStephen C. Johnsonmade further changes to the language to facilitateportabilityof the Unix operating system. Johnson'sPortable C Compilerserved as the basis for several implementations of C on new platforms.[16]
In 1978Brian KernighanandDennis Ritchiepublished the first edition ofThe C Programming Language.[18]Known asK&Rfrom the initials of its authors, the book served for many years as an informalspecificationof the language. The version of C that it describes is commonly referred to as "K&R C". As this was released in 1978, it is now also referred to asC78.[19]The second edition of the book[20]covers the laterANSI Cstandard, described below.
K&Rintroduced several language features:
Even after the publication of the 1989 ANSI standard, for many years K&R C was still considered the "lowest common denominator" to which C programmers restricted themselves when maximum portability was desired, since many older compilers were still in use, and because carefully written K&R C code can be legal Standard C as well.
In early versions of C, only functions that return types other thaninthad to be declared if used before the function definition; functions used without prior declaration were presumed to return typeint.
For example:
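The listing this line refers to did not survive extraction; the following sketch reconstructs the kind of example usually given, with theinttype specifiers placed in comments so that the remark in the next paragraph still applies. It is written in the K&R style and would not compile cleanly under current standards.

long some_function();
/* int */ other_function();

/* int */ calling_function()
{
    long test1;
    register /* int */ test2;

    test1 = some_function();
    if (test1 > 1)
        test2 = 0;
    else
        test2 = other_function();
    return test2;
}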
Theinttype specifiers which are commented out could be omitted in K&R C, but are required in later standards.
Since K&R function declarations did not include any information about function arguments, function parametertype checkswere not performed, although some compilers would issue a warning message if a local function was called with the wrong number of arguments, or if different calls to an external function used different numbers or types of arguments. Separate tools such as Unix'slintutility were developed that (among other things) could check for consistency of function use across multiple source files.
In the years following the publication of K&R C, several features were added to the language, supported by compilers from AT&T (in particularPCC[21]) and some other vendors. These included:
The large number of extensions and lack of agreement on astandard library, together with the language popularity and the fact that not even the Unix compilers precisely implemented the K&R specification, led to the necessity of standardization.[22]
During the late 1970s and 1980s, versions of C were implemented for a wide variety ofmainframe computers,minicomputers, andmicrocomputers, including theIBM PC, as its popularity began to increase significantly.
In 1983 theAmerican National Standards Institute(ANSI) formed a committee, X3J11, to establish a standard specification of C. X3J11 based the C standard on the Unix implementation; however, the non-portable portion of the Unix C library was handed off to theIEEEworking group1003 to become the basis for the 1988POSIXstandard. In 1989, the C standard was ratified as ANSI X3.159-1989 "Programming Language C". This version of the language is often referred to asANSI C, Standard C, or sometimesC89.
In 1990 the ANSI C standard (with formatting changes) was adopted by theInternational Organization for Standardization(ISO) as ISO/IEC 9899:1990, which is sometimes calledC90. Therefore, the terms "C89" and "C90" refer to the same programming language.
ANSI, like other national standards bodies, no longer develops the C standard independently, but defers to the international C standard, maintained by the working groupISO/IEC JTC1/SC22/WG14. National adoption of an update to the international standard typically occurs within a year of ISO publication.
One of the aims of the C standardization process was to produce asupersetof K&R C, incorporating many of the subsequently introduced unofficial features. The standards committee also included several additional features such asfunction prototypes(borrowed from C++),voidpointers, support for internationalcharacter setsandlocales, and preprocessor enhancements. Although thesyntaxfor parameter declarations was augmented to include the style used in C++, the K&R interface continued to be permitted, for compatibility with existing source code.
C89 is supported by current C compilers, and most modern C code is based on it. Any program written only in Standard C and without any hardware-dependent assumptions will run correctly on anyplatformwith a conforming C implementation, within its resource limits. Without such precautions, programs may compile only on a certain platform or with a particular compiler, due, for example, to the use of non-standard libraries, such asGUIlibraries, or to a reliance on compiler- or platform-specific attributes such as the exact size of data types and byteendianness.
In cases where code must be compilable by either standard-conforming or K&R C-based compilers, the__STDC__macro can be used to split the code into Standard and K&R sections to prevent the use on a K&R C-based compiler of features available only in Standard C.
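A minimal sketch of this technique, using a hypothetical function add that is given a prototype-style definition only when a Standard C compiler is in use:

#include <stdio.h>

#ifdef __STDC__
/* Standard C: prototype-style definition with typed parameters */
int add(int a, int b)
{
    return a + b;
}
#else
/* K&R C: old-style definition; parameter types are declared separately */
int add(a, b)
int a, b;
{
    return a + b;
}
#endif

int main(void)
{
    printf("%d\n", add(2, 3));
    return 0;
}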
After the ANSI/ISO standardization process, the C language specification remained relatively static for several years. In 1995, Normative Amendment 1 to the 1990 C standard (ISO/IEC 9899/AMD1:1995, known informally as C95) was published, to correct some details and to add more extensive support for international character sets.[23]
The C standard was further revised in the late 1990s, leading to the publication of ISO/IEC 9899:1999 in 1999, which is commonly referred to as "C99". It has since been amended three times by Technical Corrigenda.[24]
C99 introduced several new features, includinginline functions, several newdata types(includinglong long intand acomplextype to representcomplex numbers),variable-length arraysandflexible array members, improved support forIEEE 754floating point, support forvariadic macros(macros of variablearity), and support for one-line comments beginning with//, as in BCPL or C++. Many of these had already been implemented as extensions in several C compilers.
C99 is for the most part backward compatible with C90, but is stricter in some ways; in particular, a declaration that lacks a type specifier no longer hasintimplicitly assumed. A standard macro__STDC_VERSION__is defined with value199901Lto indicate that C99 support is available.GCC,Solaris Studio, and other C compilers now[when?]support many or all of the new features of C99. The C compiler inMicrosoft Visual C++, however, implements the C89 standard and those parts of C99 that are required for compatibility withC++11.[25][needs update]
In addition, the C99 standard requires support foridentifiersusingUnicodein the form of escaped characters (e.g.\u0040or\U0001f431) and suggests support for raw Unicode names.
Work began in 2007 on another revision of the C standard, informally called "C1X" until its official publication of ISO/IEC 9899:2011 on December 8, 2011. The C standards committee adopted guidelines to limit the adoption of new features that had not been tested by existing implementations.
The C11 standard adds numerous new features to C and the library, including type generic macros, anonymous structures, improved Unicode support, atomic operations, multi-threading, and bounds-checked functions. It also makes some portions of the existing C99 library optional, and improves compatibility with C++. The standard macro__STDC_VERSION__is defined as201112Lto indicate that C11 support is available.
C17 is an informal name for ISO/IEC 9899:2018, a standard for the C programming language published in June 2018. It introduces no new language features, only technical corrections, and clarifications to defects in C11. The standard macro__STDC_VERSION__is defined as201710Lto indicate that C17 support is available.
C23 is an informal name for the current major C language standard revision. It was informally known as "C2X" through most of its development. C23 was published in October 2024 as ISO/IEC 9899:2024.[26]The standard macro__STDC_VERSION__is defined as202311Lto indicate that C23 support is available.
C2Y is an informal name for the next major C language standard revision, after C23 (C2X), that is hoped to be released later in the 2020s, hence the '2' in "C2Y". An early working draft of C2Y was released in February 2024 as N3220 by the working groupISO/IEC JTC1/SC22/WG14.[27]
Historically, embedded C programming requires non-standard extensions to the C language to support exotic features such asfixed-point arithmetic, multiple distinctmemory banks, and basic I/O operations.
In 2008, the C Standards Committee published atechnical reportextending the C language[28]to address these issues by providing a common standard for all implementations to adhere to. It includes a number of features not available in normal C, such as fixed-point arithmetic, named address spaces, and basic I/O hardware addressing.
C has aformal grammarspecified by the C standard.[29]Line endings are generally not significant in C; however, line boundaries do have significance during the preprocessing phase. Comments may appear either between the delimiters/*and*/, or (since C99) following//until the end of the line. Comments delimited by/*and*/do not nest, and these sequences of characters are not interpreted as comment delimiters if they appear insidestringor character literals.[30]
C source files contain declarations and function definitions. Function definitions, in turn, contain declarations andstatements. Declarations either define new types using keywords such asstruct,union, andenum, or assign types to and perhaps reserve storage for new variables, usually by writing the type followed by the variable name. Keywords such ascharandintspecify built-in types. Sections of code are enclosed in braces ({and}, sometimes called "curly brackets") to limit the scope of declarations and to act as a single statement for control structures.
As an imperative language, C usesstatementsto specify actions. The most common statement is anexpression statement, consisting of an expression to be evaluated, followed by a semicolon; as aside effectof the evaluation,functions may be calledandvariables assignednew values. To modify the normal sequential execution of statements, C provides several control-flow statements identified by reserved keywords.Structured programmingis supported byif... [else] conditional execution and bydo...while,while, andforiterative execution (looping). Theforstatement has separate initialization, testing, and reinitialization expressions, any or all of which can be omitted.breakandcontinuecan be used within the loop. Break is used to leave the innermost enclosing loop statement and continue is used to skip to its reinitialisation. There is also a non-structuredgotostatement which branches directly to the designatedlabelwithin the function.switchselects acaseto be executed based on the value of an integer expression. Different from many other languages, control-flow willfall throughto the nextcaseunless terminated by abreak.
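A short sketch of these statements in action; the loop, values and messages are purely illustrative.

#include <stdio.h>

int main(void)
{
    for (int i = 0; i < 5; i++) {
        if (i == 1)
            continue;                 /* skip the rest of this iteration */
        if (i == 4)
            break;                    /* leave the innermost loop entirely */

        switch (i) {
        case 0:
            printf("zero\n");
            /* no break: control falls through to the next case */
        case 2:
            printf("zero or two\n");
            break;
        default:
            printf("something else\n");
            break;
        }
    }
    return 0;
}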
Expressions can use a variety of built-in operators and may contain function calls. The order in which arguments to functions and operands to most operators are evaluated is unspecified. The evaluations may even be interleaved. However, all side effects (including storage to variables) will occur before the next "sequence point"; sequence points include the end of each expression statement, and the entry to and return from each function call. Sequence points also occur during evaluation of expressions containing certain operators (&&,||,?:and thecomma operator). This permits a high degree of object code optimization by the compiler, but requires C programmers to take more care to obtain reliable results than is needed for other programming languages.
Kernighan and Ritchie say in the Introduction ofThe C Programming Language: "C, like any other language, has its blemishes. Some of the operators have the wrong precedence; some parts of the syntax could be better."[31]The C standard did not attempt to correct many of these blemishes, because of the impact of such changes on already existing software.
The basic C source character set includes the following characters:
Thenewlinecharacter indicates the end of a text line; it need not correspond to an actual single character, although for convenience C treats it as such.
Additional multi-byte encoded characters may be used instring literals, but they are not entirelyportable. SinceC99multi-national Unicode characters can be embedded portably within C source text by using\uXXXXor\UXXXXXXXXencoding (whereXdenotes a hexadecimal character).
The basic C execution character set contains the same characters, along with representations foralert,backspace, andcarriage return.Run-timesupport for extended character sets has increased with each revision of the C standard.
The following reserved words arecase sensitive.
C89 has 32 reserved words, also known as 'keywords', which cannot be used for any purposes other than those for which they are predefined:
C99 added five more reserved words: (‡ indicates an alternative spelling alias for a C23 keyword)
C11 added seven more reserved words:[32](‡ indicates an alternative spelling alias for a C23 keyword)
C23 reserved fifteen more words:
Most of the recently reserved words begin with an underscore followed by a capital letter, because identifiers of that form were previously reserved by the C standard for use only by implementations. Since existing program source code should not have been using these identifiers, it would not be affected when C implementations started supporting these extensions to the programming language. Some standard headers do define more convenient synonyms for underscored identifiers. Some of those words were added as keywords with their conventional spelling in C23 and the corresponding macros were removed.
Prior to C89,entrywas reserved as a keyword. In the second edition of their bookThe C Programming Language, which describes what became known as C89, Kernighan and Ritchie wrote, "The ... [keyword]entry, formerly reserved but never used, is no longer reserved." and "The stillbornentrykeyword is withdrawn."[33]
C supports a rich set ofoperators, which are symbols used within anexpressionto specify the manipulations to be performed while evaluating that expression. C has operators for:
C uses the operator=(used in mathematics to express equality) to indicate assignment, following the precedent ofFortranandPL/I, but unlikeALGOLand its derivatives. C uses the operator==to test for equality. The similarity between the operators for assignment and equality may result in the accidental use of one in place of the other, and in many cases the mistake does not produce an error message (although some compilers produce warnings). For example, the conditional expressionif (a == b + 1)might mistakenly be written asif (a = b + 1), which will be evaluated astrueunless the value ofais0after the assignment.[34]
The Coperator precedenceis not always intuitive. For example, the operator==binds more tightly than (is executed prior to) the operators&(bitwise AND) and|(bitwise OR) in expressions such asx & 1 == 0, which must be written as(x & 1) == 0if that is the coder's intent.[35]
The "hello, world" example that appeared in the first edition ofK&Rhas become the model for an introductory program in most programming textbooks. The program prints "hello, world" to thestandard output, which is usually a terminal or screen display.
The original version was:[36]
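The listing itself was lost in extraction; the version usually reproduced from the first edition is the following, which relies on pre-standard rules (implicitintand an undeclaredprintf) and is rejected or warned about by modern compilers:

main()
{
    printf("hello, world\n");
}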
A standard-conforming "hello, world" program is:[a]
The first line of the program contains apreprocessing directive, indicated by#include. This causes the compiler to replace that line of code with the entire text of thestdio.hheader file, which contains declarations for standard input and output functions such asprintfandscanf. The angle brackets surroundingstdio.hindicate that the header file can be located using a search strategy that prefers headers provided with the compiler to other headers having the same name (as opposed to double quotes which typically include local or project-specific header files).
The second line indicates that a function namedmainis being defined. Themainfunction serves a special purpose in C programs; therun-time environmentcalls themainfunction to begin program execution. The type specifierintindicates that the value returned to the invoker (in this case the run-time environment) as a result of evaluating themainfunction, is an integer. The keywordvoidas a parameter list indicates that themainfunction takes no arguments.[b]
The opening curly brace indicates the beginning of the code that defines themainfunction.
The next line of the program is a statement thatcalls(i.e. diverts execution to) a function namedprintf, which in this case is supplied from a systemlibrary. In this call, theprintffunction ispassed(i.e. provided with) a single argument, which is theaddressof the first character in thestring literal"hello, world\n". The string literal is an unnamedarrayset up automatically by the compiler, with elements of typecharand a finalNULL character(ASCII value 0) marking the end of the array (to allowprintfto determine the length of the string). The NULL character can also be written as theescape sequence\0. The\nis a standard escape sequence that C translates to anewlinecharacter, which, on output, signifies the end of the current line. The return value of theprintffunction is of typeint, but it is silently discarded since it is not used. (A more careful program might test the return value to check that theprintffunction succeeded.) The semicolon;terminates the statement.
The closing curly brace indicates the end of the code for themainfunction. According to the C99 specification and newer, themainfunction (unlike any other function) will implicitly return a value of0upon reaching the}that terminates the function.[c]The return value of0is interpreted by the run-time system as an exit code indicating successful execution of the function.[37]
Thetype systemin C isstaticandweakly typed, which makes it similar to the type system ofALGOLdescendants such asPascal.[38]There are built-in types for integers of various sizes, both signed and unsigned,floating-point numbers, and enumerated types (enum). Integer typecharis often used for single-byte characters. C99 added aBoolean data type. There are also derived types includingarrays,pointers,records(struct), andunions(union).
C is often used in low-level systems programming where escapes from the type system may be necessary. The compiler attempts to ensure type correctness of most expressions, but the programmer can override the checks in various ways, either by using atype castto explicitly convert a value from one type to another, or by using pointers or unions to reinterpret the underlying bits of a data object in some other way.
Some find C's declaration syntax unintuitive, particularly forfunction pointers. (Ritchie's idea was to declare identifiers in contexts resembling their use: "declaration reflects use".)[39]
C'susual arithmetic conversionsallow for efficient code to be generated, but can sometimes produce unexpected results. For example, a comparison of signed and unsigned integers of equal width requires a conversion of the signed value to unsigned. This can generate unexpected results if the signed value is negative.
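A small sketch of the surprise described above:

#include <stdio.h>

int main(void)
{
    int i = -1;
    unsigned int u = 1;

    /* The signed operand is converted to unsigned, so -1 becomes UINT_MAX
       and the comparison is false, which often surprises newcomers. */
    if (i < u)
        printf("-1 < 1u\n");
    else
        printf("-1 >= 1u (after conversion to unsigned)\n");
    return 0;
}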
C supports the use ofpointers, a type ofreferencethat records the address or location of an object or function in memory. Pointers can bedereferencedto access data stored at the address pointed to, or to invoke a pointed-to function. Pointers can be manipulated using assignment orpointer arithmetic. The run-time representation of a pointer value is typically a raw memory address (perhaps augmented by an offset-within-word field), but since a pointer's type includes the type of the thing pointed to, expressions including pointers can be type-checked at compile time. Pointer arithmetic is automatically scaled by the size of the pointed-to data type.
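A brief sketch of the scaling rule, using an illustrative array a:

#include <stdio.h>

int main(void)
{
    int a[4] = {10, 20, 30, 40};
    int *p = a;                       /* points at a[0] */

    p = p + 2;                        /* advances by 2 elements, i.e. 2 * sizeof(int) bytes */
    printf("*p = %d\n", *p);          /* prints 30 */
    printf("p - a = %td\n", p - a);   /* pointer difference in elements: prints 2 */
    return 0;
}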
Pointers are used for many purposes in C.Text stringsare commonly manipulated using pointers into arrays of characters.Dynamic memory allocationis performed using pointers; the result of amallocis usuallycastto the data type of the data to be stored. Many data types, such astrees, are commonly implemented as dynamically allocatedstructobjects linked together using pointers. Pointers to other pointers are often used in multi-dimensional arrays and arrays ofstructobjects. Pointers to functions (function pointers) are useful for passing functions as arguments tohigher-order functions(such asqsortorbsearch), indispatch tables, or ascallbackstoevent handlers.[37]
Anull pointervalueexplicitly points to no valid location. Dereferencing a null pointer value is undefined, often resulting in asegmentation fault. Null pointer values are useful for indicating special cases such as no "next" pointer in the final node of alinked list, or as an error indication from functions returning pointers. In appropriate contexts in source code, such as for assigning to a pointer variable, anull pointer constantcan be written as0, with or without explicit casting to a pointer type, as theNULLmacro defined by several standard headers or, since C23 with the constantnullptr. In conditional contexts, null pointer values evaluate tofalse, while all other pointer values evaluate totrue.
Void pointers (void *) point to objects of unspecified type, and can therefore be used as "generic" data pointers. Since the size and type of the pointed-to object is not known, void pointers cannot be dereferenced, nor is pointer arithmetic on them allowed, although they can easily be (and in many contexts implicitly are) converted to and from any other object pointer type.[37]
Careless use of pointers is potentially dangerous. Because they are typically unchecked, a pointer variable can be made to point to any arbitrary location, which can cause undesirable effects. Although properly used pointers point to safe places, they can be made to point to unsafe places by using invalidpointer arithmetic; the objects they point to may continue to be used after deallocation (dangling pointers); they may be used without having been initialized (wild pointers); or they may be directly assigned an unsafe value using a cast, union, or through another corrupt pointer. In general, C is permissive in allowing manipulation of and conversion between pointer types, although compilers typically provide options for various levels of checking. Some other programming languages address these problems by using more restrictivereferencetypes.
Arraytypes in C are traditionally of a fixed, static size specified at compile time. The more recent C99 standard also allows a form of variable-length arrays. However, it is also possible to allocate a block of memory (of arbitrary size) at run-time, using the standard library'smallocfunction, and treat it as an array.
Since arrays are always accessed (in effect) via pointers, array accesses are typicallynotchecked against the underlying array size, although some compilers may providebounds checkingas an option.[40][41]Array bounds violations are therefore possible and can lead to various repercussions, including illegal memory accesses, corruption of data,buffer overruns, and run-time exceptions.
C does not have a special provision for declaringmulti-dimensional arrays, but rather relies onrecursionwithin the type system to declare arrays of arrays, which effectively accomplishes the same thing. The index values of the resulting "multi-dimensional array" can be thought of as increasing inrow-major order. Multi-dimensional arrays are commonly used in numerical algorithms (mainly from appliedlinear algebra) to store matrices. The structure of the C array is well suited to this particular task. However, in early versions of C the bounds of the array must be known fixed values or else explicitly passed to any subroutine that requires them, and dynamically sized arrays of arrays cannot be accessed using double indexing. (A workaround for this was to allocate the array with an additional "row vector" of pointers to the columns.) C99 introduced "variable-length arrays" which address this issue.
The following example using modern C (C99 or later) shows allocation of a two-dimensional array on the heap and the use of multi-dimensional array indexing for accesses (which can use bounds-checking on many C compilers):
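The code referred to here was lost in extraction; the following is a sketch along the same lines, with hypothetical names, allocating the whole n-by-m block with a single malloc and indexing it through a pointer to a variable-length array type.

#include <stdio.h>
#include <stdlib.h>

/* Allocate an n-by-m array of doubles on the heap and index it with
   ordinary two-dimensional subscripts via a pointer to a VLA type. */
static int fill_and_print(int n, int m)
{
    double (*a)[n][m] = malloc(sizeof *a);   /* one contiguous block of n*m doubles */
    if (a == NULL)
        return -1;

    for (int i = 0; i < n; i++)
        for (int j = 0; j < m; j++)
            (*a)[i][j] = i * m + j;

    printf("a[1][2] = %g\n", (*a)[1][2]);
    free(a);
    return 0;
}

int main(void)
{
    return fill_and_print(3, 4) == 0 ? 0 : 1;
}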
And here is a similar implementation using C99's automatic variable-length array (VLA) feature:[d]
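Again the original listing is missing; a sketch with the same hypothetical names, this time declaring the array with automatic storage, might look like this:

#include <stdio.h>

/* n and m need not be compile-time constants: p is a variable-length
   array with automatic storage duration (typically on the stack). */
static void fill_and_print(int n, int m)
{
    double p[n][m];

    for (int i = 0; i < n; i++)
        for (int j = 0; j < m; j++)
            p[i][j] = i * m + j;

    printf("p[1][2] = %g\n", p[1][2]);
}

int main(void)
{
    fill_and_print(3, 4);
    return 0;
}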
The subscript notationx[i](wherexdesignates a pointer) issyntactic sugarfor*(x+i).[42]Taking advantage of the compiler's knowledge of the pointer type, the address thatx + ipoints to is not the base address (pointed to byx) incremented byibytes, but rather is defined to be the base address incremented byimultiplied by the size of an element thatxpoints to. Thus,x[i]designates thei+1th element of the array.
Furthermore, in most expression contexts (a notable exception is as operand ofsizeof), an expression of array type is automatically converted to a pointer to the array's first element. This implies that an array is never copied as a whole when named as an argument to a function, but rather only the address of its first element is passed. Therefore, although function calls in C usepass-by-valuesemantics, arrays are in effect passed byreference.
The total size of an arrayxcan be determined by applyingsizeofto an expression of array type. The size of an element can be determined by applying the operatorsizeofto any dereferenced element of an arrayA, as inn = sizeof A[0]. Thus, the number of elements in a declared arrayAcan be determined assizeof A / sizeof A[0]. Note that if only a pointer to the first element is available, as is often the case in C code because of the automatic conversion described above, the information about the full type of the array and its length is lost.
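For instance (the array A here is illustrative):

#include <stdio.h>

int main(void)
{
    double A[12];

    size_t total = sizeof A;               /* size of the whole array in bytes */
    size_t elem  = sizeof A[0];            /* size of one element              */
    size_t count = sizeof A / sizeof A[0]; /* number of elements: 12           */

    printf("%zu bytes, %zu per element, %zu elements\n", total, elem, count);
    return 0;
}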
One of the most important functions of a programming language is to provide facilities for managingmemoryand the objects that are stored in memory. C provides three principal ways to allocate memory for objects:[37]static allocation, in which storage is laid out at compile time and persists for the lifetime of the program; automatic allocation, in which temporary storage for a block or function call is placed on the stack and released when the block exits; and dynamic allocation, in which storage is requested at run time from the heap with functions such asmallocand released withfree.
These three approaches are appropriate in different situations and have various trade-offs. For example, static memory allocation has little allocation overhead, automatic allocation may involve slightly more overhead, and dynamic memory allocation can potentially have a great deal of overhead for both allocation and deallocation. The persistent nature of static objects is useful for maintaining state information across function calls, automatic allocation is easy to use but stack space is typically much more limited and transient than either static memory or heap space, and dynamic memory allocation allows convenient allocation of objects whose size is known only at run-time. Most C programs make extensive use of all three.
Where possible, automatic or static allocation is usually simplest because the storage is managed by the compiler, freeing the programmer of the potentially error-prone chore of manually allocating and releasing storage. However, many data structures can change in size at runtime, and since static allocations (and automatic allocations before C99) must have a fixed size at compile-time, there are many situations in which dynamic allocation is necessary.[37]Prior to the C99 standard, variable-sized arrays were a common example of this. (See the article onC dynamic memory allocationfor an example of dynamically allocated arrays.) Unlike automatic allocation, which can fail at run time with uncontrolled consequences, the dynamic allocation functions return an indication (in the form of a null pointer value) when the required storage cannot be allocated. (Static allocation that is too large is usually detected by thelinkerorloader, before the program can even begin execution.)
Unless otherwise specified, static objects contain zero or null pointer values upon program startup. Automatically and dynamically allocated objects are initialized only if an initial value is explicitly specified; otherwise they initially have indeterminate values (typically, whateverbit patternhappens to be present in thestorage, which might not even represent a valid value for that type). If the program attempts to access an uninitialized value, the results are undefined. Many modern compilers try to detect and warn about this problem, but bothfalse positives and false negativescan occur.
Heap memory allocation has to be synchronized with its actual usage in any program to be reused as much as possible. For example, if the only pointer to a heap memory allocation goes out of scope or has its value overwritten before it is deallocated explicitly, then that memory cannot be recovered for later reuse and is essentially lost to the program, a phenomenon known as amemory leak.Conversely, it is possible for memory to be freed, but is referenced subsequently, leading to unpredictable results. Typically, the failure symptoms appear in a portion of the program unrelated to the code that causes the error, making it difficult to diagnose the failure. Such issues are ameliorated in languages withautomatic garbage collection.
The C programming language useslibrariesas its primary method of extension. In C, a library is a set of functions contained within a single "archive" file. Each library typically has aheader file, which contains the prototypes of the functions contained within the library that may be used by a program, and declarations of special data types and macro symbols used with these functions. For a program to use a library, it must include the library's header file, and the library must be linked with the program, which in many cases requirescompiler flags(e.g.,-lm, shorthand for "link the math library").[37]
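A minimal sketch of using such a library, here the standard math library; on many Unix-like systems a program of this kind might be built with a command along the lines of cc prog.c -lm, where -lm asks the linker to include the math library.

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* sqrt is declared in the math.h header; its object code lives in the math library */
    printf("sqrt(2) = %f\n", sqrt(2.0));
    return 0;
}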
The most common C library is theC standard library, which is specified by theISOandANSI Cstandards and comes with every C implementation (implementations which target limited environments such asembedded systemsmay provide only a subset of the standard library). This library supports stream input and output, memory allocation, mathematics, character strings, and time values. Several separate standard headers (for example,stdio.h) specify the interfaces for these and other standard library facilities.
Another common set of C library functions are those used by applications specifically targeted forUnixandUnix-likesystems, especially functions which provide an interface to thekernel. These functions are detailed in various standards such asPOSIXand theSingle UNIX Specification.
Since many programs have been written in C, there are a wide variety of other libraries available. Libraries are often written in C because C compilers generate efficientobject code; programmers then create interfaces to the library so that the routines can be used from higher-level languages likeJava,Perl, andPython.[37]
File input and output (I/O) is not part of the C language itself but instead is handled by libraries (such as the C standard library) and their associated header files (e.g.stdio.h). File handling is generally implemented through high-level I/O which works throughstreams. A stream is from this perspective a data flow that is independent of devices, while a file is a concrete device. The high-level I/O is done through the association of a stream to a file. In the C standard library, abuffer(a memory area or queue) is temporarily used to store data before it is sent to the final destination. This reduces the time spent waiting for slower devices, for example ahard driveorsolid-state drive. Low-level I/O functions are not part of the standard C library[clarification needed]but are generally part of "bare metal" programming (programming that is independent of anyoperating systemsuch as mostembedded programming). With few exceptions, implementations include low-level I/O.
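A minimal sketch of stream-based file output, with a hypothetical file name:

#include <stdio.h>

int main(void)
{
    /* "example.txt" is an illustrative file name. */
    FILE *f = fopen("example.txt", "w");
    if (f == NULL) {
        perror("fopen");
        return 1;
    }
    fprintf(f, "stream output is buffered before reaching the device\n");
    fclose(f);      /* flushes the buffer and releases the stream */
    return 0;
}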
A number of tools have been developed to help C programmers find and fix statements with undefined behavior or possibly erroneous expressions, with greater rigor than that provided by the compiler.
Automated source code checking and auditing tools exist, such asLint. A common practice is to use Lint to detect questionable code when a program is first written. Once a program passes Lint, it is then compiled using the C compiler. Also, many compilers can optionally warn about syntactically valid constructs that are likely to actually be errors.MISRA Cis a proprietary set of guidelines to avoid such questionable code, developed for embedded systems.[43]
There are also compilers, libraries, and operating system level mechanisms for performing actions that are not a standard part of C, such asbounds checkingfor arrays, detection ofbuffer overflow,serialization,dynamic memorytracking, andautomatic garbage collection.
Memory management checking tools likePurifyorValgrindand linking with libraries containing special versions of thememory allocation functionscan help uncover runtime errors in memory usage.[44][45]
C is widely used forsystems programmingin implementingoperating systemsandembedded systemapplications.[46]This is for several reasons:
C enables programmers to create efficient implementations of algorithms and data structures, because the layer of abstraction from hardware is thin, and its overhead is low, an important criterion for computationally intensive programs. For example, theGNU Multiple Precision Arithmetic Library, theGNU Scientific Library,Mathematica, andMATLABare completely or partially written in C. Many languages support calling library functions in C, for example, thePython-based frameworkNumPyuses C for the high-performance and hardware-interacting aspects.
Computer games are often built from a combination of languages. C has featured significantly, especially for those games attempting to obtain best performance from computer platforms. Examples include Doom from 1993.[47]
C is sometimes used as anintermediate languageby implementations of other languages. This approach may be used for portability or convenience; by using C as an intermediate language, additional machine-specific code generators are not necessary. C has some features, such as line-number preprocessor directives and optional superfluous commas at the end of initializer lists, that support compilation of generated code. However, some of C's shortcomings have prompted the development of otherC-based languagesspecifically designed for use as intermediate languages, such asC--. Also, contemporary major compilersGCCandLLVMboth feature anintermediate representationthat is not C, and those compilers support front ends for many languages including C.
A consequence of C's wide availability and efficiency is thatcompilers, libraries andinterpretersof other programming languages are often implemented in C.[48]For example, thereference implementationsofPython,[49]Perl,[50]Ruby,[51]andPHP[52]are written in C.
Historically, C was sometimes used forweb developmentusing theCommon Gateway Interface(CGI) as a "gateway" for information between the web application, the server, and the browser.[53]C may have been chosen overinterpreted languagesbecause of its speed, stability, and near-universal availability.[54]It is no longer common practice for web development to be done in C,[55]and many otherweb development languagesare popular. Applications where C-based web development continues include theHTTPconfiguration pages onrouters,IoTdevices and similar, although even here some projects have parts in higher-level languages e.g. the use ofLuawithinOpenWRT.
The two most popularweb servers,Apache HTTP ServerandNginx, are both written in C. These web servers interact with the operating system, listen on TCP ports for HTTP requests, and then serve up static web content, or invoke handlers written in other languages to 'render' dynamic content, such asPHP, which is itself primarily written in C. C's close-to-the-metal approach allows for the construction of these high-performance software systems.
C has also been widely used to implementend-userapplications.[56]However, such applications can also be written in newer, higher-level languages.
the power of assembly language and the convenience of ... assembly language
While C has been popular, influential and hugely successful, it has drawbacks, including:
For some purposes, restricted styles of C have been adopted, e.g.MISRA CorCERT C, in an attempt to reduce the opportunity for bugs. Databases such asCWEattempt to count the ways C etc. has vulnerabilities, along with recommendations for mitigation.
There aretoolsthat can mitigate some of these drawbacks. Contemporary C compilers include checks which may generate warnings to help identify many potential bugs.
C has both directly and indirectly influenced many later languages such asC++andJava.[65]The most pervasive influence has been syntactical; all of the languages mentioned combine the statement and (more or less recognizably) expressionsyntax of Cwith type systems, data models or large-scale program structures that differ from those of C, sometimes radically.
Several C or near-C interpreters exist, includingChandCINT, which can also be used for scripting.
Whenobject-oriented programminglanguages became popular,C++andObjective-Cwere two different extensions of C that provided object-oriented capabilities. Both languages were originally implemented assource-to-source compilers; source code was translated into C, and then compiled with a C compiler.[66]
The C++ programming language (originally named "C with Classes") was devised by Bjarne Stroustrup as an approach to providing object-oriented functionality with a C-like syntax.[67] C++ adds greater typing strength, scoping, and other tools useful in object-oriented programming, and permits generic programming via templates. Nearly a superset of C, C++ supports most of C, with a few exceptions.
Objective-Cwas originally a very "thin" layer on top of C, and remains a strictsupersetof C that permits object-oriented programming using a hybrid dynamic/static typing paradigm. Objective-C derives its syntax from both C andSmalltalk: syntax that involves preprocessing, expressions, function declarations, and function calls is inherited from C, while the syntax for object-oriented features was originally taken from Smalltalk.
In addition toC++andObjective-C,Ch,Cilk, andUnified Parallel Care nearly supersets of C.
|
https://en.wikipedia.org/wiki/C_programming_language
|
Similarity searchis the most general term used for a range of mechanisms which share the principle of searching (typically very large) spaces of objects where the only available comparator is thesimilaritybetween any pair of objects. This is becoming increasingly important in an age of large information repositories where the objects contained do not possess any natural order, for example large collections of images, sounds and other sophisticated digital objects.
Nearest neighbor searchandrange queriesare important subclasses of similarity search, and a number of solutions exist. Research in similarity search is dominated by the inherent problems of searching over complex objects. Such objects cause most known techniques to lose traction over large collections, due to a manifestation of the so-calledcurse of dimensionality, and there are still many unsolved problems. Unfortunately, in many cases where similarity search is necessary, the objects are inherently complex.
The most general approach to similarity search relies upon the mathematical notion ofmetric space, which allows the construction of efficient index structures in order to achieve scalability in the search domain.
Similarity search evolved independently in a number of different scientific and computing contexts, according to various needs. In 2008 a few leading researchers in the field felt strongly that the subject should be a research topic in its own right, to allow focus on the general issues applicable across the many diverse domains of its use. This resulted in the formation of theSISAPfoundation, whose main activity is a series of annual international conferences on the generic topic.
Metric search is similarity search which takes place withinmetric spaces. While thesemimetricproperties are more or less necessary for any kind of search to be meaningful, the further property oftriangle inequalityis useful for engineering, rather than conceptual, purposes.
A simple corollary of triangle inequality is that, if any two objects within the space are far apart, then no third object can be close to both. This observation allows data structures to be built, based on distances measured within the data collection, which allow subsets of the data to be excluded when a query is executed. As a simple example, areferenceobject can be chosen from the data set, and the remainder of the set divided into two parts based on distance to this object: those close to the reference object in setA, and those far from the object in setB. If, when the set is later queried, the distance from the query to the reference object is large, then none of the objects within setAcan be very close to the query; if it is very small, then no object within setBcan be close to the query.
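A minimal sketch of that exclusion rule follows, assuming a user-supplied distance function that satisfies the triangle inequality; the two-dimensional data, the pivot and the search radius below are invented purely for illustration.

```c
/* Pivot-based exclusion in a metric space (range query with radius r). */
#include <stdio.h>
#include <math.h>

/* Example metric: Euclidean distance in 2-D. */
static double distance(const double a[2], const double b[2]) {
    double dx = a[0] - b[0], dy = a[1] - b[1];
    return sqrt(dx * dx + dy * dy);
}

int main(void) {
    double data[6][2] = {{0,0},{1,0},{0,2},{5,5},{6,5},{5,7}};
    int n = 6;
    double pivot[2] = {0, 0};   /* reference object chosen from the data */
    double mu = 3.0;            /* split: set A = { x : d(x,pivot) <= mu } */
    double query[2] = {5.5, 5.0};
    double r = 1.0;             /* search radius */

    double dqp = distance(query, pivot);

    for (int i = 0; i < n; i++) {
        double dxp = distance(data[i], pivot);   /* precomputed in a real index */
        int in_A = (dxp <= mu);
        /* Triangle inequality: if d(q,pivot) > mu + r, nothing in A can be
           within r of q; if d(q,pivot) <= mu - r, nothing in B can be. */
        if (in_A  && dqp >  mu + r) continue;    /* exclude set A */
        if (!in_A && dqp <= mu - r) continue;    /* exclude set B */
        if (distance(data[i], query) <= r)
            printf("match: (%g, %g)\n", data[i][0], data[i][1]);
    }
    return 0;
}
```

In a real metric index the distances to the pivot are computed once at build time, so a query that excludes a whole partition avoids those distance computations entirely.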
Once such situations are quantified and studied, many different metric indexing structures can be designed, variously suitable for different types of collections. The research domain of metric search can thus be characterised as the study of pre-processing algorithms over large and relatively static collections of data which, using the properties of metric spaces, allow efficient similarity search to be performed.
A popular approach for similarity search islocality sensitive hashing(LSH).[1]Ithashesinput items so that similar items map to the same "buckets" in memory with high probability (the number of buckets being much smaller than the universe of possible input items). It is often applied in nearest neighbor search on large scale high-dimensional data, e.g., image databases, document collections, time-series databases, and genome databases.[2]
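As a hedged illustration of the idea (not the specific scheme of any cited work), the sketch below hashes real-valued vectors by the sign of their projection onto random hyperplanes, so that similar vectors tend to agree on most bits and therefore fall into the same bucket; the dimension, bit count and seed are arbitrary choices.

```c
/* Random-hyperplane LSH sketch: the hash is the pattern of projection signs. */
#include <stdio.h>
#include <stdlib.h>

#define DIM   8    /* dimensionality of the input vectors   */
#define BITS 16    /* number of hyperplanes = bits per hash */

static double planes[BITS][DIM];

static void init_planes(unsigned seed) {
    srand(seed);
    for (int b = 0; b < BITS; b++)
        for (int d = 0; d < DIM; d++)
            planes[b][d] = (double)rand() / RAND_MAX - 0.5;  /* random direction */
}

static unsigned hash_vector(const double v[DIM]) {
    unsigned h = 0;
    for (int b = 0; b < BITS; b++) {
        double dot = 0.0;
        for (int d = 0; d < DIM; d++)
            dot += planes[b][d] * v[d];
        if (dot >= 0.0)            /* which side of the hyperplane? */
            h |= 1u << b;
    }
    return h;                      /* bucket identifier */
}

int main(void) {
    double a[DIM] = {1, 2, 3, 4, 5, 6, 7, 8};
    double b[DIM] = {1.1, 2.0, 2.9, 4.1, 5.0, 6.2, 7.0, 7.9};  /* similar to a */
    init_planes(42);
    printf("hash(a) = %04x\nhash(b) = %04x\n", hash_vector(a), hash_vector(b));
    return 0;
}
```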
|
https://en.wikipedia.org/wiki/Similarity_search
|
Poetry(from theGreekwordpoiesis, "making") is a form ofliterary artthat usesaestheticand oftenrhythmic[1][2][3]qualities oflanguageto evokemeaningsin addition to, or in place of,literalor surface-level meanings. Any particular instance of poetry is called apoemand is written by apoet.
Poets use a variety of techniques called poetic devices, such as assonance, alliteration, euphony and cacophony, onomatopoeia, rhythm (via metre), and sound symbolism, to produce musical or other artistic effects. They also frequently organize these effects into poetic structures, which may be strict or loose, conventional or invented by the poet. Poetic structures vary dramatically by language and cultural convention, but they often use rhythmic metre (patterns of syllable stress or syllable (mora) weight). They may also use repeating patterns of phonemes, phoneme groups, tones (phonemic pitch shifts found in tonal languages), words, or entire phrases. These include consonance (or just alliteration), assonance (as in the dróttkvætt), and rhyme schemes (patterns in rimes, a type of phoneme group). Poetic structures may even be semantic (e.g. the volta required in a Petrarchan sonnet).
Most written poems are formatted inverse: a series or stack oflineson a page, which follow the poetic structure. For this reason,versehas also become asynonym(ametonym) for poetry.[note 1]
Some poetry types are unique to particular cultures and genres and respond to characteristics of the language in which the poet writes. Readers accustomed to identifying poetry with Dante, Goethe, Mickiewicz, or Rumi may think of it as written in lines based on rhyme and regular meter. There are, however, traditions, such as Biblical poetry and alliterative verse, that use other means to create rhythm and euphony. Other traditions, such as Somali poetry, rely on complex systems of alliteration and metre independent of writing and have been described as structurally comparable to ancient Greek and medieval European oral verse.[4] Much modern poetry reflects a critique of poetic tradition,[5] testing the principle of euphony itself or altogether forgoing rhyme or set rhythm.[6][7]
Poetry has a long and variedhistory, evolving differentially across the globe. It dates back at least to prehistoric times with hunting poetry inAfricaand topanegyricandelegiaccourt poetry of the empires of theNile,Niger, andVolta Rivervalleys.[8]Some of the earliest written poetry in Africa occurs among thePyramid Textswritten during the 25th century BCE. The earliest surviving Western Asianepic poem, theEpic of Gilgamesh, was written in theSumerian language.
Early poems in theEurasiancontinent include folk songs such as the ChineseShijing, religioushymns(such as theSanskritRigveda, theZoroastrianGathas, theHurrian songs, and the HebrewPsalms); and retellings of oral epics (such as the EgyptianStory of Sinuhe,Indian epic poetry, and theHomericepics, theIliadand theOdyssey).
Ancient Greek attempts to define poetry, such asAristotle'sPoetics, focused on the uses ofspeechinrhetoric,drama,song, andcomedy. Later attempts concentrated on features such asrepetition,verse form, andrhyme, and emphasized aesthetics which distinguish poetry from the format of more objectively-informative, academic, or typical writing, which is known asprose.
Poetry uses forms and conventions to suggest differentialinterpretationsof words, or to evokeemotiveresponses. The use ofambiguity,symbolism,irony, and otherstylisticelements ofpoetic dictionoften leaves a poem open to multiple interpretations. Similarly, figures of speech such asmetaphor,simile, andmetonymy[9]establish a resonance between otherwise disparate images—a layering of meanings, forming connections previously not perceived. Kindred forms of resonance may exist, between individualverses, in their patterns of rhyme or rhythm.
Poets – as, from theGreek, "makers" of language – have contributed to the evolution of the linguistic, expressive, and utilitarian qualities of their languages. In an increasinglyglobalizedworld, poets often adapt forms, styles, and techniques from diverse cultures and languages.
AWestern culturaltradition (extending at least fromHomertoRilke) associates the production of poetry withinspiration– often by aMuse(either classical or contemporary), or through other (often canonised) poets' work which sets some kind of example or challenge.
In first-person poems, the lyrics are spoken by an "I", acharacterwho may be termed thespeaker, distinct from thepoet(theauthor). Thus if, for example, a poem asserts, "I killed my enemy in Reno", it is the speaker, not the poet, who is the killer (unless this "confession" is a form ofmetaphorwhich needs to be considered in closercontext– viaclose reading).
Some scholars believe that the art of poetry may predateliteracy, and developed from folkepicsand other oral genres.[10][11]Others, however, suggest that poetry did not necessarily predate writing.[12]
The oldest surviving epic poem, the Epic of Gilgamesh, dates from the 3rd millennium BCE in Sumer (in Mesopotamia, present-day Iraq), and was written in cuneiform script on clay tablets and, later, on papyrus.[13] The Istanbul tablet #2461, dating to c. 2000 BCE, describes an annual rite in which the king symbolically married and mated with the goddess Inanna to ensure fertility and prosperity; some have labelled it the world's oldest love poem.[14][15] An example of Egyptian epic poetry is The Story of Sinuhe (c. 1800 BCE).[16]
Other ancient epics include the Greek Iliad and the Odyssey; the Persian Avestan books (the Yasna); the Roman national epic, Virgil's Aeneid (written between 29 and 19 BCE); and the Indian epics, the Ramayana and the Mahabharata. Epic poetry appears to have been composed in poetic form as an aid to memorization and oral transmission in ancient societies.[12][17]
Other forms of poetry, including such ancient collections of religioushymnsas the IndianSanskrit-languageRigveda, the AvestanGathas, theHurrian songs, and the HebrewPsalms, possibly developed directly fromfolk songs. The earliest entries in the oldest extant collection ofChinese poetry, theClassic of Poetry(Shijing), were initiallylyrics.[18]The Shijing, with its collection of poems and folk songs, was heavily valued by the philosopherConfuciusand is considered to be one of the officialConfucian classics. His remarks on the subject have become an invaluable source inancient music theory.[19]
The efforts of ancient thinkers to determine what makes poetry distinctive as a form, and what distinguishes good poetry from bad, resulted in "poetics"—the study of the aesthetics of poetry.[20]Some ancient societies, such as China's through theShijing, developed canons of poetic works that had ritual as well as aesthetic importance.[21]More recently, thinkers have struggled to find a definition that could encompass formal differences as great as those between Chaucer'sCanterbury TalesandMatsuo Bashō'sOku no Hosomichi, as well as differences in content spanningTanakhreligious poetry, love poetry, andrap.[22]
Until recently, the earliest examples ofstressed poetryhad been thought to be works composed byRomanos the Melodist(fl.6th century CE). However,Tim Whitmarshwrites that an inscribed Greek poem predated Romanos' stressed poetry.[23][24][25]
Classical thinkers in theWestemployed classification as a way to define and assess the quality of poetry. Notably, the existing fragments ofAristotle'sPoeticsdescribe three genres of poetry—the epic, the comic, and the tragic—and develop rules to distinguish the highest-quality poetry in each genre, based on the perceived underlying purposes of the genre.[26]Lateraestheticiansidentified three major genres: epic poetry,lyric poetry, anddramatic poetry, treatingcomedyandtragedyassubgenresof dramatic poetry.[27]
Aristotle's work was influential throughout the Middle East during theIslamic Golden Age,[28]as well as in Europe during theRenaissance.[29]Later poets and aestheticians often distinguished poetry from, and defined it in opposition toprose, which they generally understood as writing with a proclivity to logical explication and a linear narrative structure.[30]
This does not imply that poetry is illogical or lacks narration, but rather that poetry is an attempt to render the beautiful or sublime without the burden of engaging the logical or narrative thought-process. EnglishRomanticpoetJohn Keatstermed this escape from logic "negative capability".[31]This "romantic" approach viewsformas a key element of successful poetry because form is abstract and distinct from the underlying notional logic. This approach remained influential into the 20th century.[32]
During the 18th and 19th centuries, there was also substantially more interaction among the various poetic traditions, in part due to the spread of Europeancolonialismand the attendant rise in global trade.[33]In addition to a boom intranslation, during the Romantic period numerous ancient works were rediscovered.[34]
Some 20th-centuryliterary theoristsrely less on the ostensible opposition of prose and poetry, instead focusing on the poet as simply one who creates using language, and poetry as what the poet creates.[35]The underlying concept of the poet ascreatoris not uncommon, and somemodernist poetsessentially do not distinguish between the creation of a poem with words, and creative acts in other media. Other modernists challenge the very attempt to define poetry as misguided.[36]
The rejection of traditional forms and structures for poetry that began in the first half of the 20th century coincided with a questioning of the purpose and meaning of traditional definitions of poetry and of distinctions between poetry and prose, particularly given examples of poetic prose and prosaic poetry. Numerous modernist poets have written in non-traditional forms or in what traditionally would have been considered prose, although their writing was generally infused with poetic diction and often with rhythm andtoneestablished bynon-metricalmeans. While there was a substantialformalistreaction within the modernist schools to the breakdown of structure, this reaction focused as much on the development of new formal structures and syntheses as on the revival of older forms and structures.[37]
Postmodernismgoes beyond modernism's emphasis on the creative role of the poet, to emphasize the role of the reader of a text (hermeneutics), and to highlight the complex cultural web within which a poem is read.[38]Today, throughout the world, poetry often incorporates poetic form and diction from other cultures and from the past, further confounding attempts at definition and classification that once made sense within a tradition such as theWestern canon.[39]
The early 21st-century poetic tradition appears to continue to strongly orient itself to earlier precursor poetic traditions such as those initiated by Whitman, Emerson, and Wordsworth. The literary critic Geoffrey Hartman (1929–2016) used the phrase "the anxiety of demand" to describe the contemporary response to older poetic traditions as "being fearful that the fact no longer has a form",[40] building on a trope introduced by Emerson. Emerson had maintained that in the debate concerning poetic structure, where either "form" or "fact" could predominate, one need simply "Ask the fact for the form." This has been challenged at various levels by other literary scholars such as Harold Bloom (1930–2019), who has stated: "The generation of poets who stand together now, mature and ready to write the major American verse of the twenty-first century, may yet be seen as what Stevens called 'a great shadow's last embellishment,' the shadow being Emerson's."[41]
In the 2020s, advances inartificial intelligence(AI), particularlylarge language models, enabled the generation of poetry in specific styles and formats.[42]A 2024 study found that AI-generated poems were rated by non-expert readers as more rhythmic, beautiful, and human-like than those written by well-known human authors. This preference may stem from the relative simplicity and accessibility of AI-generated poetry, which some participants found easier to understand.[43]
Prosody is the study of the meter,rhythm, andintonationof a poem. Rhythm and meter are different, although closely related.[44]Meter is the definitive pattern established for a verse (such asiambic pentameter), while rhythm is the actual sound that results from a line of poetry. Prosody also may be used more specifically to refer to thescanningof poetic lines to show meter.[45]
The methods for creating poetic rhythm vary across languages and between poetic traditions. Languages are often described as having timing set primarily byaccents,syllables, ormoras, depending on how rhythm is established, although a language can be influenced by multiple approaches.Japaneseis amora-timed language.Latin,Catalan,French,Leonese,GalicianandSpanishare called syllable-timed languages. Stress-timed languages includeEnglish,Russianand, generally,German.[46]Varyingintonationalso affects how rhythm is perceived. Languages can rely on either pitch or tone. Some languages with a pitch accent are Vedic Sanskrit or Ancient Greek.Tonal languagesinclude Chinese, Vietnamese and mostSubsaharan languages.[47]
Metrical rhythm generally involves precise arrangements of stresses or syllables into repeated patterns called feet within a line. In Modern English verse the pattern of stresses primarily differentiates feet, so rhythm based on meter in Modern English is most often founded on the pattern of stressed and unstressed syllables (alone or elided).[48] In the classical languages, on the other hand, while the metrical units are similar, vowel length rather than stress defines the meter.[49] Old English poetry used a metrical pattern involving varied numbers of syllables but a fixed number of strong stresses in each line.[50]
The chief device of ancientHebrewBiblical poetry, including many of thepsalms, wasparallelism, a rhetorical structure in which successive lines reflected each other in grammatical structure, sound structure, notional content, or all three. Parallelism lent itself toantiphonalorcall-and-responseperformance, which could also be reinforced byintonation. Thus, Biblical poetry relies much less on metrical feet to create rhythm, but instead creates rhythm based on much larger sound units of lines, phrases and sentences.[51]Some classical poetry forms, such asVenpaof theTamil language, had rigid grammars (to the point that they could be expressed as acontext-free grammar) which ensured a rhythm.[52]
Classical Chinese poetics, based on thetone system of Middle Chinese, recognized two kinds of tones: the level (平píng) tone and the oblique (仄zè) tones, a category consisting of the rising (上sháng) tone, the departing (去qù) tone and the entering (入rù) tone. Certain forms of poetry placed constraints on which syllables were required to be level and which oblique.
The formal patterns of meter used in Modern English verse to create rhythm no longer dominate contemporary English poetry. In the case offree verse, rhythm is often organized based on looser units ofcadencerather than a regular meter.Robinson Jeffers,Marianne Moore, andWilliam Carlos Williamsare three notable poets who reject the idea that regular accentual meter is critical to English poetry.[53]Jeffers experimented withsprung rhythmas an alternative to accentual rhythm.[54]
In the Western poetic tradition, meters are customarily grouped according to a characteristic metrical foot and the number of feet per line.[56] The number of metrical feet in a line is described using Greek terminology: tetrameter for four feet and hexameter for six feet, for example.[57] Thus, "iambic pentameter" is a meter comprising five feet per line, in which the predominant kind of foot is the "iamb". This metric system originated in ancient Greek poetry, and was used by poets such as Pindar and Sappho, and by the great tragedians of Athens. Similarly, "dactylic hexameter" comprises six feet per line, of which the dominant kind of foot is the "dactyl". Dactylic hexameter was the traditional meter of Greek epic poetry, the earliest extant examples of which are the works of Homer and Hesiod.[58] Iambic pentameter and dactylic hexameter were later used by a number of poets, including William Shakespeare and Henry Wadsworth Longfellow, respectively.[59] The most common metrical feet in English are the iamb, the trochee, the dactyl, the anapest, the spondee, and the pyrrhic.[60]
There are a wide range of names for other types of feet, right up to achoriamb, a four syllable metric foot with a stressed syllable followed by two unstressed syllables and closing with a stressed syllable. The choriamb is derived from some ancientGreekandLatin poetry.[58]Languages which usevowel lengthorintonationrather than or in addition to syllabic accents in determining meter, such asOttoman TurkishorVedic, often have concepts similar to the iamb and dactyl to describe common combinations of long and short sounds.[62]
Each of these types of feet has a certain "feel," whether alone or in combination with other feet. The iamb, for example, is the most natural form of rhythm in the English language, and generally produces a subtle but stable verse.[63] Scanning meter can often show the basic or fundamental pattern underlying a verse, but does not show the varying degrees of stress, nor the differing pitches and lengths of syllables.[64]
There is debate over how useful a multiplicity of different "feet" is in describing meter. For example,Robert Pinskyhas argued that while dactyls are important in classical verse, English dactylic verse uses dactyls very irregularly and can be better described based on patterns of iambs and anapests, feet which he considers natural to the language.[65]Actual rhythm is significantly more complex than the basic scanned meter described above, and many scholars have sought to develop systems that would scan such complexity.Vladimir Nabokovnoted that overlaid on top of the regular pattern of stressed and unstressed syllables in a line of verse was a separate pattern of accents resulting from the natural pitch of the spoken words, and suggested that the term "scud" be used to distinguish an unaccented stress from an accented stress.[66]
Sanskrit poetry is organized according tochhandas, which are manifold and continue to influence several South Asian languages' poetry.
Different traditions and genres of poetry tend to use different meters, ranging from the Shakespearean iambic pentameter and the Homeric dactylic hexameter to the anapestic tetrameter used in many nursery rhymes. However, a number of variations to the established meter are common, both to provide emphasis or attention to a given foot or line and to avoid boring repetition. For example, the stress in a foot may be inverted, a caesura (or pause) may be added (sometimes in place of a foot or stress), or the final foot in a line may be given a feminine ending to soften it or be replaced by a spondee to emphasize it and create a hard stop. Some patterns (such as iambic pentameter) tend to be fairly regular, while other patterns, such as dactylic hexameter, tend to be highly irregular.[67] Regularity can vary between languages. In addition, different patterns often develop distinctively in different languages, so that, for example, iambic tetrameter in Russian will generally reflect a regularity in the use of accents to reinforce the meter, which does not occur, or occurs to a much lesser extent, in English.[68]
Common metrical patterns, each associated with notable poets and poems that use them, include iambic pentameter, dactylic hexameter, and anapestic tetrameter.
Rhyme, alliteration, assonance andconsonanceare ways of creating repetitive patterns of sound. They may be used as an independent structural element in a poem, to reinforce rhythmic patterns, or as an ornamental element.[74]They can also carry a meaning separate from the repetitive sound patterns created. For example,Chaucerused heavy alliteration to mock Old English verse and to paint a character as archaic.[75]
Rhyme consists of identical ("hard-rhyme") or similar ("soft-rhyme") sounds placed at the ends of lines or at locations within lines ("internal rhyme"). Languages vary in the richness of their rhyming structures; Italian, for example, has a rich rhyming structure permitting maintenance of a limited set of rhymes throughout a lengthy poem. The richness results from word endings that follow regular forms. English, with its irregular word endings adopted from other languages, is less rich in rhyme.[76]The degree of richness of a language's rhyming structures plays a substantial role in determining what poetic forms are commonly used in that language.[77]
Alliteration is the repetition of letters or letter-sounds at the beginning of two or more words immediately succeeding each other, or at short intervals; or the recurrence of the same letter in accented parts of words. Alliteration and assonance played a key role in structuring early Germanic, Norse and Old English forms of poetry. The alliterative patterns of early Germanic poetry interweave meter and alliteration as a key part of their structure, so that the metrical pattern determines when the listener expects instances of alliteration to occur. This can be compared to an ornamental use of alliteration in most Modern European poetry, where alliterative patterns are not formal or carried through full stanzas. Alliteration is particularly useful in languages with less rich rhyming structures.
Assonance, the use of similar vowel sounds within a word rather than similar sounds at the beginning or end of a word, was widely used in skaldic poetry but goes back to the Homeric epic.[78] Because verbs carry much of the pitch in the English language, assonance can loosely evoke the tonal elements of Chinese poetry and so is useful in translating Chinese poetry.[79] Consonance occurs where a consonant sound is repeated throughout a sentence without putting the sound only at the front of a word. Consonance provokes a more subtle effect than alliteration and so is less useful as a structural element.[77]
In many languages, including Arabic and modern European languages, poets use rhyme in set patterns as a structural element for specific poetic forms, such asballads,sonnetsandrhyming couplets. However, the use of structural rhyme is not universal even within the European tradition. Much modern poetry avoids traditionalrhyme schemes. Classical Greek and Latin poetry did not use rhyme.[80]Rhyme entered European poetry in theHigh Middle Ages, due to the influence of theArabic languageinAl Andalus.[81]Arabic language poets used rhyme extensively not only with the development of literary Arabic in thesixth century, but also with the much older oral poetry, as in their long, rhymingqasidas.[82]Some rhyming schemes have become associated with a specific language, culture or period, while other rhyming schemes have achieved use across languages, cultures or time periods. Some forms of poetry carry a consistent and well-defined rhyming scheme, such as thechant royalor therubaiyat, while other poetic forms have variable rhyme schemes.[83]
Most rhyme schemes are described using letters that correspond to sets of rhymes, so if the first, second and fourth lines of a quatrain rhyme with each other and the third line does not, the quatrain is said to have an AABA rhyme scheme. This rhyme scheme is the one used, for example, in the rubaiyat form.[84] Similarly, an ABBA quatrain (what is known as "enclosed rhyme") is used in such forms as the Petrarchan sonnet.[85] Some types of more complicated rhyming schemes have developed names of their own, separate from the "a-b-c" convention, such as the ottava rima and terza rima.[86] The types and use of differing rhyming schemes are discussed further in the main article.
Poetic form is more flexible in modernist and post-modernist poetry and continues to be less structured than in previous literary eras. Many modern poets eschew recognizable structures or forms and write infree verse. Free verse is, however, not "formless" but composed of a series of more subtle, more flexible prosodic elements.[87]Thus poetry remains, in all its styles, distinguished from prose by form;[88]some regard for basic formal structures of poetry will be found in all varieties of free verse, however much such structures may appear to have been ignored.[89]Similarly, in the best poetry written in classic styles there will be departures from strict form for emphasis or effect.[90]
Among major structural elements used in poetry are the line, thestanzaorverse paragraph, and larger combinations of stanzas or lines such ascantos. Also sometimes used are broader visual presentations of words andcalligraphy. These basic units of poetic form are often combined into larger structures, calledpoetic formsor poetic modes (see the following section), as in thesonnet.
Poetry is often separated into lines on a page, in a process known aslineation. These lines may be based on the number of metrical feet or may emphasize a rhyming pattern at the ends of lines. Lines may serve other functions, particularly where the poem is not written in a formal metrical pattern. Lines can separate, compare or contrast thoughts expressed in different units, or can highlight a change in tone.[91]See the article online breaksfor information about the division between lines.
Lines of poems are often organized intostanzas, which are denominated by the number of lines included. Thus a collection of two lines is acouplet(ordistich), three lines atriplet(ortercet), four lines aquatrain, and so on. These lines may or may not relate to each other by rhyme or rhythm. For example, a couplet may be two lines with identical meters which rhyme or two lines held together by a common meter alone.[92]
Other poems may be organized intoverse paragraphs, in which regular rhymes with established rhythms are not used, but the poetic tone is instead established by a collection of rhythms, alliterations, and rhymes established in paragraph form.[93]Many medieval poems were written in verse paragraphs, even where regular rhymes and rhythms were used.[94]
In many forms of poetry, stanzas are interlocking, so that the rhyming scheme or other structural elements of one stanza determine those of succeeding stanzas. Examples of such interlocking stanzas include, for example, theghazaland thevillanelle, where a refrain (or, in the case of the villanelle, refrains) is established in the first stanza which then repeats in subsequent stanzas. Related to the use of interlocking stanzas is their use to separate thematic parts of a poem. For example, thestrophe,antistropheandepodeof the ode form are often separated into one or more stanzas.[95]
In some cases, particularly lengthier formal poetry such as some forms of epic poetry, stanzas themselves are constructed according to strict rules and then combined. Inskaldicpoetry, thedróttkvættstanza had eight lines, each having three "lifts" produced with alliteration or assonance. In addition to two or three alliterations, the odd-numbered lines had partial rhyme of consonants with dissimilar vowels, not necessarily at the beginning of the word; the even lines contained internal rhyme in set syllables (not necessarily at the end of the word). Each half-line had exactly six syllables, and each line ended in a trochee. The arrangement of dróttkvætts followed far less rigid rules than the construction of the individual dróttkvætts.[96]
Even before the advent of printing, the visual appearance of poetry often added meaning or depth.Acrosticpoems conveyed meanings in the initial letters of lines or in letters at other specific places in a poem.[99]InArabic,HebrewandChinese poetry, the visual presentation of finelycalligraphedpoems has played an important part in the overall effect of many poems.[100]
With the advent ofprinting, poets gained greater control over the mass-produced visual presentations of their work. Visual elements have become an important part of the poet's toolbox, and many poets have sought to use visual presentation for a wide range of purposes. SomeModernistpoets have made the placement of individual lines or groups of lines on the page an integral part of the poem's composition. At times, this complements the poem'srhythmthrough visualcaesurasof various lengths, or createsjuxtapositionsso as to accentuate meaning,ambiguityorirony, or simply to create an aesthetically pleasing form. In its most extreme form, this can lead toconcrete poetryorasemic writing.[101][102]
Poetic diction treats the manner in which language is used, and refers not only to the sound but also to the underlying meaning and its interaction with sound and form.[103]Many languages and poetic forms have very specific poetic dictions, to the point where distinctgrammarsanddialectsare used specifically for poetry.[104][105]Registersin poetry can range from strict employment of ordinary speech patterns, as favoured in much late-20th-centuryprosody,[106]through to highly ornate uses of language, as in medieval and Renaissance poetry.[107]
Poetic diction can includerhetorical devicessuch assimileandmetaphor, as well as tones of voice, such asirony.Aristotlewrote in thePoeticsthat "the greatest thing by far is to be a master of metaphor."[108]Since the rise ofModernism, some poets have opted for a poetic diction that de-emphasizes rhetorical devices, attempting instead the direct presentation of things and experiences and the exploration oftone.[109]On the other hand,Surrealistshave pushed rhetorical devices to their limits, making frequent use ofcatachresis.[110]
Allegorical stories are central to the poetic diction of many cultures, and were prominent in the West during classical times, the late Middle Ages and the Renaissance. Aesop's Fables, repeatedly rendered in both verse and prose since first being recorded about 500 BCE, are perhaps the richest single source of allegorical poetry through the ages.[111] Other notable examples include the Roman de la Rose, a 13th-century French poem, William Langland's Piers Plowman in the 14th century, and Jean de la Fontaine's Fables (influenced by Aesop's) in the 17th century. Rather than being fully allegorical, however, a poem may contain symbols or allusions that deepen the meaning or effect of its words without constructing a full allegory.[112]
Another element of poetic diction can be the use of vividimageryfor effect. The juxtaposition of unexpected or impossible images is, for example, a particularly strong element in surrealist poetry andhaiku.[113]Vivid images are often endowed with symbolism or metaphor. Many poetic dictions use repetitive phrases for effect, either a short phrase (such as Homer's "rosy-fingered dawn" or "the wine-dark sea") or a longerrefrain. Such repetition can add a somber tone to a poem, or can be laced with irony as the context of the words changes.[114]
Specific poetic forms have been developed by many cultures. In more developed, closed or "received" poetic forms, the rhyming scheme, meter and other elements of a poem are based on sets of rules, ranging from the relatively loose rules that govern the construction of anelegyto the highly formalized structure of theghazalorvillanelle.[115]Described below are some common forms of poetry widely used across a number of languages. Additional forms of poetry may be found in the discussions of the poetry of particular cultures or periods and in theglossary.
Among the most common forms of poetry, popular from the Late Middle Ages on, is the sonnet, which by the 13th century had become standardized as fourteen lines following a set rhyme scheme and logical structure. By the 14th century and the Italian Renaissance, the form had further crystallized under the pen of Petrarch, whose sonnets were translated in the 16th century by Sir Thomas Wyatt, who is credited with introducing the sonnet form into English literature.[116] A traditional Italian or Petrarchan sonnet follows the rhyme scheme ABBA, ABBA, CDECDE, though some variation is common, especially within the final six lines (or sestet), with CDCDCD perhaps the most frequent alternative.[117] The English (or Shakespearean) sonnet follows the rhyme scheme ABAB CDCD EFEF GG, introducing a third quatrain (grouping of four lines), a final couplet, and a greater amount of variety in rhyme than is usually found in its Italian predecessors. By convention, sonnets in English typically use iambic pentameter, while in the Romance languages, the hendecasyllable and Alexandrine are the most widely used meters.
Sonnets of all types often make use of avolta, or "turn," a point in the poem at which an idea is turned on its head, a question is answered (or introduced), or the subject matter is further complicated. Thisvoltacan often take the form of a "but" statement contradicting or complicating the content of the earlier lines. In the Petrarchan sonnet, the turn tends to fall around the division between the first two quatrains and the sestet, while English sonnets usually place it at or near the beginning of the closing couplet.
Sonnets are particularly associated with high poetic diction, vivid imagery, and romantic love, largely due to the influence of Petrarch as well as of early English practitioners such asEdmund Spenser(who gave his name to theSpenserian sonnet),Michael Drayton, and Shakespeare, whosesonnetsare among the most famous in English poetry, with twenty being included in theOxford Book of English Verse.[118]However, the twists and turns associated with thevoltaallow for a logical flexibility applicable to many subjects.[119]Poets from the earliest centuries of the sonnet to the present have used the form to address topics related to politics (John Milton,Percy Bysshe Shelley,Claude McKay), theology (John Donne,Gerard Manley Hopkins), war (Wilfred Owen,E. E. Cummings), and gender and sexuality (Carol Ann Duffy). Further, postmodern authors such asTed BerriganandJohn Berrymanhave challenged the traditional definitions of the sonnet form, rendering entire sequences of "sonnets" that often lack rhyme, a clear logical progression, or even a consistent count of fourteen lines.
Shi (simplified Chinese: 诗; traditional Chinese: 詩; pinyin: shī; Wade–Giles: shih) is the main type of Classical Chinese poetry.[120] Within this form of poetry the most important variations are "folk song" styled verse (yuefu), "old style" verse (gushi), and "modern style" verse (jintishi). In all cases, rhyming is obligatory. The yuefu is a folk ballad or a poem written in the folk ballad style, and the number of lines and the length of the lines could be irregular. For the other variations of shi poetry, generally either a four-line (quatrain, or jueju) or else an eight-line poem is normal; either way with the even-numbered lines rhyming. The line length is scanned by an according number of characters (according to the convention that one character equals one syllable), and lines are predominantly either five or seven characters long, with a caesura before the final three syllables. The lines are generally end-stopped, considered as a series of couplets, and exhibit verbal parallelism as a key poetic device.[121]

The "old style" verse (gushi) is less formally strict than the jintishi, or regulated verse, which, despite the name "new style" verse, actually had its theoretical basis laid as far back as Shen Yue (441–513 CE), although not considered to have reached its full development until the time of Chen Zi'ang (661–702 CE).[122] A good example of a poet known for his gushi poems is Li Bai (701–762 CE). Among its other rules, the jintishi rules regulate the tonal variations within a poem, including the use of set patterns of the four tones of Middle Chinese. The basic form of jintishi (sushi) has eight lines in four couplets, with parallelism between the lines in the second and third couplets. The couplets with parallel lines contain contrasting content but an identical grammatical relationship between words. Jintishi often have a rich poetic diction, full of allusion, and can have a wide range of subject, including history and politics.[123][124] One of the masters of the form was Du Fu (712–770 CE), who wrote during the Tang Dynasty (8th century).[125]
The villanelle is a nineteen-line poem made up of five triplets with a closing quatrain; the poem is characterized by having two refrains, initially used in the first and third lines of the first stanza, and then alternately used at the close of each subsequent stanza until the final quatrain, which is concluded by the two refrains. The remaining lines of the poem have an AB alternating rhyme.[126]The villanelle has been used regularly in the English language since the late 19th century by such poets asDylan Thomas,[127]W. H. Auden,[128]andElizabeth Bishop.[129]
A limerick is a poem that consists of five lines and is often humorous. Rhythm is important in limericks: the first, second and fifth lines must have seven to ten syllables, while the third and fourth lines need only five to seven. Lines 1, 2 and 5 rhyme with each other, and lines 3 and 4 rhyme with each other. Practitioners of the limerick included Edward Lear, Alfred, Lord Tennyson, Rudyard Kipling, and Robert Louis Stevenson.[130]
Tanka is a form of unrhymedJapanese poetry, with five sections totalling 31on(phonological units identical tomorae), structured in a 5–7–5–7–7 pattern.[131]There is generally a shift in tone and subject matter between the upper 5–7–5 phrase and the lower 7–7 phrase. Tanka were written as early as theAsuka periodby such poets asKakinomoto no Hitomaro(fl.late 7th century), at a time when Japan was emerging from a period where much of its poetry followed Chinese form.[132]Tanka was originally the shorter form of Japanese formal poetry (which was generally referred to as "waka"), and was used more heavily to explore personal rather than public themes. By the tenth century, tanka had become the dominant form of Japanese poetry, to the point where the originally general termwaka("Japanese poetry") came to be used exclusively for tanka. Tanka are still widely written today.[133]
Haiku is a popular form of unrhymed Japanese poetry, which evolved in the 17th century from the hokku, or opening verse of a renku.[134] Generally written in a single vertical line, the haiku contains three sections totalling 17 on (morae), structured in a 5–7–5 pattern. Traditionally, haiku contain a kireji, or cutting word, usually placed at the end of one of the poem's three sections, and a kigo, or season-word.[135] The most famous exponent of the haiku was Matsuo Bashō (1644–1694).[136]
Thekhlong(โคลง,[kʰlōːŋ]) is among the oldest Thai poetic forms. This is reflected in its requirements on the tone markings of certain syllables, which must be marked withmai ek(ไม้เอก,Thai pronunciation:[májèːk],◌่) ormai tho(ไม้โท,[májtʰōː],◌้). This was likely derived from when the Thai language had three tones (as opposed to today's five, a split which occurred during theAyutthaya Kingdomperiod), two of which corresponded directly to the aforementioned marks. It is usually regarded as an advanced and sophisticated poetic form.[137]
Inkhlong, a stanza (bot,บท,Thai pronunciation:[bòt]) has a number of lines (bat,บาท,Thai pronunciation:[bàːt], fromPaliandSanskritpāda), depending on the type. Thebatare subdivided into twowak(วรรค,Thai pronunciation:[wák], from Sanskritvarga).[note 2]The firstwakhas five syllables, the second has a variable number, also depending on the type, and may be optional. The type ofkhlongis named by the number ofbatin a stanza; it may also be divided into two main types:khlong suphap(โคลงสุภาพ,[kʰlōːŋsù.pʰâːp]) andkhlong dan(โคลงดั้น,[kʰlōːŋdân]). The two differ in the number of syllables in the secondwakof the finalbatand inter-stanza rhyming rules.[137]
The khlong si suphap (โคลงสี่สุภาพ, [kʰlōːŋ sìː sù.pʰâːp]) is the most common form still currently employed. It has four bat per stanza (si translates as four). The first wak of each bat has five syllables. The second wak has two or four syllables in the first and third bat, two syllables in the second, and four syllables in the fourth. Mai ek is required for seven syllables and mai tho is required for four. "Dead word" syllables are allowed in place of syllables which require mai ek, and changing the spelling of words to satisfy the criteria is usually acceptable.
Odes were first developed by poets writing in ancient Greek, such asPindar, and Latin, such asHorace. Forms of odes appear in many of the cultures that were influenced by the Greeks and Latins.[138]The ode generally has three parts: astrophe, anantistrophe, and anepode. The strophe and the antistrophe of the ode possess similar metrical structures and, depending on the tradition, similar rhyme structures. In contrast, the epode is written with a different scheme and structure. Odes have a formal poetic diction and generally deal with a serious subject. The strophe and antistrophe look at the subject from different, often conflicting, perspectives, with the epode moving to a higher level to either view or resolve the underlying issues. Odes are often intended to be recited or sung by two choruses (or individuals), with the first reciting the strophe, the second the antistrophe, and both together the epode.[139]Over time, differing forms for odes have developed with considerable variations in form and structure, but generally showing the original influence of the Pindaric or Horatian ode. One non-Western form which resembles the ode is theqasidainArabic poetry.[140]
Theghazal(alsoghazel,gazel,gazal, orgozol) is a form of poetry common inArabic,Bengali,PersianandUrdu. In classic form, theghazalhas from five to fifteen rhyming couplets that share arefrainat the end of the second line. This refrain may be of one or several syllables and is preceded by a rhyme. Each line has an identical meter and is of the same length.[141]The ghazal often reflects on a theme of unattainable love or divinity.[142]
As with other forms with a long history in many languages, many variations have been developed, including forms with a quasi-musical poetic diction inUrdu.[143]Ghazals have a classical affinity withSufism, and a number of major Sufi religious works are written in ghazal form. The relatively steady meter and the use of the refrain produce an incantatory effect, which complements Sufi mystical themes well.[144]Among the masters of the form areRumi, the celebrated 13th-centuryPersianpoet,[145]Attar, 12th century Iranian Sufi mystic poet who Rumi considered his master,[146]and their equally famous near-contemporaryHafez. Hafez uses the ghazal to expose hypocrisy and the pitfalls of worldliness, but also expertly exploits the form to express the divine depths and secular subtleties of love; creating translations that meaningfully capture such complexities of content and form is immensely challenging, but lauded attempts to do so in English includeGertrude Bell'sPoems from the Divan of Hafiz[147]andBeloved: 81 poems from Hafez(Bloodaxe Books) whose Preface addresses in detail the problematic nature of translating ghazals and whose versions (according toFatemeh Keshavarz, Roshan Institute forPersian Studies) preserve "that audacious and multilayered richness one finds in the originals".[148]Indeed, Hafez's ghazals have been the subject of much analysis, commentary and interpretation, influencing post-fourteenth century Persian writing more than any other author.[149][150]TheWest-östlicher DiwanofJohann Wolfgang von Goethe, a collection of lyrical poems, is inspired by the Persian poet Hafez.[151][152][153]
In addition to specific forms of poems, poetry is often thought of in terms of differentgenresand subgenres. A poetic genre is generally a tradition or classification of poetry based on the subject matter, style, or other broader literary characteristics.[154]Some commentators view genres as natural forms of literature. Others view the study of genres as the study of how different works relate and refer to other works.[155]
Narrative poetry is a genre of poetry that tells astory. Broadly it subsumesepic poetry, but the term "narrative poetry" is often reserved for smaller works, generally with more appeal tohuman interest. Narrative poetry may be the oldest type of poetry. Many scholars ofHomerhave concluded that hisIliadandOdysseywere composed of compilations of shorter narrative poems that related individual episodes.
Much narrative poetry—such as Scottish and Englishballads, andBalticandSlavicheroic poems—isperformance poetrywith roots in a preliterateoral tradition. It has been speculated that some features that distinguish poetry from prose, such as meter,alliterationandkennings, once served asmemoryaids forbardswho recited traditional tales.[156]
Notable narrative poets have includedOvid,Dante,Juan Ruiz,William Langland,Chaucer,Fernando de Rojas,Luís de Camões,Shakespeare,Alexander Pope,Robert Burns,Adam Mickiewicz,Alexander Pushkin,Letitia Elizabeth Landon,Edgar Allan Poe,Alfred Tennyson, andAnne Carson.
Lyric poetry is a genre that, unlikeepicand dramatic poetry, does not attempt to tell a story but instead is of a morepersonalnature. Poems in this genre tend to be shorter, melodic, and contemplative. Rather than depictingcharactersand actions, it portrays the poet's ownfeelings,states of mind, andperceptions.[157]Notable poets in this genre includeChristine de Pizan,John Donne,Charles Baudelaire,Gerard Manley Hopkins,Antonio Machado, andEdna St. Vincent Millay.
Epic poetry is a genre of poetry, and a major form ofnarrativeliterature. This genre is often defined as lengthy poems concerning events of a heroic or important nature to the culture of the time. It recounts, in a continuous narrative, the life and works of aheroicormythologicalperson or group of persons.[158]
Examples of epic poems are Homer's Iliad and Odyssey, Virgil's Aeneid, the Nibelungenlied, Luís de Camões' Os Lusíadas, the Cantar de Mio Cid, the Epic of Gilgamesh, the Mahabharata, Lönnrot's Kalevala, Valmiki's Ramayana, Ferdowsi's Shahnama, Nizami (or Nezami)'s Khamse (Five Books), and the Epic of King Gesar. A Sanskrit analogue to the epic poem is the mahākāvya.
While the composition of epic poetry, and of long poems generally, became less common in the West after the early 20th century, some notable epics have continued to be written. The Cantos by Ezra Pound, Helen in Egypt by H.D., and Paterson by William Carlos Williams are examples of modern epics. Derek Walcott won the Nobel Prize in Literature in 1992 to a great extent on the basis of his epic, Omeros.[159]
Poetry can be a powerful vehicle forsatire. TheRomanshad a strong tradition of satirical poetry, often written forpoliticalpurposes. A notable example is the Roman poetJuvenal'ssatires.[160]
The same is true of the English satirical tradition.John Dryden(aTory), the firstPoet Laureate, produced in 1682Mac Flecknoe, subtitled "A Satire on the True Blue Protestant Poet, T.S." (a reference toThomas Shadwell).[161]Satirical poets outside England includePoland'sIgnacy Krasicki,Azerbaijan'sSabir,Portugal'sManuel Maria Barbosa du Bocage, and Korea'sKim Kirim, especially noted for hisGisangdo.
An elegy is a mournful, melancholy or plaintive poem, especially alamentfor the dead or afuneralsong. The term "elegy," which originally denoted a type of poetic meter (elegiacmeter), commonly describes a poem ofmourning. An elegy may also reflect something that seems to the author to be strange or mysterious. The elegy, as a reflection on a death, on a sorrow more generally, or on something mysterious, may be classified as a form of lyric poetry.[162][163]
Notable practitioners of elegiac poetry have includedPropertius,Jorge Manrique,Jan Kochanowski,Chidiock Tichborne,Edmund Spenser,Ben Jonson,John Milton,Thomas Gray,Charlotte Smith,William Cullen Bryant,Percy Bysshe Shelley,Johann Wolfgang von Goethe,Evgeny Baratynsky,Alfred Tennyson,Walt Whitman,Antonio Machado,Juan Ramón Jiménez,William Butler Yeats,Rainer Maria Rilke, andVirginia Woolf.
The fable is an ancientliterary genre, often (though not invariably) set inverse. It is a succinct story that featuresanthropomorphisedanimals,legendary creatures,plants, inanimate objects, or forces of nature that illustrate a moral lesson (a "moral"). Verse fables have used a variety ofmeterandrhymepatterns.[164]
Notable verse fabulists have includedAesop,Vishnu Sarma,Phaedrus,Marie de France,Robert Henryson,Biernat of Lublin,Jean de La Fontaine,Ignacy Krasicki,Félix María de Samaniego,Tomás de Iriarte,Ivan Krylov, andAmbrose Bierce.
Dramatic poetry is drama written in verse to be spoken or sung, and appears in varying, sometimes related forms in many cultures. Greek tragedy in verse dates to the 6th century B.C., and may have been an influence on the development of Sanskrit drama,[165] just as Indian drama in turn appears to have influenced the development of the bianwen verse dramas in China, forerunners of Chinese Opera.[166] East Asian verse dramas also include Japanese Noh. Examples of dramatic poetry in Persian literature include Nizami's two famous dramatic works, Layla and Majnun and Khosrow and Shirin, Ferdowsi's tragedies such as Rostam and Sohrab, Rumi's Masnavi, Gorgani's tragedy of Vis and Ramin, and Vahshi's tragedy of Farhad. American poets of the 20th century revived dramatic poetry, including Ezra Pound in "Sestina: Altaforte"[167] and T. S. Eliot with "The Love Song of J. Alfred Prufrock".[168][169]
Speculative poetry, also known as fantastic poetry (of which weird or macabre poetry is a major sub-classification), is a poetic genre which deals thematically with subjects which are "beyond reality", whether viaextrapolationas inscience fictionor via weird and horrific themes as inhorror fiction. Such poetry appears regularly in modern science fiction and horror fiction magazines.
Edgar Allan Poeis sometimes seen as the "father of speculative poetry".[170]Poe's most remarkable achievement in the genre was his anticipation, by three-quarters of a century, of theBig Bang theoryof theuniverse's origin, in his then much-derided 1848essay(which, due to its very speculative nature, he termed a "prose poem"),Eureka: A Prose Poem.[171][172]
Prose poetry is a hybrid genre that shows attributes of both prose and poetry. It may be indistinguishable from themicro-story(a.k.a.the "short short story", "flash fiction"). While some examples of earlier prose strike modern readers as poetic, prose poetry is commonly regarded as having originated in 19th-century France, where its practitioners includedAloysius Bertrand,Charles Baudelaire,Stéphane Mallarmé, andArthur Rimbaud.[173]
Independently of the European poetic tradition, Sanskrit prose-poetry (gadyakāvya) has existed from around the seventh century, with notable works includingKadambari.[174]
Since the late 1980s especially, prose poetry has gained increasing popularity, with entire journals, such asThe Prose Poem: An International Journal,[175]Contemporary Haibun Online,[176]andHaibun Today[177]devoted to that genre and its hybrids.Latin American poetsof the 20th century who wrote prose poems includeOctavio PazandAlejandra Pizarnik.
Light poetry, orlight verse, is poetry that attempts to be humorous. Poems considered "light" are usually brief, and can be on a frivolous or serious subject, and often featureword play, includingpuns, adventurous rhyme and heavyalliteration. Although a few free verse poets have excelled at light verse outside the formal verse tradition, light verse in English usually obeys at least some formal conventions. Common forms include thelimerick, theclerihew, and thedouble dactyl.
While light poetry is sometimes condemned asdoggerel, or thought of as poetry composed casually, humor often makes a serious point in a subtle or subversive way. Many of the most renowned "serious" poets have also excelled at light verse. Notable writers of light poetry includeLewis Carroll,Ogden Nash,X. J. Kennedy,Willard R. Espy,Shel Silverstein,Gavin EwartandWendy Cope.
Slam poetry as a genre originated in 1986 inChicago,Illinois, whenMarc Kelly Smithorganized the first slam.[178][179]Slam performers comment emotively, aloud before an audience, on personal, social, or other matters. Slam focuses on the aesthetics of word play, intonation, and voice inflection. Slam poetry is often competitive, at dedicated "poetry slam" contests.[180]
Performance poetry, similar to slam in that it occurs before an audience, is a genre of poetry that may fuse a variety of disciplines in a performance of a text, such asdance,music, and other aspects ofperformance art.[181][182]
The term happening was popularized by the avant-garde movements of the 1950s and refers to spontaneous, site-specific performances.[183] Language happenings, a term coined by the poetics collective OBJECT:PARADISE in 2018, are events which focus less on poetry as a prescriptive literary genre and more on poetry as a descriptive linguistic act and performance, often incorporating broader forms of performance art while poetry is read or created in the moment.[184][185]
|
https://en.wikipedia.org/wiki/Poetry
|
Instatistics, themultiple comparisons,multiplicityormultiple testing problemoccurs when one considers a set ofstatistical inferencessimultaneously[1]orestimatesa subset of parameters selected based on the observed values.[2]
The larger the number of inferences made, the more likely erroneous inferences become. Several statistical techniques have been developed to address this problem, for example, by requiring astricter significance thresholdfor individual comparisons, so as to compensate for the number of inferences being made. Methods forfamily-wise error rategive the probability of false positives resulting from the multiple comparisons problem.
The problem of multiple comparisons received increased attention in the 1950s with the work of statisticians such asTukeyandScheffé. Over the ensuing decades, many procedures were developed to address the problem. In 1996, the first international conference on multiple comparison procedures took place inTel Aviv.[3]This is an active research area with work being done by, for exampleEmmanuel CandèsandVladimir Vovk.
Multiple comparisons arise when a statistical analysis involves multiple simultaneous statistical tests, each of which has a potential to produce a "discovery". A stated confidence level generally applies only to each test considered individually, but often it is desirable to have a confidence level for the whole family of simultaneous tests.[4]Failure to compensate for multiple comparisons can have important real-world consequences, as illustrated by the following examples:
In both examples, as the number of comparisons increases, it becomes more likely that the groups being compared will appear to differ in terms of at least one attribute. Our confidence that a result will generalize to independent data should generally be weaker if it is observed as part of an analysis that involves multiple comparisons, rather than an analysis that involves only a single comparison.
For example, if one test is performed at the 5% level and the corresponding null hypothesis is true, there is only a 5% risk of incorrectly rejecting the null hypothesis. However, if 100 tests are each conducted at the 5% level and all corresponding null hypotheses are true, theexpected numberof incorrect rejections (also known asfalse positivesorType I errors) is 5. If the tests are statistically independent from each other (i.e. are performed on independent samples), the probability of at least one incorrect rejection is approximately 99.4%.
The multiple comparisons problem also applies toconfidence intervals. A single confidence interval with a 95%coverage probabilitylevel will contain the true value of the parameter in 95% of samples. However, if one considers 100 confidence intervals simultaneously, each with 95% coverage probability, the expected number of non-covering intervals is 5. If the intervals are statistically independent from each other, the probability that at least one interval does not contain the population parameter is 99.4%.
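As a minimal sketch of the arithmetic above (assuming m = 100 independent tests at α = 0.05, a purely illustrative setup):

```python
# Expected false positives and the chance of at least one, for m independent
# tests each run at level alpha when every null hypothesis is true.
m, alpha = 100, 0.05
expected_false_positives = m * alpha          # 5.0
p_at_least_one = 1 - (1 - alpha) ** m         # ~0.994, matching the text
print(expected_false_positives, round(p_at_least_one, 3))
```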
Techniques have been developed to prevent the inflation of false positive rates and non-coverage rates that occur with multiple statistical tests.
The following table defines the possible outcomes when testing multiple null hypotheses. Suppose we have a number m of null hypotheses, denoted by: H1, H2, ..., Hm. Using a statistical test, we reject a null hypothesis if the test is declared significant, and we do not reject it if the test is non-significant. Summing each type of outcome over all Hi yields the following random variables:

                                    Null hypothesis is true    Alternative hypothesis is true    Total
  Test is declared significant                 V                              S                     R
  Test is declared non-significant             U                              T                   m − R
  Total                                        m0                          m − m0                   m

Here V is the number of false positives (Type I errors), S the number of true positives, U the number of true negatives, T the number of false negatives (Type II errors), and R = V + S the total number of rejected null hypotheses. In m hypothesis tests of which m0 are true null hypotheses, R is an observable random variable, while S, T, U, and V are unobservable random variables.
Multiple testing correctionrefers to making statistical tests more stringent in order to counteract the problem of multiple testing. The best known such adjustment is theBonferroni correction, but other methods have been developed. Such methods are typically designed to control thefamily-wise error rateor thefalse discovery rate.
If m independent comparisons are performed, the family-wise error rate (FWER) is given by

{\displaystyle {\bar {\alpha }}=1-\left(1-\alpha _{\mathrm {\{per\ comparison\}} }\right)^{m}.}
Hence, unless the tests are perfectly positively dependent (i.e., identical),α¯{\displaystyle {\bar {\alpha }}}increases as the number of comparisons increases.
If we do not assume that the comparisons are independent, then we can still say:

{\displaystyle {\bar {\alpha }}\leq m\cdot \alpha _{\mathrm {\{per\ comparison\}} },}

which follows from Boole's inequality. Example: 0.2649 = 1 − (1 − 0.05)^6 ≤ 0.05 × 6 = 0.3.
There are different ways to assure that the family-wise error rate is at most α. The most conservative method, which is free of dependence and distributional assumptions, is the Bonferroni correction, α_{per comparison} = α/m. A marginally less conservative correction can be obtained by solving the equation for the family-wise error rate of m independent comparisons for α_{per comparison}. This yields α_{per comparison} = 1 − (1 − α)^{1/m}, which is known as the Šidák correction. Another procedure is the Holm–Bonferroni method, which uniformly delivers more power than the simple Bonferroni correction by testing only the lowest p-value (i = 1) against the strictest criterion, and the higher p-values (i > 1) against progressively less strict criteria, α_{per comparison} = α/(m − i + 1).[5]
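A short sketch of these per-comparison thresholds and of the Holm–Bonferroni step-down procedure; the p-values below are illustrative assumptions, not taken from any real study:

```python
import numpy as np

def holm_bonferroni(pvals, alpha=0.05):
    """Holm-Bonferroni step-down procedure: returns a boolean rejection mask."""
    pvals = np.asarray(pvals)
    m = len(pvals)
    order = np.argsort(pvals)
    reject = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(order):         # rank 0 holds the smallest p-value
        if pvals[idx] <= alpha / (m - rank):   # threshold alpha/(m - i + 1)
            reject[idx] = True
        else:
            break                              # stop at the first non-rejection
    return reject

pvals = [0.005, 0.01, 0.03, 0.04]              # illustrative p-values (assumed)
m, alpha = len(pvals), 0.05
print("Bonferroni per-comparison level:", alpha / m)               # 0.0125
print("Sidak per-comparison level:", 1 - (1 - alpha) ** (1 / m))   # ~0.0127
print("Holm rejections:", holm_bonferroni(pvals, alpha))           # first two rejected
```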
For continuous problems, one can employBayesianlogic to computem{\displaystyle m}from the prior-to-posterior volume ratio. Continuous generalizations of theBonferroniandŠidák correctionare presented in.[6]
Traditional methods for multiple comparisons adjustments focus on correcting for modest numbers of comparisons, often in ananalysis of variance. A different set of techniques have been developed for "large-scale multiple testing", in which thousands or even greater numbers of tests are performed. For example, ingenomics, when using technologies such asmicroarrays, expression levels of tens of thousands of genes can be measured, and genotypes for millions of genetic markers can be measured. Particularly in the field ofgenetic associationstudies, there has been a serious problem with non-replication — a result being strongly statistically significant in one study but failing to be replicated in a follow-up study. Such non-replication can have many causes, but it is widely considered that failure to fully account for the consequences of making multiple comparisons is one of the causes.[7]It has been argued that advances inmeasurementandinformation technologyhave made it far easier to generate large datasets forexploratory analysis, often leading to the testing of large numbers of hypotheses with no prior basis for expecting many of the hypotheses to be true. In this situation, very highfalse positive ratesare expected unless multiple comparisons adjustments are made.
For large-scale testing problems where the goal is to provide definitive results, thefamily-wise error rateremains the most accepted parameter for ascribing significance levels to statistical tests. Alternatively, if a study is viewed as exploratory, or if significant results can be easily re-tested in an independent study, control of thefalse discovery rate(FDR)[8][9][10]is often preferred. The FDR, loosely defined as the expected proportion of false positives among all significant tests, allows researchers to identify a set of "candidate positives" that can be more rigorously evaluated in a follow-up study.[11]
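As a sketch of FDR control, here is the Benjamini–Hochberg step-up procedure applied to an assumed, illustrative set of p-values:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure controlling the FDR at level q."""
    pvals = np.asarray(pvals)
    m = len(pvals)
    order = np.argsort(pvals)
    sorted_p = pvals[order]
    thresholds = np.arange(1, m + 1) / m * q       # k/m * q for k = 1..m
    passing = np.nonzero(sorted_p <= thresholds)[0]
    reject = np.zeros(m, dtype=bool)
    if passing.size:                               # reject every p-value up to the
        reject[order[:passing.max() + 1]] = True   # largest rank that passes
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.20, 0.74]   # illustrative p-values (assumed)
print(benjamini_hochberg(pvals, q=0.05))           # rejects the two smallest here
```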
The practice of trying many unadjusted comparisons in the hope of finding a significant one is a known problem; whether applied unintentionally or deliberately, it is sometimes called "p-hacking".[12][13]
A basic question faced at the outset of analyzing a large set of testing results is whether there is evidence that any of the alternative hypotheses are true. One simple meta-test that can be applied when it is assumed that the tests are independent of each other is to use thePoisson distributionas a model for the number of significant results at a given level α that would be found when all null hypotheses are true.[citation needed]If the observed number of positives is substantially greater than what should be expected, this suggests that there are likely to be some true positives among the significant results.
For example, if 1000 independent tests are performed, each at level α = 0.05, we expect 0.05 × 1000 = 50 significant tests to occur when all null hypotheses are true. Based on the Poisson distribution with mean 50, the probability of observing more than 61 significant tests is less than 0.05, so if more than 61 significant results are observed, it is very likely that some of them correspond to situations where the alternative hypothesis holds. A drawback of this approach is that it overstates the evidence that some of the alternative hypotheses are true when thetest statisticsare positively correlated, which commonly occurs in practice.[citation needed]. On the other hand, the approach remains valid even in the presence of correlation among the test statistics, as long as the Poisson distribution can be shown to provide a good approximation for the number of significant results. This scenario arises, for instance, when mining significant frequent itemsets from transactional datasets. Furthermore, a careful two stage analysis can bound the FDR at a pre-specified level.[14]
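A small sketch of this Poisson meta-test for the example in the text (1000 independent tests at α = 0.05); availability of scipy is assumed:

```python
from scipy.stats import poisson

m, alpha = 1000, 0.05
mean = m * alpha                              # 50 significant tests expected under the nulls
observed_threshold = 61                       # threshold discussed in the text
tail = poisson.sf(observed_threshold, mean)   # P(count > 61) under the Poisson model
print(tail)                                   # below 0.05, per the article's example
```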
Another common approach that can be used in situations where thetest statisticscan be standardized toZ-scoresis to make anormal quantile plotof the test statistics. If the observed quantiles are markedly moredispersedthan the normal quantiles, this suggests that some of the significant results may be true positives.[citation needed]
|
https://en.wikipedia.org/wiki/Multiple_comparisons
|
Incomputer programmingandsoftware testing,smoke testing(alsoconfidence testing,sanity testing,[1]build verification test(BVT)[2][3][4]andbuild acceptance test) is preliminary testing orsanity testingto reveal simple failures severe enough to, for example, reject a prospective software release. Smoke tests are a subset oftest casesthat cover the most important functionality of a component or system, used to aid assessment of whether main functions of the software appear to work correctly.[1][2]When used to determine if a computer program should be subjected to further, more fine-grained testing, a smoke test may be called apretest[5]or anintake test.[1]Alternatively, it is a set of tests run on each new build of aproductto verify that the build is testable before the build is released into the hands of the test team.[6]In theDevOpsparadigm, use of a build verification test step is one hallmark of thecontinuous integrationmaturity stage.[7]
For example, a smoke test may address basic questions like "does the program run?", "does the user interface open?", or "does clicking the main button do anything?" The process of smoke testing aims to determine whether the application is so badly broken as to make further immediate testing unnecessary. As the bookLessons Learned in Software Testing[8]puts it, "smoke tests broadly cover product features in a limited time [...] if key features don't work or if key bugs haven't yet been fixed, your team won't waste further time installing or testing".[3]
Smoke tests frequently run quickly, giving benefits of faster feedback, rather than running more extensivetest suites, which would naturally take longer.
Frequent reintegration with smoke testing is among industry best practices.[9][need quotation to verify] Ideally, every commit to a source code repository should trigger a continuous integration build, to identify regressions as soon as possible. If builds take too long, several commits may be batched into one build, and very large systems may be rebuilt only once a day. Overall, the guidance is to rebuild and retest as often as practical.
Smoke testing is also done by testers before accepting a build for further testing.Microsoftclaims that aftercode reviews, "smoke testingis the most cost-effective method for identifying and fixing defects in software".[10]
One can perform smoke tests either manually or usingan automated tool. In the case of automated tools, the process that generates the build will often initiate the testing.[citation needed]
Smoke tests can befunctional testsorunit tests. Functional tests exercise the complete program with various inputs. Unit tests exercise individual functions, subroutines, or object methods. Functional tests may comprise a scripted series of program inputs, possibly even with an automated mechanism for controlling mouse movements. Unit tests can be implemented either as separate functions within the code itself, or else as a driver layer that links to the code without altering the code being tested.[citation needed]
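A minimal sketch of what such smoke tests might look like in practice; the module name "myapp" and the local URL are assumptions for illustration only:

```python
# Two smoke checks: a unit-level "does it even import?" test and a
# functional-level "does the main entry point respond?" test.
import importlib
import urllib.request

def test_module_imports():
    # unit-level smoke check: the program at least loads ("myapp" is hypothetical)
    assert importlib.import_module("myapp") is not None

def test_homepage_responds():
    # functional-level smoke check: the running service answers at all
    with urllib.request.urlopen("http://localhost:8000/", timeout=5) as resp:
        assert resp.status == 200

if __name__ == "__main__":
    test_module_imports()
    test_homepage_responds()
    print("smoke tests passed: the build is worth testing further")
```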
The term originates from the centuries-old practice ofmechanical smoke testing, where smoke was pumped into pipes or machinery to identify leaks, defects, or disconnections. Widely used in plumbing and industrial applications, this method revealed problem areas by observing where smoke escaped.
Insoftware development, the term was metaphorically adopted to describe a preliminary round of testing that checks for basic functionality. Like its physical counterparts, a software smoke test aims to identify critical failures early, ensuring the system is stable and that all required components are functioning before proceeding to more comprehensive testing, such as end-to-end or load testing.
In the context ofelectronics, the term was humorously reinterpreted to describe an initial power-on test for new hardware. This usage alludes to the visible smoke produced by overloaded or improperly connected components during catastrophic failure. While the imagery is memorable, the occurrence of smoke was never an intended or sustainable testing method. Instead, it underscores the importance of performing basic checks to catch critical issues early.
For example, Cem Kaner, James Bach, and Brett Pettichord explain inLessons Learned in Software Testing:
"The phrase smoke test comes fromelectronic hardware testing. You plug in a new board and turn on the power. If you see smoke coming from the board, turn off the power. You don't have to do any more testing."[3]
|
https://en.wikipedia.org/wiki/Smoke_testing_(software)
|
Lists of acronymscontainacronyms, a type of abbreviation formed from the initial components of the words of a longer name or phrase. They are organized alphabetically and by field.
|
https://en.wikipedia.org/wiki/List_of_acronyms
|
Cramér's theorem is a fundamental result in the theory of large deviations, a subdiscipline of probability theory. It determines the rate function of a sequence of iid random variables.
A weak version of this result was first shown byHarald Cramérin 1938.
The logarithmic moment generating function (which is the cumulant-generating function) of a random variable X is defined as:

{\displaystyle \Lambda (t)=\log \operatorname {E} [\exp(tX)].}

Let X_1, X_2, … be a sequence of iid real random variables with finite logarithmic moment generating function, i.e. Λ(t) < ∞ for all t ∈ R.

Then the Legendre transform of Λ,

{\displaystyle \Lambda ^{*}(x):=\sup _{t\in \mathbb {R} }\left(tx-\Lambda (t)\right),}

satisfies

{\displaystyle \lim _{n\to \infty }{\frac {1}{n}}\log P\left(\sum _{i=1}^{n}X_{i}\geq nx\right)=-\Lambda ^{*}(x)}

for all x > E[X_1].
In the terminology of the theory of large deviations the result can be reformulated as follows:
If X_1, X_2, … is a sequence of iid random variables, then the distributions {\displaystyle \left({\mathcal {L}}\left({\tfrac {1}{n}}\sum _{i=1}^{n}X_{i}\right)\right)_{n\in \mathbb {N} }} satisfy a large deviation principle with rate function Λ*.
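As a numerical illustration (not part of the article): for iid standard normal variables, Λ(t) = t²/2, so Λ*(x) = x²/2, and the exact Gaussian tail can be used to watch the limit converge; availability of scipy is assumed:

```python
import numpy as np
from scipy.stats import norm

# For iid N(0, 1) variables, Cramer's theorem predicts
# (1/n) log P(X_1 + ... + X_n >= n x) -> -Lambda*(x) = -x^2/2.
x = 1.0
for n in [10, 100, 1000, 10000]:
    log_tail = norm.logsf(x * np.sqrt(n))   # exact: P(sum >= n x) = P(Z >= x sqrt(n))
    print(n, log_tail / n)                  # approaches -0.5
```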
|
https://en.wikipedia.org/wiki/Cram%C3%A9r%27s_theorem_(large_deviations)
|
Hidden textiscomputertextthat is displayed in such a way as to be invisible or unreadable. Hidden text is most commonly achieved by setting thefontcolour to the same colour as the background, rendering the text invisible unless the user highlights it.
Hidden text can serve several purposes. Often, websites use it to disguise spoilers for readers who do not wish to read that text. Hidden text can also be used to hide data from users who are less Internet-experienced or who are not familiar with a particular website. Hidden text may also refer to the small messages at the bottom of advertisements, permitted by some laws to state a particular liability or requirement in small text (also known as fine print). An example of this practice is to display an FTP password in hidden text to reduce the number of users who are able to access downloads and thereby save bandwidth. Parody sites (such as Uncyclopedia) occasionally use the technique as a joke about censorship, with the "censored" text displayed black-on-black in an obvious manner akin to a theatrical stage whisper.
It is also used by websites as aspamdexingtechnique to fill a page withkeywordsthat asearch enginewill recognize but are not visible to a visitor. However,Googlehas taken steps to prevent this by parsing the color of text as it indexes it and checking to see if it is transparent, and may penalize pages and give them lower rankings.[1]
Conversely,Project Honey Potuses links intended only to be followed by spambots; the links point tohoneypotswhich detect e-mail address harvesting. A link usingrel="nofollow"(to hide it from legitimate search engine spiders) and hidden text (to remove it for human visitors) would remain visible to malicious 'bots.
Compare withmetadata, which is usually also hidden, but is used for different purposes.
Hidden charactersare characters that are required for computer text to render properly but which are not a part of the content, so they are hidden. This includes characters such as those used to add a new line of text or to add space between words, commonly referred to as "white space characters".
|
https://en.wikipedia.org/wiki/Hidden_text
|
Instatistics, anerrors-in-variables modelor ameasurement error modelis aregression modelthat accounts formeasurement errorsin theindependent variables. In contrast, standard regression models assume that those regressors have been measured exactly, or observed without error; as such, those models account only for errors in thedependent variables, or responses.[citation needed]
In the case when some regressors have been measured with errors, estimation based on the standard assumption leads toinconsistentestimates, meaning that the parameter estimates do not tend to the true values even in very large samples. Forsimple linear regressionthe effect is an underestimate of the coefficient, known as theattenuation bias. Innon-linear modelsthe direction of the bias is likely to be more complicated.[1][2][3]
Consider a simple linear regression model of the form

{\displaystyle y_{t}=\alpha +\beta x_{t}^{*}+\varepsilon _{t},\qquad t=1,\dots ,T,}

where x_t^* denotes the true but unobserved regressor. Instead, we observe this value with an error:

{\displaystyle x_{t}=x_{t}^{*}+\eta _{t},}

where the measurement error η_t is assumed to be independent of the true value x_t^*. A practical application is the standard school science experiment for Hooke's law, in which one estimates the relationship between the weight added to a spring and the amount by which the spring stretches. If the y_t's are simply regressed on the x_t's (see simple linear regression), then the estimator for the slope coefficient is

{\displaystyle {\hat {\beta }}_{x}={\frac {{\tfrac {1}{T}}\sum _{t=1}^{T}(x_{t}-{\bar {x}})(y_{t}-{\bar {y}})}{{\tfrac {1}{T}}\sum _{t=1}^{T}(x_{t}-{\bar {x}})^{2}}},}

which converges as the sample size T increases without bound:

{\displaystyle {\hat {\beta }}_{x}\ {\xrightarrow {p}}\ {\frac {\operatorname {Cov} [x_{t},y_{t}]}{\operatorname {Var} [x_{t}]}}={\frac {\beta \sigma _{x^{*}}^{2}}{\sigma _{x^{*}}^{2}+\sigma _{\eta }^{2}}}.}

This is in contrast to the "true" effect of β, estimated using the x_t^*:

{\displaystyle {\hat {\beta }}\ {\xrightarrow {p}}\ {\frac {\operatorname {Cov} [x_{t}^{*},y_{t}]}{\operatorname {Var} [x_{t}^{*}]}}=\beta .}

Variances are non-negative, so that in the limit the estimated β̂_x is smaller than β̂, an effect which statisticians call attenuation or regression dilution.[4] Thus the 'naïve' least squares estimator β̂_x is an inconsistent estimator for β. However, β̂_x is a consistent estimator of the parameter required for a best linear predictor of y given the observed x_t: in some applications this may be what is required, rather than an estimate of the 'true' regression coefficient β, although that would assume that the variance of the errors in the estimation and prediction is identical. This follows directly from the result quoted immediately above, and the fact that the regression coefficient relating the y_t's to the actually observed x_t's, in a simple linear regression, is given by

{\displaystyle \beta _{x}={\frac {\operatorname {Cov} [x_{t},y_{t}]}{\operatorname {Var} [x_{t}]}}={\frac {\beta \sigma _{x^{*}}^{2}}{\sigma _{x^{*}}^{2}+\sigma _{\eta }^{2}}}.}
It is this coefficient, rather thanβ{\displaystyle \beta }, that would be required for constructing a predictor ofy{\displaystyle y}based on an observedx{\displaystyle x}which is subject to noise.
It can be argued that almost all existing data sets contain errors of different nature and magnitude, so that attenuation bias is extremely frequent (although in multivariate regression the direction of bias is ambiguous[5]).Jerry Hausmansees this as aniron law of econometrics: "The magnitude of the estimate is usually smaller than expected."[6]
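A brief simulation sketch of this attenuation effect; the true slope and the error variances below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 100_000
beta, sigma_xstar, sigma_eta, sigma_eps = 2.0, 1.0, 1.0, 0.5

x_star = rng.normal(0.0, sigma_xstar, T)          # true, unobserved regressor
x = x_star + rng.normal(0.0, sigma_eta, T)        # observed with measurement error
y = beta * x_star + rng.normal(0.0, sigma_eps, T)

naive_slope = np.cov(x, y)[0, 1] / np.var(x, ddof=1)            # ~ beta/2 = 1.0 here
true_slope = np.cov(x_star, y)[0, 1] / np.var(x_star, ddof=1)   # ~ beta = 2.0
print(naive_slope, true_slope)
```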
Usually, measurement error models are described using thelatent variablesapproach. Ify{\displaystyle y}is the response variable andx{\displaystyle x}are observed values of the regressors, then it is assumed there exist some latent variablesy∗{\displaystyle y^{*}}andx∗{\displaystyle x^{*}}which follow the model's "true"functional relationshipg(⋅){\displaystyle g(\cdot )}, and such that the observed quantities are their noisy observations:
whereθ{\displaystyle \theta }is the model'sparameterandw{\displaystyle w}are those regressors which are assumed to be error-free (for example, when linear regression contains an intercept, the regressor which corresponds to the constant certainly has no "measurement errors"). Depending on the specification these error-free regressors may or may not be treated separately; in the latter case it is simply assumed that corresponding entries in the variance matrix ofη{\displaystyle \eta }'s are zero.
The variablesy{\displaystyle y},x{\displaystyle x},w{\displaystyle w}are allobserved, meaning that the statistician possesses adata setofn{\displaystyle n}statistical units{yi,xi,wi}i=1,…,n{\displaystyle \left\{y_{i},x_{i},w_{i}\right\}_{i=1,\dots ,n}}which follow thedata generating processdescribed above; the latent variablesx∗{\displaystyle x^{*}},y∗{\displaystyle y^{*}},ε{\displaystyle \varepsilon }, andη{\displaystyle \eta }are not observed, however.
This specification does not encompass all the existing errors-in-variables models. For example, in some of them, functiong(⋅){\displaystyle g(\cdot )}may benon-parametricor semi-parametric. Other approaches model the relationship betweeny∗{\displaystyle y^{*}}andx∗{\displaystyle x^{*}}as distributional instead of functional; that is, they assume thaty∗{\displaystyle y^{*}}conditionally onx∗{\displaystyle x^{*}}follows a certain (usually parametric) distribution.
Linear errors-in-variables models were studied first, probably becauselinear modelswere so widely used and they are easier than non-linear ones. Unlike standardleast squaresregression (OLS), extending errors in variables regression (EiV) from the simple to the multivariable case is not straightforward, unless one treats all variables in the same way i.e. assume equal reliability.[10]
The simple linear errors-in-variables model was already presented in the "motivation" section:

{\displaystyle {\begin{cases}y_{t}=\alpha +\beta x_{t}^{*}+\varepsilon _{t},\\x_{t}=x_{t}^{*}+\eta _{t},\end{cases}}}
where all variables arescalar. Hereαandβare the parameters of interest, whereasσεandση—standard deviations of the error terms—are thenuisance parameters. The "true" regressorx*is treated as a random variable (structuralmodel), independent of the measurement errorη(classicassumption).
This model isidentifiablein two cases: (1) either the latent regressorx*isnotnormally distributed, (2) orx*has normal distribution, but neitherεtnorηtare divisible by a normal distribution.[11]That is, the parametersα,βcan be consistently estimated from the data set(xt,yt)t=1T{\displaystyle \scriptstyle (x_{t},\,y_{t})_{t=1}^{T}}without any additional information, provided the latent regressor is not Gaussian.
Before this identifiability result was established, statisticians attempted to apply themaximum likelihoodtechnique by assuming that all variables are normal, and then concluded that the model is not identified. The suggested remedy was toassumethat some of the parameters of the model are known or can be estimated from the outside source. Such estimation methods include[12]
Estimation methods that do not assume knowledge of some of the parameters of the model, include
where (n1,n2) are such thatK(n1+1,n2) — the jointcumulantof (x,y) — is not zero. In the case when the third central moment of the latent regressorx*is non-zero, the formula reduces to
The multivariable model looks exactly like the simple linear model, only this timeβ,ηt,xtandx*tarek×1 vectors.
In the case when (εt,ηt) is jointly normal, the parameterβis not identified if and only if there is a non-singulark×kblock matrix [a A], whereais ak×1 vector such thata′x*is distributed normally and independently ofA′x*. In the case whenεt,ηt1,...,ηtkare mutually independent, the parameterβis not identified if and only if in addition to the conditions above some of the errors can be written as the sum of two independent variables one of which is normal.[15]
Some of the estimation methods for multivariable linear models are
where∘{\displaystyle \circ }designates theHadamard productof matrices, and variablesxt,ythave been preliminarily de-meaned. The authors of the method suggest to use Fuller's modified IV estimator.[17]
A generic non-linear measurement error model takes the form

{\displaystyle {\begin{cases}y_{t}=g(x_{t}^{*})+\varepsilon _{t},\\x_{t}=x_{t}^{*}+\eta _{t}.\end{cases}}}
Here functiongcan be either parametric or non-parametric. When functiongis parametric it will be written asg(x*,β).
For a general vector-valued regressorx*the conditions for modelidentifiabilityare not known. However, in the case of scalarx*the model is identified unless the functiongis of the "log-exponential" form[20]
and the latent regressorx*has density
where constantsA,B,C,D,E,Fmay depend ona,b,c,d.
Despite this optimistic result, as of now no methods exist for estimating non-linear errors-in-variables models without any extraneous information. However, there are several techniques which make use of some additional data: either the instrumental variables, or repeated observations.
whereπ0andσ0are (unknown) constant matrices, andζt⊥zt. The coefficientπ0can be estimated using standardleast squaresregression ofxonz. The distribution ofζtis unknown; however, we can model it as belonging to a flexible parametric family – theEdgeworth series:
whereϕis thestandard normaldistribution.
Simulated moments can be computed using theimportance samplingalgorithm: first we generate several random variables {vts~ϕ,s= 1,…,S,t= 1,…,T} from the standard normal distribution, then we compute the moments att-th observation as
whereθ= (β,σ,γ),Ais just some function of the instrumental variablesz, andHis a two-component vector of moments
In this approach two (or maybe more) repeated observations of the regressorx*are available. Both observations contain their own measurement errors; however, those errors are required to be independent:
wherex*⊥η1⊥η2. Variablesη1,η2need not be identically distributed (although if they are efficiency of the estimator can be slightly improved). With only these two observations it is possible to consistently estimate the density function ofx*using Kotlarski'sdeconvolutiontechnique.[22]
where it would be possible to compute the integral if we knew the conditional density functionƒx*|x. If this function could be known or estimated, then the problem turns into standard non-linear regression, which can be estimated for example using theNLLSmethod.Assuming for simplicity thatη1,η2are identically distributed, this conditional density can be computed as
where with slight abuse of notationxjdenotes thej-th component of a vector.All densities in this formula can be estimated using inversion of the empiricalcharacteristic functions. In particular,
To invert these characteristic function one has to apply the inverse Fourier transform, with a trimming parameterCneeded to ensure the numerical stability. For example:
wherewtrepresents variables measured without errors. The regressorx*here is scalar (the method can be extended to the case of vectorx*as well).If not for the measurement errors, this would have been a standardlinear modelwith the estimator
where
It turns out that all the expected values in this formula are estimable using the same deconvolution trick. In particular, for a generic observablewt(which could be 1,w1t, …,wℓ t, oryt) and some functionh(which could represent anygjorgigj) we have
whereφhis theFourier transformofh(x*), but using the same convention as for thecharacteristic functions,
and
|
https://en.wikipedia.org/wiki/Errors-in-variables_models
|
Incomputing,preemptionis the act performed by an externalscheduler— without assistance or cooperation from the task — of temporarilyinterruptinganexecutingtask, with the intention of resuming it at a later time.[1]: 153This preemptive scheduler usually runs in the most privilegedprotection ring, meaning that interruption and then resumption are considered highly secure actions. Such changes to the currently executing task of aprocessorare known ascontext switching.
In any given system design, some operations performed by the system may not be preemptable. This usually applies tokernelfunctions and serviceinterruptswhich, if not permitted torun to completion, would tend to producerace conditionsresulting indeadlock. Barring the scheduler from preempting tasks while they are processing kernel functions simplifies the kernel design at the expense ofsystem responsiveness. The distinction betweenuser modeandkernel mode, which determines privilege level within the system, may also be used to distinguish whether a task is currently preemptable.
Most modern operating systems havepreemptive kernels, which are designed to permit tasks to be preempted even when in kernel mode. Examples of such operating systems areSolaris2.0/SunOS 5.0,[2]Windows NT,Linux kernel(2.5.4 and newer),[3]AIXand someBSDsystems (NetBSD, since version 5).
The termpreemptive multitaskingis used to distinguish amultitasking operating system, which permits preemption of tasks, from acooperative multitaskingsystem wherein processes or tasks must be explicitly programmed toyieldwhen they do not need system resources.
In simple terms, preemptive multitasking involves the use of an interrupt mechanism which suspends the currently executing process and invokes a scheduler to determine which process should execute next. As a result, all processes get some amount of CPU time over any given period of time.
In preemptive multitasking, the operating systemkernelcan also initiate acontext switchto satisfy thescheduling policy's priority constraint, thus preempting the active task. In general, preemption means "prior seizure of". When the high-priority task at that instance seizes the currently running task, it is known as preemptive scheduling.
The term "preemptive multitasking" is sometimes mistakenly used when the intended meaning is more specific, referring instead to the class of scheduling policies known astime-shared scheduling, ortime-sharing.
Preemptive multitasking allows the computer system to more reliably guarantee each process a regular "slice" of operating time. It also allows the system to rapidly deal with important external events like incoming data, which might require the immediate attention of one or another process.
At any specific time, processes can be grouped into two categories: those that are waiting for input or output (called "I/O bound"), and those that are fully utilizing the CPU ("CPU bound"). In early systems, processes would often "poll" or "busy-wait" while waiting for requested input (such as disk, keyboard or network input). During this time, the process was not performing useful work, but still maintained complete control of the CPU. With the advent of interrupts and preemptive multitasking, these I/O bound processes could be "blocked", or put on hold, pending the arrival of the necessary data, allowing other processes to utilize the CPU. As the arrival of the requested data would generate an interrupt, blocked processes could be guaranteed a timely return to execution.
Although multitasking techniques were originally developed to allow multiple users to share a single machine, it became apparent that multitasking was useful regardless of the number of users. Many operating systems, from mainframes down to single-user personal computers and no-usercontrol systems(like those inrobotic spacecraft), have recognized the usefulness of multitasking support for a variety of reasons. Multitasking makes it possible for a single user to run multiple applications at the same time, or to run "background" processes while retaining control of the computer.
The period of time for which a process is allowed to run in a preemptive multitasking system is generally called thetime sliceorquantum.[1]: 158The scheduler is run once every time slice to choose the next process to run. The length of each time slice can be critical to balancing system performance vs process responsiveness - if the time slice is too short then the scheduler itself will consume too much processing time, but if the time slice is too long, processes will take longer to respond to input.
Aninterruptis scheduled to allow theoperating systemkernelto switch between processes when their time slices expire, effectively allowing the processor's time to be shared among a number of tasks, giving the illusion that it is dealing with these tasks in parallel (simultaneously). The operating system which controls such a design is called a multi-tasking system.
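A toy sketch of the time-slice mechanism described above: each task runs for at most one quantum, is then preempted, and rejoins the back of the ready queue. Task names and burst times are invented for illustration:

```python
from collections import deque

def round_robin(tasks, quantum):
    """Simulate preemptive round-robin scheduling over (name, remaining_time) pairs."""
    ready = deque(tasks)
    timeline = []
    while ready:
        name, remaining = ready.popleft()
        ran = min(quantum, remaining)       # run until done or the time slice expires
        timeline.append((name, ran))
        remaining -= ran
        if remaining > 0:                   # quantum expired: preempt and requeue
            ready.append((name, remaining))
    return timeline

print(round_robin([("A", 5), ("B", 2), ("C", 4)], quantum=2))
# [('A', 2), ('B', 2), ('C', 2), ('A', 2), ('C', 2), ('A', 1)]
```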
Today, nearly all operating systems support preemptive multitasking, including the current versions ofWindows,macOS,Linux(includingAndroid),iOSandiPadOS.
An early microcomputer operating system providing preemptive multitasking wasMicroware'sOS-9, available for computers based on theMotorola 6809, including home computers such as theTRS-80 Color Computer 2when configured with disk drives,[4]with the operating system supplied by Tandy as an upgrade.[5]Sinclair QDOS[6]:18andAmigaOSon theAmigawere also microcomputer operating systems offering preemptive multitasking as a core feature. These both ran onMotorola 68000-familymicroprocessorswithout memory management. Amiga OS useddynamic loadingof relocatable code blocks ("hunks" in Amiga jargon) to multitask preemptively all processes in the same flat address space.
Early operating systems forIBM PC compatiblessuch asMS-DOSandPC DOS, did not support multitasking at all, however alternative operating systems such asMP/M-86(1981) andConcurrent CP/M-86did support preemptive multitasking. OtherUnix-likesystems includingMINIXandCoherentprovided preemptive multitasking on 1980s-era personal computers.
LaterMS-DOScompatible systems natively supporting preemptive multitasking/multithreading includeConcurrent DOS,Multiuser DOS,Novell DOS(later calledCaldera OpenDOSandDR-DOS7.02 and higher). SinceConcurrent DOS 386, they could also run multiple DOS programs concurrently invirtual DOS machines.
The earliest version of Windows to support a limited form of preemptive multitasking wasWindows/386 2.0, which used theIntel 80386'sVirtual 8086 modeto run DOS applications invirtual 8086 machines, commonly known as "DOS boxes", which could be preempted. InWindows 95, 98 and Me, 32-bit applications were made preemptive by running each one in a separate address space, but 16-bit applications remained cooperative for backward compatibility.[7]In Windows 3.1x (protected mode), the kernel and virtual device drivers ran preemptively, but all 16-bit applications were non-preemptive and shared the same address space.
Preemptive multitasking has always been supported byWindows NT(all versions),OS/2(native applications),UnixandUnix-likesystems (such asLinux,BSDandmacOS),VMS,OS/360, and many other operating systems designed for use in the academic and medium-to-large business markets.
Early versions of theclassic Mac OSdid not support multitasking at all, with cooperative multitasking becoming available viaMultiFinderinSystem Software 5and then standard inSystem 7. Although there were plans to upgrade the cooperative multitasking found in the classic Mac OS to a preemptive model (and a preemptive API did exist inMac OS 9, although in a limited sense[8]), these were abandoned in favor ofMac OS X (now called macOS)that, as a hybrid of the old Mac System style andNeXTSTEP, is an operating system based on theMachkernel and derived in part fromBSD, which had always provided Unix-like preemptive multitasking.
|
https://en.wikipedia.org/wiki/Preemptive_scheduling
|
In computer security, a sandbox is a security mechanism for separating running programs, usually in an effort to mitigate system failures and/or software vulnerabilities from spreading. The sandbox metaphor derives from the concept of a child's sandbox, a play area where children can build, destroy, and experiment without causing any real-world damage.[1] It is often used to execute untested or untrusted programs or code, possibly from unverified or untrusted third parties, suppliers, users or websites, without risking harm to the host machine or operating system.[2] A sandbox typically provides a tightly controlled set of resources for guest programs to run in, such as storage and memory scratch space. Network access, the ability to inspect the host system, and reading from input devices are usually disallowed or heavily restricted.
In the sense of providing a highly controlled environment, sandboxes may be seen as a specific example ofvirtualization. Sandboxing is frequently used to test unverified programs that may contain avirusor othermalicious codewithout allowing the software to harm the host device.[3]
A sandbox is implemented by executing the software in a restricted operating system environment, thus controlling the resources (e.g.file descriptors, memory, file system space, etc.) that a process may use.[4]
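A minimal, Unix-only sketch of this idea using per-process resource limits; the command and the specific limits are illustrative assumptions, and real sandboxes layer on much stronger isolation (namespaces, seccomp filters, chroot, and so on):

```python
import resource
import subprocess

def limit_resources():
    # applied in the child process just before exec
    resource.setrlimit(resource.RLIMIT_CPU, (2, 2))                     # 2 s of CPU time
    resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20, 256 * 2**20))  # 256 MiB of memory
    resource.setrlimit(resource.RLIMIT_NOFILE, (64, 64))                # 64 open file descriptors

result = subprocess.run(
    ["python3", "-c", "print('hello from the restricted child')"],
    preexec_fn=limit_resources,
    capture_output=True,
    timeout=5,
)
print(result.stdout.decode())
```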
Examples of sandbox implementations include the following:
Some of the use cases for sandboxes include the following:
|
https://en.wikipedia.org/wiki/Sandbox_(computer_security)
|
Communications interceptioncan mean:
|
https://en.wikipedia.org/wiki/Communications_interception_(disambiguation)
|
TheIEEE 754-2008standard includes decimal floating-point number formats in which thesignificandand the exponent (and the payloads ofNaNs) can be encoded in two ways, referred to asbinary encodinganddecimal encoding.[1]
Both formats break a number down into a sign bits, an exponentq(betweenqminandqmax), and ap-digit significandc(between 0 and 10p−1). The value encoded is (−1)s×10q×c. In both formats the range of possible values is identical, but they differ in how the significandcis represented. In the decimal encoding, it is encoded as a series ofpdecimal digits (using thedensely packed decimal(DPD) encoding). This makes conversion to decimal form efficient, but requires a specialized decimalALUto process. In thebinary integer decimal(BID) encoding, it is encoded as a binary number.
Using the fact that 210= 1024 is only slightly more than 103= 1000, 3n-digit decimal numbers can be efficiently packed into 10nbinary bits. However, the IEEE formats have significands of 3n+1 digits, which would generally require 10n+4 binary bits to represent.
This would not be efficient, because only 10 of the 16 possible values of the additional four bits are needed. A more efficient encoding can be designed using the fact that the exponent range is of the form 3×2k, so the exponent never starts with11. Using the Decimal32 encoding (with a significand of 3*2+1 decimal digits) as an example (estands for exponent,mfor mantissa, i.e. significand):
The bits shown in parentheses areimplicit: they are not included in the 32 bits of the Decimal32 encoding, but are implied by the two bits after the sign bit.
The Decimal64 and Decimal128 encodings have larger exponent and significand fields, but operate in a similar fashion.
For the Decimal128 encoding, 113 bits of significand is actually enough to encode 34 decimal digits, and the second form is never actually required.
A decimal floating-point number can be encoded in several ways, the different ways represent different precisions, for example 100.0 is encoded as 1000×10−1, while 100.00 is encoded as 10000×10−2. The set of possible encodings of the same numerical value is called acohortin the standard. If the result of a calculation is inexact the largest amount of significant data is preserved by selecting the cohort member with the largest integer that can be stored in the significand along with the required exponent.
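Python's decimal module follows the same decimal arithmetic model and can illustrate the cohort idea (this is not the IEEE 754 bit-level BID encoding itself, only the value-and-exponent behaviour):

```python
from decimal import Decimal

a = Decimal("100.0")     # held as 1000 x 10^-1
b = Decimal("100.00")    # held as 10000 x 10^-2: same value, different cohort member
print(a == b)            # True
print(a.as_tuple())      # DecimalTuple(sign=0, digits=(1, 0, 0, 0), exponent=-1)
print(b.as_tuple())      # DecimalTuple(sign=0, digits=(1, 0, 0, 0, 0), exponent=-2)
```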
The proposed IEEE 754r standard limits the range of numbers to a significand of the form 10n−1, where n is the number of whole decimal digits that can be stored in the bits available so that decimal rounding is effected correctly.
A binary encoding is inherently less efficient for conversions to or from decimal-encoded data, such as strings (ASCII,Unicode, etc.) andBCD. A binary encoding is therefore best chosen only when the data are binary rather than decimal. IBM has published some unverified performance data.[2]
|
https://en.wikipedia.org/wiki/Binary_integer_decimal
|
Artificial general intelligence(AGI)—sometimes calledhuman‑level intelligence AI—is a type ofartificial intelligencethat would match or surpass human capabilities across virtually all cognitive tasks.[1][2]
Some researchers argue that state‑of‑the‑artlarge language modelsalready exhibit early signs of AGI‑level capability, while others maintain that genuine AGI has not yet been achieved.[3]AGI is conceptually distinct fromartificial superintelligence(ASI), which would outperform the best human abilities across every domain by a wide margin.[4]AGI is considered one of the definitions ofstrong AI.
Unlikeartificial narrow intelligence(ANI), whose competence is confined to well‑defined tasks, an AGI system can generalise knowledge, transfer skills between domains, and solve novel problems without task‑specific reprogramming. The concept does not, in principle, require the system to be an autonomous agent; a static model—such as a highly capable large language model—or an embodied robot could both satisfy the definition so long as human‑level breadth and proficiency are achieved.[5]
Creating AGI is a primary goal of AI research and of companies such asOpenAI,[6]Google,[7]andMeta.[8]A 2020 survey identified 72 active AGIresearch and developmentprojects across 37 countries.[9]
The timeline for achieving human‑level intelligence AI remains deeply contested. Recent surveys of AI researchers give median forecasts ranging from the early 2030s to mid‑century, while still recording significant numbers who expect arrival much sooner—or never at all.[10][11][12]There is debate on the exact definition of AGI and regarding whether modernlarge language models(LLMs) such asGPT-4are early forms of AGI.[3]AGI is a common topic inscience fictionandfutures studies.[13][14]
Contention exists over whether AGI represents anexistential risk.[15][16][17]Many AI expertshave statedthat mitigating the risk of human extinction posed by AGI should be a global priority.[18][19]Others find the development of AGI to be in too remote a stage to present such a risk.[20][21]
AGI is also known as strong AI,[22][23]full AI,[24]human-level AI,[25]human-level intelligent AI, or general intelligent action.[26]
Some academic sources reserve the term "strong AI" for computer programs that will experiencesentienceorconsciousness.[a]In contrast, weak AI (or narrow AI) is able to solve one specific problem but lacks general cognitive abilities.[27][23]Some academic sources use "weak AI" to refer more broadly to any programs that neither experience consciousness nor have a mind in the same sense as humans.[a]
Related concepts include artificialsuperintelligenceand transformative AI. An artificial superintelligence (ASI) is a hypothetical type of AGI that is much more generally intelligent than humans,[28]while the notion of transformative AI relates to AI having a large impact on society, for example, similar to the agricultural or industrial revolution.[29]
A framework for classifying AGI by performance and autonomy was proposed in 2023 byGoogle DeepMindresearchers. They define five performance levels of AGI: emerging, competent, expert, virtuoso, and superhuman. For example, a competent AGI is defined as an AI that outperforms 50% of skilled adults in a wide range of non-physical tasks, and a superhuman AGI (i.e. an artificial superintelligence) is similarly defined but with a threshold of 100%. They consider large language models likeChatGPTorLLaMA 2to be instances of emerging AGI (comparable to unskilled humans). Regarding the autonomy of AGI and associated risks, they define five levels: tool (fully in human control), consultant, collaborator, expert, and agent (fully autonomous).[30]
Various popular definitions ofintelligencehave been proposed. One of the leading proposals is theTuring test. However, there are other well-known definitions, and some researchers disagree with the more popular approaches.[b]
Researchers generally hold that a system is required to do all of the following to be regarded as an AGI:[32]
Manyinterdisciplinaryapproaches (e.g.cognitive science,computational intelligence, anddecision making) consider additional traits such asimagination(the ability to form novel mental images and concepts)[33]andautonomy.[34]
Computer-based systems that exhibit many of these capabilities exist (e.g. seecomputational creativity,automated reasoning,decision support system,robot,evolutionary computation,intelligent agent). There is debate about whether modern AI systems possess them to an adequate degree.[35]
Other capabilities are considered desirable in intelligent systems, as they may affect intelligence or aid in its expression. These include:[36]
This includes the ability to detect and respond tohazard.[37]
Although the ability to sense (e.g. see, hear, etc.) and the ability to act (e.g. move and manipulate objects, change location to explore, etc.) can be desirable for some intelligent systems,[36] these physical capabilities are not strictly required for an entity to qualify as AGI, particularly under the thesis that large language models (LLMs) may already be or become AGI. Even from a less optimistic perspective on LLMs, there is no firm requirement for an AGI to have a human-like form; being a silicon-based computational system is sufficient, provided it can process input (language) from the external world in place of human senses. This interpretation aligns with the understanding that AGI has never been prescribed a particular physical embodiment and thus does not demand a capacity for locomotion or traditional "eyes and ears".[37] It can be regarded as sufficient for an intelligent computer to interact with other systems, to invoke or regulate them, to achieve specific goals, including altering a physical environment, as HAL in 2001: A Space Odyssey was both programmed and tasked to.[38]
Several tests meant to confirm human-level AGI have been considered, including:[39][40]
The idea of the test is that the machine has to try and pretend to be a man, by answering questions put to it, and it will only pass if the pretence is reasonably convincing. A considerable portion of a jury, who should not be expert about machines, must be taken in by the pretence.[43]
A problem is informally called "AI-complete" or "AI-hard" if it is believed that in order to solve it, one would need to implement AGI, because the solution is beyond the capabilities of a purpose-specific algorithm.[56]
There are many problems that have been conjectured to require general intelligence to solve as well as humans. Examples includecomputer vision,natural language understanding, and dealing with unexpected circumstances while solving any real-world problem.[57]Even a specific task liketranslationrequires a machine to read and write in both languages, follow the author's argument (reason), understand the context (knowledge), and faithfully reproduce the author's original intent (social intelligence). All of these problems need to be solved simultaneously in order to reach human-level machine performance.
However, many of these tasks can now be performed by modern large language models. According toStanford University's 2024 AI index, AI has reached human-level performance on manybenchmarksfor reading comprehension and visual reasoning.[58]
Modern AI research began in the mid-1950s.[59]The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades.[60]AI pioneerHerbert A. Simonwrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do."[61]
Their predictions were the inspiration forStanley KubrickandArthur C. Clarke's characterHAL 9000, who embodied what AI researchers believed they could create by the year 2001. AI pioneerMarvin Minskywas a consultant[62]on the project of making HAL 9000 as realistic as possible according to the consensus predictions of the time. He said in 1967, "Within a generation... the problem of creating 'artificial intelligence' will substantially be solved".[63]
Severalclassical AI projects, such asDoug Lenat'sCycproject (that began in 1984), andAllen Newell'sSoarproject, were directed at AGI.
However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI".[c]In the early 1980s, Japan'sFifth Generation ComputerProject revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation".[67]In response to this and the success ofexpert systems, both industry and government pumped money into the field.[65][68]However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled.[69]For the second time in 20 years, AI researchers who predicted the imminent achievement of AGI had been mistaken. By the 1990s, AI researchers had a reputation for making vain promises. They became reluctant to make predictions at all[d]and avoided mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]".[71]
In the 1990s and early 21st century, mainstream AI achieved commercial success and academic respectability by focusing on specific sub-problems where AI can produce verifiable results and commercial applications, such asspeech recognitionandrecommendation algorithms.[72]These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is heavily funded in both academia and industry. As of 2018[update], development in this field was considered an emerging trend, and a mature stage was expected to be reached in more than 10 years.[73]
At the turn of the century, many mainstream AI researchers[74]hoped that strong AI could be developed by combining programs that solve various sub-problems.Hans Moravecwrote in 1988:
I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real-world competence and thecommonsense knowledgethat has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphoricalgolden spikeis driven uniting the two efforts.[74]
However, even at the time, this was disputed. For example, Stevan Harnad of Princeton University concluded his 1990 paper on thesymbol grounding hypothesisby stating:
The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer).[75]
The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud[76]in a discussion of the implications of fully automated military production and operations. A mathematical formalism of AGI was proposed byMarcus Hutterin 2000. NamedAIXI, the proposed AGI agent maximises "the ability to satisfy goals in a wide range of environments".[77]This type of AGI, characterized by the ability to maximise a mathematical definition of intelligence rather than exhibit human-like behaviour,[78]was also called universal artificial intelligence.[79]
The term AGI was re-introduced and popularized byShane LeggandBen Goertzelaround 2002.[80]AGI research activity in 2006 was described by Pei Wang and Ben Goertzel[81]as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009[82]by the Xiamen university's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010[83]and 2011[84]at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course on AGI in 2018, organized byLex Fridmanand featuring a number of guest lecturers.
As of 2023[update], a small number of computer scientists are active in AGI research, and many contribute to a series of AGI conferences. However, increasingly more researchers are interested in open-ended learning,[85][3]which is the idea of allowing AI to continuously learn and innovate like humans do.
As of 2023, the development and potential achievement of AGI remains a subject of intense debate within the AI community. While traditional consensus held that AGI was a distant goal, recent advancements have led some researchers and industry figures to claim that early forms of AGI may already exist.[86]AI pioneerHerbert A. Simonspeculated in 1965 that "machines will be capable, within twenty years, of doing any work a man can do". This prediction failed to come true. Microsoft co-founderPaul Allenbelieved that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition".[87]Writing inThe Guardian, roboticistAlan Winfieldclaimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight.[88]
A further challenge is the lack of clarity in defining whatintelligenceentails. Does it require consciousness? Must it display the ability to set goals as well as pursue them? Is it purely a matter of scale such that if model sizes increase sufficiently, intelligence will emerge? Are facilities such as planning, reasoning, and causal understanding required? Does intelligence require explicitly replicating the brain and its specific faculties? Does it require emotions?[89]
Most AI researchers believe strong AI can be achieved in the future, but some thinkers, likeHubert DreyfusandRoger Penrose, deny the possibility of achieving strong AI.[90][91]John McCarthyis among those who believe human-level AI will be accomplished, but that the present level of progress is such that a date cannot accurately be predicted.[92]AI experts' views on the feasibility of AGI wax and wane. Four polls conducted in 2012 and 2013 suggested that the median estimate among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the experts, 16.5% answered with "never" when asked the same question but with a 90% confidence instead.[93][94]Further current AGI progress considerations can be found aboveTests for confirming human-level AGI.
A report by Stuart Armstrong and Kaj Sotala of theMachine Intelligence Research Institutefound that "over [a] 60-year time frame there is a strong bias towards predicting the arrival of human-level AI as between 15 and 25 years from the time the prediction was made". They analyzed 95 predictions made between 1950 and 2012 on when human-level AI will come about.[95]
In 2023,Microsoftresearchers published a detailed evaluation ofGPT-4. They concluded: "Given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system."[96]Another study in 2023 reported that GPT-4 outperforms 99% of humans on theTorrance tests of creative thinking.[97][98]
Blaise Agüera y Arcas and Peter Norvig wrote in 2023 that a significant level of general intelligence has already been achieved with frontier models. They wrote that resistance to this view comes from four main reasons: a "healthy skepticism about metrics for AGI", an "ideological commitment to alternative AI theories or techniques", a "devotion to human (or biological) exceptionalism", or a "concern about the economic implications of AGI".[99]
2023 also marked the emergence of large multimodal models (large language models capable of processing or generating multiplemodalitiessuch as text, audio, and images).[100]
In 2024, OpenAI releasedo1-preview, the first of a series of models that "spend more time thinking before they respond". According toMira Murati, this ability to think before responding represents a new, additional paradigm. It improves model outputs by spending more computing power when generating the answer, whereas the model scaling paradigm improves outputs by increasing the model size, training data and training compute power.[101][102]
AnOpenAIemployee, Vahid Kazemi, claimed in 2024 that the company had achieved AGI, stating, "In my opinion, we have already achieved AGI and it's even more clear withO1." Kazemi clarified that while the AI is not yet "better than any human at any task", it is "better than most humans at most tasks." He also addressed criticisms that large language models (LLMs) merely follow predefined patterns, comparing their learning process to the scientific method of observing, hypothesizing, and verifying. These statements have sparked debate, as they rely on a broad and unconventional definition of AGI—traditionally understood as AI that matches human intelligence across all domains. Critics argue that, while OpenAI's models demonstrate remarkable versatility, they may not fully meet this standard. Notably, Kazemi's comments came shortly after OpenAI removed "AGI" from the terms of its partnership withMicrosoft, prompting speculation about the company's strategic intentions.[103]
Progress in artificial intelligence has historically gone through periods of rapid progress separated by periods when progress appeared to stop.[90]Ending each hiatus were fundamental advances in hardware, software or both to create space for further progress.[90][106][107]For example, the computer hardware available in the twentieth century was not sufficient to implement deep learning, which requires large numbers ofGPU-enabledCPUs.[108]
In the introduction to his 2006 book,[109]Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century. As of 2007[update], the consensus in the AGI research community seemed to be that the timeline discussed byRay Kurzweilin 2005 inThe Singularity is Near[110](i.e. between 2015 and 2045) was plausible.[111]Mainstream AI researchers have given a wide range of opinions on whether progress will be this rapid. A 2012 meta-analysis of 95 such opinions found a bias towards predicting that the onset of AGI would occur within 16–26 years for modern and historical predictions alike. That paper has been criticized for how it categorized opinions as expert or non-expert.[112]
In 2012,Alex Krizhevsky,Ilya Sutskever, andGeoffrey Hintondeveloped a neural network calledAlexNet, which won theImageNetcompetition with a top-5 test error rate of 15.3%, significantly better than the second-best entry's rate of 26.3% (the traditional approach used a weighted sum of scores from different pre-defined classifiers).[113]AlexNet was regarded as the initial ground-breaker of the currentdeep learningwave.[113]
In 2017, researchers Feng Liu, Yong Shi, and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AI such as Google AI, Apple's Siri, and others. At the maximum, these AIs reached an IQ value of about 47, which corresponds approximately to a six-year-old child in first grade. An average adult scores about 100. Similar tests were carried out in 2014, with the IQ score reaching a maximum value of 27.[114][115]
In 2020,OpenAIdevelopedGPT-3, a language model capable of performing many diverse tasks without specific training. According toGary Grossmanin aVentureBeatarticle, while there is consensus that GPT-3 is not an example of AGI, it is considered by some to be too advanced to be classified as a narrow AI system.[116]
In the same year, Jason Rohrer used his GPT-3 account to develop a chatbot, and provided a chatbot-developing platform called "Project December". OpenAI asked for changes to the chatbot to comply with their safety guidelines; Rohrer disconnected Project December from the GPT-3 API.[117]
In 2022,DeepMinddevelopedGato, a "general-purpose" system capable of performing more than 600 different tasks.[118]
In 2023,Microsoft Researchpublished a study on an early version of OpenAI'sGPT-4, contending that it exhibited more general intelligence than previous AI models and demonstrated human-level performance in tasks spanning multiple domains, such as mathematics, coding, and law. This research sparked a debate on whether GPT-4 could be considered an early, incomplete version of artificial general intelligence, emphasizing the need for further exploration and evaluation of such systems.[3]
In 2023, AI researcherGeoffrey Hintonstated that:[119]
The idea that this stuff could actually get smarter than people – a few people believed that, [...]. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.
He estimated in 2024 (with low confidence) that systems smarter than humans could appear within 5 to 20 years and stressed the attendant existential risks.[120]
In May 2023,Demis Hassabissimilarly said that "The progress in the last few years has been pretty incredible", and that he sees no reason why it would slow down, expecting AGI within a decade or even a few years.[121]In March 2024,Nvidia's CEO,Jensen Huang, stated his expectation that within five years, AI would be capable of passing any test at least as well as humans.[122]In June 2024, the AI researcherLeopold Aschenbrenner, a formerOpenAIemployee, estimated AGI by 2027 to be "strikingly plausible".[123]
While the development oftransformermodels like inChatGPTis considered the most promising path to AGI,[124][125]whole brain emulationcan serve as an alternative approach. With whole brain simulation, a brain model is built byscanningandmappinga biological brain in detail, and then copying and simulating it on a computer system or another computational device. Thesimulationmodel must be sufficiently faithful to the original, so that it behaves in practically the same way as the original brain.[126]Whole brain emulation is a type ofbrain simulationthat is discussed incomputational neuroscienceandneuroinformatics, and for medical research purposes. It has been discussed inartificial intelligenceresearch[111]as an approach to strong AI.Neuroimagingtechnologies that could deliver the necessary detailed understanding are improving rapidly, andfuturistRay Kurzweilin the bookThe Singularity Is Near[110]predicts that a map of sufficient quality will become available on a similar timescale to the computing power required to emulate it.
For low-level brain simulation, a very powerful cluster of computers or GPUs would be required, given the enormous quantity of synapses within the human brain. Each of the 10^11 (one hundred billion) neurons has on average 7,000 synaptic connections (synapses) to other neurons. The brain of a three-year-old child has about 10^15 synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10^14 to 5×10^14 synapses (100 to 500 trillion).[128] An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10^14 (100 trillion) synaptic updates per second (SUPS).[129]
In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10^16 computations per second (cps).[e] (For comparison, if a "computation" was equivalent to one "floating-point operation" – a measure used to rate current supercomputers – then 10^16 "computations" would be equivalent to 10 petaFLOPS, achieved in 2011, while 10^18 was achieved in 2022.) He used this figure to predict the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.
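A rough back-of-the-envelope check of these figures in Python, using only the estimates quoted above (an illustrative sketch, not a new calculation from the cited sources):

# Back-of-the-envelope arithmetic using the estimates quoted above
# (illustrative only; the constants are the figures cited in this section).
neurons = 1e11                 # ~10^11 neurons in the human brain
synapses_per_neuron = 7_000    # average synaptic connections per neuron
synapse_estimate = neurons * synapses_per_neuron
print(f"synapse estimate: {synapse_estimate:.0e}")  # ~7e+14, the same order as the quoted adult range of 1e14 to 5e14

sups = 1e14                    # synaptic updates per second (simple switch model)
kurzweil_cps = 1e16            # Kurzweil's 1997 hardware-equivalence figure
ten_petaflops = 1e16           # 10 petaFLOPS, reached by supercomputers in 2011
print(f"Kurzweil's figure vs. a 2011 supercomputer: {kurzweil_cps / ten_petaflops:.1f}x")  # 1.0x, i.e. comparable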
The Human Brain Project, an EU-funded initiative active from 2013 to 2023, developed a particularly detailed and publicly accessible atlas of the human brain.[132] In 2023, researchers from Duke University performed a high-resolution scan of a mouse brain.
Theartificial neuronmodel assumed by Kurzweil and used in many currentartificial neural networkimplementations is simple compared withbiological neurons. A brain simulation would likely have to capture the detailed cellular behaviour of biologicalneurons, presently understood only in broad outline. The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate. In addition, the estimates do not account forglial cells, which are known to play a role in cognitive processes.[133]
A fundamental criticism of the simulated brain approach derives from embodied cognition theory, which asserts that human embodiment is an essential aspect of human intelligence and is necessary to ground meaning.[134][135] If this theory is correct, any fully functional brain model will need to encompass more than just the neurons (e.g., a robotic body). Goertzel[111] proposes virtual embodiment (such as in metaverses like Second Life) as an option, but it is unknown whether this would be sufficient.
In 1980, philosopherJohn Searlecoined the term "strong AI" as part of hisChinese roomargument.[136]He proposed a distinction between two hypotheses about artificial intelligence:[f]
The first one he called "strong" because it makes astrongerstatement: it assumes something special has happened to the machine that goes beyond those abilities that we can test. The behaviour of a "weak AI" machine would be precisely identical to a "strong AI" machine, but the latter would also have subjective conscious experience. This usage is also common in academic AI research and textbooks.[137]
In contrast to Searle and mainstream AI, some futurists such asRay Kurzweiluse the term "strong AI" to mean "human level artificial general intelligence".[110]This is not the same as Searle'sstrong AI, unless it is assumed thatconsciousnessis necessary for human-level AGI. Academic philosophers such as Searle do not believe that is the case, and to most artificial intelligence researchers the question is out-of-scope.[138]
Mainstream AI is most interested in how a program behaves.[139] According to Russell and Norvig, "as long as the program works, they don't care if you call it real or a simulation."[138] If the program can behave as if it has a mind, then there is no need to know whether it actually has a mind – indeed, there would be no way to tell. For AI research, Searle's "weak AI hypothesis" is equivalent to the statement "artificial general intelligence is possible". According to Russell and Norvig, "most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."[138] Thus, for academic AI research, "Strong AI" and "AGI" are two different things.
Consciousness can have various meanings, and some aspects play significant roles in science fiction and theethics of artificial intelligence:
These traits have a moral dimension. AI sentience would give rise to concerns of welfare and legal protection, similarly to animals.[144]Other aspects of consciousness related to cognitive capabilities are also relevant to the concept of AI rights.[145]Figuring out how to integrate advanced AI with existing legal and social frameworks is an emergent issue.[146]
AGI could have a wide variety of applications. If oriented towards such goals, AGI could help mitigate various problems in the world such as hunger, poverty and health problems.[147]
AGI could improve productivity and efficiency in most jobs. For example, in public health, AGI could accelerate medical research, notably against cancer.[148]It could take care of the elderly,[149]and democratize access to rapid, high-quality medical diagnostics. It could offer fun, cheap and personalized education.[149]The need to work to subsist couldbecome obsoleteif the wealth produced is properlyredistributed.[149][150]This also raises the question of the place of humans in a radically automated society.
AGI could also help to make rational decisions, and to anticipate and prevent disasters. It could also help to reap the benefits of potentially catastrophic technologies such asnanotechnologyorclimate engineering, while avoiding the associated risks.[151]If an AGI's primary goal is to prevent existential catastrophes such as human extinction (which could be difficult if theVulnerable World Hypothesisturns out to be true),[152]it could take measures to drastically reduce the risks[151]while minimizing the impact of these measures on our quality of life.
Advancements in medicine and healthcare
AGI would improve healthcare by making medical diagnostics faster, cheaper, and more accurate. AI-driven systems can analyse patient data and detect diseases at an early stage.[153]This means patients will get diagnosed quicker and be able to seek medical attention before their medical condition gets worse. AGI systems could also recommend personalised treatment plans based on genetics and medical history.[154]
Additionally, AGI could accelerate drug discovery by simulating molecular interactions, reducing the time it takes to develop new medicines for conditions like cancer and Alzheimer's.[155]In hospitals, AGI-powered robotic assistants could assist in surgeries, monitor patients, and provide real-time medical support. It could also be used in elderly care, helping aging populations maintain independence through AI-powered caregivers and health-monitoring systems.
By evaluating large datasets, AGI can assist in developing personalised treatment plans tailored to individual patient needs. This approach ensures that therapies are optimised based on a patient's unique medical history and genetic profile, improving outcomes and reducing adverse effects.[156]
Advancements in science and technology
AGI can become a tool for scientific research and innovation. In fields such as physics and mathematics, AGI could help solve complex problems that require massive computational power, such as modeling quantum systems, understanding dark matter, or proving mathematical theorems.[157]Problems that have remained unsolved for decades may be solved with AGI.
AGI could also drive technological breakthroughs that could reshape society. It can do this by optimising engineering designs, discovering new materials, and improving automation. For example, AI is already playing a role in developing more efficient renewable energy sources and optimising supply chains in manufacturing.[158]Future AGI systems could push these innovations even further.
Enhancing education and productivity
AGI can personalize education by creating learning programs that are specific to each student's strengths, weaknesses, and interests. Unlike traditional teaching methods, AI-driven tutoring systems could adapt lessons in real-time, ensuring students understand difficult concepts before moving on.[159]
In the workplace, AGI could automate repetitive tasks, freeing up workers for more creative and strategic roles.[158]It could also improve efficiency across industries by optimising logistics, enhancing cybersecurity, and streamlining business operations. If properly managed, the wealth generated by AGI-driven automation could reduce the need for people to work for a living. Working may become optional.[160]
Mitigating global crises
AGI could play a crucial role in preventing and managing global threats. It could help governments and organizations predict and respond to natural disasters more effectively, using real-time data analysis to forecast hurricanes, earthquakes, and pandemics.[161]By analyzing vast datasets from satellites, sensors, and historical records, AGI could improve early warning systems, enabling faster disaster response and minimising casualties.
In climate science, AGI could develop new models for reducing carbon emissions, optimising energy resources, and mitigating climate change effects. It could also enhance weather prediction accuracy, allowing policymakers to implement more effective environmental regulations. Additionally, AGI could help regulate emerging technologies that carry significant risks, such as nanotechnology and bioengineering, by analysing complex systems and predicting unintended consequences.[157]Furthermore, AGI could assist in cybersecurity by detecting and mitigating large-scale cyber threats, protecting critical infrastructure, and preventing digital warfare.
Revitalising environmental conservation and biodiversity
AGI could significantly contribute to preserving the environment and protecting endangered species. By analyzing satellite imagery, climate data, and wildlife patterns, AGI systems could identify environmental threats earlier and recommend targeted conservation strategies.[162]AGI could help optimize land use, monitor illegal activities like poaching or deforestation in real-time, and support global efforts to restore ecosystems. Advanced predictive models developed by AGI could also assist in reversing biodiversity loss, ensuring the survival of critical species and maintaining ecological balance.[163]
AGI could revolutionize humanity’s ability to explore and settle beyond Earth. With its advanced problem-solving skills, AGI could autonomously manage complex space missions, including navigation, resource management, and emergency response. It could accelerate the design of life support systems, habitats, and spacecraft optimized for extraterrestrial environments. Furthermore, AGI could support efforts to colonize planets like Mars by simulating survival scenarios and helping humans adapt to new worlds, dramatically expanding the possibilities for interplanetary civilization.[164]
AGI may represent multiple types of existential risk, which are risks that threaten "the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development".[165] The risk of human extinction from AGI has been the topic of many debates, but there is also the possibility that the development of AGI would lead to a permanently flawed future. Notably, it could be used to spread and preserve the set of values of whoever develops it. If humanity still has moral blind spots similar to slavery in the past, AGI might irreversibly entrench them, preventing moral progress.[166] Furthermore, AGI could facilitate mass surveillance and indoctrination, which could be used to create a stable repressive worldwide totalitarian regime.[167][168] There is also a risk for the machines themselves. If machines that are sentient or otherwise worthy of moral consideration are mass created in the future, engaging in a civilizational path that indefinitely neglects their welfare and interests could be an existential catastrophe.[169][170] Considering how much AGI could improve humanity's future and help reduce other existential risks, Toby Ord calls these existential risks "an argument for proceeding with due caution", not for "abandoning AI".[167]
The thesis that AI poses an existential risk for humans, and that this risk needs more attention, is controversial but has been endorsed in 2023 by many public figures, AI researchers and CEOs of AI companies such asElon Musk,Bill Gates,Geoffrey Hinton,Yoshua Bengio,Demis HassabisandSam Altman.[171][172]
In 2014,Stephen Hawkingcriticized widespread indifference:
So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here—we'll leave the lights on?' Probably not—but this is more or less what is happening with AI.[173]
The potential fate of humanity has sometimes been compared to the fate of gorillas threatened by human activities. The comparison holds that greater intelligence allowed humanity to dominate gorillas, which are now vulnerable in ways that they could not have anticipated. As a result, the gorilla has become an endangered species, not out of malice, but simply as collateral damage from human activities.[174]
The skeptic Yann LeCun considers that AGIs will have no desire to dominate humanity and that we should be careful not to anthropomorphize them and interpret their intents as we would for humans. He said that people won't be "smart enough to design super-intelligent machines, yet ridiculously stupid to the point of giving it moronic objectives with no safeguards".[175] On the other side, the concept of instrumental convergence suggests that, almost whatever their goals, intelligent agents will have reasons to try to survive and acquire more power as intermediary steps to achieving these goals; this does not require having emotions.[176]
Many scholars who are concerned about existential risk advocate for more research into solving the "control problem" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximise the probability that their recursively-improving AI would continue to behave in afriendly, rather than destructive, manner after it reaches superintelligence?[177][178]Solving the control problem is complicated by theAI arms race(which could lead to arace to the bottomof safety precautions in order to release products before competitors),[179]and the use of AI in weapon systems.[180]
The thesis that AI can pose existential risk also has detractors. Skeptics usually say that AGI is unlikely in the short-term, or that concerns about AGI distract from other issues related to current AI.[181]FormerGooglefraud czarShuman Ghosemajumderconsiders that for many people outside of the technology industry, existing chatbots and LLMs are already perceived as though they were AGI, leading to further misunderstanding and fear.[182]
Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God.[183] Some researchers believe that the communication campaigns on AI existential risk by certain AI groups (such as OpenAI, Anthropic, DeepMind, and Conjecture) may be an attempt at regulatory capture and to inflate interest in their products.[184][185]
In 2023, the CEOs of Google DeepMind, OpenAI and Anthropic, along with other industry leaders and researchers, issued a joint statement asserting that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."[172]
Researchers from OpenAI estimated that "80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of LLMs, while around 19% of workers may see at least 50% of their tasks impacted".[186][187] They consider office workers to be the most exposed, for example mathematicians, accountants or web designers.[187] AGI could have greater autonomy and a greater ability to make decisions, to interface with other computer tools, and to control robotized bodies.
According to Stephen Hawking, the outcome of automation on the quality of life will depend on how the wealth will be redistributed:[150]
Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality
Elon Musk believes that the automation of society will require governments to adopt auniversal basic income.[188]
|
https://en.wikipedia.org/wiki/Artificial_general_intelligence
|
Preference learning is a subfield of machine learning that focuses on modeling and predicting preferences based on observed preference information.[1] Preference learning typically involves supervised learning using datasets of pairwise preference comparisons, rankings, or other preference information.
The main task in preference learning concerns problems in "learning to rank". According to the type of preference information observed, the tasks are categorized into three main problems in the book Preference Learning:[2]
In label ranking, the model has an instance space $X = \{x_i\}$ and a finite set of labels $Y = \{y_i \mid i = 1, 2, \ldots, k\}$. The preference information is given in the form $y_i \succ_x y_j$, indicating that instance $x$ prefers label $y_i$ over $y_j$. A set of such preference statements is used as training data. The task of the model is to find a preference ranking over the labels for any instance.
It has been observed that some conventional classification problems can be generalised within the label ranking framework:[3] if a training instance $x$ is labeled as class $y_i$, this implies that $\forall j \neq i,\; y_i \succ_x y_j$. In the multi-label case, $x$ is associated with a set of labels $L \subseteq Y$, so the model can extract the preference information $\{y_i \succ_x y_j \mid y_i \in L,\, y_j \in Y \setminus L\}$. A preference model is trained on this information, and the classification result for an instance is simply its top-ranked label.
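A minimal sketch of this reduction in Python, with hypothetical label names (not taken from the cited sources):

# Extracting pairwise label preferences from ordinary classification and
# multi-label training data, as described above (hypothetical label names).
LABELS = ["y1", "y2", "y3", "y4"]  # the finite label set Y

def preferences_from_class(observed_label):
    """A single-class example prefers its observed label over every other label."""
    return [(observed_label, other) for other in LABELS if other != observed_label]

def preferences_from_label_set(relevant_labels):
    """A multi-label example prefers every relevant label over every irrelevant one."""
    irrelevant = [y for y in LABELS if y not in relevant_labels]
    return [(yi, yj) for yi in relevant_labels for yj in irrelevant]

print(preferences_from_class("y2"))              # [('y2', 'y1'), ('y2', 'y3'), ('y2', 'y4')]
print(preferences_from_label_set({"y1", "y3"}))  # pairs such as ('y1', 'y2') and ('y3', 'y4')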
Instance ranking also has an instance space $X$ and a label set $Y$. In this task, the labels have a fixed order $y_1 \succ y_2 \succ \cdots \succ y_k$ and each instance $x_l$ is associated with a label $y_l$. Given a set of instances as training data, the goal is to find the ranking order for a new set of instances.

Object ranking is similar to instance ranking except that no labels are associated with the instances. Given a set of pairwise preferences of the form $x_i \succ x_j$, the model should find a ranking order among the instances.
There are two practical representations of the preference information $A \succ B$. One assigns $A$ and $B$ two real numbers $a$ and $b$ respectively, such that $a > b$. The other assigns a binary value $V(A,B) \in \{0,1\}$ to every pair $(A,B)$, denoting whether $A \succ B$ or $B \succ A$. Corresponding to these two representations, two different techniques are applied in the learning process.

If we can find a mapping from the data to real numbers, ranking the data reduces to ranking real numbers. This mapping is called a utility function. For label ranking the mapping is a function $f : X \times Y \rightarrow \mathbb{R}$ such that $y_i \succ_x y_j \Rightarrow f(x, y_i) > f(x, y_j)$. For instance ranking and object ranking, the mapping is a function $f : X \rightarrow \mathbb{R}$.
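As an illustration of the utility-function approach to object ranking, the following sketch fits a linear utility $f(x) = w \cdot x$ from pairwise preferences using a Bradley–Terry-style pairwise logistic loss; the data are synthetic and the choice of a linear model is an assumption for illustration, not a method prescribed by the sources above.

# Fitting a linear utility function f(x) = w . x for object ranking from
# pairwise preferences x_i > x_j, using a pairwise logistic loss.
# Synthetic data; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])
objects = rng.normal(size=(30, 3))        # objects in X, as feature vectors

# Build training pairs (i, j) meaning "object i is preferred to object j".
pairs = []
for _ in range(200):
    i, j = rng.choice(len(objects), size=2, replace=False)
    pairs.append((i, j) if objects[i] @ true_w > objects[j] @ true_w else (j, i))

# Gradient descent on the pairwise logistic loss -log sigmoid(w . (x_i - x_j)).
w = np.zeros(3)
learning_rate = 0.5
for _ in range(300):
    grad = np.zeros(3)
    for i, j in pairs:
        diff = objects[i] - objects[j]
        p = 1.0 / (1.0 + np.exp(-(w @ diff)))   # P(i preferred to j) under the model
        grad += (p - 1.0) * diff
    w -= learning_rate * grad / len(pairs)

ranking = np.argsort(-(objects @ w))            # sort objects by learned utility
print("learned weights:", np.round(w, 2))
print("highest-utility objects:", ranking[:5])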
Finding the utility function is a regression learning problem[citation needed] which is well developed in machine learning.
The binary representation of preference information is called a preference relation. For each pair of alternatives (instances or labels), a binary predicate can be learned by a conventional supervised learning approach. Fürnkranz and Hüllermeier proposed this approach for the label ranking problem.[4] For object ranking, there is an early approach by Cohen et al.[5]
Using preference relations to predict a ranking is less straightforward. Because observed preference relations may not be transitive due to inconsistencies in the data, a ranking that satisfies all of them may not exist, or many rankings may satisfy them. A more common approach is to find a ranking that is maximally consistent with the preference relations. This approach is a natural extension of pairwise classification.[4]
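One simple way to turn a learned, possibly intransitive preference relation into a single ranking is to score each alternative by the number of pairwise comparisons it wins. The following is a sketch with a hypothetical relation, not a reproduction of the methods in the cited papers:

# Aggregating a possibly intransitive binary preference relation V(A, B)
# into a ranking by counting pairwise wins (hypothetical data).
from itertools import combinations

alternatives = ["a", "b", "c", "d"]

# Learned relation; note the cycle a > b, b > c, c > a, so no ranking
# can satisfy every preference.
prefers = {("a", "b"): 1, ("b", "c"): 1, ("c", "a"): 1,
           ("a", "d"): 1, ("b", "d"): 1, ("c", "d"): 1}

def wins(x, y):
    """True if the learned relation prefers x over y."""
    if (x, y) in prefers:
        return prefers[(x, y)] == 1
    return prefers.get((y, x), 1) == 0

scores = {x: 0 for x in alternatives}
for x, y in combinations(alternatives, 2):
    scores[x if wins(x, y) else y] += 1

ranking = sorted(alternatives, key=scores.get, reverse=True)
print(scores)    # {'a': 2, 'b': 2, 'c': 2, 'd': 0}
print(ranking)   # higher scores rank first; ties are broken arbitrarily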
Preference learning can be used to rank search results according to user preference feedback. Given a query and a set of documents, a learning model is used to rank the documents by their relevance to the query. More discussion of research in this field can be found in Tie-Yan Liu's survey paper.[6]
Another application of preference learning is recommender systems.[7] An online store may analyze customers' purchase records to learn a preference model and then recommend similar products. Internet content providers can use users' ratings to serve content better matched to users' preferences.
|
https://en.wikipedia.org/wiki/Preference_learning
|
Informal learningis characterized "by a low degree of planning and organizing in terms of the learning context, learning support, learning time, and learning objectives".[1]It differs fromformal learning,non-formal learning, andself-regulated learning, because it has no set objective in terms of learning outcomes, but an intent to act from the learner's standpoint (e.g., to solve a problem). Typical mechanisms of informal learning includetrial and errororlearning-by-doing,modeling,feedback, andreflection.[2]For learners this includes heuristic language building, socialization, enculturation, and play. Informal learning is a pervasive ongoing phenomenon of learning via participation or learning via knowledge creation, in contrast with the traditional view of teacher-centered learning via knowledge acquisition. Estimates suggest that about 70-90 percent of adult learning takes place informally and outside educational institutions.[3]
The term is often conflated, however, with non-formal learning, andself-directed learning. It is widely used in the context of corporate training and education in relation to return on investment (ROI), or return on learning (ROL). It is also widely used when referring to science education, in relation to citizen science, or informal science education. The conflated meaning of informal and non-formal learning explicates mechanisms of learning that organically occur outside the realm of traditional instructor-led programs, e.g., reading self-selected books, participating in self-study programs, navigating performance support materials and systems, incidental skills practice, receptivity of coaching or mentoring, seeking advice from peers, or participation incommunities of practice, to name a few. Informal learning is common in communities where individuals have opportunities toobserveand participate in social activities.[4]Advantages of informal learning cited include flexibility and adaptation to learning needs, direct transfer of learning into practice, and rapid resolution of (work-related) problems.[5]For improving employees' performance, task execution is considered the most important source of learning.[6]
Informal learning can be characterized as the following:
The origin of informal learning has been traced back toJohn Deweythrough his theories about learning from experience.[9]The American philosopherMary Parker Follettbroadened the context of informal education from school to all areas of everyday life and described education as a continuous life task. Building on this work by Dewey and Follett, the American educator Eduard C. Lindemann first used the term "informal learning".[10]The term was later introduced byMalcolm Knowleswhen he published his work,Informal Adult Educationin 1950.[9]
At first, informal learning was only delimited from formal school learning andnonformal learningin courses.[11]Marsick and Watkins take up this approach and go one step further in their definition. They, too, begin with the organizational form of learning and call those learning processes informal which are non-formal or not formally organized and are not financed by institutions.[12]An example for a wider approach is Livingstone's definition which is oriented towardsautodidacticand self-directed learning and places special emphasis on the self-definition of the learning process by the learner.[13]Livingstone explained that explicit informal learning is distinguished from tacit informal learning and socialization in the sense that the individual seeks learning in this setting and creates the conditions for it by putting himself in situations or engaging with others so that learning is possible.[14]
As noted above, informal learning is often confused with non-formal learning. Non-formal learning has often been used to describe organized learning outside of the formal education system that is short-term, voluntary, and has few, if any, prerequisites.[15] However, such programs typically have a curriculum and often a facilitator.[15] As stated on the non-formal learning page,[unreliable source] non-formal learning can be seen in various structured learning situations, such as swimming lessons, community-based sports programs, and conference-style seminars.
Merriam et al. in 2007 stated:[16]
Informal learning, Schugurensky (2000) suggests, has its own internal forms that are important to distinguish in studying the phenomenon. He proposes three forms:self-directed learning,incidental learning, andsocialization, or tacit learning. These differ among themselves in terms of intentionality and awareness at the time of the learning experience. Self-directed learning, for example, is intentional and conscious; incidental learning, which Marsick and Watkins (1990) describe as an accidental by-product of doing something else, is unintentional but after the experience she or he becomes aware that some learning has taken place; and finally, socialization or tacit learning is neither intentional nor conscious (although we can become aware of this learning later through 'retrospective recognition') (Marsick, & Watkins, 1990, p. 6)
In 2012, Bennett extended Schugurensky's 2000 conceptualization of informal learning by recommending four modes of informal learning:[17]
Drawing upon implicit processing literature, she further defined integrative learning as "a learning process that combines intentional nonconscious processing of tacit knowledge with conscious access to learning products and mental images"[17]: 4 and she theorized two possible sub-processes: knowledge shifting and knowledge sublimation, which describe the limited access learners have to tacit knowledge.
However, the assumption that informal learning can also be non-intentional contradicts more recent definitions of informal learning.[2][3]If the learning person has a learning goal in mind and independently monitors goal achievement, it isself-regulated learning.[18]
People in many Indigenous communities of the Americas often learn through observation and participation in everyday life of their respective communities and families. Barbara Rogoff, a professor of psychology, and her colleagues describe the ways in which children in Indigenous communities can learn by observing and participating in community endeavors, having an eagerness to contribute, fulfilling valuable roles, and finding a sense of belonging in their community.[19]These learning experiences rely on children's incorporation in the community and the child's attentiveness. This form of informal learning allows the children to collaborate in social endeavors, which grants the child the opportunity to learn by pitching in.
Learning occurs through socialization processes in one's culture and community.[20]Learning by observing and pitching in (LOPI) is an Informal learning model often seen in many Indigenous communities of the Americas.[20]Children can be seen participating alongside adults in many daily activities within the community. An example is the process where children learn slash-and-burn agriculture by being present in the situation and contributing when possible.[21]Noteworthy is children's own initiative and assumption of responsibility to perform tasks for the households' benefit. Many Indigenous communities provide self-paced opportunities to kids, and allow exploration and education without parental coercion. Collaborative input is highly encouraged and valued.[22]Both children and adults are actively involved in shared endeavors. Their roles as learner and expert are flexible, while the observer participates with active concentration.[23]Indigenous ways of learning include practices such asobservation, experiential learning, and apprenticeship.[24]
Child work, alongside and combined with play, occupies an important place in American Indigenous children's time and development. The example of a Navajo girl who assists her mother in weaving and eventually becomes a master weaver herself illustrates how the child's presence and the availability of these activities allow the child to learn through observation.[25] Children start at the periphery, observing and imitating those around them, before moving into the center of activities under supervision and guidance. An example of a two-year-old Indigenous Mexican girl participating in a digging-the-holes project with her mother highlights children's own initiative to help after watching, and their enthusiasm to share the task with family and community.[26] Work is part of a child's development from an early age, starting with simple tasks that merge with play and develop into various kinds of useful work.[27] The circumstances of everyday routine create opportunities for the culturally meaningful activities and sensitive interactions on which a child's development depends.[28] Children of the Chillihuani observe their environment as a place of respect, and learn from observation. Many of them become herders through informal learning by observation.[29]
Children in Nicaragua often learn to work the land or to become street vendors by watching other individuals in their community perform these activities.[30] Such activities provide opportunities for children to learn and develop through forms of social learning that are made up of everyday experiences rather than a deliberate curriculum, and that take place in the ordinary settings in which children's social interaction and behavior occur. Informal learning for children in American Indigenous communities can take place at work, where children are expected to contribute.[31]
In terms of the cultural variation between traditional Indigenous American and European-American middle class, the prevalence of nonverbal communication can be viewed as being dependent on each culture's definition of achievement. Often in mainstream middle-class culture, success in school and work settings is gained through practicing competitiveness and working for personal gain.[32]The learning and teaching practices of traditional Indigenous Americans generally prioritize harmony and cooperation over personal gain. In order to achieve mutual respect in teachings, what is often relied on in Indigenous American culture is nonverbal communication.[33]
Nonverbal communication in Indigenous communities creates pathways of knowledge by watching and then doing.[34] An example where nonverbal behavior can be used as a learning tool can be seen in Chillihuani culture. Children in this community learn about growing crops by observing the actions of adults and the respect adults have for the land. They learn that caring for their crops is vital for the crops to grow and, in turn, for the community to thrive. Similarly, when children participate in rituals, they learn the importance of being part of the community by watching how everyone interacts. This again requires no explicit verbal communication; it relies solely on observing the surrounding world. Chillihuani culture does not explicitly verbalize expectations. Knowledge is experienced rather than explained, through behavior modeled for the benefit of the community.[35]
In the Indigenous culture of the Matsigenka, infants are kept in close proximity to their mother and members of the community and do not go far from the mother at any time. Even so, the child is encouraged to explore away from the mother while family members keep watch. As the child wanders, he may come to a place that is unknown and potentially dangerous, but the mother will not stop him; she will simply watch as he explores. The lack of verbal reprimand or warning from an adult or elder enables the child to assimilate his surroundings more carefully.[36]
To fully understand informal learning it is useful to define the terms "formal" and "informal" education. Formal education can be defined as a setting that is highly institutionalized and possibly bureaucratic, that is curriculum-driven, and that is formally recognized with grades, diplomas, or other forms of certification.[15] Informal education is closely tied to informal learning, which occurs in a variety of places, such as at home, at work, and through daily interactions and shared relationships among members of society. Informal learning often takes place outside educational establishments and does not follow a specified curriculum; it may originate accidentally or sporadically, in association with certain occasions, although that is not always the case. Informal education can occur in the formal arena when concepts are adapted to the unique needs of individual students.
Merriam and others (2007) state: "studies of informal learning, especially those asking about adults' self-directed learning projects, reveal that upwards of 90 percent of adults are engaged in hundreds of hours of informal learning. It has also been estimated that the great majority (upwards of 70 percent) of learning in the workplace is informal ... although billions of dollars each year are spent by business and industry on formal training programs".[16]Both formal and informal learning are considered integral processes for Virtual Human Resource Development,[37]with informal learning the stronger form.
Coffield[38]: 1 uses the metaphor of an iceberg to illustrate the dominant status of informal learning, which at the same time has much lower visibility in the education sector compared to formal learning: the part of the iceberg that is visibly above the water surface and makes up one third represents formal learning; the two thirds below the water surface that are invisible at first glance represent informal learning. While formal learning can be compared to a bus ride—the route is predetermined and the same for all passengers—informal learning is more like a ride on a bicycle, where the person riding can determine the route and speed individually.[40]
Informal knowledge is information that has not been externalized or captured, and the primary locus of the knowledge may be inside someone's head.[41] For example, in the case of language acquisition, a mother may teach a child basic concepts of grammar and language at home, prior to the child entering a formal education system.[42] In such a case, the mother has a tacit understanding of language structures, syntax and morphology, but she may not be explicitly aware of what these are. She understands the language and passes her knowledge on to her offspring.
Other examples of informal knowledge transfer include instant messaging, a spontaneous meeting on the Internet, a phone call to someone who has information you need, a live one-time-only sales meeting introducing a new product, a chat-room in real time, a chance meeting by the water cooler, a scheduled Web-based meeting with a real-time agenda, a tech walking you through a repair process, or a meeting with your assigned mentor or manager.
Experience indicates that much of the learning for performance is informal.[43]Those who transfer their knowledge to a learner are usually present in real time. Such learning can take place over the telephone or through the Internet, as well as in person.
In the UK, the government formally recognized the benefits of informal learning in "The Learning Revolution" White Paper published on March 23, 2009.[44]The Learning Revolution Festival ran in October 2009 and funding has been used by libraries—which offer a host of informal learning opportunities such as book groups, "meet the author" events and family history sessions—to run activities such as The North East Festival of Learning.[45]
40% of adults have taught themselves something at some point, and respondents in a survey indicated that they were twice as likely to participate in independent learning as in traditional learning.[46] The average adult spends 10 hours a week (500 hours a year) on informal learning practices.[46] As a whole, this type of knowledge is more learner-centered and situational, arising in response to the interests of a particular workforce or the skills it needs to apply. Formal training programs have limited success in increasing basic skills for individuals older than 25, so these individuals rely mostly on on-the-job training.
Although rates of formal education have increased, many adults entering the workforce lack the basic math, reading and interpersonal skills that the "unskilled" labor force requires.[47] The lines between formal and informal learning have been blurred by higher rates of college attendance. The largest increase in the population doing manual or low-skilled labor is among individuals who attended college but did not receive a degree. A recent collection of cross-sectional surveys polled employers across the United States to gauge which skills are required for jobs that do not require college degrees. These surveys concluded that 70% of such jobs require some kind of customer service, 61% require reading or writing paragraphs, 65% require math, and 51% require the use of computers. With regard to training and academic credentials, 71% require a high school diploma and 61% require specific vocational experience.[47] The rate of men entering the low-skilled labor force has remained static over the last fifty years, shifting by less than 1%. Women's participation in the unskilled labor force has steadily increased, and projections indicate that this trend will continue.
The majority of companies that provide training are currently involved only with the formal side of the continuum. Most of today's investments are on the formal side. The net result is that companies spend the most money on the smallest part—25%—of the learning equation. The other 75% of learning happens as the learner creatively "adopts and adapts to ever changing circumstances". The informal piece of the equation is not only larger, it's crucial to learning how to do anything.
Managers often wonder how they can promote informal learning of their employees. However, a direct support of informal learning is considered difficult, because learning happens within the work process and cannot be planned by companies.[48]An indirect support of learning by providing a positive learning environment is however possible.Social supportby colleagues and managers should be mentioned in particular. More experienced colleagues can act as learning experts andmentors.[3]Managers can act as role models with regard to obtaining and offering feedback on their own work performance. Admitting own failures and dealing with failures constructively also encourages employees to take advantage of learning opportunities at work.[49]
Lifelong learning, as defined by theOECD, includes a combination of formal, non-formal and informal learning. Of these three, informal learning may be the most difficult to quantify or prove, but it remains critical to an individual's overall cognitive and social development throughout the lifespan.
|
https://en.wikipedia.org/wiki/Informal_learning
|
Personal information management(PIM) is the study and implementation of the activities that people perform in order to acquire or create, store, organize, maintain, retrieve, and useinformationalitems such asdocuments(paper-based and digital),web pages, andemailmessages for everyday use to complete tasks (work-related or not) and fulfill a person's various roles (as parent, employee, friend, member of community, etc.);[1][2]it isinformation managementwith intrapersonal scope.Personal knowledge managementis by some definitions a subdomain.
One ideal of PIM is that people should always have the right information in the right place, in the right form, and of sufficient completeness and quality to meet their current need. Technologies and tools can help so that people spend less time with time-consuming and error-prone clerical activities of PIM (such as looking for and organising information). But tools and technologies can also overwhelm people with too much information leading toinformation overload.
A special focus of PIM concerns how people organize and maintain personal information collections, and methods that can help people in doing so. People may manage information in a variety of settings, for a variety of reasons, and with a variety of types of information. For example, a traditional office worker might manage physical documents in a filing cabinet by placing them in hanging folders organized alphabetically by project name. More recently, this office worker might organize digital documents into the virtual folders of a local, computer-basedfile systemor into a cloud-based store using afile hosting service(e.g.,Dropbox,Microsoft OneDrive,Google Drive). People manage information in many more private, personal contexts as well. A parent may, for example, collect and organize photographs of their children into a photo album which might be paper-based or digital.
PIM considers not only the methods used to store and organize information, but also is concerned with how peopleretrieve informationfrom their collections for re-use. For example, the office worker might re-locate a physical document by remembering the name of the project and then finding the appropriate folder by an alphabetical search. On a computer system with ahierarchical file system, a person might need to remember the top-level folder in which a document is located, and then browse through the folder contents to navigate to the desired document. Email systems often support additional methods for re-finding such as fielded search (e.g., search by sender, subject, date). The characteristics of the document types, the data that can be used to describe them (meta-data), and features of the systems used to store and organize them (e.g. fielded search) are all components that may influence how users accomplish personal information management.
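For illustration, fielded re-finding of the kind described above can be sketched as a filter over a small collection of email-like records; the records, field names, and addresses here are hypothetical and are not drawn from any particular email system:

# A sketch of fielded search (by sender, subject keyword, and date) over a
# small, hypothetical collection of email-like records.
from datetime import date

messages = [
    {"sender": "alice@example.com", "subject": "Project Alpha budget", "date": date(2024, 3, 2)},
    {"sender": "bob@example.com",   "subject": "Lunch on Friday?",     "date": date(2024, 3, 5)},
    {"sender": "alice@example.com", "subject": "Alpha kickoff notes",  "date": date(2024, 4, 1)},
]

def fielded_search(items, sender=None, subject_contains=None, since=None):
    """Return the items that match every field constraint that was supplied."""
    results = items
    if sender is not None:
        results = [m for m in results if m["sender"] == sender]
    if subject_contains is not None:
        results = [m for m in results if subject_contains.lower() in m["subject"].lower()]
    if since is not None:
        results = [m for m in results if m["date"] >= since]
    return results

# Re-find Alice's messages about "alpha" sent since mid-March.
print(fielded_search(messages, sender="alice@example.com",
                     subject_contains="alpha", since=date(2024, 3, 15)))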
The purview of PIM is broad. A person's perception of and ability to effect change in the world is determined, constrained, and sometimes greatly extended, by an ability to receive, send and otherwise manage information.
Research in the field of personal information management has considered six senses in which information can be personal (to "me") and so an object of that person's PIM activities:[2]
An encyclopaedic review of PIM literature suggests that all six senses of personal information listed above and the tools and technologies used to work with such information (from email applications and word processors topersonal information managersandvirtual assistants) combine to form apersonal space of information(PSI, pronounced as in theGreek letter, alternately referred to as apersonal information space) that is unique for each individual.[3]Within a person's PSI arepersonal information collections(PICs) or, simply, collections. Examples include:
Activities of PIM – i.e., the actions people take to manage information that is personal to them in one or more of the ways listed above – can be seen as an effort to establish, use, and maintain a mapping between information and need.[2]
Two activities of PIM occur repeatedly throughout a person's day and are often prompted by external events.
Meta-level activities focus more broadly on aspects of the mapping itself.
PIM activities overlap with one another. For example, the effort to keep an email attachment as a document in a personal file system may prompt an activity to organize the file system e.g., by creating a new folder for the document. Similarly, activities to organize may be prompted by a person's efforts to find a document as when, for example, a person discovers that two folders have overlapping content and should be consolidated.
Meta-level activities overlap not only with finding and keeping activities but, even more so, with each other. For example, efforts to re-organize a personal file system can be motivated by the evaluation that the current file organization is too time-consuming to maintain and doesn't properly highlight the information most in need of attention.
Information sent and received takes many different information forms in accordance with a growing list of communication modes, supporting tools, and people's customs, habits, and expectations. People still send paper-based letters, birthday cards, and thank you notes. But increasingly, people communicate using digital forms of information including emails, digital documents shared (as attachments or via afile hosting servicesuch asDropbox),blog postsandsocial mediaupdates (e.g., using a service such asFacebook),text messagesand links, text, photos, and videos shared via services such asTwitter,Snapchat,Reddit, andInstagram.
People work with information items as packages of information with properties that vary depending upon the information form involved. Files, emails, "tweets", Facebook updates, blog posts, etc. are each examples of the information item. The ways in which an information item can be manipulated depend upon its underlying form. Items can be created but not always deleted (completely). Most items can be copied, sent and transformed as in, for example, when a digital photo is taken of a paper document (transforming from paper to digital) and then possibly further transformed as when optical character recognition is used to extract text from the digital photo, and then transformed yet again when this information is sent to others via a text message.
Information fragmentation[4][2]is a key problem of PIM often made worse by the many information forms a person must work with. Information is scattered widely across information forms on different devices, in different formats, in different organizations, with different supporting tools.
Information fragmentation creates problems for each kind of PIM activity. Where to keep new information? Where to look for (re-find) information already kept? Meta-level activities, such as maintaining and organizing, are also more difficult and time-consuming when different stores on different devices must be separately maintained. Problems of information fragmentation are especially manifest when a person must look across multiple devices and applications to gather together the information needed to complete a project.[5]
PIM is a new field with ancient roots. When theoralrather than the written word dominated, human memory was the primary means for information preservation.[6]As information was increasingly rendered in paper form, tools were developed over time to meet the growing challenges of management. For example, the verticalfiling cabinet, now such a standard feature of home and workplace offices, was first commercially available in 1893.[7]
With the increasing availability of computers in the 1950s came an interest in the computer as a source of metaphors and a test bed for efforts to understand the human ability toprocess informationand tosolve problems.NewellandSimonpioneered the computer's use as a tool to model human thought.[8][9]They produced "TheLogic Theorist", generally thought to be the first running artificial intelligence (AI) program. The computer of the 1950s was also an inspiration for the development of an information processing approach to human behavior and performance.[10]
After the 1950s research showed that the computer, as a symbol processor, could "think" (to varying degrees of fidelity) like people do, the 1960s saw an increasing interest in the use of the computer to help people to think better and to process information more effectively. Working withAndries van Damand others,Ted Nelson, who coined the word "hypertext",[11]developed one of the first hypertext systems, The Hypertext Editing System, in 1968.[12]That same year,Douglas Engelbartalso completed work on a hypertext system called NLS (oN-Line System).[13]Engelbart advanced the notion that the computer could be used to augment the human intellect.[14][15]As heralded by the publication ofUlric Neisser's bookCognitive Psychology,[16]the 1960s also saw the emergence of cognitive psychology as a discipline that focused primarily on a better understanding of the human ability to think, learn, and remember.
The computer as aid to the individual, rather than remotenumber cruncherin a refrigerated room, gained further validity from work in the late 1970s and through the 1980s to producepersonal computersof increasing power and portability. These trends continue:computational powerroughly equivalent to that of adesktop computerof a decade ago can now be found in devices that fit into the palm of a hand.
The phrase "Personal Information Management" was itself apparently first used in the 1980s in the midst of general excitement over the potential of the personal computer to greatly enhance the human ability to process and manage information.[17]The 1980s also saw the advent of so-called "PIM tools" that provided limited support for the management of such things as appointments and scheduling, to-do lists, phone numbers, and addresses. A community dedicated to the study and improvement of human–computer interaction also emerged in the 1980s.[18][19]
As befits the "information" focus of PIM, PIM-relevant research of the 1980s and 1990s extended beyond the study of a particular device or application towards larger ecosystems of information management to include, for example, the organization of the physical office and the management of paperwork.[20][21]Malone characterized personal organization strategies as 'neat' or 'messy' and described 'filing' and 'piling' approaches to the organization of information.[22]Other studies showed that people vary their methods for keeping information according to anticipated uses of that information in the future.[23]Studies explored the practical implications that human memory research might carry in the design of, for example, personal filing systems,[24][25][26]and information retrieval systems.[27]Studies demonstrated a preference for navigation (browsing, "location-based" finding) in the return to personal files,[28]a preference that endures today notwithstanding significant improvements in search support[29][30][31][32]and an increasing use of search as the preferred method of return to e-mails.[33][34]
PIM, as a contemporary field of inquiry with a self-identified community of researchers, traces its origins to a Special Interest Group (SIG) session on PIM at the CHI 2004 conference and to a specialNational Science Foundation(NSF)-sponsored workshop held in Seattle in 2005.[35][36]
Much PIM research can be grouped according to the PIM activity that is the primary focus of the research. These activities are reflected in the two main models of PIM, i.e., that primary PIM activities are finding/re-finding, keeping and meta-level activities[37][2](see sectionActivities of PIM) or, alternatively, keeping, managing, and exploiting.[38][39]Important research is also being done under several special topics: personality, mood, and emotion as both impacting and impacted by a person's practice of PIM; the management of personal health information; and the management of personal information over the long run and for legacy.
Throughout a typical day, people repeatedly experience the need for information in amounts large and small (e.g., "When is my next meeting?"; "What's the status of the budget forecast?"; "What's in the news today?"), prompting activities to find and re-find.
A large body of research ininformation seeking,information behavior, andinformation retrievalrelates, especially to efforts to find information in public spaces such as the Web or a traditional library. There is a strong personal component even in efforts to find new information, never before experienced, from a public store such as the Web. For example, efforts to find information may be directed by a personally created outline, a self-addressed email reminder, or a to-do list. In addition, information inside a person's PSI can be used to support a more targeted, personalized search of the web.[40]
A person's efforts to find useful information are often a sequence of interactions rather than a single transaction. Under a "berry picking" model of finding, information is gathered in bits and pieces through a series of interactions, and during this time, a person's expression of need, as reflected in the current query, evolves.[41]People may favor a stepwise approach to finding needed information to preserve a greater sense of control and context over the finding process, and smaller steps may also reduce the cognitive burden associated with query formulation.[42]In some cases, there simply is not a "direct" way to access the information. For example, a person's remembrance of a needed Web site may only be through an email message sent by a colleague (i.e., the person may not recall a Web address nor even keywords that might be used in a Web search, but does recall that the Web site was mentioned recently in an email from that colleague).
People may find (rather than re-find) information even when this information is ostensibly under their control. For example, items may be "pushed" into the PSI (e.g., via the inbox, podcast subscriptions, downloads). If these items are discovered later, it is through an act of finding not re-finding (since the person has no remembrance for the information).
Lansdale[17]characterized the retrieval of information as a two-step process involving interplay between actions torecallandrecognize. The steps of recall and recognition can iterate to progressively narrow the efforts to find the desired information. This interplay happens, for example, when people move through a folder hierarchy to a desired file or e-mail message or navigate through a website to a desired page.
But re-finding begins first with another step:Rememberto look in the first place. People may take the trouble to create Web bookmarks or to file away documents and then forget about this information so that, in the worst case, the original effort is wasted.[43][44][45][46]
Also, finding/re-finding often means not just assembling a single item of information but rather a set of information. The person may need torepeatthe finding sequence several times. A challenge in tool support is to provide people with ways to group or interrelate information items so that their chances improve of retrieving a complete set of the information needed to complete a task.[3]
Over the years, PIM studies have determined that people prefer to return to personal information, most notably the information kept in personal digital files, by navigating rather than searching.[28][30][32]
Support for searching personal information has improved dramatically over the years most notably in the provision for full-text indexing to improve search speed.[47]With these improvements, preference may be shifting to search as a primary means for locating email messages (e.g., search on subject or sender, for messages not in view).[48][49]
However, a preference persists for navigation as the primary means of re-finding personal files (e.g., stepwise folder traversal; scanning a list of files within a folder for the desired file), notwithstanding ongoing improvements in search support.[30]The enduring preference for navigation as a primary means of return to files may have a neurological basis[50]i.e., navigation to files appears to use mental facilities similar to those people use to navigate in the physical world.
Preference for navigation is also in line with aprimacy effectrepeatedly observed in psychological research, such that the preferred method of return aligns with initial exposure. Under afirst impressionshypothesis, if a person's initial experience with a file included its placement in a folder, where the folder itself was reached by navigating through a hierarchy of containing folders, then the person will prefer a similar method – navigation – for return to the file later.[49]
There have been some prototyping efforts to explore in-context creation (e.g., creation in the context of a project the person is working on) of not only files but also other forms of information such as web references and email.[51]Prototyping efforts have also explored ways to improve support for navigation, e.g., by highlighting, and otherwise making it easier to follow, the paths people are more likely to take in their navigation back to a file.[52]
Many events of daily life are roughly the converse of finding events: People encounter information and try to determine what, if anything, they should do with this information, i.e., people must match the information encountered to current or anticipated needs. Decisions and actions relating to encountered information are collectively referred to as keeping activities.
The ability to effectively handle information that is encountered by happenstance is essential to a person's ability to discover new material and make new connections.[53]People also keep information that they have actively sought but do not have time to process currently. A search on the web, for example, often produces much more information than can be consumed in the current session. Both the decision to keep this information for later use and the steps to do so are keeping activities.
Keeping activities are also triggered when people are interrupted during a current task and look for ways of preserving the current state so that work can be quickly resumed later.[54]People keep appointments by entering reminders into a calendar and keep good ideas or "things to pick up at the grocery store" by writing down a few cryptic lines on a loose piece of paper. People keep not only to ensure they have the information later, but also to build reminders to look for and use this information. Failure to remember to use information later is one kind ofprospective memoryfailure.[55]In order to avoid such a failure, people may, for example, self-e-mail a web page reference in addition to or instead of making a bookmark because the e-mail message with the reference appears in the inbox where it is more likely to be noticed and used.[56]
The keeping decision can be characterized as a signal detection task subject to errors of two kinds: 1) an incorrect rejection ("miss") when information is ignored that later is needed and should have been kept (e.g., proof of charitable donations needed now to file a tax return) and 2) a false positive when information kept as useful (incorrectly judged as "signal") turns out not to be used later.[57]Information kept and never used only adds to the clutter – digital and physical – in a person's life.[58]
Keeping can be a difficult and error-prone effort. Filing, i.e., placing information items such as paper documents, digital documents, and emails into folders, can be especially so.[59][60]To avoid or delay filing information (e.g., until more is known concerning where the information might be used), people may opt to put information in "piles" instead.[22](Digital counterparts to physical piling include leaving information in the email inbox or placing digital documents and web links into a holding folder such as "stuff to look at later".) But information kept in a pile, physical or virtual, is easily forgotten as the pile fades into a background of clutter, and research indicates that a typical person's ability to keep track of different piles, by location alone, is limited.[61]
Tagging provides another alternative to filing information items into folders. A strict folder hierarchy does not readily allow for the flexible classification of information even though, in a person's mind, an information item might fit in several different categories.[62]A number of tag-related prototypes for PIM have been developed over the years.[63][64]A tagging approach has also been pursued in commercial systems, most notably Gmail (as "labels"), but the success of tags so far is mixed. Bergman et al. found that users, when provided with options to use folders or tags, preferred folders to tags and, even when using tags, they typically refrained from adding more than a single tag per information item.[65][66]Civan et al., through an engagement of participants in critical, comparative observation of both tagging and the use of folders were able to elicit some limitations of tagging not previously discussed openly such as, for example, that once a person decides to use multiple tags, it is usually important to continue doing so (else the tag not applied consistently becomes ineffective as a means of retrieving a complete set of items).[67]
Technologies may help to reduce the costs, in personal time and effort, of keeping and the likelihood of error. For example, the ability to take a digital photo of a sign, billboard announcement or the page of a paper document can obviate the task of otherwise transcribing (or photocopying) the information.
A person's ongoing use of a smartphone through the day can create a time-stamped record of events as a kind of automated keeping, especially of information "experienced by me" (see section, "The senses in which information is personal"), with potential use in a person's efforts to journal or to return to information previously experienced ("I think I read the email while in the taxi on the way to the airport...").Activity trackingtechnology can further enrich the record of a person's daily activity, with considerable potential for people to better understand their daily lives and the healthiness of their diet and activities.[68]
Technologies to automate the keeping of personal information segue to personal informatics and thequantified selfmovement and, in the extreme, to life logging as a "total capture" of information.[69]Tracking technologies raise serious issues of privacy (see "Managing privacy and the flow of information"). Additional questions arise concerning the utility and even the practical accessibility of "total capture".[70]
Activities of finding and, especially, keeping can segue into activities to maintain and organize as when, for example, efforts to keep a document in the file system prompt the creation of a new folder or efforts to re-find a document highlight the need to consolidate two folders with overlapping content and purpose.
Differences between people are especially apparent in their approaches to the maintenance and organization of information. Malone[22]distinguished between "neat" and "messy" organizations of paper documents. "Messy" people had more piles in their offices and appeared to invest less effort than "neat" people in filing information. Comparable differences have been observed in the ways people organize digital documents, emails, and web references.[71]
Activities of keeping correlate with activities of organizing so that, for example, people with more elaborate folder structures tend to file information more often and sooner.[71]However, people may be selective in the information forms for which they invest efforts to organize. The schoolteachers who participated in one study, for example, reported having regular "spring cleaning" habits for organization and maintenance of paper documents but no comparable habits for digital information.[72]
Activities of organization (e.g., creating and naming folders) segue into activities of maintenance such as consolidating redundant folders, archiving information no longer in active use, and ensuring that information is properlybacked upand otherwisesecured. (See also section, "Managing privacy and the flow of information").
Studies of people's folder organizations for digital information indicate that these have uses going far beyond the organization of files for later retrieval. Folders are information in their own right – representing, for example, a person's evolving understanding of a project and its components. A folder hierarchy can sometimes represent an informal problem decomposition with a parent folder representing a project and subfolders representing major components of the project (e.g., "wedding reception" and "church service" for a "wedding" project).[73]
However, people generally struggle to keep their information organized[74]and often do not have reliable backup routines.[75]People have trouble maintaining and organizing many distinct forms of information (e.g., digital documents, emails, and web references)[76]and are sometimes observed to make special efforts to consolidate different information forms into a single organization.[56]
With ever increasing stores of personal digital information, people face challenges ofdigital curationfor which they are not prepared.[77][78][79]At the same time, these stores offer their owners the opportunity, with the right training and tool support, forexploitationof their information in new, useful ways.[80]
Empirical observations of PIM studies motivate prototyping efforts towards information tools to provide better support for the maintenance, organization and, going further, curation of personal information. For example,GrayArea[81]applies the demotion principle of the user-subjective approach to allow people to move less frequently used files in any given folder to a gray area at the bottom end of the listing of this folder. These files can still be accessed but are less visible and so less distracting of a person's attention.
ThePlanz[51]prototype supports an in-context creation and integration of project-related files, emails, web references, informal notes and other forms of information into a simplified, document-like interface meant to represent the project with headings corresponding to folders in the personal file system and subheadings (for tasks, sub-projects, or other project components) corresponding to subfolders. The intention is that a single, useful organization should emerge incidentally as people focus on the planning and completion of their projects.
People face a continual evaluation of tradeoffs in deciding what information "flows" into and out of their PSI. Each interaction poses some degree of risk to privacy and security. Letting out information to the wrong recipients can lead toidentity theft. Letting in the wrong kind of information can mean that a person's devices are "infected" and the person's data is corrupted or "locked" forransom. By some estimates, 30% or more of the computers in the United States are infected.[82]However, the exchange of information, incoming and outgoing, is an essential part of living in the modern world. To order goods and services online, people must be prepared to "let out" their credit card information. To try out a potentially useful, new information tool, people may need to "let in" a download that could potentially make unwelcome changes to the web browser or the desktop. Providing for adequate control over the information coming into and out of a PSI is a major challenge. Even more challenging is designing a user interface that makes clear the implications of various privacy choices, particularly regardingInternet privacy. What, for example, are the personal information privacy implications of clicking the "Sign Up" button for use of social media services such as Facebook?[83]
People seek to understand how they might improve various aspects of their PIM practices with questions such as "Do I really need to keep all this information?"; "Is this tool (application, applet, device) worth the troubles (time, frustration) of its use?" and, perhaps most persistent, "Where did the day go? Where has the time gone? What did I accomplish?". These last questions may often be voiced in reflection, perhaps on the commute home from work at the end of the workday.
But there is increasing reason to expect that answers will be based on more than remembrance and reflection. Increasingly, data captured incidentally and automatically over the course of a person's day, through the person's interactions with various information tools and various forms of information (files, emails, texts, pictures, etc.), can be brought to bear in evaluations of a person's PIM practice and in the identification of possible ways to improve.[84]
Efforts to make sense of information represent another set of meta-level activities that operate on personal information and the mapping between information and need. People must often assemble and analyze a larger collection of information to decide what to do next. "Which job applicant is most likely to work best for us?", "Which retirement plan to choose?", "What should we pack for our trip?". These and many other decisions are generally based not on a single information item but on a collection of information items – documents, emails (e.g., with advice or impressions from friends and colleagues), web references, etc.
Making sense of information is "meta" not only for its broader focus on information collections but also because it permeates most PIM activity even when the primary purpose may ostensibly be something else. For example, as people organize information into folders, ostensibly to ensure its subsequent retrieval, people may also be making sense and coming to a deeper understanding of this information.
Personalityandmoodcan impact a person's practice of PIM and, in turn, a person's emotions can be impacted by the person's practice of PIM.
In particular,personality traits(e.g., "conscientiousness" or "neuroticism") have, in certain circumstances, been shown to correlate with the extent to which a person keeps and organizes information into a personal archive such as a personal filing system.[85]However, another recent study found personality traits were not correlated with any aspects of personal filing systems, suggesting that PIM practices are influenced less by personality than by external factors such as the operating system used (i.e. Mac OS or Windows), which were seen to be much more predictive.[86]
Aside from the correlation between practices of PIM and more enduring personality traits, there is evidence to indicate that a person's (more changeable) mood impacts activities of PIM so that, for example, a person experiencing negative moods, when organizing personal information, is more likely to create a structure with more folders where folders, on average, contain fewer files.[87]
Conversely, the information a person keeps or routinely encounters (e.g., via social media), can profoundly impact a person's mood. Even as explorations continue into the potential for the automatic, incidental capture of information (see sectionKeeping) there is growing awareness for the need to design for forgetting as well as for remembrance as, for example, when a person realizes the need to dispose of digital belongings in the aftermath of a romantic breakup or the death of a loved one.[88]
Beyond the negative feelings induced by information associated with a failed relationship, people experience negative feelings about their PIM practices per se. People are shown, in general, to experience anxiety and dissatisfaction with respect to their personal information archives, including concerns about the possible loss of the information as well as concerns about their ability and effectiveness in managing and organizing it.[89][90]
Traditionally, personal health information resides in variousinformation systemsin healthcare institutions (e.g., clinics, hospitals, insurance providers), often in the form ofmedical records. People often have difficulty managing or even navigating a variety of paper orelectronic medical recordsacross multiple health services in different specializations and institutions.[91]Also referred to aspersonal health records, this type of personal health information usually requires people (i.e., patients) to engage in additional PIM finding activities to locate and gain access to health information and then to generate a comprehensible summary for their own use.
With the rise of consumer-facing health products includingactivity trackersand health-relatedmobile apps, people are able to access new types of personal health data (e.g., physical activity, heart rate) outside healthcare institutions. PIM behavior also changes. Much of the effort to keep information is automated. But people may experience difficulties making sense of and using the information later, e.g., to plan future physical activities based on activity tracker data. People are also frequently engaged in other meta-level activities, such as maintaining and organizing (e.g., syncing data across different health-related mobile apps).[92]
The purpose of PIM study is both descriptive and prescriptive. PIM research seeks to understand what people do now and the problems they encounter i.e., in the management of information and the use of information tools. This understanding is useful on its own but should also have application to understand what might be done in techniques, training and, especially, tool design to improve a person's practice of PIM.
The nature of PIM makes its study challenging.[93]The techniques and preferred methods of a person's PIM practice can vary considerably with information form (e.g., files vs. emails) and over time.[71][49][94]Theoperating systemand the defaultfile managerare also shown to impact PIM practices especially in the management of files.[32][95]A person's practice is also observed to vary in significant ways with gender, age and current life circumstances.[96][97][98][99]Certainly, differences among people on different sides of the so-called "digital divide" will have profound impact on PIM practices. And, as noted in section "Personality, mood, and emotion", personality traits and even a person's current mood can impact PIM behavior.
For research results to generalize, or else to be properly qualified, PIM research, at least in aggregate, should include the study of people, with a diversity of backgrounds and needs, over time as they work in many different situations, with different forms of information and different tools of information management.
At the same time, PIM research, at least in initial exploratory phases, must often be done in situ (e.g., in a person's workplace or office or at least where people have access to their laptops, smartphones and other devices of information management) so that people can be observed as they manage information that is "personal" to them (see section "The senses in which information is personal"). Exploratory methods are demanding in the time of both observer and participant and can also be intrusive for the participants. Consequently, the number and nature of participants is likely to be limited i.e., participants may often be people "close at hand" to the observer as family, friends, colleagues or other members of the observer's community.
For example, theguided tour, in which the participant is asked to give an interviewer a "tour" of the participant's various information collections (e.g., files, emails, Web bookmarks, digital photographs, paper documents, etc.), has proven a very useful, but expensive method of study with results bound by caveats reflecting the typically small number and narrow sampling of participants.
The guided tour method is one of several methods that are excellent for exploratory work but expensive and impractical to do with a larger, more diverse sampling of people. Other exploratory methods include the use ofthink aloud protocolscollected, for example, as a participant completes a keeping or finding task,[56]and theexperience sampling methodwherein participants report on their PIM actions and experiences over time possibly as prompted (e.g., by a beep or a text on a smartphone).
A challenge is to combine, within or across studies, time-consuming (and often demographically biased) methods of exploratory observation with other methods that have broader, more economical reach. The exploratory methods bring out interesting patterns; the follow-on methods add in numbers and diversity of participants. Among these methods are:
TheDelphi techniquefor achieving consensus has also been used to leverage the expertise and experience of PIM researchers as a means of extending, indirectly, the number and diversity of PIM practices represented.[102]
The purview of PIM tool design applies to virtually any tool people use to work with their information, from "sticky notes" andhanging foldersfor paper-based information to a wide range of computer-based applications for the management of digital information, ranging from applications people use every day such asWeb browsers,email applicationsandtexting applicationsto personal information managers.
With respect to methods for the evaluation of alternatives in PIM tool design, PIM researchers again face an "in situ" challenge. How to evaluate an alternative, as nearly as possible, in the working context of a person's PSI? One "let it lie" approach[103]would provide forinterfacesbetween the tool under evaluation and a participant's PSI so that the tool can work with a participant's other tools and the participant's personal information (as opposed to working in a separate environment with "test" data). Dropbox and other file hosting services exemplify this approach: Users can continue to work with their files and folders locally on their computers through the file manager even as an installed applet works to seamlessly synchronize the user's files and folders with a Web store for the added benefits of a backup and options to synchronize this information with other devices and share this information with other users.
As what is better described as a methodology of tool design rather than a method, Bergman reports good success in the application of auser-subjective approach. The user-subjective approach advances three design principles. In brief, the design should allow the following: 1) all project-related items no matter their form (or format) are to be organized together (the subjective project classification principle); 2) the importance of information (to the user) should determine its visual salience and accessibility (the subjective importance principle); and 3) information should be retrieved and used by the user in the same context as it was previously used in (the subjective context principle). The approach may suggest design principles that serve not only in evaluating and improving existing systems but also in creating new implementations. For example, according to the demotion principle, information items of lower subjective importance should be demoted (i.e., by making them less visible) so as not to distract the user but be kept within their original context just in case they are needed. The principle has been applied in the creation of several interesting prototypes.[104][81]
Finally, a simple "checklist" methodology of tool design[3]follows from an assessment of a proposed tool design with respect to each of the six senses in which information can be personal (see section "The senses in which information is personal") and each of the six activities of PIM (finding, keeping and the four meta-level activities, see section "Activities of PIM"). A tool that is good with respect to one kind of personal information or one PIM activity may be bad with respect to another. For example, a new smartphone app that promises to deliver information potentially "relevant to me" (the "6th sense" in which information is personal) may do so only at the cost of a distracting increase in the information "directed to me" and by keeping too much personal information "about me" in a place not under the person's control.
PIM is a practical meeting ground for many disciplines includingcognitive psychology,cognitive science,human-computer interaction(HCI),human information interaction(HII),library and information science(LIS),artificial intelligence(AI), information retrieval, information behavior, organizationalinformation management, andinformation science.
Cognitive psychology, as the study of how people learn and remember, problem solve, and make decisions, necessarily also includes the study of how people make smart use of available information. The related field of cognitive science, in its efforts to apply these questions more broadly to the study and simulation of intelligent behavior, is also related to PIM. (Cognitive science, in turn, has significant overlap with the field of artificial intelligence).
There is great potential for a mutually beneficial interplay between cognitive science and PIM. Sub-areas of cognitive science of clear relevance to PIM include problem solving anddecision making. For example, folders created to hold information for a big project such as "plan my wedding" may sometimes resemble aproblem-decomposition.[105]To take another example, thesignal detection task[106]has long been used to frame and explain human behavior and has recently been used as a basis for analyzing our choices concerning what information to keep and how – a key activity of PIM.[57]Similarly, there is interplay between the psychological study ofcategorizationandconcept formationand the PIM study of how people use tags and folders to describe and organize their information.
Now large portions of a document may be the product of"copy-and-paste" operations(from our previous writings) rather than a product of original writing. Certainly, management of text pieces pasted for re-use is a PIM activity, and this raises several interesting questions. How do we go about deciding when to re-use and when to write from scratch? We may sometimes spend more time chasing down a paragraph we have previously written than it would have taken to simply write a new paragraph expressing the same thoughts. Beyond this, we can wonder at what point a reliance on an increasing (and increasingly available) supply of previously written material begins to impact our creativity.
As people do PIM they work in an external environment that includes other people, available technology, and, often, an organizational setting. This means thatsituated cognition,distributed cognition, andsocial cognitionall relate to the study of PIM.
The study of PIM is also related to the field of human–computer interaction (HCI). Some of the more influential papers on PIM over the years have been published in HCI journals and conference proceedings. However, the "I" in PIM is for information – in various forms, paper-based and digital (e.g., books, digital documents, emails and, even, the letter magnets on a refrigerator in the kitchen). The "I" in HCI stands for "interaction" as this relates to the "C" – computers. (An argument has been advanced that HCI should be focused more on information rather than computers.[107])
Group information management(GIM, usually pronounced with a soft "G") has been written about elsewhere in the context of PIM.[108][109]The study of GIM, in turn, has clear relevance to the study ofcomputer-supported cooperative work(CSCW). GIM is to CSCW as PIM is to HCI. Just as concerns of PIM substantially overlap with but are not fully subsumed by concerns of HCI (nor vice versa), concerns of GIM overlap with but are not subsumed by concerns of CSCW. Information in support of GIM activities can be in non-digital forms such as paper calendars and bulletin boards that do not involve computers.
Group and social considerations frequently enter into a person's PIM strategy.[110]For example, one member of a household may agree to manage medical information for everyone in the household (e.g., shot records) while another member of the household manages financial information for the household. But the collaborative organization and sharing of information is often difficult because, for example, the people working together in a group may have many different perspectives on how best to organize information.[111][112]
In larger organizational settings, the GIM goals of the organization may conflict with the PIM goals of individuals working within the organization, where the goals of different individuals may also conflict.[113]Individuals may, for example, keep copies of secure documents on their private laptops for the sake of convenience even though doing so violates group (organizational) security.[114]Given drawbacks—real or perceived—in the use of web services that support a shared use of folders,[115][116]people working in a group may opt to share information instead through the use of e-mail attachments.[117]
Concerns of data management relate to PIM especially with respect to the safe, secure, long-term preservation of personal information in digital form. The study of information management and knowledge management in organizations also relates to the study of PIM and issues seen first at an organizational level often migrate to the PIM domain.[118]
Concerns of knowledge management on a personal (vs. organizational) level have given rise to arguments for a field ofpersonal knowledge management(PKM). However, knowledge is not a "thing" to be managed directly but rather indirectly e.g., through items of information such as Web pages, emails and paper documents. PKM is best regarded as a useful subset of PIM[118]with special focus on important issues that might otherwise be overlooked such as self-directed efforts of knowledge elicitation ("What do I know? What have I learned?") and knowledge instillation ("how better to learn what it is I want to know?")
Bothtime managementandtask managementon a personal level make heavy use of information tools and external forms of information such as to-do lists, calendars, timelines, and email exchange. These are another form of information to be managed. Over the years, email, in particular, has been used in an ad hoc manner in support of task management.[119][120]
Much of the useful information a person receives comes, often unprompted, through a person's network of family, friends and colleagues. People reciprocate and much of the information a person sends to others reflects an attempt to build relationships and influence the behavior of others. As such,personal network management(PNM) is a crucial aspect of PIM and can be understood as the practice of managing the links and connections to other people for social and professional benefits.
|
https://en.wikipedia.org/wiki/Personal_information_management
|
Concurrent computingis a form ofcomputingin which severalcomputationsare executedconcurrently—during overlapping time periods—instead ofsequentially—with one completing before the next starts.
This is a property of a system—whether aprogram,computer, or anetwork—where there is a separate execution point or "thread of control" for each process. Aconcurrent systemis one where a computation can advance without waiting for all other computations to complete.[1]
Concurrent computing is a form ofmodular programming. In itsparadigman overall computation isfactoredinto subcomputations that may be executed concurrently. Pioneers in the field of concurrent computing includeEdsger Dijkstra,Per Brinch Hansen, andC.A.R. Hoare.[2]
The concept of concurrent computing is frequently confused with the related but distinct concept ofparallel computing,[3][4]although both can be described as "multiple processes executingduring the same period of time". In parallel computing, execution occurs at the same physical instant: for example, on separateprocessorsof amulti-processormachine, with the goal of speeding up computations—parallel computing is impossible on a (one-core) single processor, as only one computation can occur at any instant (during any single clock cycle).[a]By contrast, concurrent computing consists of processlifetimesoverlapping, but execution does not happen at the same instant. The goal here is to model processes that happen concurrently, like multiple clients accessing a server at the same time. Structuring software systems as composed of multiple concurrent, communicating parts can be useful for tackling complexity, regardless of whether the parts can be executed in parallel.[5]: 1
For example, concurrent processes can be executed on one core by interleaving the execution steps of each process viatime-sharingslices: only one process runs at a time, and if it does not complete during its time slice, it ispaused, another process begins or resumes, and then later the original process is resumed. In this way, multiple processes are part-way through execution at a single instant, but only one process is being executed at that instant.[citation needed]
Concurrent computationsmaybe executed in parallel,[3][6]for example, by assigning each process to a separate processor or processor core, ordistributinga computation across a network.
The exact timing of when tasks in a concurrent system are executed depends on thescheduling, and tasks need not always be executed concurrently. For example, given two tasks, T1 and T2:[citation needed]
The word "sequential" is used as an antonym for both "concurrent" and "parallel"; when these are explicitly distinguished,concurrent/sequentialandparallel/serialare used as opposing pairs.[7]A schedule in which tasks execute one at a time (serially, no parallelism), without interleaving (sequentially, no concurrency: no task begins until the prior task ends) is called aserial schedule. A set of tasks that can be scheduled serially isserializable, which simplifiesconcurrency control.[citation needed]
The main challenge in designing concurrent programs isconcurrency control: ensuring the correct sequencing of the interactions or communications between different computational executions, and coordinating access to resources that are shared among executions.[6]Potential problems includerace conditions,deadlocks, andresource starvation. For example, consider the following algorithm to make withdrawals from a checking account represented by the shared resourcebalance:
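A minimal sketch of such a withdrawal routine is given below; it assumes that balance is a shared global integer declared elsewhere and that a C99 bool type is available. The layout is chosen so that the comparison falls on line 3 and the subtraction on line 5, matching the discussion that follows.

```c
bool withdraw(int withdrawal)
{
    if (balance >= withdrawal)
    {
        balance -= withdrawal;
        return true;
    }
    return false;
}
```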
Supposebalance = 500, and two concurrentthreadsmake the callswithdraw(300)andwithdraw(350). If line 3 in both operations executes before line 5, both operations will find thatbalance >= withdrawalevaluates totrue, and execution will proceed to subtracting the withdrawal amount. However, since both processes perform their withdrawals, the total amount withdrawn will end up being more than the original balance. These sorts of problems with shared resources benefit from the use of concurrency control, ornon-blocking algorithms.
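One sketch of a remedy is to serialize the check and the update with a mutual-exclusion lock so that no other thread can observe or modify balance between the comparison and the subtraction. POSIX threads are assumed here purely for illustration; any locking primitive with the same semantics would do.

```c
#include <pthread.h>
#include <stdbool.h>

int balance = 500;                                  /* shared resource */
pthread_mutex_t balance_lock = PTHREAD_MUTEX_INITIALIZER;

bool withdraw(int withdrawal)
{
    bool ok = false;
    pthread_mutex_lock(&balance_lock);              /* enter critical section */
    if (balance >= withdrawal) {
        balance -= withdrawal;                      /* check and update are now atomic
                                                       with respect to other threads */
        ok = true;
    }
    pthread_mutex_unlock(&balance_lock);
    return ok;
}
```

With the lock held across both steps, concurrent calls to withdraw(300) and withdraw(350) against a balance of 500 can no longer both succeed; one of them will find the remaining balance insufficient.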
There are advantages of concurrent computing:
Introduced in 1962,Petri netswere an early attempt to codify the rules of concurrent execution. Dataflow theory later built upon these, andDataflow architectureswere created to physically implement the ideas of dataflow theory. Beginning in the late 1970s,process calculisuch asCalculus of Communicating Systems(CCS) andCommunicating Sequential Processes(CSP) were developed to permit algebraic reasoning about systems composed of interacting components. Theπ-calculusadded the capability for reasoning about dynamic topologies.
Input/output automatawere introduced in 1987.
Logics such as Lamport'sTLA+, and mathematical models such astracesandActor event diagrams, have also been developed to describe the behavior of concurrent systems.
Software transactional memoryborrows fromdatabase theorythe concept ofatomic transactionsand applies them to memory accesses.
Concurrent programming languages and multiprocessor programs must have aconsistency model(also known as a memory model). The consistency model defines rules for how operations oncomputer memoryoccur and how results are produced.
One of the first consistency models wasLeslie Lamport'ssequential consistencymodel. Sequential consistency is the property of a program that its execution produces the same results as a sequential program. Specifically, a program is sequentially consistent if "the results of any execution is the same as if the operations of all the processors were executed in some sequential order, and the operations of each individual processor appear in this sequence in the order specified by its program".[10]
A number of different methods can be used to implement concurrent programs, such as implementing each computational execution as anoperating system process, or implementing the computational processes as a set ofthreadswithin a single operating system process.
In some concurrent computing systems, communication between the concurrent components is hidden from the programmer (e.g., by usingfutures), while in others it must be handled explicitly. Explicit communication can be divided into two classes: shared memory communication and message passing communication.
Shared memory and message passing concurrency have different performance characteristics. Typically (although not always), the per-process memory overhead and task switching overhead is lower in a message passing system, but the overhead of message passing is greater than for a procedure call. These differences are often overwhelmed by other performance factors.
Concurrent computing developed out of earlier work on railroads andtelegraphy, from the 19th and early 20th century, and some terms date to this period, such as semaphores. These arose to address the question of how to handle multiple trains on the same railroad system (avoiding collisions and maximizing efficiency) and how to handle multiple transmissions over a given set of wires (improving efficiency), such as viatime-division multiplexing(1870s).
The academic study of concurrent algorithms started in the 1960s, withDijkstra (1965)credited with being the first paper in this field, identifying and solvingmutual exclusion.[11]
Concurrency is pervasive in computing, occurring from low-level hardware on a single chip to worldwide networks. Examples follow.
At the programming language level:
At the operating system level:
At the network level, networked systems are generally concurrent by their nature, as they consist of separate devices.
Concurrent programming languagesare programming languages that use language constructs forconcurrency. These constructs may involvemulti-threading, support fordistributed computing,message passing,shared resources(includingshared memory) orfutures and promises. Such languages are sometimes described asconcurrency-oriented languagesorconcurrency-oriented programming languages(COPL).[12]
Today, the most commonly used programming languages that have specific constructs for concurrency areJavaandC#. Both of these languages fundamentally use a shared-memory concurrency model, with locking provided bymonitors(although message-passing models can and have been implemented on top of the underlying shared-memory model). Of the languages that use a message-passing concurrency model,Erlangis probably the most widely used in industry at present.[citation needed]
Many concurrent programming languages have been developed more as research languages (e.g.Pict) rather than as languages for production use. However, languages such asErlang,Limbo, andoccamhave seen industrial use at various times in the last 20 years. A non-exhaustive list of languages which use or provide concurrent programming facilities:
Many other languages provide support for concurrency in the form of libraries, at levels roughly comparable with the above list.
|
https://en.wikipedia.org/wiki/Concurrent_computing
|
wolfSSLis a small, portable, embedded SSL/TLS library targeted for use by embedded systems developers. It is anopen sourceimplementation ofTLS(SSL 3.0, TLS 1.0, 1.1, 1.2, 1.3, andDTLS1.0, 1.2, and 1.3) written in theC programming language. It includes SSL/TLS client libraries and an SSL/TLS server implementation as well as support for multiple APIs, including those defined bySSLandTLS. wolfSSL also includes anOpenSSLcompatibility interface with the most commonly used OpenSSL functions.[4][5]
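As a rough illustration of the client-side API, the sketch below establishes a TLS session over an already-connected TCP socket. It is a sketch only: error handling is abbreviated, the CA-certificate path is a placeholder, and it assumes the library was built with TLS 1.3 support.

```c
#include <wolfssl/options.h>
#include <wolfssl/ssl.h>

/* sockfd: an already-connected TCP socket (assumed) */
int tls_client_demo(int sockfd)
{
    WOLFSSL_CTX* ctx = NULL;
    WOLFSSL*     ssl = NULL;
    int ret = -1;

    wolfSSL_Init();
    ctx = wolfSSL_CTX_new(wolfTLSv1_3_client_method());
    if (ctx == NULL)
        goto done;

    /* CA bundle used to verify the server (placeholder path) */
    if (wolfSSL_CTX_load_verify_locations(ctx, "/path/to/ca-certs.pem", NULL)
            != WOLFSSL_SUCCESS)
        goto done;

    ssl = wolfSSL_new(ctx);
    if (ssl == NULL)
        goto done;

    wolfSSL_set_fd(ssl, sockfd);                 /* attach the TCP socket */
    if (wolfSSL_connect(ssl) == WOLFSSL_SUCCESS) {
        wolfSSL_write(ssl, "hello", 5);          /* encrypted application data */
        ret = 0;
    }

done:
    if (ssl != NULL)
        wolfSSL_free(ssl);
    if (ctx != NULL)
        wolfSSL_CTX_free(ctx);
    wolfSSL_Cleanup();
    return ret;
}
```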
wolfSSL is currently available forMicrosoft Windows,Linux,macOS,Solaris,ESP32,ESP8266,ThreadX,VxWorks,FreeBSD,NetBSD,OpenBSD,embedded Linux,Yocto Project,OpenEmbedded,WinCE,Haiku,OpenWrt,iPhone,Android,Wii, andGameCubethrough DevKitPro support,QNX,MontaVista,Tronvariants,NonStop OS,OpenCL, Micrium'sMicroC/OS-II,FreeRTOS,SafeRTOS,Freescale MQX,Nucleus,TinyOS,TI-RTOS,HP-UX, uTasker, uT-kernel, embOS,INtime,mbed,RIOT, CMSIS-RTOS, FROSTED,Green Hills INTEGRITY, Keil RTX, TOPPERS, PetaLinux,Apache Mynewt, andPikeOS.[6]
The genesis of wolfSSL dates to 2004.OpenSSLwas available at the time, and was dual licensed under theOpenSSL Licenseand theSSLeay license.[7]yaSSL, alternatively, was developed and dual-licensed under both a commercial license and the GPL.[8]yaSSL offered a more modern API, commercial style developer support and was complete with an OpenSSL compatibility layer.[4]The first major user of wolfSSL/CyaSSL/yaSSL wasMySQL.[9]Through bundling with MySQL, yaSSL has achieved extremely high distribution volumes in the millions.
In February 2019,Daniel Stenberg, the creator ofcURL, was hired by the wolfSSL project to work on cURL.[10]
The wolfSSL lightweight SSL library implements the following protocols:[11]
Protocol Notes:
wolfSSL uses the following cryptography libraries:
By default, wolfSSL uses the cryptographic services provided by wolfCrypt.[13]wolfCrypt providesRSA,ECC,DSS,Diffie–Hellman,EDH,NTRU(deprecated and removed),DES,Triple DES,AES(CBC, CTR, CCM, GCM),Camellia,IDEA,ARC4,HC-128,ChaCha20,MD2,MD4,MD5,SHA-1,SHA-2,SHA-3,BLAKE2,RIPEMD-160,Poly1305, random number generation, large integer support, base 16/64 encoding/decoding, and post-quantum cryptographic algorithms:ML-KEM(certified under FIPS 203) and ML-DSA (certified under FIPS 204).
wolfCrypt also includes support for the recentX25519andEd25519algorithms.
wolfCrypt acts as a back-end crypto implementation for several popular software packages and libraries, includingMIT Kerberos[14](where it can be enabled using a build option).
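For a flavor of the wolfCrypt API itself, the following sketch computes a SHA-256 digest with the incremental hashing calls; names such as wc_InitSha256 follow the library's wc_ prefix convention, though exact availability depends on build options.

```c
#include <wolfssl/wolfcrypt/sha256.h>

/* Hash msg (len bytes) into digest; returns 0 on success. */
int sha256_demo(const unsigned char* msg, word32 len,
                unsigned char digest[WC_SHA256_DIGEST_SIZE])
{
    wc_Sha256 sha;
    int ret = wc_InitSha256(&sha);
    if (ret != 0)
        return ret;
    ret = wc_Sha256Update(&sha, msg, len);    /* may be called repeatedly */
    if (ret == 0)
        ret = wc_Sha256Final(&sha, digest);   /* writes the 32-byte digest */
    wc_Sha256Free(&sha);
    return ret;
}
```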
CyaSSL+ includesNTRU[15]public key encryption. The addition of NTRU in CyaSSL+ was a result of the partnership between yaSSL and Security Innovation.[15]NTRU works well in mobile and embedded environments due to the reduced bit size needed to provide the same security as other public key systems. In addition, it's not known to be vulnerable to quantum attacks. Several cipher suites utilizing NTRU are available with CyaSSL+ including AES-256, RC4, and HC-128.
wolfSSL supports the followingSecure Elements:
wolfSSL supports the following hardware technologies:
The following tables list wolfSSL's support for using various devices' hardware encryption with various algorithms.
Hardware platforms covered include the Xeon and Core processor families; Intel and AMD x86 processors; the Cryptographic Accelerator and Assurance Module (CAAM); the NXP MCF547X and MCF548X; the K50, K60, K70, and K80 (ARM Cortex-M4 core); the F1, F2, F4, L1, and W series (ARM Cortex-M3/M4); III/V PX processors; embedded-connectivity parts; ARM Cortex-M4F parts; a SoC family with a 32-bit ARM Cortex-M0 processor core; and the ATECC508A secure element (NIST-P256; compatible with any MPU or MCU, including Atmel SMART and AVR MCUs). In the AES tables, "All" denotes support for 128-, 192-, and 256-bit AES; supported ECC key sizes are 192, 224, 256, 384, and 521 bits.
wolfSSL supports the following certifications:
wolfSSL is dual licensed:
|
https://en.wikipedia.org/wiki/WolfSSL
|
Incomputer science, asegmented scanis a modification of theprefix sumwith an equal-sized array of flag bits to denote segment boundaries on which the scan should be performed.[1]
In the following, the '1' flag bits indicate the beginning of each segment; the running sum restarts at every flagged position.
{\displaystyle {\begin{array}{|rrrrrr|l|}1&2&3&4&5&6&{\text{input}}\\\hline 1&0&0&1&0&1&{\text{flag bits}}\\\hline 1&3&6&4&9&6&{\text{segmented scan +}}\\\end{array}}}
An alternative method used byHigh Performance Fortranis to begin a new segment at every transition of flag value. An advantage of this representation is that it is useful with both prefix and suffix (backwards) scans without changing its interpretation. In HPF, the Fortran logical data type is used to represent segments. So the equivalent flag array for the above example would be as follows:
{\displaystyle {\begin{array}{|rrrrrr|l|}1&2&3&4&5&6&{\text{input}}\\\hline T&T&T&F&F&T&{\text{flag values}}\\\hline 1&3&6&4&9&6&{\text{segmented scan +}}\\\end{array}}}
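A sequential reference implementation in C, assuming the '1'-at-segment-start flag representation and addition as the scan operator (parallel formulations compute the same result):

```c
#include <stdio.h>

/* Inclusive segmented prefix sum: a nonzero flag marks the start of a new
 * segment, so the running sum restarts at that position. */
void segmented_scan_add(const int* input, const int* flags, int* out, int n)
{
    int running = 0;
    for (int i = 0; i < n; i++) {
        running = flags[i] ? input[i] : running + input[i];
        out[i] = running;
    }
}

int main(void)
{
    /* Same data as the example above: segments {1,2,3}, {4,5}, {6}. */
    int input[] = {1, 2, 3, 4, 5, 6};
    int flags[] = {1, 0, 0, 1, 0, 1};
    int out[6];

    segmented_scan_add(input, flags, out, 6);
    for (int i = 0; i < 6; i++)
        printf("%d ", out[i]);   /* prints: 1 3 6 4 9 6 */
    printf("\n");
    return 0;
}
```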
|
https://en.wikipedia.org/wiki/Segmented_scan
|
Tektology(sometimes transliterated astectology) is a term used byAlexander Bogdanovto describe a new universal science that consisted of unifying all social, biological and physical sciences by considering them as systems of relationships and by seeking the organizational principles that underlie all systems. Tektology is now regarded as a precursor ofsystems theoryand related aspects ofsynergetics.[1]The word "tectology" was introduced byErnst Haeckel,[2]but Bogdanov used it for a different purpose.[3][4]
His workTektology: Universal Organization Science, published in Russia between 1912 and 1917, anticipated many of the ideas that were popularized later byNorbert WienerinCyberneticsandLudwig von Bertalanffyin theGeneral Systems Theory. There are suggestions that both Wiener and von Bertalanffy might have read the German edition ofTektologywhich was published in 1928.[5][6]
InSources and Precursors of Bogdanov's Tectology, James White (1998) acknowledged the intellectual debt of Bogdanov's work on tectology to the ideas ofLudwig Noiré. His work drew on the ideas of Noiré who in the 1870s also attempted to construct a monistic system using the principle of conservation of energy as one of its structural elements.
More recently, in her 2016 bookMolecular Red: Theory for the Anthropocene,McKenzie Warkattempts to establish Bogdanov as a precursor to contemporaryAnthropocenetheorists, likeDonna Haraway, by considering Bogdanov's works of fiction as an extension of his general work in Tectology. In this, Wark also considers Tectology as an alternative to the Soviet state philosophy ofdialectical materialism, which may help in explainingLenin's vehement opposition to Tectology in his ownMaterialism and Empirio-Criticism.
According to Bogdanov[7]"the aim of Tectology is the systematization of organizedexperience", through the identification of universal organizationalprinciples: "all things are organizational, allcomplexescould only be understood through their organizational character."[8]Bogdanov considered that any complex should correspond to its environment and adapt to it. A stable and organized complex is greater than the sum of its parts. In Tectology, the term 'stability' refers not to adynamic stability, but to the possibility of preserving the complex in the given environment. A 'complex' is not identical to a 'complicated', hard-to-comprehend, large unit.
In Tectology, Bogdanov made the first 'modern' attempt to formulate the most generallawsoforganization. Tectology addressed issues such asholistic,emergentphenomena and systemic development. Tectology as a constructive science built elements into a functional entity using general laws of organization.
According to his "empirio-monistic" principle (1899), he does not recognize differences betweenobservationandperception[further explanation needed]and thus creates the beginning of a general empirical, trans-disciplinary science of physical organization, as an expedientunityand precursor ofSystems TheoryandHolism.
The "whole" in Tectology, and the laws of its integrity, were derived from biological rather than the physicalistic view of the world. Regarding the three scientific cycles which comprise the basis of Tectology (mathematical, physico-biological, and natural-philosophical), it is from the physico-biological cycle that the central concepts have been taken and universalized.[citation needed]
The starting point in Bogdanov's Universal Science of Organization - Tectology (1913–1922) was that nature has a general, organized character, with one set of laws of organization for all objects. This set of laws also organizes the internal development of complex units, as implied by Simona Poustilnik's "macro-paradigm", which induces synergistic consequences into an adaptive assembling phenomenon (1995). Bogdanov's visionary view of nature was one of an 'organization' of interconnected systems.[example needed]
Alexander Bogdanov wrote several works about Tectology:
|
https://en.wikipedia.org/wiki/Tektology
|
Inlinguistics,head directionalityis a proposedparameterthat classifies languages according to whether they arehead-initial(theheadof aphraseprecedes itscomplements) orhead-final(the head follows its complements). Theheadis the element that determines the category of a phrase: for example, in averb phrase, the head is a verb. Therefore, head initial would be"VO" languagesand head final would be"OV" languages.[1]
Some languages are consistently head-initial or head-final at all phrasal levels.Englishis considered to be mainly head-initial (verbs precede their objects, for example), whileJapaneseis an example of a language that is consistently head-final. In certain other languages, such asGermanandGbe, examples of both types of head directionality occur. Various theories have been proposed to explain such variation.
Head directionality is connected with the type ofbranchingthat predominates in a language: head-initial structures areright-branching, while head-final structures areleft-branching.[2]On the basis of these criteria, languages can be divided into head-final (rigid and non-rigid) and head-initial types. The identification of headedness is based on the following:[3]
In some cases, particularly with noun and adjective phrases, it is not always clear which dependents are to be classed as complements, and which asadjuncts. Although in principle the head-directionality parameter concerns the order of heads and complements only, considerations of head-initiality and head-finality sometimes take account of the position of the head in the phrase as a whole, including adjuncts. The structure of the various types of phrase is analyzed below in relation to specific languages, with a focus on the ordering of head and complement. In some cases (such as English and Japanese) this ordering is found to be the same in practically all types of phrase, whereas in others (such as German and Gbe) the pattern is less consistent. Different theoretical explanations of these inconsistencies are discussed later in the article. There are various types of phrase in which the ordering of head and complement(s) may be considered when attempting to determine the head directionality of a language, including:
Englishis a mainly head-initial language. In a typical verb phrase, for example, the verb precedes its complements, as in the following example:[6]
The head of the phrase (the verbeat) precedes its complement (the determiner phrasean apple). Switching the order to "[VP[DPan apple] [Veat]]" would be ungrammatical.
Nouns also tend to precede any complements, as in the following example, where therelative clause(orcomplementizer phrase) that follows the noun may be considered to be a complement:[7]
Nouns do not necessarily begin their phrase; they may be preceded byattributive adjectives, but these are regarded asadjunctsrather than complements. Adjectives themselves may be preceded by adjuncts, namelyadverbs, as inextremely happy.[8]However, when an adjective phrase contains a true complement, such as a prepositional phrase, the head adjective precedes it:[9]
English adpositional phrases are also head-initial; that is, English hasprepositionsrather than postpositions:[10]
On thedeterminer phrase(DP) view, where adetermineris taken to be the head of its phrase (rather than the associated noun), English can be seen to be head-initial in this type of phrase too. In the following example[11]the head is taken to be the determinerany, and the complement is the noun (phrase)book:
English also has head-initialcomplementizer phrases, as in this example[12]where the complementizerthatprecedes its complement, the tense phraseMary did not swim:
Grammatical words marking tense and aspect generally precede the semantic verb. This indicates that, if finite verb phrases are analyzed astense phrasesor aspect phrases, these are again head-initial in English. In the example above,didis considered a (past) tense marker, and precedes its complement, the verb phrasenot swim. In the following,hasis a (perfect) aspect marker;[13]again it appears before the verb (phrase) which is its complement.
The following example shows a sequence of nested phrases in which each head precedes its complement.[14]In thecomplementizer phrase(CP) in (a), the complementizer (C) precedes its tense phrase (TP) complement. In thetense phrasein (b), thetense-marking element (T) precedes its verb phrase (VP) complement. (The subject of the tense phrase,the girl, is aspecifier, which does not need to be considered when analyzing the ordering of head and complement.) In theverb phrasein (c), the verb (V) precedes its two complements, namely the determiner phrase (DP)the bookand the prepositional phrase (PP)on the table. In (d), wherea pictureis analyzed as a determiner phrase, the determiner (D)aprecedes its noun phrase (NP) complement, while in (e), thepreposition(P)onprecedes its DP complementyour desk.
Indonesian is an example of an SVO, head-initial language.[1][15] Its head-initial character can be examined from either a dependency perspective or a word-order perspective; both approaches lead to the conclusion that Indonesian is a head-initial language.
When examined from a dependency perspective, Indonesian is considered head-initial because the governor of a pair of constituents is positioned before its dependent.[16]
Placing the head before a dependent minimizes the overall dependency distance, that is, the linear distance between the two constituents.[16] Minimizing dependency distance reduces cognitive demand, because a head-final dependency requires the constituents of the dependent phrase to be held in working memory until the head is realized.[16]
In Indonesian, the number of constituents affects the dependency direction. When there are six constituents (a relatively short sentence), there is a preference for head-initial relations.[16] However, when there are 11-30 constituents, head-initial and head-final dependencies appear in roughly equal proportions.[16] Regardless, Indonesian displays an overall head-initial preference at all levels of dependency structure: it consistently positions the head as early in the sentence as possible, even when this produces a longer dependency distance than placing the head after its dependents would.[16] Furthermore, Indonesian prefers head-initial over head-final relations at all constituent lengths, in both spoken and written data.[16]
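As a rough illustration of the dependency-distance measure discussed above, the following Python sketch (with invented word positions, not data from an Indonesian treebank) computes the total dependency distance of a clause from its (head, dependent) index pairs.

```python
def total_dependency_distance(links):
    """Sum of linear distances between each head and its dependent.
    Each link is a (head_position, dependent_position) pair of 1-based word indices."""
    return sum(abs(head - dep) for head, dep in links)

# Hypothetical three-word clause "dokter memeriksa mata" (doctor checks eye):
# the verb (word 2) governs the subject (word 1) and the object (word 3).
print(total_dependency_distance([(2, 1), (2, 3)]))  # 2
```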
The subject of the sentence is followed by the verb, reflecting SVO order.[17] The following examples demonstrate head-initial directionality in Indonesian (note that perdana menteri "prime minister" is, unusually, head-final):
Perdana
Prime
menteri
minister
sudah
already
pulang
home
Perdana menteri sudah pulang
Prime minister already home
"The Prime minister has returned home"
[CP[DPPerdana menteri] [VPsudah pulang]]
Classifiersandpartitivescan function as the head nouns ofnoun phrases. Below is an example of the internal structure of a noun phrase and its head-initial word order.
Botol
Bottle
ini
DET-this
retak
crack
Botol ini retak
Bottle DET-this crack
"This bottle is cracked"
[CP[DPbotol ini][VPretak]]
Head-initial word order is seen in the internal structure of theverb phrasein the following example where the V is in the head position of the verb phrase and thus appears before its complement:
Dokter
Doctor
memeriksa
checks
mata
eye
saya
PN-my
Dokter memeriksa mata saya
Doctor checks eye PN-my
"The doctor checked my eyes"
[CP[DPDokter][VP[Vmemeriksa][DPmata saya]]]
InIndonesiana noun can be followed by anothermodifying nounwhose primary function is to provide more specific information about the preceding head noun, such as indicating what the head noun is made of, gender, locative sense, and what the head noun does, etc. However, no other word is able to intervene between a head noun and its following modifying noun. If a word follows the modifying noun, then it provides reference to thehead nounand not the modifying noun.[17]
guru
teacher
bahasa
language
guru bahasa
teacher language
"language teacher"
guru
teacher
sekolah
school
itu
DET-that
guru sekolah itu
teacher school DET-that
"that schoolteacher"
toko
shop
buku
book
toko buku
shop book
"Bookshop"
toko
shop
buku
book
yang
DET-a
besar
big
toko buku yang besar
shop book DET-a big
"a big bookshop"
sate
satay
ayam
chicken
sate ayam
satay chicken
"chicken satay"
Japaneseis an example of a strongly head-final language. This can be seen in verb phrases and tense phrases: the verb (tabein the example) comes after its complement, while the tense marker (ru) comes after the whole verb phrase which is its complement.[6]
リンゴを
ringo-o
apple-ACC
食べる
tabe-ru
eat-NPAST
リンゴを 食べる
ringo-o tabe-ru
apple-ACC eat-NPAST
"eat an apple"
[TP[VP[DPringo-o] [Vtabe]] [Tru]]
Nouns also typically come after any complements, as in the following example where the PPNew York-de-nomay be regarded as a complement:[18]
ジョンの
John-no
John-GEN
昨日の
kinoo-no
yesterday-GEN
ニューヨークでの
New York-de-no
New York-in-GEN
講義
koogi
lecture
ジョンの 昨日の ニューヨークでの 講義
John-no kinoo-no {New York-de-no} koogi
John-GEN yesterday-GEN {New York-in-GEN} lecture
"John's lecture in New York yesterday"
[NP[PPNew York-de-no] [Nkoogi]]
Adjectives also follow any complements they may have. In this example the complement of quantity,ni-juu-meetoru("twenty meters"), precedes the head adjectivetakai("tall"):[19]
この
Kono
this
ビルは
biru-wa
building-TOP
20メートル
ni-juu-meetoru
two-ten-meter
高い
takai
tall
この ビルは 20メートル 高い
Kono biru-wa ni-juu-meetoru takai
this building-TOP two-ten-meter tall
"This building is twenty meters tall."
[AP[Qni-juu-meetoru] [Atakai]]
Japanese uses postpositions rather than prepositions, so its adpositional phrases are again head-final:[20]
僕が
Boku-ga
I-NOM
高須村に
Takasu-mura-ni
Takasu-village-in
住んでいる
sunde-iru
live-PRES
僕が 高須村に 住んでいる
Boku-ga Takasu-mura-ni sunde-iru
I-NOM Takasu-village-in live-PRES
"I live in Takasu village."
[PP[DPTakasu-mura] [Pni]]
Determiner phrases are head-final as well:[11]
誰
dare
person
も
mo
any
誰 も
dare mo
person any
"anyone"
[DP[NPdare] [Dmo]]
A complementizer (herekoto, equivalent to English "that") comes after its complement (here a tense phrase meaning "Mary did not swim"), thus Japanese complementizer phrases are head-final:[12]
メリーが
Mary-ga
Mary-NOM
泳がなかったこと
oyog-ana-katta-koto
swim-NEG-PAST-that
メリーが 泳がなかったこと
Mary-ga oyog-ana-katta-koto
Mary-NOM swim-NEG-PAST-that
"that Mary did not swim"
[CP[TPMary-ga oyog-ana-katta] [Ckoto]]
Turkishis an agglutinative, head-final, and left-branching language that uses aSOVword order.[21]As such, Turkish complements and adjuncts typically precede their head under neutral prosody, andadpositionsare postpositional. Turkish employs a case marking system[22]whichaffixesto the right boundary of the word it is modifying. As such, all case markings in Turkish are suffixes. For example, the set ofaccusativecase marking suffixes-(y)ı-, -(y)i-, -(y)u-, -(y)ü-in Turkish indicate that it is the direct object of a verb. Additionally, while some kinds of definite determiners andpostpositionsin Turkish can be marked by case, other types also exist as free morphemes.[22]In the following examples, Turkish case marker suffixes are analyzed as complements to the head.
In Turkish, tense is denoted by a case marking suffix on the verb.[23]
Ahmet
Ahmet
anne-sin-i
mother-3SG-ACC
ziyaret
visit
et-ti
do-PAST
Ahmet anne-sin-i ziyaret et-ti
Ahmet mother-3SG-ACC visit do-PAST
'Ahmet visited his mother.'
[TP[VPet][T-ti]]
In neutral prosody, Turkish verb phrases are primarily head-final, as the verb comes after its complement. Variation in object-verb ordering is not strictly rigid. However, constructions where the verb precedes the object are less common.[24]
Çocuk-lar
child-PL
çikolata
chocolate
sever
like
Çocuk-lar çikolata sever
child-PL chocolate like
'Children like chocolate.'
[VP[DPçikolata][Vsever]]
In Turkish, definite determiners may be marked with a case marker suffix on the noun, such as when the noun is the direct object of a verb. They may also exist as free morphemes that attach to a head-initial determiner phrase, such as when the determiner is a demonstrative. Like other case markers in Turkish, when the morpheme carrying the demonstrative meaning is a case marker, they attach at the end of the word. As such, the head of the phrase, in this case the determiner, follows its complement like in the example below:[22]
Dün
Yesterday
çok
very
garip
strange
kitap-lar-ı
book-PL-ACC
oku-du-m
read-PAST-1SG
Dün çok garip kitap-lar-ı oku-du-m
Yesterday very strange book-PL-ACC read-PAST-1SG
'Yesterday I read the very strange books.'
[DP[NPkitap-lar][D-ı]]
Turkish adpositions are postpositions that can affix as a case marker at the end of a word. They can also be a separate word that attaches to the head-final postpositional phrase, as is the case in the example below:[24]
Bu
This
kitab-ı
book-ACC
Ahmet
Ahmet
için
for
al-dı-m
buy-PAST-1SG
Bu kitab-ı Ahmet için al-dı-m
This book-ACC Ahmet for buy-PAST-1SG
'I bought this book for Ahmet.'
[PP[DPAhmet][Piçin]]
Turkish employs acase markingsystem that allows some constituents in Turkish clauses to participate in permutations of its canonical SOV word order, thereby in some ways exhibiting a 'free' word order. Specifically, constituents of anindependent clausecan be moved around and constituents of phrasal categories can occur outside of theprojectionsthey are elements of. As a result, it is possible for the major case-marked constituents of a clause in Turkish to appear in all possible orders in a sentence, such that SOV, SVO, OSV, OVS, VSO, and VOS word orders are acceptable.[25]
This free word order allows the verbal phrase to occur in any position in an independent clause, unlike in other head-final languages such as Japanese and Korean, in which any variation in word order must occur in the preverbal domain and the verb remains at the end of the clause (see § Japanese, above). Because of this relatively high degree of variation in word order, Turkish's status as a head-final language is generally considered less strict than that of Japanese or Korean: while embedded clauses must remain verb-final, matrix clauses can show variability in word order.[25]
In the canonical word order of Turkish, as is typical in a head-final language, subjects come at the beginning of the sentence, then objects, with verbs coming in last:
1. Subject-Object-Verb (SOV, canonical word order)
Yazar
author
makale-yi
article-ACC
bitir-di
finish-PAST
Yazar makale-yi bitir-di
author article-ACC finish-PAST
'The author finished the article.'
However, several variations on this order can occur on matrix clauses, such that the subject, object, and verb can occupy all different positions within a sentence. Because Turkish uses a case-marking system to denote how each word functions in a sentence in relation to the rest, case-marked elements can be moved around without a loss in meaning. These variations, also called permutations,[26][25]can change the discourse focus of the constituents in the sentence:
2. Object-Subject-Verb (OSV)
Makale-yi
article-ACC
yazar
author
bitir-di
finish-PAST
Makale-yi yazar bitir-di
article-ACC author finish-PAST
'The author finished the article.'
In this variation, the object moves to the beginning of the sentence, the subject follows, and the verb remains in final position.
3. Object-Verb-Subject (OVS)
Makale-yi
article-ACC
bitir-di
finish-PAST
yazar
author
Makale-yi bitir-di yazar
article-ACC finish-PAST author
'The author finished the article.'
In this variation, the subject moves to the end of the sentence. This is an example of how verbs in Turkish can move to other positions in the clause, whereas other head-final languages, such as Japanese and Korean, typically allow the verb only at the end of the sentence.
4. Subject-Verb-Object (SVO)
Yazar
author
bitir-di
finish-PAST
makale-yi
article-ACC
Yazar bitir-di makale-yi
author finish-PAST article-ACC
'The author finished the article.'
In this variation, the object moves to the end of the sentence and the verb phrase now directly precedes the subject, which remains at the beginning of the sentence. This word order is akin toEnglishword order.
5. Verb-Subject-Object (VSO)
Bitir-di
finish-PAST
yazar
author
makale-yi
article-ACC
Bitir-di yazar makale-yi
finish-PAST author article-ACC
'The author finished the article.'
In this variation, the verb phrase moves from the end of the sentence to the beginning of the sentence.
6. Verb-Object-Subject (VOS)
Bitir-di
finish-PAST
makale-yi
article-ACC
yazar
author
Bitir-di makale-yi yazar
finish-PAST article-ACC author
'The author finished the article.'
In this variation, the verb phrase moves to the beginning of the sentence, the object moves so that it is directly following the verb, and the subject is at the end of the sentence.
German is predominantly head-initial, though less consistently so than English, and it also features certain head-final structures. For example, in a nonfinite verb phrase the verb is final. In a finite verb phrase (or tense/aspect phrase) the verb (tense/aspect) is initial, although it may move to final position in a subordinate clause. In the following example,[27] the non-finite verb phrase es finden is head-final, whereas in the tensed main clause ich werde es finden (headed by the auxiliary verb werde indicating future tense), the finite auxiliary precedes its complement (as an instance of a verb-second construction; in the example below, this V2-position is called "T").
Ich
I
werde
will
es
it
finden
find
Ich werde es finden
I will it find
"I will find it."
Noun phrases containing complements are head-initial; in this example[28]the complement, the CPder den Befehl überbrachte, follows the head nounBoten.
Man
one
beschimpfte
insulted
den
the
Boten,
messenger
der
who
den
the
Befehl
command
überbrachte
delivered
Man beschimpfte den Boten, der den Befehl überbrachte
one insulted the messenger who the command delivered
"The messenger, who delivered the command, was insulted."
Adjective phrases may be head-final or head-initial. In the next example the adjective (stolze) follows its complement (auf seine Kinder).[29]
der
the
auf
of
seine
his
Kinder
children
stolze
proud
Vater
father
der auf seine Kinder stolze Vater
the of his children proud father
"the father (who is) proud of his children"
However, when essentially the same adjective phrase is usedpredicativelyrather than attributively, it can also be head-initial:[30]
weil
since
er
he
stolz
proud
auf
of
seine
his
Kinder
children
ist
is
weil er stolz auf seine Kinder ist
since he proud of his children is
"since he is proud of his children"
Most adpositional phrases are head-initial (as German has mostly prepositions rather than postpositions), as in the following example, whereaufcomes before its complementden Tisch:[31]
Peter
Peter
legt
puts
das
the
Buch
book
auf
on
den
the.ACC
Tisch
table
Peter legt das Buch auf den Tisch
Peter puts the book on the.ACC table
"Peter puts the book on the table."
German also has somepostpositions, however (such asgegenüber"opposite"), and so adpositional phrases can also sometimes be head-final. Another example is provided by the analysis of the following sentence:[32]
Die
the
Schnecke
snail
kroch
crept
das
the
Dach
roof
hinauf
up
Die Schnecke kroch das Dach hinauf
the snail crept the roof up
"The snail crept up the roof"
Like in English, determiner phrases and complementizer phrases in German are head-initial. The next example is of a determiner phrase, headed by the articleder:[33]
der
the
Mann
man
der Mann
the man
"the man"
In the following example, the complementizerdassprecedes the tense phrase which serves as its complement:[34]
dass
that
Lisa
Lisa
eine
a
Blume
flower
gepflanzt
planted
hat
has
dass Lisa eine Blume gepflanzt hat
that Lisa a flower planted has
"that Lisa planted a flower"
Standard Chinese(whose syntax is typical ofChinese varietiesgenerally) features a mixture of head-final and head-initial structures. Noun phrases are head-final. Modifiers virtually always precede the noun they modify.
In the case of strict head/complement ordering, however, Chinese appears to be head-initial. Verbs normally precede their objects. Both prepositions and postpositions are reported, but the postpositions can be analyzed as a type of noun (the prepositions are often calledcoverbs).
InGbe, a mixture of head-initial and head-final structures is found. For example, a verb may appear after or before its complement, which means that both head-initial and head-final verb phrases occur.[35]In the first example the verb for "use" appears after its complement:
Kɔ̀jó
Kojo
tó
IMPERF
àmí
oil
lɔ́
DET
zân
use
Kɔ̀jó tó àmí lɔ́ zân
Kojo IMPERF oil DET use
"Kojo is using the oil."
In the second example the verb precedes the complement:
Kɔ̀jó
Kojo
nɔ̀
HAB
zán
use-PERF
àmí
oil
lɔ́
DET
Kɔ̀jó nɔ̀ zán àmí lɔ́
Kojo HAB use-PERF oil DET
"Kojo habitually used the oil/Kojo habitually uses the oil."
It has been debated whether the first example is due to objectmovementto the left side of the verb[36]or whether the lexical entry of the verb simply allows head-initial and head-final structures.[37]
Tense phrases and aspect phrases are head-initial, since aspect markers (such as tó and nɔ̀ above) and tense markers (such as the future marker ná in the following example; this does not apply to tense marked by verb inflection) come before the verb phrase.[38]
dàwé
man
lɔ̀
DET
ná
FUT
xɔ̀
buy
kɛ̀kɛ́
bicycle
dàwé lɔ̀ ná xɔ̀ kɛ̀kɛ́
man DET FUT buy bicycle
"The man will buy a bicycle."
Gbe noun phrases are typically head-final, as in this example:[39]
Kɔ̀kú
Koku
sín
CASE
ɖìdè
sketch
lɛ̀
PL
Kɔ̀kú sín ɖìdè lɛ̀
KokuCASEsketch PL
"sketches of Koku"
In the following example of an adjective phrase, Gbe follows a head-initial pattern, as the headyùprecedes theintensifiertàùú.[40]
àǔn
dog
yù
black
tàùú
INT
àǔn yù tàùú
dog black INT
"really black dogs"
Gbe adpositional phrases are head-initial, with prepositions preceding their complement:[41]
Kòfi
Kofi
zé
take-PERF
kwɛ́
money
xlán
to
Àsíbá
Asiba
Kòfi zé kwɛ́ xlán Àsíbá
Kofi take-PERF money to Asiba
"Kofi sent money to Asiba."
Determiner phrases, however, are head-final:[42]
Asíbá
Asiba
xɔ̀
buy-PERF
àvɔ̀
cloth
àmàmú
green
màtàn-màtàn
odd
ɖé
DEF
Asíbá xɔ̀ àvɔ̀ àmàmú màtàn-màtàn ɖé
Asiba buy-PERF cloth green odd DEF
"Asiba bought a specific ugly green cloth"
Complementizer phrases are head-initial:[43]
ɖé
that
Dòsà
Dosa
gbá
build-PERF
xwé
house
ɔ̀
DEF
ɔ̀
DET
ɖé Dòsà gbá xwé ɔ̀ ɔ̀
that Dosa build-PERF house DEF DET
"that Dosa built the house"
The idea that syntactic structures reduce to binary relations was introduced byLucien Tesnièrein 1959 within the framework ofdependency theory, which was further developed in the 1960s. Tesnière distinguished two structures that differ in the placement of the structurally governing element (head):[44]centripetal structures, in which heads precede theirdependents, andcentrifugal structures, in which heads follow their dependents. Dependents here may includecomplements,adjuncts, andspecifiers.
Joseph Greenberg, who worked in the field oflanguage typology, put forward an implicational theory ofword order, whereby:[45]
The first set of properties makes heads come at the start of their phrases, while the second set makes heads come at the end. However, it has been claimed that many languages (such as Basque) do not fulfill the above conditions, and that Greenberg's theory fails to predict the exceptions.[46]
Winfred P. Lehmann, expanding upon Greenberg's theory, proposed aFundamental Principle of Placement (FPP)in 1973. The FPP states that the order of object and verb relative to each other in a language determines other features of that language's typology, beyond the features that Greenberg identified.
Lehmann also believed that the subject is not a primary element of a sentence, and that the traditional six-order typology of languages should be reduced to just two, VO and OV, based on head-directionality alone. Thus, for example, SVO and VSO would be considered the same type in Lehmann's classification system.
Noam Chomsky'sPrinciples and Parameters theoryin the 1980s[48]introduced the idea that a small number of innate principles are common to every human language (e.g. phrases are oriented around heads), and that these general principles are subject to parametric variation (e.g. the order of heads and other phrasal components may differ). In this theory, the dependency relation between heads, complements, specifiers, and adjuncts is regulated byX-bar theory, proposed by Jackendoff[49]in the 1970s. The complement is sister to the head, and they can be ordered in one of two ways. A head-complement order is called ahead-initial structure, while a complement-head order is called ahead-final structure. These are special cases of Tesnière's centripetal and centrifugal structures, since here only complements are considered, whereas Tesnière considered all types of dependents.
In the principles and parameters theory, a head-directionality parameter is proposed as a way ofclassifying languages. A language which has head-initial structures is considered to be ahead-initial language, and one which has head-final structures is considered to be ahead-final language. It is found, however, that very few, if any, languages are entirely one direction or the other. Linguists have come up with a number of theories to explain the inconsistencies, sometimes positing a more consistentunderlyingorder, with the phenomenon of phrasalmovementbeing used to explain the surface deviations.
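The effect of the parameter can be pictured with a toy sketch in Python (a deliberate simplification, not a linguistic formalism): each phrase carries a head-directionality setting that decides whether its head is linearized before or after its complements.

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Phrase:
    head: str
    complements: List[Union[str, "Phrase"]]
    head_initial: bool  # the head-directionality setting for this phrase

def linearize(p: Union[str, Phrase]) -> List[str]:
    """Produce surface word order from the head-directionality setting."""
    if isinstance(p, str):
        return [p]
    comps = [w for c in p.complements for w in linearize(c)]
    return [p.head] + comps if p.head_initial else comps + [p.head]

# English-like, head-initial VP: [VP eat [DP an apple]]
print(linearize(Phrase("eat", [Phrase("an", ["apple"], True)], True)))  # ['eat', 'an', 'apple']

# Japanese-like, head-final VP: [VP [DP ringo-o] tabe]
print(linearize(Phrase("tabe", ["ringo-o"], False)))                    # ['ringo-o', 'tabe']
```

Mixed languages such as German or Gbe would correspond, in this toy picture, to settings that differ from one phrase type to another.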
According to the Antisymmetry theory proposed by Richard S. Kayne, there is no head-directionality parameter as such: it is claimed that at an underlying level, all languages are head-initial. In fact, it is argued that all languages have the underlying order Specifier-Head-Complement. Deviations from this order are accounted for by different syntactic movements applied by languages. Kayne argues that a theory that allows both directionalities would imply an absence of asymmetries between languages, whereas in fact languages fail to be symmetrical in many respects. Kayne argues using the concept of a probe-goal search (based on the ideas of the Minimalist program), whereby a head acts as a probe and looks for a goal, namely its complement. Kayne proposes that the direction of the probe-goal search must share the direction of language parsing and production.[50] Parsing and production proceed in a left-to-right direction: the beginning of a sentence is heard or spoken first, and its end is heard or spoken last. This implies (according to the theory) an ordering whereby the probe comes before the goal, i.e. the head precedes its complement.
Some linguists have rejected the conclusions of the Antisymmetry approach. Some have pointed out that in predominantly head-final languages such asJapaneseandBasque, the change from an underlying head-initial form to a largely head-final surface form would involve complex and massive leftward movement, which is not in accordance with the ideal of grammatical simplicity.[46]Some take a "surface true" viewpoint: that analysis of head direction must take place at the level ofsurface derivations, or even thePhonetic Form(PF), i.e. the order in which sentences are pronounced in natural speech. This rejects the idea of an underlying ordering which is then subject to movement, as posited in Antisymmetry and in certain other approaches. It has been argued that a head parameter must only reside at PF, as it is unmaintainable in its original form as a structural parameter.[51]
Some linguists have provided evidence which may be taken to support Kayne's scheme, such as Lin,[52]who considered Standard Chinese sentences with thesentence-final particlele. Certain restrictions on movement from within verb phrases preceding such a particle are found (if various other assumptions from the literature are accepted) to be consistent with the idea that the verb phrase has moved from its underlying position after its head (the particlelehere being taken as the head of anaspect phrase). However, Takita (2009) observes that similar restrictions do not apply in Japanese, in spite of its surface head-final character, concluding that if Lin's assumptions are correct, then Japanese must be considered to be a true head-final language, contrary to the main tenet of Antisymmetry.[53]More details about these arguments can be found in theAntisymmetryarticle.
Some scholars, such as Tesnière, argue that there are no absolute head-initial or head-final languages. According to this approach, it is true that some languages have more head-initial or head-final elements than other languages do, but almost any language contains both head-initial and head-final elements. Therefore, rather than being classifiable into fixed categories, languages can be arranged on acontinuumwith head-initial and head-final as the extremes, based on the frequency distribution of theirdependencydirections. This view was supported in a study by Haitao Liu (2010), who investigated 20 languages using a dependencytreebank-based method.[54]For instance, Japanese is close to the head-final end of the continuum, while English and German, which have mixed head-initial and head-final dependencies, are plotted in relatively intermediate positions on the continuum.
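A sketch of the kind of treebank-based measurement this continuum view relies on might look as follows in Python (invented toy data, and not the actual procedure used by Liu 2010): the fraction of dependency links in which the head precedes its dependent places a language between the head-final (near 0) and head-initial (near 1) extremes.

```python
def head_initial_ratio(links):
    """Fraction of dependency links whose head precedes its dependent.
    Each link is a (head_position, dependent_position) pair of 1-based word indices."""
    return sum(1 for head, dep in links if head < dep) / len(links)

# Invented toy treebanks, one sentence each (not real data):
print(head_initial_ratio([(2, 1), (2, 3), (3, 5), (2, 4)]))  # 0.75 -> toward the head-initial end
print(head_initial_ratio([(5, 1), (5, 3), (3, 2), (2, 4)]))  # 0.25 -> toward the head-final end
```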
Polinsky (2012) identified the following five head-directionality sub-types:
She identified a strong correlation between the head-directionality type of a language and the ratio of verbs to nouns in its lexical inventory. Languages with a scarcity of simple verbs tend to be rigidly head-final, as in the case of Japanese, whereas verb-rich languages tend to be head-initial.[55]
|
https://en.wikipedia.org/wiki/Head-directionality_parameter
|
Incomputing,BIOS(/ˈbaɪɒs,-oʊs/,BY-oss, -ohss;Basic Input/Output System, also known as theSystem BIOS,ROM BIOS,BIOS ROMorPC BIOS) is a type offirmwareused to provide runtime services foroperating systemsandprogramsand to performhardwareinitialization during thebootingprocess (power-on startup).[1]The firmware comes pre-installed on the computer'smotherboard.
The name originates from theBasicInput/OutputSystem used in theCP/Moperating system in 1975.[2][3]The BIOS firmware was originallyproprietaryto theIBM PC; it wasreverse engineeredby some companies (such asPhoenix Technologies) looking to create compatible systems. Theinterfaceof that original system serves as ade factostandard.
The BIOS in older PCs initializes and tests the system hardware components (power-on self-testor POST for short), and loads aboot loaderfrom a mass storage device which then initializes akernel. In the era ofDOS, the BIOS providedBIOS interrupt callsfor the keyboard, display, storage, and otherinput/output(I/O) devices that standardized an interface to application programs and the operating system. More recent operating systems do not use the BIOS interrupt calls after startup.[4]
Most BIOS implementations are specifically designed to work with a particular computer or motherboard model, by interfacing with various devices, especially the system chipset. Originally, BIOS firmware was stored in a ROM chip on the PC motherboard. In later computer systems, the BIOS contents are stored on flash memory so they can be rewritten without removing the chip from the motherboard. This allows easy, end-user updates to the BIOS firmware so new features can be added or bugs can be fixed, but it also creates the possibility of the computer becoming infected with BIOS rootkits. Furthermore, a BIOS upgrade that fails could brick the motherboard.
Unified Extensible Firmware Interface(UEFI) is a successor to the PC BIOS, aiming to address its technical limitations.[5]UEFI firmware may include legacy BIOS compatibility to maintain compatibility with operating systems and option cards that do not support UEFI native operation.[6][7][8]Since 2020, all PCs for Intel platforms no longer support legacy BIOS.[9]The last version ofMicrosoft Windowsto officially support running on PCs which use legacy BIOS firmware isWindows 10asWindows 11requires a UEFI-compliant system (except for IoT Enterprise editions of Windows 11 sinceversion 24H2[10]).
The term BIOS (Basic Input/Output System) was created byGary Kildall[11][12]and first appeared in theCP/Moperating system in 1975,[2][3][12][13][14][15]describing the machine-specific part of CP/M loaded during boot time that interfaces directly with thehardware.[3](A CP/M machine usually has only a simpleboot loaderin its ROM.)
Versions ofMS-DOS,PC DOSorDR-DOScontain a file called variously "IO.SYS", "IBMBIO.COM", "IBMBIO.SYS", or "DRBIOS.SYS"; this file is known as the "DOS BIOS" (also known as the "DOS I/O System") and contains the lower-level hardware-specific part of the operating system. Together with the underlying hardware-specific but operating system-independent "System BIOS", which resides inROM, it represents the analogue to the "CP/M BIOS".
The BIOS originallyproprietaryto theIBM PChas beenreverse engineeredby some companies (such asPhoenix Technologies) looking to create compatible systems.
With the introduction of PS/2 machines, IBM divided the System BIOS into real- and protected-mode portions. The real-mode portion was meant to provide backward compatibility with existing operating systems such as DOS, and therefore was named "CBIOS" (for "Compatibility BIOS"), whereas the "ABIOS" (for "Advanced BIOS") provided new interfaces specifically suited for multitasking operating systems such asOS/2.[16]
The BIOS of the originalIBM PCandXThad no interactive user interface. Error codes or messages were displayed on the screen, or coded series of sounds were generated to signal errors when thepower-on self-test(POST) had not proceeded to the point of successfully initializing a video display adapter. Options on the IBM PC and XT were set by switches and jumpers on the main board and onexpansion cards. Starting around the mid-1990s, it became typical for the BIOS ROM to include a"BIOS configuration utility"(BCU[17]) or "BIOS setup utility", accessed at system power-up by a particular key sequence. This program allowed the user to set system configuration options, of the type formerly set usingDIP switches, through an interactive menu system controlled through the keyboard. In the interim period, IBM-compatible PCs—including theIBM AT—held configuration settings in battery-backed RAM and used a bootable configuration program on floppy disk, not in the ROM, to set the configuration options contained in this memory. The floppy disk was supplied with the computer, and if it was lost the system settings could not be changed. The same applied in general to computers with anEISAbus, for which the configuration program was called an EISA Configuration Utility (ECU).
A modernWintel-compatible computer provides a setup routine essentially unchanged in nature from the ROM-resident BIOS setup utilities of the late 1990s; the user can configure hardware options using the keyboard and video display. The modern Wintel machine may store the BIOS configuration settings in flash ROM, perhaps the same flash ROM that holds the BIOS itself.
Peripheral cards such as hard disk drive host bus adapters and video cards have their own firmware, and BIOS extension option ROM code may be a part of the expansion card firmware; that code provides additional capabilities in the BIOS. Code in option ROMs runs before the BIOS boots the operating system from mass storage. These ROMs typically test and initialize hardware, add new BIOS services, or replace existing BIOS services with their own services. For example, a SCSI controller usually has a BIOS extension ROM that adds support for hard drives connected through that controller. An extension ROM could in principle contain an entire operating system, or it could implement an entirely different boot process such as network booting. Operation of an IBM-compatible computer system can be completely changed by removing or inserting an adapter card (or a ROM chip) that contains a BIOS extension ROM.
The motherboard BIOS typically contains code for initializing and bootstrapping integrated display and integrated storage. The initialization process can involve the execution of code related to the device being initialized, for locating the device, verifying the type of device, then establishing base registers, settingpointers, establishing interrupt vector tables,[18]selecting paging modes which are ways for organizing availableregistersin devices, setting default values for accessing software routines related tointerrupts,[19]and setting the device's configuration using default values.[20]In addition, plug-in adapter cards such asSCSI,RAID,network interface cards, andvideo cardsoften include their own BIOS (e.g.Video BIOS), complementing or replacing the system BIOS code for the given component. Even devices built into the motherboard can behave in this way; their option ROMs can be a part of the motherboard BIOS.
An add-in card requires an option ROM if the card is not supported by the motherboard BIOS and the card needs to be initialized or made accessible through BIOS services before the operating system can be loaded (usually this means it is required in the boot process). An additional advantage of ROM on some early PC systems (notably including the IBM PCjr) was that ROM was faster than main system RAM. (On modern systems, the case is very much the reverse of this, and BIOS ROM code is usually copied ("shadowed") into RAM so it will run faster.)
Option ROMs normally reside on adapter cards. However, the original PC, and perhaps also the PC XT, have a spare ROM socket on the motherboard (the "system board" in IBM's terms) into which an option ROM can be inserted, and the four ROMs that contain the BASIC interpreter can also be removed and replaced with custom ROMs which can be option ROMs. TheIBM PCjris unique among PCs in having two ROM cartridge slots on the front. Cartridges in these slots map into the same region of the upper memory area used for option ROMs, and the cartridges can contain option ROM modules that the BIOS would recognize. The cartridges can also contain other types of ROM modules, such as BASIC programs, that are handled differently. One PCjr cartridge can contain several ROM modules of different types, possibly stored together in one ROM chip.
The8086and8088start at physical address FFFF0h.[21]The80286starts at physical address FFFFF0h.[22]The80386and later x86 processors start at physical address FFFFFFF0h.[23][24][25]When the system is initialized, the first instruction of the BIOS appears at that address.
If the system has just been powered up or the reset button was pressed ("cold boot"), the fullpower-on self-test(POST) is run. If Ctrl+Alt+Delete was pressed ("warm boot"), a special flag value stored innonvolatile BIOS memory("CMOS") tested by the BIOS allows bypass of the lengthy POST and memory detection.
The POST identifies, tests and initializes system devices such as theCPU,chipset,RAM,motherboard,video card,keyboard,mouse,hard disk drive,optical disc driveand otherhardware, includingintegrated peripherals.
Early IBM PCs had a routine in the POST that would download a program into RAM through the keyboard port and run it.[26][27]This feature was intended for factory test or diagnostic purposes.
After the motherboard BIOS completes its POST, most BIOS versions search for option ROM modules, also called BIOS extension ROMs, and execute them. The motherboard BIOS scans for extension ROMs in a portion of the "upper memory area" (the part of the x86 real-mode address space at and above address 0xA0000) and runs each ROM found, in order. To discover memory-mapped option ROMs, a BIOS implementation scans the real-mode address space from 0x0C0000 to 0x0F0000 on 2 KB (2,048-byte) boundaries, looking for a two-byte ROM signature: 0x55 followed by 0xAA. In a valid expansion ROM, this signature is followed by a single byte indicating the number of 512-byte blocks the expansion ROM occupies in real memory, and the next byte marks the option ROM's entry point (also known as its "entry offset"). If the ROM has a valid checksum, the BIOS transfers control to the entry address, which in a normal BIOS extension ROM should be the beginning of the extension's initialization routine.
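The scan just described can be pictured with a short Python sketch over a saved dump of that memory region (illustrative only, not BIOS code; the dump file name is hypothetical, and the checksum rule used, all bytes of the ROM summing to zero modulo 256, is the conventional option ROM checksum).

```python
REGION_BASE = 0x0C0000  # start of the scanned window, as described above
STEP = 0x800            # 2 KB boundaries

def find_option_roms(region: bytes):
    """Scan a dump of the option ROM area for valid expansion ROM headers."""
    roms = []
    for off in range(0, len(region), STEP):
        if region[off:off + 2] != b"\x55\xAA":          # two-byte ROM signature
            continue
        size = region[off + 2] * 512                    # length in 512-byte blocks
        image = region[off:off + size]
        if size == 0 or sum(image) % 256 != 0:          # conventional checksum: bytes sum to 0 mod 256
            continue
        roms.append({"address": REGION_BASE + off,      # where the ROM was found
                     "size": size,
                     "entry": REGION_BASE + off + 3})   # initialization entry point at offset 3
    return roms

# Example use with a previously saved dump (hypothetical file name):
# with open("upper_memory.bin", "rb") as f:
#     print(find_option_roms(f.read()))
```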
At this point, the extension ROM code takes over, typically testing and initializing the hardware it controls and registeringinterrupt vectorsfor use by post-boot applications. It may use BIOS services (including those provided by previously initialized option ROMs) to provide a user configuration interface, to display diagnostic information, or to do anything else that it requires.
An option ROM should normally return to the BIOS after completing its initialization process. Once (and if) an option ROM returns, the BIOS continues searching for more option ROMs, calling each as it is found, until the entire option ROM area in the memory space has been scanned. It is possible that an option ROM will not return to BIOS, pre-empting the BIOS's boot sequence altogether.
After the POST completes and, in a BIOS that supports option ROMs, after the option ROM scan is completed and all detectedROMmodules with validchecksumshave been called, the BIOS callsinterrupt 19hto start boot processing. Post-boot, programs loaded can also call interrupt 19h to reboot the system, but they must be careful to disable interrupts and other asynchronous hardware processes that may interfere with the BIOS rebooting process, or else the system may hang or crash while it is rebooting.
When interrupt 19h is called, the BIOS attempts to locateboot loadersoftware on a "boot device", such as ahard disk, afloppy disk,CD, orDVD. It loads and executes the first bootsoftwareit finds, giving it control of the PC.[28]
The BIOS uses the boot devices set inNonvolatile BIOS memory(CMOS), or, in the earliest PCs,DIP switches. The BIOS checks each device in order to see if it is bootable by attempting to load the first sector (boot sector). If the sector cannot be read, the BIOS proceeds to the next device. If the sector is read successfully, some BIOSes will also check for the boot sector signature 0x55 0xAA in the last two bytes of the sector (which is 512 bytes long), before accepting a boot sector and considering the device bootable.[b]
When a bootable device is found, the BIOS transfers control to the loaded sector. The BIOS does not interpret the contents of the boot sector other than to possibly check for the boot sector signature in the last two bytes. Interpretation of data structures like partition tables and BIOS Parameter Blocks is done by the boot program in the boot sector itself or by other programs loaded through the boot process.
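A small sketch of the boot-sector test described above, in Python (the disk image path is a placeholder): read the first 512-byte sector and check for the 0x55 0xAA signature in its last two bytes.

```python
def looks_bootable(path: str) -> bool:
    """Return True if the first sector reads fully and ends with the boot signature."""
    with open(path, "rb") as f:
        sector = f.read(512)
    return len(sector) == 512 and sector[510:512] == b"\x55\xAA"

print(looks_bootable("disk.img"))  # "disk.img" is a placeholder image file
```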
A non-disk device such as anetwork adapterattempts booting by a procedure that is defined by itsoption ROMor the equivalent integrated into the motherboard BIOS ROM. As such, option ROMs may also influence or supplant the boot process defined by the motherboard BIOS ROM.
With theEl Torito optical media boot standard, the optical drive actually emulates a 3.5" high-density floppy disk to the BIOS for boot purposes. Reading the "first sector" of a CD-ROM or DVD-ROM is not a simply defined operation like it is on a floppy disk or a hard disk. Furthermore, the complexity of the medium makes it difficult to write a useful boot program in one sector. The bootable virtual floppy disk can contain software that provides access to the optical medium in its native format.
If an expansion ROM wishes to change the way the system boots (such as from a network device or a SCSI adapter) in a cooperative way, it can use theBIOS Boot Specification(BBS)APIto register its ability to do so. Once the expansion ROMs have registered using the BBS APIs, the user can select among the available boot options from within the BIOS's user interface. This is why most BBS compliant PC BIOS implementations will not allow the user to enter the BIOS's user interface until the expansion ROMs have finished executing and registering themselves with the BBS API.[citation needed]
Also, if an expansion ROM wishes to change the way the system boots unilaterally, it can simply hook interrupt 19h or other interrupts normally called from interrupt 19h, such as interrupt 13h, the BIOS disk service, to intercept the BIOS boot process. Then it can replace the BIOS boot process with one of its own, or it can merely modify the boot sequence by inserting its own boot actions into it, by preventing the BIOS from detecting certain devices as bootable, or both. Before the BIOS Boot Specification was promulgated, this was the only way for expansion ROMs to implement boot capability for devices not supported for booting by the native BIOS of the motherboard.[citation needed]
The user can select the boot priority implemented by the BIOS. For example, most computers have a hard disk that is bootable, but sometimes there is a removable-media drive that has higher boot priority, so the user can cause a removable disk to be booted.
In most modern BIOSes, the boot priority order can be configured by the user. In older BIOSes, limited boot priority options are selectable; in the earliest BIOSes, a fixed priority scheme was implemented, with floppy disk drives first, fixed disks (i.e., hard disks) second, and typically no other boot devices supported, subject to modification of these rules by installed option ROMs. The BIOS in an early PC also usually would only boot from the first floppy disk drive or the first hard disk drive, even if there were two drives installed.
On the originalIBM PCand XT, if no bootable disk was found, the BIOS would try to startROM BASICwith the interrupt call tointerrupt 18h. Since few programs used BASIC in ROM, clone PC makers left it out; then a computer that failed to boot from a disk would display "No ROM BASIC" and halt (in response to interrupt 18h).
Later computers would display a message like "No bootable disk found"; some would prompt for a disk to be inserted and a key to be pressed to retry the boot process. A modern BIOS may display nothing or may automatically enter the BIOS configuration utility when the boot process fails.
The environment for the boot program is very simple: the CPU is in real mode and the general-purpose and segment registers are undefined, except SS, SP, CS, and DL. CS:IP always points to physical address0x07C00. What values CS and IP actually have is not well defined. Some BIOSes use a CS:IP of0x0000:0x7C00while others may use0x07C0:0x0000.[29]Because boot programs are always loaded at this fixed address, there is no need for a boot program to be relocatable. DL may contain the drive number, as used withinterrupt 13h, of the boot device. SS:SP points to a valid stack that is presumably large enough to support hardware interrupts, but otherwise SS and SP are undefined. (A stack must be already set up in order for interrupts to be serviced, and interrupts must be enabled in order for the system timer-tick interrupt, which BIOS always uses at least to maintain the time-of-day count and which it initializes during POST, to be active and for the keyboard to work. The keyboard works even if the BIOS keyboard service is not called; keystrokes are received and placed in the 15-character type-ahead buffer maintained by BIOS.) The boot program must set up its own stack, because the size of the stack set up by BIOS is unknown and its location is likewise variable; although the boot program can investigate the default stack by examining SS:SP, it is easier and shorter to just unconditionally set up a new stack.[30]
At boot time, all BIOS services are available, and the memory below address0x00400contains theinterrupt vector table. BIOS POST has initialized the system timers, interrupt controller(s), DMA controller(s), and other motherboard/chipset hardware as necessary to bring all BIOS services to ready status. DRAM refresh for all system DRAM in conventional memory and extended memory, but not necessarily expanded memory, has been set up and is running. Theinterrupt vectorscorresponding to the BIOS interrupts have been set to point at the appropriate entry points in the BIOS, hardware interrupt vectors for devices initialized by the BIOS have been set to point to the BIOS-provided ISRs, and some other interrupts, including ones that BIOS generates for programs to hook, have been set to a default dummy ISR that immediately returns. The BIOS maintains a reserved block of system RAM at addresses0x00400–0x004FFwith various parameters initialized during the POST. All memory at and above address0x00500can be used by the boot program; it may even overwrite itself.[31][32]
The BIOS ROM is customized to the particular manufacturer's hardware, allowing low-level services (such as reading a keystroke or writing a sector of data to diskette) to be provided in a standardized way to programs, including operating systems. For example, an IBM PC might have either a monochrome or a color display adapter (using different display memory addresses and hardware), but a single, standard, BIOSsystem callmay be invoked to display a character at a specified position on the screen intext modeorgraphics mode.
The BIOS provides a smalllibraryof basic input/output functions to operate peripherals (such as the keyboard, rudimentary text and graphics display functions and so forth). When using MS-DOS, BIOS services could be accessed by an application program (or by MS-DOS) by executing an interrupt 13hinterrupt instructionto access disk functions, or by executing one of a number of other documentedBIOS interrupt callsto accessvideo display,keyboard, cassette, and other device functions.
Operating systems and executive software that are designed to supersede this basic firmware functionality provide replacement software interfaces to application software. Applications can also provide these services to themselves. This began even in the 1980s under MS-DOS, when programmers observed that using the BIOS video services for graphics display was very slow. To increase the speed of screen output, many programs bypassed the BIOS and programmed the video display hardware directly. Other graphics programmers, particularly but not exclusively in the demoscene, observed that there were technical capabilities of the PC display adapters that were not supported by the IBM BIOS and could not be taken advantage of without circumventing it. Since the AT-compatible BIOS ran in Intel real mode, operating systems that ran in protected mode on 286 and later processors required hardware device drivers compatible with protected mode operation to replace BIOS services.
In modern PCs running modernoperating systems(such asWindowsandLinux) theBIOS interrupt callsare used only during booting and initial loading of operating systems. Before the operating system's first graphical screen is displayed, input and output are typically handled through BIOS. A boot menu such as the textual menu of Windows, which allows users to choose an operating system to boot, to boot into thesafe mode, or to use the last known good configuration, is displayed through BIOS and receives keyboard input through BIOS.[4]
Many modern PCs can still boot and run legacy operating systems such as MS-DOS or DR-DOS that rely heavily on BIOS for their console and disk I/O, providing that the system has a BIOS, or a CSM-capable UEFI firmware.
Intel processors have had reprogrammable microcode since the P6 microarchitecture.[33][34][35] AMD processors have had reprogrammable microcode since the K7 microarchitecture. The BIOS contains patches to the processor microcode that fix errors in the initial processor microcode; the microcode is loaded into the processor's SRAM, so reprogramming is not persistent, and loading of microcode updates is performed each time the system is powered up. Without reprogrammable microcode, an expensive processor swap would be required;[36] for example, the Pentium FDIV bug became an expensive fiasco for Intel as it required a product recall because the original Pentium processor's defective microcode could not be reprogrammed. Operating systems can also update main processor microcode.[37][38]
Some BIOSes contain a software licensing description table (SLIC), a digital signature placed inside the BIOS by theoriginal equipment manufacturer(OEM), for exampleDell. The SLIC is inserted into the ACPI data table and contains no active code.[39][40]
Computer manufacturers that distribute OEM versions of Microsoft Windows and Microsoft application software can use the SLIC to authenticate licensing to the OEM Windows Installation disk and systemrecovery disccontaining Windows software. Systems with a SLIC can be preactivated with an OEM product key, and they verify an XML formatted OEM certificate against the SLIC in the BIOS as a means of self-activating (seeSystem Locked Preinstallation, SLP). If a user performs a fresh install of Windows, they will need to have possession of both the OEM key (either SLP or COA) and the digital certificate for their SLIC in order to bypass activation.[39]This can be achieved if the user performs a restore using a pre-customised image provided by the OEM. Power users can copy the necessary certificate files from the OEM image, decode the SLP product key, then perform SLP activation manually.
Some BIOS implementations allowoverclocking, an action in which theCPUis adjusted to a higherclock ratethan its manufacturer rating for guaranteed capability. Overclocking may, however, seriously compromise system reliability in insufficiently cooled computers and generally shorten component lifespan. Overclocking, when incorrectly performed, may also cause components to overheat so quickly that they mechanically destroy themselves.[41]
Some olderoperating systems, for exampleMS-DOS, rely on the BIOS to carry out most input/output tasks within the PC.[42]
Callingreal modeBIOS services directly is inefficient forprotected mode(andlong mode) operating systems.BIOS interrupt callsare not used by modern multitasking operating systems after they initially load.
In the 1990s, BIOS provided someprotected modeinterfaces forMicrosoft WindowsandUnix-likeoperating systems, such asAdvanced Power Management(APM),Plug and Play BIOS,Desktop Management Interface(DMI),VESA BIOS Extensions(VBE),e820andMultiProcessor Specification(MPS). Starting from the year 2000, most BIOSes provideACPI,SMBIOS,VBEande820interfaces for modern operating systems.[43][44][45][46][47]
Afteroperating systemsload, theSystem Management Modecode is still running in SMRAM. Since 2010, BIOS technology is in a transitional process towardUEFI.[5]
Historically, the BIOS in the IBM PC and XT had no built-in user interface. The BIOS versions in earlier PCs (XT-class) were not software configurable; instead, users set the options viaDIP switcheson the motherboard. Later computers, including most IBM-compatibles with 80286 CPUs, had a battery-backednonvolatile BIOS memory(CMOS RAM chip) that held BIOS settings.[48]These settings, such as video-adapter type, memory size, and hard-disk parameters, could only be configured by running a configuration program from a disk, not built into the ROM. A special "reference diskette" was inserted in anIBM ATto configure settings such as memory size.[49]
Early BIOS versions did not have passwords or boot-device selection options. The BIOS was hard-coded to boot from the first floppy drive, or, if that failed, the first hard disk. Access control in early AT-class machines was by a physical keylock switch (which was not hard to defeat if the computer case could be opened). Anyone who could switch on the computer could boot it.[citation needed]
Later, 386-class computers started integrating the BIOS setup utility in the ROM itself, alongside the BIOS code; these computers usually boot into the BIOS setup utility if a certain key or key combination is pressed, otherwise the BIOS POST and boot process are executed.
A modern BIOS setup utility has a text user interface (TUI) or graphical user interface (GUI) accessed by pressing a certain key on the keyboard when the PC starts. Usually, the key is advertised for a short time during the early startup, for example "Press DEL to enter Setup".
The actual key depends on the specific hardware. The settings key is most often Delete (Acer, ASRock, Asus PC, ECS, Gigabyte, MSI, Zotac) or F2 (Asus motherboard, Dell, Lenovo laptop, Origin PC, Samsung, Toshiba), but it can also be F1 (Lenovo desktop) or F10 (HP).[50]
The features present in the BIOS setup utility vary by manufacturer, but typically include setting the boot device order, configuring hardware parameters, setting the system date and time, and setting supervisor or user passwords.
A modern BIOS setup screen often features aPC Health Statusor aHardware Monitoringtab, which directly interfaces with a Hardware Monitor chip of the mainboard.[51]This makes it possible to monitor CPU andchassistemperature, the voltage provided by thepower supply unit, as well as monitor andcontrol the speed of the fansconnected to the motherboard.
Once the system is booted, hardware monitoring andcomputer fan controlis normally done directly by the Hardware Monitor chip itself, which can be a separate chip, interfaced throughI²CorSMBus, or come as a part of aSuper I/Osolution, interfaced throughIndustry Standard Architecture(ISA) orLow Pin Count(LPC).[52]Some operating systems, likeNetBSDwithenvsysandOpenBSDwith sysctlhw.sensors, feature integrated interfacing with hardware monitors.
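Where the operating system exposes the Hardware Monitor chip's readings through a kernel driver, they can also be read from user space. The following is a minimal sketch assuming a Linux system with suitable sensor drivers and the third-party psutil library; the chip and label names printed depend entirely on the board and driver:

```python
import psutil  # third-party; pip install psutil

# Temperatures reported by the Hardware Monitor chip's kernel driver,
# grouped by driver name (e.g. "coretemp" or a Super I/O driver; names vary by board).
for chip, readings in psutil.sensors_temperatures().items():
    for r in readings:
        print(f"{chip}/{r.label or 'temp'}: {r.current:.1f} C")

# Fan tachometer readings, if the driver exposes them.
for chip, fans in psutil.sensors_fans().items():
    for f in fans:
        print(f"{chip}/{f.label or 'fan'}: {f.current} RPM")
```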
However, in some circumstances, the BIOS also provides the underlying information about hardware monitoring throughACPI, in which case, the operating system may be using ACPI to perform hardware monitoring.[53][54]
In modern PCs the BIOS is stored in rewritable EEPROM[55] or NOR flash memory,[56] allowing the contents to be replaced and modified. This rewriting of the contents is sometimes termed flashing. It can be done by a special program, usually provided by the system's manufacturer, or at POST, with a BIOS image on a hard drive or USB flash drive. A file containing such contents is sometimes termed "a BIOS image". A BIOS might be reflashed in order to upgrade to a newer version to fix bugs, provide improved performance, or support newer hardware. Some computers also support updating the BIOS via an update floppy disk or a special partition on the hard drive.[57]
The original IBM PC BIOS (and cassette BASIC) was stored on mask-programmedread-only memory(ROM) chips in sockets on the motherboard. ROMs could be replaced,[58]but not altered, by users. To allow for updates, many compatible computers used re-programmable BIOS memory devices such asEPROM,EEPROMand laterflash memory(usuallyNOR flash) devices. According to Robert Braver, the president of the BIOS manufacturer Micro Firmware,Flash BIOSchips became common around 1995 because the electrically erasable PROM (EEPROM) chips are cheaper and easier to program than standardultravioleterasable PROM (EPROM) chips. Flash chips are programmed (and re-programmed) in-circuit, while EPROM chips need to be removed from the motherboard for re-programming.[59]BIOS versions are upgraded to take advantage of newer versions of hardware and to correct bugs in previous revisions of BIOSes.[60]
Beginning with the IBM AT, PCs supported a hardware clock settable through BIOS. It had a century bit which allowed for manually changing the century when the year 2000 happened. Most BIOS revisions created in 1995 and nearly all BIOS revisions in 1997 supportedthe year 2000by setting the century bit automatically when the clock rolled past midnight, 31 December 1999.[61]
The first flash chips were attached to theISA bus. Starting in 1998, the BIOS flash moved to theLPCbus, following a new standard implementation known as "firmware hub" (FWH). In 2005, the BIOS flash memory moved to theSPIbus.[62]
The size of the BIOS, and the capacity of the ROM, EEPROM, or other media it may be stored on, has increased over time as new features have been added to the code; BIOS versions now exist with sizes up to 32 megabytes. For contrast, the original IBM PC BIOS was contained in an 8 KB mask ROM. Some modern motherboards include even larger NAND flash memory ICs on board, capable of storing whole compact operating systems such as some Linux distributions. For example, some ASUS notebooks included Splashtop OS embedded into their NAND flash memory ICs.[63] However, the idea of including an operating system along with the BIOS in the ROM of a PC is not new; in the 1980s, Microsoft offered a ROM option for MS-DOS, and it was included in the ROMs of some PC clones such as the Tandy 1000 HX.
Another type of firmware chip was found on the IBM PC AT and early compatibles. In the AT, the keyboard interface was controlled by a microcontroller with its own programmable memory. On the IBM AT, this was a 40-pin socketed device, while some manufacturers used an EPROM-based version of this chip. This controller was also assigned the A20 gate function to manage memory above the one-megabyte range; occasionally an upgrade of this "keyboard BIOS" was necessary to take advantage of software that could use upper memory.[citation needed]
The BIOS may contain components such as theMemory Reference Code(MRC), which is responsible for the memory initialization (e.g.SPDandmemory timingsinitialization).[64]: 8[65]
Modern BIOS[66]includesIntel Management EngineorAMD Platform Security Processorfirmware.
IBM published the entire listings of the BIOS for its original PC, PC XT, PC AT, and other contemporary PC models, in an appendix of theIBM PC Technical Reference Manualfor each machine type. The effect of the publication of the BIOS listings is that anyone can see exactly what a definitive BIOS does and how it does it.
In May 1984,Phoenix Software Associatesreleased its first ROM-BIOS. This BIOS enabled OEMs to build essentially fully compatible clones without having to reverse-engineer the IBM PC BIOS themselves, as Compaq had done for thePortable; it also helped fuel the growth in the PC-compatibles industry and sales of non-IBM versions of DOS.[69]The firstAmerican Megatrends(AMI) BIOS was released in 1986.
New standards grafted onto the BIOS are usually without complete public documentation or any BIOS listings. As a result, it is not as easy to learn the intimate details about the many non-IBM additions to BIOS as about the core BIOS services.
Many PC motherboard suppliers licensed the BIOS "core" and toolkit from a commercial third party, known as an "independent BIOS vendor" or IBV. The motherboard manufacturer then customized this BIOS to suit its own hardware. For this reason, updated BIOSes are normally obtained directly from the motherboard manufacturer. Major IBVs includedAmerican Megatrends(AMI),Insyde Software,Phoenix Technologies, and Byosoft. Microid Research andAward Softwarewere acquired byPhoenix Technologiesin 1998; Phoenix later phased out the Award brand name (although Award Software is still credited in newer AwardBIOS versions and in UEFI firmwares).[when?]General Software, which was also acquired by Phoenix in 2007, sold BIOS for embedded systems based on Intel processors.
SeaBIOSis an open-source BIOS implementation.
The open-source community has increased its efforts to develop a replacement for proprietary BIOSes and their future incarnations with open-source counterparts. Open Firmware was an early attempt to make an open specification for boot firmware. It was initially endorsed by IEEE in its IEEE 1275-1994 standard but was withdrawn in 2005.[70][71] Later examples include the OpenBIOS, coreboot and libreboot projects. AMD provided product specifications for some chipsets using coreboot, and Google is sponsoring the project. Motherboard manufacturer Tyan offers coreboot next to the standard BIOS with their Opteron line of motherboards.
EEPROM and flash memory chips are advantageous because they can be easily updated by the user; it is customary for hardware manufacturers to issue BIOS updates to upgrade their products, improve compatibility and remove bugs. However, this advantage carries the risk that an improperly executed or aborted BIOS update could render the computer or device unusable. To avoid these situations, more recent BIOSes use a "boot block", a portion of the BIOS which runs first and must be updated separately. This code verifies that the rest of the BIOS is intact (using hash checksums or other methods) before transferring control to it. If the boot block detects any corruption in the main BIOS, it will typically warn the user that a recovery process must be initiated by booting from removable media (floppy, CD or USB flash drive) so the user can try flashing the BIOS again. Some motherboards have a backup BIOS (sometimes referred to as DualBIOS boards) to recover from BIOS corruption.
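The integrity check performed by a boot block can be as simple as comparing a cryptographic digest of the main BIOS region against a stored value. The following Python sketch only illustrates the idea; the file name, region layout and use of SHA-256 are assumptions, and real boot blocks implement this in firmware with vendor-specific layouts and, increasingly, cryptographic signatures:

```python
import hashlib

BLOCK_SIZE = 0x10000   # hypothetical: the last 64 KiB of the image holds the boot block
DIGEST_SIZE = 32       # SHA-256 digest length

def main_bios_is_intact(image: bytes) -> bool:
    """Compare the main BIOS region against a digest stored just before the boot block."""
    main_region = image[: -(BLOCK_SIZE + DIGEST_SIZE)]
    stored_digest = image[-(BLOCK_SIZE + DIGEST_SIZE): -BLOCK_SIZE]
    return hashlib.sha256(main_region).digest() == stored_digest

with open("bios_image.bin", "rb") as f:   # hypothetical dump of the flash contents
    image = f.read()

if main_bios_is_intact(image):
    print("Main BIOS verified; transferring control.")
else:
    print("Corruption detected; starting recovery from removable media.")
```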
There are at least five known viruses that attack the BIOS, two of which were created for demonstration purposes. The first one found in the wild was Mebromi, targeting Chinese users.
The first BIOS virus was BIOS Meningitis, which, instead of erasing BIOS chips, infected them. BIOS Meningitis was relatively harmless compared to a virus like CIH.
The second BIOS virus wasCIH, also known as the "Chernobyl Virus", which was able to erase flash ROM BIOS content on compatible chipsets. CIH appeared in mid-1998 and became active in April 1999. Often, infected computers could no longer boot, and people had to remove the flash ROM IC from the motherboard and reprogram it. CIH targeted the then-widespread Intel i430TX motherboard chipset and took advantage of the fact that theWindows 9xoperating systems, also widespread at the time, allowed direct hardware access to all programs.
Modern systems are not vulnerable to CIH because a variety of chipsets are now used that are incompatible with the Intel i430TX chipset, along with other flash ROM IC types. There is also extra protection from accidental BIOS rewrites in the form of boot blocks, which are protected from accidental overwrite, and dual- or quad-BIOS equipped systems which may, in the event of a crash, fall back to a backup BIOS. Also, all modern operating systems such as FreeBSD, Linux, macOS, and Windows NT-based Windows OSes like Windows 2000, Windows XP and newer do not allow user-mode programs direct hardware access, mediating it through a hardware abstraction layer.[72]
As a result, as of 2008, CIH has become essentially harmless, at worst causing annoyance by infecting executable files and triggering antivirus software. Other BIOS viruses remain possible, however;[73] since most Windows home users without Windows Vista/7's UAC run all applications with administrative privileges, a modern CIH-like virus could in principle still gain access to hardware without first using an exploit.[citation needed] The operating system OpenBSD prevents all users from having this access, and the grsecurity patch for the Linux kernel also prevents this direct hardware access by default, the difference being that an attacker would require a much more difficult kernel-level exploit or a reboot of the machine.[citation needed]
The third BIOS virus was a technique presented by John Heasman, principal security consultant for UK-based Next-Generation Security Software. In 2006, at the Black Hat Security Conference, he showed how to elevate privileges and read physical memory, using malicious procedures that replaced normalACPIfunctions stored in flash memory.[74]
The fourth BIOS virus was a technique called "Persistent BIOS infection." It appeared in 2009 at the CanSecWest Security Conference in Vancouver, and at the SyScan Security Conference in Singapore. ResearchersAnibal Sacco[75]and Alfredo Ortega, from Core Security Technologies, demonstrated how to insert malicious code into the decompression routines in the BIOS, allowing for nearly full control of the PC at start-up, even before the operating system is booted. The proof-of-concept does not exploit a flaw in the BIOS implementation, but only involves the normal BIOS flashing procedures. Thus, it requires physical access to the machine, or for the user to be root. Despite these requirements, Ortega underlined the profound implications of his and Sacco's discovery: "We can patch a driver to drop a fully workingrootkit. We even have a little code that can remove or disable antivirus."[76]
Mebromi is atrojanwhich targets computers withAwardBIOS,Microsoft Windows, andantivirus softwarefrom two Chinese companies: Rising Antivirus and Jiangmin KV Antivirus.[77][78][79]Mebromi installs a rootkit which infects theMaster boot record.
In a December 2013 interview with 60 Minutes, Deborah Plunkett, Information Assurance Director for the US National Security Agency, claimed the NSA had uncovered and thwarted a possible BIOS attack by a foreign nation state, targeting the US financial system.[80] The program cited anonymous sources alleging it was a Chinese plot.[80] However, follow-up articles in The Guardian,[81] The Atlantic,[82] Wired[83] and The Register[84] refuted the NSA's claims.
Newer Intel platforms have Intel Boot Guard (IBG) technology enabled; this technology checks the BIOS digital signature at startup, with the IBG public key fused into the PCH. End users cannot disable this function.
Unified Extensible Firmware Interface (UEFI) supplements the BIOS in many new machines. Initially written for the Intel Itanium architecture, UEFI is now available for x86 and Arm platforms; the specification development is driven by the Unified EFI Forum, an industry special interest group. EFI booting has been supported only in Microsoft Windows versions supporting GPT,[85] the Linux kernel 2.6.1 and later, and macOS on Intel-based Macs.[86] As of 2014, new PC hardware predominantly ships with UEFI firmware. The architecture of the rootkit safeguard can also prevent the system from running the user's own software changes, which makes UEFI controversial as a legacy BIOS replacement in the open hardware community. Also, Windows 11 requires UEFI to boot,[87] with the exception of IoT Enterprise editions of Windows 11.[10] UEFI is required for devices shipping with Windows 8[88][89] and above.
After the popularity of UEFI in 2010s, the older BIOS that supportedBIOS interrupt callswas renamed to "legacy BIOS".[citation needed]
Other alternatives to the functionality of the "Legacy BIOS" in the x86 world includecorebootandlibreboot.
Some servers and workstations use a platform-independentOpen Firmware(IEEE-1275) based on theForthprogramming language; it is included with Sun'sSPARCcomputers, IBM'sRS/6000line, and otherPowerPCsystems such as theCHRPmotherboards, along with the x86-basedOLPC XO-1.
As of at least 2015,Applehas removed legacy BIOS support from the UEFI monitor inIntel-based Macs. As such, the BIOS utility no longer supports the legacy option, and prints "Legacy mode not supported on this system".
In 2017, Intel announced that it would remove legacy BIOS support by 2020. Since 2019, new Intel platform OEM PCs no longer support the legacy option.[90]
|
https://en.wikipedia.org/wiki/BIOS
|
In computerized business management, single version of the truth (SVOT) is a technical concept describing the data warehousing ideal of having either a single centralised database, or at least a distributed synchronised database, which stores all of an organisation's data in a consistent and non-redundant form. This contrasts with the related concept of single source of truth (SSOT), which refers to the data storage principle of always sourcing a particular piece of information from one place.[citation needed]
In some systems, and in the context of message processing systems (often real-time systems), this term also refers to the goal of establishing a single agreed sequence of messages within a database formed by a particular but arbitrary ordering of records. The key concept is that data combined in a certain sequence is a "truth" which may be analyzed and processed to give particular results. Although the sequence is arbitrary (so another correct but equally arbitrary ordering would ultimately give different results in any analysis), it is desirable to agree that the sequence enshrined in the "single version of the truth" is the version that will be considered "the truth". Any conclusions drawn from analysis of the database are then valid and unarguable, and (in a technical context) the database may be duplicated to a backup environment to ensure a persistent record is kept of the "single version of the truth".
The key point is that when the database is created from an external data source (such as a sequence of trading messages from a stock exchange), an arbitrary selection is made of one possibility from two or more equally valid representations of the input data, but henceforth that decision sets "in stone" one and only one version of the truth.
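As a toy illustration of why fixing the sequence matters, the following Python sketch (with invented trade messages) applies the same set of updates in two different but equally valid orders and arrives at two different final states; declaring one ordering authoritative yields the single version of the truth that downstream analysis and backups must replay:

```python
# Each message sets the last traded price of a symbol; applying the messages in
# a different order gives a different "last price", so one canonical order is fixed.
messages = [
    {"seq_a": 1, "seq_b": 2, "symbol": "XYZ", "price": 101.0},
    {"seq_a": 2, "seq_b": 1, "symbol": "XYZ", "price": 99.5},
]

def replay(msgs, order_key):
    state = {}
    for m in sorted(msgs, key=lambda m: m[order_key]):
        state[m["symbol"]] = m["price"]   # last writer wins
    return state

print(replay(messages, "seq_a"))  # {'XYZ': 99.5}
print(replay(messages, "seq_b"))  # {'XYZ': 101.0}
# Declaring e.g. "seq_a" the agreed sequence makes {'XYZ': 99.5} the single
# version of the truth; a backup environment replays the same sequence.
```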
Critics of SVOT as applied to message sequencing argue that this concept is not scalable. As the world moves towards systems spread over many processing nodes, the effort involved in negotiating a single agreed-upon sequence becomes prohibitive.
However, as Owen Rubel pointed out in his APIWorld talk 'The New API Pattern', in a distributed architecture the SVOT is always the service layer, where input/output (I/O) meet; this is also where the endpoint binding belongs, allowing modularization and better abstraction of the I/O data across the architecture and avoiding an architectural cross-cutting concern.[1]
|
https://en.wikipedia.org/wiki/Single_version_of_the_truth
|
Astationary stateis aquantum statewith allobservablesindependent of time. It is aneigenvectorof theenergy operator(instead of aquantum superpositionof different energies). It is also calledenergy eigenvector,energy eigenstate,energy eigenfunction, orenergyeigenket. It is very similar to the concept ofatomic orbitalandmolecular orbitalin chemistry, with some slight differences explainedbelow.
A stationary state is calledstationarybecause the system remains in the same state as time elapses, in every observable way. For a single-particleHamiltonian, this means that the particle has a constantprobability distributionfor its position, its velocity, itsspin, etc.[1](This is true assuming the particle's environment is also static, i.e. the Hamiltonian is unchanging in time.) Thewavefunctionitself is not stationary: It continually changes its overall complexphase factor, so as to form astanding wave. The oscillation frequency of the standing wave, multiplied by thePlanck constant, is the energy of the state according to thePlanck–Einstein relation.
Stationary states are quantum states that are solutions to the time-independent Schrödinger equation: H^|Ψ⟩=EΨ|Ψ⟩,{\displaystyle {\hat {H}}|\Psi \rangle =E_{\Psi }|\Psi \rangle ,} where H^{\displaystyle {\hat {H}}} is the Hamiltonian operator, |Ψ⟩{\displaystyle |\Psi \rangle } is a stationary state, and EΨ{\displaystyle E_{\Psi }} is the energy of that state.
This is aneigenvalue equation:H^{\displaystyle {\hat {H}}}is alinear operatoron a vector space,|Ψ⟩{\displaystyle |\Psi \rangle }is an eigenvector ofH^{\displaystyle {\hat {H}}}, andEΨ{\displaystyle E_{\Psi }}is its eigenvalue.
If a stationary state|Ψ⟩{\displaystyle |\Psi \rangle }is plugged into the time-dependent Schrödinger equation, the result is[2]iℏ∂∂t|Ψ⟩=EΨ|Ψ⟩.{\displaystyle i\hbar {\frac {\partial }{\partial t}}|\Psi \rangle =E_{\Psi }|\Psi \rangle .}
Assuming thatH^{\displaystyle {\hat {H}}}is time-independent (unchanging in time), this equation holds for any timet. Therefore, this is adifferential equationdescribing how|Ψ⟩{\displaystyle |\Psi \rangle }varies in time. Its solution is|Ψ(t)⟩=e−iEΨt/ℏ|Ψ(0)⟩.{\displaystyle |\Psi (t)\rangle =e^{-iE_{\Psi }t/\hbar }|\Psi (0)\rangle .}
Therefore, a stationary state is astanding wavethat oscillates with an overall complexphase factor, and its oscillationangular frequencyis equal to its energy divided byℏ{\displaystyle \hbar }.
As shown above, a stationary state is not mathematically constant:|Ψ(t)⟩=e−iEΨt/ℏ|Ψ(0)⟩.{\displaystyle |\Psi (t)\rangle =e^{-iE_{\Psi }t/\hbar }|\Psi (0)\rangle .}
However, all observable properties of the state are in fact constant in time. For example, if|Ψ(t)⟩{\displaystyle |\Psi (t)\rangle }represents a simple one-dimensional single-particle wavefunctionΨ(x,t){\displaystyle \Psi (x,t)}, the probability that the particle is at locationxis|Ψ(x,t)|2=|e−iEΨt/ℏΨ(x,0)|2=|e−iEΨt/ℏ|2|Ψ(x,0)|2=|Ψ(x,0)|2,{\displaystyle |\Psi (x,t)|^{2}=\left|e^{-iE_{\Psi }t/\hbar }\Psi (x,0)\right|^{2}=\left|e^{-iE_{\Psi }t/\hbar }\right|^{2}\left|\Psi (x,0)\right|^{2}=\left|\Psi (x,0)\right|^{2},}which is independent of the timet.
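This can also be checked numerically. The sketch below is a toy illustration assuming NumPy, with ħ = m = 1 and a crude finite-difference particle-in-a-box Hamiltonian: it diagonalizes the Hamiltonian, evolves one energy eigenstate by the phase factor above, and confirms that the position probability density does not change.

```python
import numpy as np

# Discretize a particle in a box on a grid (hbar = m = 1).
n, L = 200, 1.0
dx = L / (n + 1)
# Finite-difference kinetic energy -(1/2) d^2/dx^2 with hard walls (Dirichlet).
H = (np.diag(np.full(n, 1.0 / dx**2))
     - np.diag(np.full(n - 1, 0.5 / dx**2), 1)
     - np.diag(np.full(n - 1, 0.5 / dx**2), -1))

E, states = np.linalg.eigh(H)          # energy eigenvalues and eigenvectors
psi0 = states[:, 0]                    # a stationary state (the ground state)

t = 3.7                                # an arbitrary time
psi_t = np.exp(-1j * E[0] * t) * psi0  # |psi(t)> = exp(-i E t / hbar) |psi(0)>

# Observable quantities, e.g. the position probability density, are unchanged.
assert np.allclose(np.abs(psi_t)**2, np.abs(psi0)**2)
print("max change in |psi|^2:", np.max(np.abs(np.abs(psi_t)**2 - np.abs(psi0)**2)))
```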
TheHeisenberg pictureis an alternativemathematical formulation of quantum mechanicswhere stationary states are truly mathematically constant in time.
As mentioned above, these equations assume that the Hamiltonian is time-independent. This means simply that stationary states are only stationary when the rest of the system is fixed and stationary as well. For example, a1s electronin ahydrogen atomis in a stationary state, but if the hydrogen atom reacts with another atom, then the electron will of course be disturbed.
Spontaneous decay complicates the question of stationary states. For example, according to simple (nonrelativistic)quantum mechanics, thehydrogen atomhas many stationary states:1s, 2s, 2p, and so on, are all stationary states. But in reality, only the ground state 1s is truly "stationary": An electron in a higher energy level willspontaneously emitone or morephotonsto decay into the ground state.[3]This seems to contradict the idea that stationary states should have unchanging properties.
The explanation is that theHamiltonianused in nonrelativistic quantum mechanics is only an approximation to the Hamiltonian fromquantum field theory. The higher-energy electron states (2s, 2p, 3s, etc.) are stationary states according to the approximate Hamiltonian, butnotstationary according to the true Hamiltonian, because ofvacuum fluctuations. On the other hand, the 1s state is truly a stationary state, according to both the approximate and the true Hamiltonian.
An orbital is a stationary state (or approximation thereof) of a one-electron atom or molecule; more specifically, anatomic orbitalfor an electron in an atom, or amolecular orbitalfor an electron in a molecule.[4]
For a molecule that contains only a single electron (e.g. atomic hydrogen or H2+), an orbital is exactly the same as a total stationary state of the molecule. However, for a many-electron molecule, an orbital is completely different from a total stationary state, which is a many-particle state requiring a more complicated description (such as a Slater determinant).[5] In particular, in a many-electron molecule, an orbital is not the total stationary state of the molecule, but rather the stationary state of a single electron within the molecule. This concept of an orbital is only meaningful under the approximation that, if the electron–electron repulsion terms in the Hamiltonian are ignored as a simplifying assumption, the total eigenvector of a many-electron molecule can be decomposed into separate contributions from individual electron stationary states (orbitals), each of which is obtained under the one-electron approximation. (Luckily, chemists and physicists can often, but not always, use this "single-electron approximation".) In this sense, in a many-electron system, an orbital can be considered as the stationary state of an individual electron in the system.
In chemistry, the calculation of molecular orbitals typically also assumes the Born–Oppenheimer approximation.
|
https://en.wikipedia.org/wiki/Stationary_state
|
The Wald–Wolfowitz runs test (or simply runs test), named after statisticians Abraham Wald and Jacob Wolfowitz, is a non-parametric statistical test that checks a randomness hypothesis for a two-valued data sequence. More precisely, it can be used to test the hypothesis that the elements of the sequence are mutually independent.
A run of a sequence is a maximal non-empty segment of the sequence consisting of adjacent equal elements. For example, a 21-element-long sequence such as "+ + + + − − − + + + − + + + + + + − − − −" consists of 6 runs, with lengths 4, 3, 3, 1, 6, and 4. The run test is based on the null hypothesis that each element in the sequence is independently drawn from the same distribution.
Under the null hypothesis, the number of runs in a sequence of N elements[note 1] is a random variable whose conditional distribution, given the observation of N+ positive values[note 2] and N− negative values (N = N+ + N−), is approximately normal, with mean μ = 2 N+ N− / N + 1 and variance σ² = (μ − 1)(μ − 2)/(N − 1) = 2 N+ N− (2 N+ N− − N) / (N²(N − 1)).[1][2]
Equivalently, the number of runs isR=12(N++N−+1−∑i=1N−1xixi+1){\displaystyle R={\frac {1}{2}}(N_{+}+N_{-}+1-\sum _{i=1}^{N-1}x_{i}x_{i+1})}.
These parameters do not assume that the positive and negative elements have equal probabilities of occurring, but only assume that the elements areindependent and identically distributed. If the number of runs issignificantlyhigher or lower than expected, the hypothesis of statistical independence of the elements may be rejected.
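A minimal sketch of the test in Python (assuming NumPy; the function name runs_test is illustrative), using the mean and variance given above together with the normal approximation for the two-sided p-value:

```python
import numpy as np
from math import erf, sqrt

def runs_test(x):
    """Wald-Wolfowitz runs test for a two-valued sequence (normal approximation)."""
    x = np.asarray(x)
    n_plus = int(np.sum(x == x.max()))
    n_minus = len(x) - n_plus
    n = n_plus + n_minus
    runs = 1 + int(np.sum(x[1:] != x[:-1]))      # number of maximal equal-valued segments
    mu = 2.0 * n_plus * n_minus / n + 1.0        # expected number of runs
    var = (mu - 1.0) * (mu - 2.0) / (n - 1.0)    # variance of the number of runs
    z = (runs - mu) / sqrt(var)
    p_value = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))  # two-sided
    return runs, z, p_value

# Example: an arbitrary +1/-1 sequence; a very small p-value would suggest
# the elements are not mutually independent.
sequence = [+1, +1, +1, -1, -1, +1, +1, -1, +1, -1, -1, -1, +1, +1, -1, +1]
print(runs_test(sequence))
```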
The number of runs isR=12(N++N−+1−∑i=1N−1xixi+1){\displaystyle R={\frac {1}{2}}(N_{+}+N_{-}+1-\sum _{i=1}^{N-1}x_{i}x_{i+1})}. By independence, the expectation isE[R]=12(N+1−(N−1)E[x1x2]){\displaystyle E[R]={\frac {1}{2}}(N+1-(N-1)E[x_{1}x_{2}])}Writing out all possibilities, we findx1x2={+1with probabilityN+(N+−1)+N−(N−−1)N(N−1)−1with probability2N+N−N(N−1){\displaystyle x_{1}x_{2}={\begin{cases}+1\quad &{\text{ with probability }}{\frac {N_{+}(N_{+}-1)+N_{-}(N_{-}-1)}{N(N-1)}}\\-1\quad &{\text{ with probability }}{\frac {2N_{+}N_{-}}{N(N-1)}}\\\end{cases}}}Thus,E[x1x2]=(N+−N−)2−NN(N−1){\displaystyle E[x_{1}x_{2}]={\frac {(N_{+}-N_{-})^{2}-N}{N(N-1)}}}.
Now simplify the expression to getE[R]=2N+N−N+1{\displaystyle E[R]={\frac {2\ N_{+}\ N_{-}}{N}}+1}.
Similarly, the variance of the number of runs isVar[R]=14Var[∑i=1N−1xixi+1]=14((N−1)E[x1x2x1x2]+2(N−2)E[x1x2x2x3]+(N−2)(N−3)E[x1x2x3x4]−(N−1)2E[x1x2]2){\displaystyle Var[R]={\frac {1}{4}}Var[\sum _{i=1}^{N-1}x_{i}x_{i+1}]={\frac {1}{4}}((N-1)E[x_{1}x_{2}x_{1}x_{2}]+2(N-2)E[x_{1}x_{2}x_{2}x_{3}]+(N-2)(N-3)E[x_{1}x_{2}x_{3}x_{4}]-(N-1)^{2}E[x_{1}x_{2}]^{2})}and simplifying, we obtain the variance.
Similarly, we can calculate all higher moments of R{\displaystyle R}, but the algebra becomes increasingly unwieldy.
Theorem.If we sample longer and longer sequences, withlimN+/N=p{\displaystyle \lim N_{+}/N=p}for some fixedp∈(0,1){\displaystyle p\in (0,1)}, thenR−μσ∼N(R/μ−1){\displaystyle {\frac {R-\mu }{\sigma }}\sim {\sqrt {N}}(R/\mu -1)}converges in distribution to the normal distribution with mean 0 and variance 1.
Proof sketch.It suffices to prove the asymptotic normality of the sequence∑i=1N−1xixi+1{\displaystyle \sum _{i=1}^{N-1}x_{i}x_{i+1}}, which can be proven by amartingale central limit theorem.
Runs tests can be used to test the randomness of a sequence of observations (for example, by marking values above the sample median with "+" and values below it with "−" and counting the runs), or to test whether two samples come from the same distribution.
TheKolmogorov–Smirnov testhas been shown to be more powerful than the Wald–Wolfowitz test for detecting differences between distributions that differ solely in their location. However, the reverse is true if the distributions differ in variance and have at the most only a small difference in location.[citation needed]
The Wald–Wolfowitz runs test has been extended for use with severalsamples.[3][4][5][6]
|
https://en.wikipedia.org/wiki/Wald%E2%80%93Wolfowitz_runs_test
|
Ashared-nothing architecture(SN) is adistributed computingarchitecturein which each update request is satisfied by a single node (processor/memory/storage unit) in acomputer cluster. The intent is to eliminate contention among nodes. Nodes do not share (independently access) the same memory or storage.
One alternative architecture is shared everything, in which requests are satisfied by arbitrary combinations of nodes. This may introduce contention, as multiple nodes may seek to update the same data at the same time. It also contrasts withshared-diskandshared-memoryarchitectures.
SN eliminatessingle points of failure, allowing the overall system to continue operating despite failures in individual nodes and allowing individual nodes to upgrade hardware or software without a system-wide shutdown.[1]
A SN system can scale simply by adding nodes, since no central resource bottlenecks the system.[2]In databases, a term for the part of a database on a single node is ashard. A SN system typically partitions its data among many nodes. A refinement is to replicate commonly used but infrequently modified data across many nodes, allowing more requests to be resolved on a single node.
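A minimal sketch of such partitioning in Python (the node names and hashing scheme are illustrative; real systems often use consistent hashing so that adding nodes moves only a fraction of the keys): every key maps deterministically to exactly one node, so an update request can be routed without consulting any shared resource.

```python
import hashlib

NODES = ["node-0", "node-1", "node-2", "node-3"]   # hypothetical cluster members

def shard_for(key: str, nodes=NODES) -> str:
    """Route a key to exactly one node using a stable hash of the key."""
    digest = hashlib.sha1(key.encode("utf-8")).digest()
    index = int.from_bytes(digest[:8], "big") % len(nodes)
    return nodes[index]

# Every update for a given customer lands on the same single node.
for customer in ("alice", "bob", "carol"):
    print(customer, "->", shard_for(customer))
```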
Michael Stonebrakerat theUniversity of California, Berkeleyused the term in a 1986 database paper.[3]Teradatadelivered the first SN database system in 1983.[4]Tandem ComputersNonStopsystems, a shared-nothing implementation of hardware and software was released to market in 1976.[5][6]Tandem Computers later releasedNonStop SQL, a shared-nothing relational database, in 1984.[7]
Shared-nothing is popular forweb development.
Shared-nothing architectures are prevalent fordata warehousingapplications, although requests that require data from multiple nodes can dramatically reduce throughput.[8]
|
https://en.wikipedia.org/wiki/Shared_nothing_architecture
|
Ininformation theory, theinformation content,self-information,surprisal, orShannon informationis a basic quantity derived from theprobabilityof a particulareventoccurring from arandom variable. It can be thought of as an alternative way of expressing probability, much likeoddsorlog-odds, but which has particular mathematical advantages in the setting of information theory.
The Shannon information can be interpreted as quantifying the level of "surprise" of a particular outcome. As it is such a basic quantity, it also appears in several other settings, such as the length of a message needed to transmit the event given an optimalsource codingof the random variable.
The Shannon information is closely related toentropy, which is the expected value of the self-information of a random variable, quantifying how surprising the random variable is "on average". This is the average amount of self-information an observer would expect to gain about a random variable when measuring it.[1]
The information content can be expressed in variousunits of information, of which the most common is the "bit" (more formally called theshannon), as explained below.
The term 'perplexity' has been used in language modelling to quantify the uncertainty inherent in a set of prospective events.[citation needed]
Claude Shannon's definition of self-information was chosen to meet several axioms: an event with probability 100% is perfectly unsurprising and yields no information; the less probable an event is, the more surprising it is and the more information it yields; and if two independent events are measured separately, the total amount of information is the sum of the self-informations of the individual events.
The detailed derivation is below, but it can be shown that there is a unique function of probability that meets these three axioms, up to a multiplicative scaling factor. Broadly, given a real numberb>1{\displaystyle b>1}and aneventx{\displaystyle x}withprobabilityP{\displaystyle P}, the information content is defined as follows:I(x):=−logb[Pr(x)]=−logb(P).{\displaystyle \mathrm {I} (x):=-\log _{b}{\left[\Pr {\left(x\right)}\right]}=-\log _{b}{\left(P\right)}.}
The basebcorresponds to the scaling factor above. Different choices ofbcorrespond to different units of information: whenb= 2, the unit is theshannon(symbol Sh), often called a 'bit'; whenb=e, the unit is thenatural unit of information(symbol nat); and whenb= 10, the unit is thehartley(symbol Hart).
Formally, given a discrete random variableX{\displaystyle X}withprobability mass functionpX(x){\displaystyle p_{X}{\left(x\right)}}, the self-information of measuringX{\displaystyle X}asoutcomex{\displaystyle x}is defined as[2]IX(x):=−log[pX(x)]=log(1pX(x)).{\displaystyle \operatorname {I} _{X}(x):=-\log {\left[p_{X}{\left(x\right)}\right]}=\log {\left({\frac {1}{p_{X}{\left(x\right)}}}\right)}.}
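For illustration, the definition can be evaluated directly; the following is a minimal Python sketch (the helper name self_information is not standard), with the base chosen according to the desired unit (2 for shannons, e for nats, 10 for hartleys). The coin and die probabilities match the examples later in the article.

```python
import math

def self_information(p: float, base: float = 2.0) -> float:
    """Information content -log_b(p) of an outcome with probability p."""
    return -math.log(p, base)

print(self_information(0.5))            # fair coin landing heads: 1.0 Sh
print(self_information(1 / 6))          # fair die showing a 4: ~2.585 Sh
print(self_information(1 / 6, math.e))  # the same outcome in nats: ~1.792 nat
```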
The use of the notationIX(x){\displaystyle I_{X}(x)}for self-information above is not universal. Since the notationI(X;Y){\displaystyle I(X;Y)}is also often used for the related quantity ofmutual information, many authors use a lowercasehX(x){\displaystyle h_{X}(x)}for self-entropy instead, mirroring the use of the capitalH(X){\displaystyle H(X)}for the entropy.
For a given probability space, the measurement of rarer events is intuitively more "surprising", and yields more information content, than that of more common values. Thus, self-information is a strictly decreasing monotonic function of the probability, sometimes called an "antitonic" function.
While standard probabilities are represented by real numbers in the interval [0,1]{\displaystyle [0,1]}, self-informations are represented by extended real numbers in the interval [0,∞]{\displaystyle [0,\infty ]}. In particular, for any choice of logarithmic base, an event with probability 1 has self-information 0, and an event with probability 0 has self-information +∞.
From this, we can get a few general properties: self-information is never negative, it is zero only for events that are certain to occur, less probable events always carry strictly more self-information, and the self-information of independent events adds (as detailed below).
The Shannon information is closely related to thelog-odds. In particular, given some eventx{\displaystyle x}, suppose thatp(x){\displaystyle p(x)}is the probability ofx{\displaystyle x}occurring, and thatp(¬x)=1−p(x){\displaystyle p(\lnot x)=1-p(x)}is the probability ofx{\displaystyle x}not occurring. Then we have the following definition of the log-odds:log-odds(x)=log(p(x)p(¬x)){\displaystyle {\text{log-odds}}(x)=\log \left({\frac {p(x)}{p(\lnot x)}}\right)}
This can be expressed as a difference of two Shannon informations:log-odds(x)=I(¬x)−I(x){\displaystyle {\text{log-odds}}(x)=\mathrm {I} (\lnot x)-\mathrm {I} (x)}
In other words, the log-odds can be interpreted as the level of surprise when the eventdoesn'thappen, minus the level of surprise when the eventdoeshappen.
The information content of twoindependent eventsis the sum of each event's information content. This property is known asadditivityin mathematics, andsigma additivityin particular inmeasureand probability theory. Consider twoindependent random variablesX,Y{\textstyle X,\,Y}withprobability mass functionspX(x){\displaystyle p_{X}(x)}andpY(y){\displaystyle p_{Y}(y)}respectively. Thejoint probability mass functionis
pX,Y(x,y)=Pr(X=x,Y=y)=pX(x)pY(y){\displaystyle p_{X,Y}\!\left(x,y\right)=\Pr(X=x,\,Y=y)=p_{X}\!(x)\,p_{Y}\!(y)}
becauseX{\textstyle X}andY{\textstyle Y}areindependent. The information content of theoutcome(X,Y)=(x,y){\displaystyle (X,Y)=(x,y)}isIX,Y(x,y)=−log2[pX,Y(x,y)]=−log2[pX(x)pY(y)]=−log2[pX(x)]−log2[pY(y)]=IX(x)+IY(y){\displaystyle {\begin{aligned}\operatorname {I} _{X,Y}(x,y)&=-\log _{2}\left[p_{X,Y}(x,y)\right]=-\log _{2}\left[p_{X}\!(x)p_{Y}\!(y)\right]\\[5pt]&=-\log _{2}\left[p_{X}{(x)}\right]-\log _{2}\left[p_{Y}{(y)}\right]\\[5pt]&=\operatorname {I} _{X}(x)+\operatorname {I} _{Y}(y)\end{aligned}}}See§ Two independent, identically distributed dicebelow for an example.
The corresponding property forlikelihoodsis that thelog-likelihoodof independent events is the sum of the log-likelihoods of each event. Interpreting log-likelihood as "support" or negative surprisal (the degree to which an event supports a given model: a model is supported by an event to the extent that the event is unsurprising, given the model), this states that independent events add support: the information that the two events together provide for statistical inference is the sum of their independent information.
TheShannon entropyof the random variableX{\displaystyle X}above isdefined asH(X)=∑x−pX(x)logpX(x)=∑xpX(x)IX(x)=defE[IX(X)],{\displaystyle {\begin{alignedat}{2}\mathrm {H} (X)&=\sum _{x}{-p_{X}{\left(x\right)}\log {p_{X}{\left(x\right)}}}\\&=\sum _{x}{p_{X}{\left(x\right)}\operatorname {I} _{X}(x)}\\&{\overset {\underset {\mathrm {def} }{}}{=}}\ \operatorname {E} {\left[\operatorname {I} _{X}(X)\right]},\end{alignedat}}}by definition equal to theexpectedinformation content of measurement ofX{\displaystyle X}.[3]: 11[4]: 19–20The expectation is taken over thediscrete valuesover itssupport.
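Continuing the illustrative sketch above, the entropy of a finite distribution can be computed as the probability-weighted average of the self-informations of its outcomes; for a fair six-sided die this equals log2 6 ≈ 2.585 Sh.

```python
import math

def entropy(pmf, base: float = 2.0) -> float:
    """H(X) = E[I_X(X)] = -sum_x p(x) log_b p(x), skipping zero-probability outcomes."""
    return sum(-p * math.log(p, base) for p in pmf if p > 0)

die = [1 / 6] * 6
print(entropy(die))           # ~2.585 Sh, i.e. log2(6)
print(entropy([0.5, 0.5]))    # fair coin: 1.0 Sh
```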
Sometimes, the entropy itself is called the "self-information" of the random variable, possibly because the entropy satisfiesH(X)=I(X;X){\displaystyle \mathrm {H} (X)=\operatorname {I} (X;X)}, whereI(X;X){\displaystyle \operatorname {I} (X;X)}is themutual informationofX{\displaystyle X}with itself.[5]
Forcontinuous random variablesthe corresponding concept isdifferential entropy.
This measure has also been calledsurprisal, as it represents the "surprise" of seeing the outcome (a highly improbable outcome is very surprising). This term (as a log-probability measure) was introduced byEdward W. Samsonin his 1951 report "Fundamental natural concepts of information theory".[6][7]An early appearance in the Physics literature is inMyron Tribus' 1961 bookThermostatics and Thermodynamics.[8][9]
When the event is a random realization (of a variable) the self-information of the variable is defined as theexpected valueof the self-information of the realization.[citation needed]
Consider theBernoulli trialoftossing a fair coinX{\displaystyle X}. Theprobabilitiesof theeventsof the coin landing as headsH{\displaystyle {\text{H}}}and tailsT{\displaystyle {\text{T}}}(seefair coinandobverse and reverse) areone halfeach,pX(H)=pX(T)=12=0.5{\textstyle p_{X}{({\text{H}})}=p_{X}{({\text{T}})}={\tfrac {1}{2}}=0.5}. Uponmeasuringthe variable as heads, the associated information gain isIX(H)=−log2pX(H)=−log212=1,{\displaystyle \operatorname {I} _{X}({\text{H}})=-\log _{2}{p_{X}{({\text{H}})}}=-\log _{2}\!{\tfrac {1}{2}}=1,}so the information gain of a fair coin landing as heads is 1shannon.[2]Likewise, the information gain of measuring tailsT{\displaystyle T}isIX(T)=−log2pX(T)=−log212=1Sh.{\displaystyle \operatorname {I} _{X}(T)=-\log _{2}{p_{X}{({\text{T}})}}=-\log _{2}{\tfrac {1}{2}}=1{\text{ Sh}}.}
Suppose we have afair six-sided die. The value of a die roll is adiscrete uniform random variableX∼DU[1,6]{\displaystyle X\sim \mathrm {DU} [1,6]}withprobability mass functionpX(k)={16,k∈{1,2,3,4,5,6}0,otherwise{\displaystyle p_{X}(k)={\begin{cases}{\frac {1}{6}},&k\in \{1,2,3,4,5,6\}\\0,&{\text{otherwise}}\end{cases}}}The probability of rolling a 4 ispX(4)=16{\textstyle p_{X}(4)={\frac {1}{6}}}, as for any other valid roll. The information content of rolling a 4 is thusIX(4)=−log2pX(4)=−log216≈2.585Sh{\displaystyle \operatorname {I} _{X}(4)=-\log _{2}{p_{X}{(4)}}=-\log _{2}{\tfrac {1}{6}}\approx 2.585\;{\text{Sh}}}of information.
Suppose we have twoindependent, identically distributed random variablesX,Y∼DU[1,6]{\textstyle X,\,Y\sim \mathrm {DU} [1,6]}each corresponding to anindependentfair 6-sided dice roll. Thejoint distributionofX{\displaystyle X}andY{\displaystyle Y}ispX,Y(x,y)=Pr(X=x,Y=y)=pX(x)pY(y)={136,x,y∈[1,6]∩N0otherwise.{\displaystyle {\begin{aligned}p_{X,Y}\!\left(x,y\right)&{}=\Pr(X=x,\,Y=y)=p_{X}\!(x)\,p_{Y}\!(y)\\&{}={\begin{cases}\displaystyle {1 \over 36},\ &x,y\in [1,6]\cap \mathbb {N} \\0&{\text{otherwise.}}\end{cases}}\end{aligned}}}
The information content of therandom variate(X,Y)=(2,4){\displaystyle (X,Y)=(2,\,4)}isIX,Y(2,4)=−log2[pX,Y(2,4)]=log236=2log26≈5.169925Sh,{\displaystyle {\begin{aligned}\operatorname {I} _{X,Y}{(2,4)}&=-\log _{2}\!{\left[p_{X,Y}{(2,4)}\right]}=\log _{2}\!{36}=2\log _{2}\!{6}\\&\approx 5.169925{\text{ Sh}},\end{aligned}}}and can also be calculated byadditivity of eventsIX,Y(2,4)=−log2[pX,Y(2,4)]=−log2[pX(2)]−log2[pY(4)]=2log26≈5.169925Sh.{\displaystyle {\begin{aligned}\operatorname {I} _{X,Y}{(2,4)}&=-\log _{2}\!{\left[p_{X,Y}{(2,4)}\right]}=-\log _{2}\!{\left[p_{X}(2)\right]}-\log _{2}\!{\left[p_{Y}(4)\right]}\\&=2\log _{2}\!{6}\\&\approx 5.169925{\text{ Sh}}.\end{aligned}}}
If we receive information about the value of the dicewithout knowledgeof which die had which value, we can formalize the approach with so-called counting variablesCk:=δk(X)+δk(Y)={0,¬(X=k∨Y=k)1,X=k⊻Y=k2,X=k∧Y=k{\displaystyle C_{k}:=\delta _{k}(X)+\delta _{k}(Y)={\begin{cases}0,&\neg \,(X=k\vee Y=k)\\1,&\quad X=k\,\veebar \,Y=k\\2,&\quad X=k\,\wedge \,Y=k\end{cases}}}fork∈{1,2,3,4,5,6}{\displaystyle k\in \{1,2,3,4,5,6\}}, then∑k=16Ck=2{\textstyle \sum _{k=1}^{6}{C_{k}}=2}and the counts have themultinomial distributionf(c1,…,c6)=Pr(C1=c1and…andC6=c6)={1181c1!⋯ck!,when∑i=16ci=20otherwise,={118,when 2ckare1136,when exactly oneck=20,otherwise.{\displaystyle {\begin{aligned}f(c_{1},\ldots ,c_{6})&{}=\Pr(C_{1}=c_{1}{\text{ and }}\dots {\text{ and }}C_{6}=c_{6})\\&{}={\begin{cases}{\displaystyle {1 \over {18}}{1 \over c_{1}!\cdots c_{k}!}},\ &{\text{when }}\sum _{i=1}^{6}c_{i}=2\\0&{\text{otherwise,}}\end{cases}}\\&{}={\begin{cases}{1 \over 18},\ &{\text{when 2 }}c_{k}{\text{ are }}1\\{1 \over 36},\ &{\text{when exactly one }}c_{k}=2\\0,\ &{\text{otherwise.}}\end{cases}}\end{aligned}}}
To verify this, the 6 outcomes(X,Y)∈{(k,k)}k=16={(1,1),(2,2),(3,3),(4,4),(5,5),(6,6)}{\textstyle (X,Y)\in \left\{(k,k)\right\}_{k=1}^{6}=\left\{(1,1),(2,2),(3,3),(4,4),(5,5),(6,6)\right\}}correspond to the eventCk=2{\displaystyle C_{k}=2}and atotal probabilityof1/6. These are the only events that are faithfully preserved with identity of which dice rolled which outcome because the outcomes are the same. Without knowledge to distinguish the dice rolling the other numbers, the other(62)=15{\textstyle {\binom {6}{2}}=15}combinationscorrespond to one die rolling one number and the other die rolling a different number, each having probability1/18. Indeed,6⋅136+15⋅118=1{\textstyle 6\cdot {\tfrac {1}{36}}+15\cdot {\tfrac {1}{18}}=1}, as required.
Unsurprisingly, the information content of learning that both dice were rolled as the same particular number is more than the information content of learning that one die was one number and the other was a different number. Take, for example, the events Ak={(X,Y)=(k,k)}{\displaystyle A_{k}=\{(X,Y)=(k,k)\}} and Bj,k={cj=1}∩{ck=1}{\displaystyle B_{j,k}=\{c_{j}=1\}\cap \{c_{k}=1\}} for j≠k,1≤j,k≤6{\displaystyle j\neq k,1\leq j,k\leq 6}. For example, A2={X=2andY=2}{\displaystyle A_{2}=\{X=2{\text{ and }}Y=2\}} and B3,4={(3,4),(4,3)}{\displaystyle B_{3,4}=\{(3,4),(4,3)\}}.
The information contents areI(A2)=−log2136=5.169925Sh{\displaystyle \operatorname {I} (A_{2})=-\log _{2}\!{\tfrac {1}{36}}=5.169925{\text{ Sh}}}I(B3,4)=−log2118=4.169925Sh{\displaystyle \operatorname {I} \left(B_{3,4}\right)=-\log _{2}\!{\tfrac {1}{18}}=4.169925{\text{ Sh}}}
LetSame=⋃i=16Ai{\textstyle {\text{Same}}=\bigcup _{i=1}^{6}{A_{i}}}be the event that both dice rolled the same value andDiff=Same¯{\displaystyle {\text{Diff}}={\overline {\text{Same}}}}be the event that the dice differed. ThenPr(Same)=16{\textstyle \Pr({\text{Same}})={\tfrac {1}{6}}}andPr(Diff)=56{\textstyle \Pr({\text{Diff}})={\tfrac {5}{6}}}. The information contents of the events areI(Same)=−log216=2.5849625Sh{\displaystyle \operatorname {I} ({\text{Same}})=-\log _{2}\!{\tfrac {1}{6}}=2.5849625{\text{ Sh}}}I(Diff)=−log256=0.2630344Sh.{\displaystyle \operatorname {I} ({\text{Diff}})=-\log _{2}\!{\tfrac {5}{6}}=0.2630344{\text{ Sh}}.}
The probability mass or density function (collectivelyprobability measure) of thesum of two independent random variablesis the convolution of each probability measure. In the case of independent fair 6-sided dice rolls, the random variableZ=X+Y{\displaystyle Z=X+Y}has probability mass functionpZ(z)=pX(x)∗pY(y)=6−|z−7|36{\textstyle p_{Z}(z)=p_{X}(x)*p_{Y}(y)={6-|z-7| \over 36}}, where∗{\displaystyle *}represents thediscrete convolution. TheoutcomeZ=5{\displaystyle Z=5}has probabilitypZ(5)=436=19{\textstyle p_{Z}(5)={\frac {4}{36}}={1 \over 9}}. Therefore, the information asserted isIZ(5)=−log219=log29≈3.169925Sh.{\displaystyle \operatorname {I} _{Z}(5)=-\log _{2}{\tfrac {1}{9}}=\log _{2}{9}\approx 3.169925{\text{ Sh}}.}
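This can be verified numerically; a small NumPy sketch convolves the two uniform mass functions, checks the resulting triangular distribution against the closed form above, and recovers the information content of the outcome Z = 5:

```python
import numpy as np

p_die = np.full(6, 1 / 6)              # pmf of one fair die on outcomes 1..6
p_sum = np.convolve(p_die, p_die)      # pmf of Z = X + Y on outcomes 2..12

z_values = np.arange(2, 13)
assert np.allclose(p_sum, (6 - np.abs(z_values - 7)) / 36)

p_z5 = p_sum[5 - 2]                    # index 3 corresponds to Z = 5
print(p_z5, -np.log2(p_z5))            # 0.111..., ~3.1699 Sh
```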
Generalizing the§ Fair dice rollexample above, consider a generaldiscrete uniform random variable(DURV)X∼DU[a,b];a,b∈Z,b≥a.{\displaystyle X\sim \mathrm {DU} [a,b];\quad a,b\in \mathbb {Z} ,\ b\geq a.}For convenience, defineN:=b−a+1{\textstyle N:=b-a+1}. Theprobability mass functionispX(k)={1N,k∈[a,b]∩Z0,otherwise.{\displaystyle p_{X}(k)={\begin{cases}{\frac {1}{N}},&k\in [a,b]\cap \mathbb {Z} \\0,&{\text{otherwise}}.\end{cases}}}In general, the values of the DURV need not beintegers, or for the purposes of information theory even uniformly spaced; they need only beequiprobable.[2]The information gain of any observationX=k{\displaystyle X=k}isIX(k)=−log21N=log2NSh.{\displaystyle \operatorname {I} _{X}(k)=-\log _{2}{\frac {1}{N}}=\log _{2}{N}{\text{ Sh}}.}
Ifb=a{\displaystyle b=a}above,X{\displaystyle X}degeneratesto aconstant random variablewith probability distribution deterministically given byX=b{\displaystyle X=b}and probability measure theDirac measurepX(k)=δb(k){\textstyle p_{X}(k)=\delta _{b}(k)}. The only valueX{\displaystyle X}can take isdeterministicallyb{\displaystyle b}, so the information content of any measurement ofX{\displaystyle X}isIX(b)=−log21=0.{\displaystyle \operatorname {I} _{X}(b)=-\log _{2}{1}=0.}In general, there is no information gained from measuring a known value.[2]
Generalizing all of the above cases, consider acategoricaldiscrete random variablewithsupportS={si}i=1N{\textstyle {\mathcal {S}}={\bigl \{}s_{i}{\bigr \}}_{i=1}^{N}}andprobability mass functiongiven by
pX(k)={pi,k=si∈S0,otherwise.{\displaystyle p_{X}(k)={\begin{cases}p_{i},&k=s_{i}\in {\mathcal {S}}\\0,&{\text{otherwise}}.\end{cases}}}
For the purposes of information theory, the valuess∈S{\displaystyle s\in {\mathcal {S}}}do not have to benumbers; they can be anymutually exclusiveeventson ameasure spaceoffinite measurethat has beennormalizedto aprobability measurep{\displaystyle p}.Without loss of generality, we can assume the categorical distribution is supported on the set[N]={1,2,…,N}{\textstyle [N]=\left\{1,2,\dots ,N\right\}}; the mathematical structure isisomorphicin terms ofprobability theoryand thereforeinformation theoryas well.
The information of the outcome X=x{\displaystyle X=x} is given by
IX(x)=−log2pX(x).{\displaystyle \operatorname {I} _{X}(x)=-\log _{2}{p_{X}(x)}.}
From these examples, it is possible to calculate the information of any set ofindependentDRVswith knowndistributionsbyadditivity.
By definition, information is transferred from an originating entity possessing the information to a receiving entity only when the receiver had not known the informationa priori. If the receiving entity had previously known the content of a message with certainty before receiving the message, the amount of information of the message received is zero. Only when the advance knowledge of the content of the message by the receiver is less than 100% certain does the message actually convey information.
For example, quoting a character (the Hippy Dippy Weatherman) of comedianGeorge Carlin:
Weather forecast for tonight: dark.Continued dark overnight, with widely scattered light by morning.[10]
Assuming that one does not reside near thepolar regions, the amount of information conveyed in that forecast is zero because it is known, in advance of receiving the forecast, that darkness always comes with the night.
Accordingly, the amount of self-information contained in a message conveying content informing an occurrence ofevent,ωn{\displaystyle \omega _{n}}, depends only on the probability of that event.
I(ωn)=f(P(ωn)){\displaystyle \operatorname {I} (\omega _{n})=f(\operatorname {P} (\omega _{n}))}for some functionf(⋅){\displaystyle f(\cdot )}to be determined below. IfP(ωn)=1{\displaystyle \operatorname {P} (\omega _{n})=1}, thenI(ωn)=0{\displaystyle \operatorname {I} (\omega _{n})=0}. IfP(ωn)<1{\displaystyle \operatorname {P} (\omega _{n})<1}, thenI(ωn)>0{\displaystyle \operatorname {I} (\omega _{n})>0}.
Further, by definition, themeasureof self-information is nonnegative and additive. If a message informing of eventC{\displaystyle C}is theintersectionof twoindependenteventsA{\displaystyle A}andB{\displaystyle B}, then the information of eventC{\displaystyle C}occurring is that of the compound message of both independent eventsA{\displaystyle A}andB{\displaystyle B}occurring. The quantity of information of compound messageC{\displaystyle C}would be expected to equal thesumof the amounts of information of the individual component messagesA{\displaystyle A}andB{\displaystyle B}respectively:I(C)=I(A∩B)=I(A)+I(B).{\displaystyle \operatorname {I} (C)=\operatorname {I} (A\cap B)=\operatorname {I} (A)+\operatorname {I} (B).}
Because of the independence of eventsA{\displaystyle A}andB{\displaystyle B}, the probability of eventC{\displaystyle C}isP(C)=P(A∩B)=P(A)⋅P(B).{\displaystyle \operatorname {P} (C)=\operatorname {P} (A\cap B)=\operatorname {P} (A)\cdot \operatorname {P} (B).}
However, applying functionf(⋅){\displaystyle f(\cdot )}results inI(C)=I(A)+I(B)f(P(C))=f(P(A))+f(P(B))=f(P(A)⋅P(B)){\displaystyle {\begin{aligned}\operatorname {I} (C)&=\operatorname {I} (A)+\operatorname {I} (B)\\f(\operatorname {P} (C))&=f(\operatorname {P} (A))+f(\operatorname {P} (B))\\&=f{\big (}\operatorname {P} (A)\cdot \operatorname {P} (B){\big )}\\\end{aligned}}}
Thanks to work onCauchy's functional equation, the only monotone functionsf(⋅){\displaystyle f(\cdot )}having the property such thatf(x⋅y)=f(x)+f(y){\displaystyle f(x\cdot y)=f(x)+f(y)}are thelogarithmfunctionslogb(x){\displaystyle \log _{b}(x)}. The only operational difference between logarithms of different bases is that of different scaling constants, so we may assume
f(x)=Klog(x){\displaystyle f(x)=K\log(x)}
wherelog{\displaystyle \log }is thenatural logarithm. Since the probabilities of events are always between 0 and 1 and the information associated with these events must be nonnegative, that requires thatK<0{\displaystyle K<0}.
Taking into account these properties, the self-informationI(ωn){\displaystyle \operatorname {I} (\omega _{n})}associated with outcomeωn{\displaystyle \omega _{n}}with probabilityP(ωn){\displaystyle \operatorname {P} (\omega _{n})}is defined as:I(ωn)=−log(P(ωn))=log(1P(ωn)){\displaystyle \operatorname {I} (\omega _{n})=-\log(\operatorname {P} (\omega _{n}))=\log \left({\frac {1}{\operatorname {P} (\omega _{n})}}\right)}
The smaller the probability of eventωn{\displaystyle \omega _{n}}, the larger the quantity of self-information associated with the message that the event indeed occurred. If the above logarithm is base 2, the unit ofI(ωn){\displaystyle I(\omega _{n})}isshannon. This is the most common practice. When using thenatural logarithmof basee{\displaystyle e}, the unit will be thenat. For the base 10 logarithm, the unit of information is thehartley.
As a quick illustration, the information content associated with an outcome of 4 heads (or any specific outcome) in 4 consecutive tosses of a coin would be 4 shannons (probability 1/16), and the information content associated with getting a result other than the one specified would be ~0.09 shannons (probability 15/16). See above for detailed examples.
|
https://en.wikipedia.org/wiki/Self-information
|
Incomputer visionandimage processing, afeatureis a piece of information about the content of an image; typically about whether a certain region of the image has certain properties. Features may be specific structures in the image such as points, edges or objects. Features may also be the result of a generalneighborhood operationorfeature detectionapplied to the image. Other examples of features are related to motion in image sequences, or to shapes defined in terms of curves or boundaries between different image regions.
More broadly afeatureis any piece of information that is relevant for solving the computational task related to a certain application. This is the same sense asfeatureinmachine learningandpattern recognitiongenerally, though image processing has a very sophisticated collection of features. The feature concept is very general and the choice of features in a particular computer vision system may be highly dependent on the specific problem at hand.
There is no universal or exact definition of what constitutes a feature, and the exact definition often depends on the problem or the type of application. Nevertheless, a feature is typically defined as an "interesting" part of animage, and features are used as a starting point for many computer vision algorithms.
Since features are used as the starting point and main primitives for subsequent algorithms, the overall algorithm will often only be as good as its feature detector. Consequently, the desirable property for a feature detector isrepeatability: whether or not the same feature will be detected in two or more different images of the same scene.
Feature detection is a low-levelimage processingoperation. That is, it is usually performed as the first operation on an image and examines everypixelto see if there is a feature present at that pixel. If this is part of a larger algorithm, then the algorithm will typically only examine the image in the region of the features. As a built-in pre-requisite to feature detection, the input image is usually smoothed by aGaussiankernel in ascale-space representationand one or several feature images are computed, often expressed in terms of localimage derivativeoperations.
Occasionally, when feature detection iscomputationally expensiveand there are time constraints, a higher-level algorithm may be used to guide the feature detection stage so that only certain parts of the image are searched for features.
There are many computer vision algorithms that use feature detection as the initial step, so as a result, a very large number of feature detectors have been developed. These vary widely in the kinds of feature detected, the computational complexity and the repeatability.
When features are defined in terms of local neighborhood operations applied to an image, a procedure commonly referred to as feature extraction, one can distinguish between feature detection approaches that produce local decisions as to whether there is a feature of a given type at a given image point or not, and those that produce non-binary data as a result. The distinction becomes relevant when the resulting detected features are relatively sparse. Although local decisions are made, the output from a feature detection step does not need to be a binary image. The result is often represented in terms of sets of (connected or unconnected) coordinates of the image points where features have been detected, sometimes with subpixel accuracy.
When feature extraction is done without local decision making, the result is often referred to as a feature image. Consequently, a feature image can be seen as an image in the sense that it is a function of the same spatial (or temporal) variables as the original image, but where the pixel values hold information about image features instead of intensity or color. This means that a feature image can be processed in a similar way as an ordinary image generated by an image sensor. Feature images are also often computed as an integrated step in algorithms for feature detection.
In some applications, it is not sufficient to extract only one type of feature to obtain the relevant information from the image data. Instead, two or more different features are extracted, resulting in two or more feature descriptors at each image point. A common practice is to organize the information provided by all these descriptors as the elements of one single vector, commonly referred to as afeature vector. The set of all possible feature vectors constitutes afeature space.[1]
A common example of feature vectors appears when each image point is to be classified as belonging to a specific class. Assuming that each image point has a corresponding feature vector based on a suitable set of features, meaning that each class is well separated in the corresponding feature space, the classification of each image point can be done using a standard classification method.
Another and related example occurs whenneural network-based processing is applied to images. The input data fed to the neural network is often given in terms of a feature vector from each image point, where the vector is constructed from several different features extracted from the image data. During a learning phase, the network can itself find which combinations of different features are useful for solving the problem at hand.
Edges are points where there is a boundary (or an edge) between two image regions. In general, an edge can be of almost arbitrary shape, and may include junctions. In practice, edges are usually defined as sets of points in the image that have a stronggradientmagnitude. Furthermore, some common algorithms will then chain high gradient points together to form a more complete description of an edge. These algorithms usually place some constraints on the properties of an edge, such as shape, smoothness, and gradient value.
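As a rough illustration (assuming NumPy and a small synthetic grayscale image), the gradient magnitude is large precisely along the boundary between two regions, and thresholding it gives a crude edge map; real edge detectors such as Canny add smoothing, thinning and hysteresis on top of this idea.

```python
import numpy as np

# Synthetic grayscale image: a dark background with a bright square.
image = np.zeros((64, 64), dtype=float)
image[16:48, 16:48] = 1.0

# Central-difference derivatives along rows and columns.
gy, gx = np.gradient(image)
gradient_magnitude = np.hypot(gx, gy)

# Points with strong gradient magnitude are treated as edge points.
edges = gradient_magnitude > 0.25
print("edge pixels:", int(edges.sum()))   # pixels along the square's boundary
```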
Locally, edges have a one-dimensional structure.
The terms corners and interest points are used somewhat interchangeably and refer to point-like features in an image, which have a local two-dimensional structure. The name "Corner" arose since early algorithms first performededge detection, and then analyzed the edges to find rapid changes in direction (corners). These algorithms were then developed so that explicit edge detection was no longer required, for instance by looking for high levels ofcurvaturein theimage gradient. It was then noticed that the so-called corners were also being detected on parts of the image that were not corners in the traditional sense (for instance a small bright spot on a dark background may be detected). These points are frequently known as interest points, but the term "corner" is used by tradition[citation needed].
Blobs provide a complementary description of image structures in terms of regions, as opposed to corners that are more point-like. Nevertheless, blob descriptors may often contain a preferred point (a local maximum of an operator response or a center of gravity) which means that many blob detectors may also be regarded as interest point operators. Blob detectors can detect areas in an image that are too smooth to be detected by a corner detector.
Consider shrinking an image and then performing corner detection. The detector will respond to points that are sharp in the shrunk image, but may be smooth in the original image. It is at this point that the difference between a corner detector and a blob detector becomes somewhat vague. To a large extent, this distinction can be remedied by including an appropriate notion of scale. Nevertheless, due to their response properties to different types of image structures at different scales, the LoG and DoHblob detectorsare also mentioned in the article oncorner detection.
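The scale-selection idea can be sketched with the scale-normalized Laplacian of Gaussian as implemented in scikit-image (assuming skimage.feature.blob_log is available); the synthetic two-blob image and the parameter values below are arbitrary illustrations.

```python
import numpy as np
from skimage.feature import blob_log

# Synthetic image with two Gaussian blobs of different sizes
yy, xx = np.mgrid[0:128, 0:128]
image = (np.exp(-((xx - 40)**2 + (yy - 40)**2) / (2 * 4.0**2)) +
         np.exp(-((xx - 90)**2 + (yy - 90)**2) / (2 * 10.0**2)))

# Scale-normalized Laplacian of Gaussian searched over a range of scales
blobs = blob_log(image, min_sigma=2, max_sigma=20, num_sigma=15, threshold=0.05)
for y, x, sigma in blobs:
    print(f"blob at ({x:.0f}, {y:.0f}), approx. radius {np.sqrt(2) * sigma:.1f} px")
```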
For elongated objects, the notion ofridgesis a natural tool. A ridge descriptor computed from a grey-level image can be seen as a generalization of amedial axis. From a practical viewpoint, a ridge can be thought of as a one-dimensional curve that represents an axis of symmetry, and in addition has an attribute of local ridge width associated with each ridge point. Unfortunately, however, it is algorithmically harder to extract ridge features from general classes of grey-level images than edge-, corner- or blob features. Nevertheless, ridge descriptors are frequently used for road extraction in aerial images and for extracting blood vessels in medical images—seeridge detection.
Feature detectionincludes methods for computing abstractions of image information and making local decisions at every image point whether there is an image feature of a given type at that point or not. The resulting features will be subsets of the image domain, often in the form of isolated points, continuous curves or connected regions.
The extraction of features is sometimes performed over several scales. One such method is the scale-invariant feature transform (SIFT).
Once features have been detected, a local image patch around the feature can be extracted. This extraction may involve quite considerable amounts of image processing. The result is known as a feature descriptor or feature vector. Among the approaches used for feature description, one can mention N-jets and local histograms (see scale-invariant feature transform for one example of a local histogram descriptor). In addition to such attribute information, the feature detection step by itself may also provide complementary attributes, such as the edge orientation and gradient magnitude in edge detection and the polarity and the strength of the blob in blob detection.
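As one concrete example of descriptor extraction, the following sketch uses OpenCV's SIFT implementation (assuming an OpenCV build where cv2.SIFT_create is available, roughly version 4.4 or later) to detect keypoints and compute 128-dimensional local-histogram descriptors around them; the file name is hypothetical.

```python
import cv2

# Hypothetical input image, loaded as grayscale
image = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(image, None)

# Each keypoint yields a 128-dimensional descriptor built from local
# gradient-orientation histograms around the detected feature.
print(len(keypoints), descriptors.shape)   # e.g. (N, 128)
```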
A specific image feature, defined in terms of a specific structure in the image data, can often be represented in different ways. For example, an edge can be represented as aBoolean variablein each image point that describes whether an edge is present at that point. Alternatively, we can instead use a representation that provides acertainty measureinstead of a Boolean statement of the edge's existence and combine this with information about theorientationof the edge. Similarly, the color of a specific region can either be represented in terms of the average color (three scalars) or acolor histogram(three functions).
When a computer vision system or computer vision algorithm is designed, the choice of feature representation can be a critical issue. In some cases, a higher level of detail in the description of a feature may be necessary for solving the problem, but this comes at the cost of having to deal with more data and more demanding processing. Below, some of the factors which are relevant for choosing a suitable representation are discussed. In this discussion, an instance of a feature representation is referred to as a feature descriptor, or simply a descriptor.
Two examples of image features are local edge orientation and local velocity in an image sequence. In the case of orientation, the value of this feature may be more or less undefined if more than one edge is present in the corresponding neighborhood. Local velocity is undefined if the corresponding image region does not contain any spatial variation. As a consequence of this observation, it may be relevant to use a feature representation that includes a measure of certainty or confidence related to the statement about the feature value. Otherwise, it is a typical situation that the same descriptor is used to represent feature values of low certainty and feature values close to zero, with a resulting ambiguity in the interpretation of this descriptor. Depending on the application, such an ambiguity may or may not be acceptable.
In particular, if a feature image will be used in subsequent processing, it may be a good idea to employ a feature representation that includes information about certainty or confidence. This enables a new feature descriptor to be computed from several descriptors, for example, computed at the same image point but at different scales, or from different but neighboring points, in terms of a weighted average where the weights are derived from the corresponding certainties. In the simplest case, the corresponding computation can be implemented as a low-pass filtering of the feature image. The resulting feature image will, in general, be more stable to noise.
In addition to having certainty measures included in the representation, the representation of the corresponding feature values may itself be suitable for anaveragingoperation or not. Most feature representations can be averaged in practice, but only in certain cases can the resulting descriptor be given a correct interpretation in terms of a feature value. Such representations are referred to asaverageable.
For example, if the orientation of an edge is represented in terms of an angle, this representation must have a discontinuity where the angle wraps from its maximal value to its minimal value. Consequently, it can happen that two similar orientations are represented by angles that have a mean that does not lie close to either of the original angles and, hence, this representation is not averageable. There are other representations of edge orientation, such as thestructure tensor, which are averageable.
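The averageability issue can be illustrated numerically: naively averaging two similar orientations expressed as angles can give a meaningless result, while averaging a structure-tensor-like representation (outer products of the orientation unit vectors) and taking the dominant eigenvector recovers a sensible orientation. The sketch below assumes NumPy and treats orientations as undirected (modulo 180 degrees); the particular angles are illustrative.

```python
import numpy as np

angles = np.deg2rad([170.0, 10.0])        # two orientations only 20 deg apart (mod 180)

print(np.rad2deg(np.mean(angles)))        # naive mean: 90 deg, far from both inputs

# Structure-tensor style averaging: average the outer products n n^T of the
# orientation unit vectors and take the dominant eigenvector.
n = np.stack([np.cos(angles), np.sin(angles)], axis=1)
T = np.mean(np.einsum('ij,ik->ijk', n, n), axis=0)
eigvals, eigvecs = np.linalg.eigh(T)
dominant = eigvecs[:, np.argmax(eigvals)]
mean_orientation = np.rad2deg(np.arctan2(dominant[1], dominant[0])) % 180
print(mean_orientation)                   # ~0 deg (equivalently 180 deg), as expected
```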
Another example relates to motion, where in some cases only the normal velocity relative to some edge can be extracted. If two such features have been extracted and they can be assumed to refer to the same true velocity, this velocity is not given as the average of the normal velocity vectors. Hence, normal velocity vectors are not averageable. Instead, there are other representations of motions, using matrices or tensors, that give the true velocity in terms of an averaging operation on the normal velocity descriptors.[citation needed]
Features detected in each image can be matched across multiple images to establishcorresponding featuressuch ascorresponding points.
The algorithm is based on comparing and analyzing point correspondences between the reference image and the target image. If any part of the cluttered scene shares correspondences greater than the threshold, that part of the cluttered-scene image is targeted and considered to include the reference object.[18]
|
https://en.wikipedia.org/wiki/Feature_detection_(computer_vision)
|
In thephysicalscience ofdynamics,rigid-body dynamicsstudies the movement ofsystemsof interconnectedbodiesunder the action of externalforces. The assumption that the bodies arerigid(i.e. they do notdeformunder the action of applied forces) simplifies analysis, by reducing the parameters that describe the configuration of the system to the translation and rotation ofreference framesattached to each body.[1][2]This excludes bodies that displayfluid, highlyelastic, andplasticbehavior.
The dynamics of a rigid body system is described by the laws ofkinematicsand by the application of Newton's second law (kinetics) or their derivative form,Lagrangian mechanics. The solution of these equations of motion provides a description of the position, the motion and the acceleration of the individual components of the system, and overall the system itself, as afunction of time. The formulation and solution of rigid body dynamics is an important tool in the computer simulation ofmechanical systems.
If a system of particles moves parallel to a fixed plane, the system is said to be constrained to planar movement. In this case, Newton's laws (kinetics) for a rigid system of N particles, Pi,i=1,...,N, simplify because there is no movement in thekdirection. Determine theresultant forceandtorqueat a reference pointR, to obtainF=∑i=1NmiAi,T=∑i=1N(ri−R)×miAi,{\displaystyle \mathbf {F} =\sum _{i=1}^{N}m_{i}\mathbf {A} _{i},\quad \mathbf {T} =\sum _{i=1}^{N}(\mathbf {r} _{i}-\mathbf {R} )\times m_{i}\mathbf {A} _{i},}
whereridenotes the planar trajectory of each particle.
Thekinematicsof a rigid body yields the formula for the acceleration of the particle Piin terms of the positionRand accelerationAof the reference particle as well as the angular velocity vectorωand angular acceleration vectorαof the rigid system of particles as,Ai=α×(ri−R)+ω×(ω×(ri−R))+A.{\displaystyle \mathbf {A} _{i}={\boldsymbol {\alpha }}\times (\mathbf {r} _{i}-\mathbf {R} )+{\boldsymbol {\omega }}\times ({\boldsymbol {\omega }}\times (\mathbf {r} _{i}-\mathbf {R} ))+\mathbf {A} .}
For systems that are constrained to planar movement, the angular velocity and angular acceleration vectors are directed alongkperpendicular to the plane of movement, which simplifies this acceleration equation. In this case, the acceleration vectors can be simplified by introducing the unit vectorseifrom the reference pointRto a pointriand the unit vectorsti=k×ei{\textstyle \mathbf {t} _{i}=\mathbf {k} \times \mathbf {e} _{i}}, soAi=α(Δriti)−ω2(Δriei)+A.{\displaystyle \mathbf {A} _{i}=\alpha (\Delta r_{i}\mathbf {t} _{i})-\omega ^{2}(\Delta r_{i}\mathbf {e} _{i})+\mathbf {A} .}
This yields the resultant force on the system asF=α∑i=1Nmi(Δriti)−ω2∑i=1Nmi(Δriei)+(∑i=1Nmi)A,{\displaystyle \mathbf {F} =\alpha \sum _{i=1}^{N}m_{i}\left(\Delta r_{i}\mathbf {t} _{i}\right)-\omega ^{2}\sum _{i=1}^{N}m_{i}\left(\Delta r_{i}\mathbf {e} _{i}\right)+\left(\sum _{i=1}^{N}m_{i}\right)\mathbf {A} ,}and torque asT=∑i=1N(miΔriei)×(α(Δriti)−ω2(Δriei)+A)=(∑i=1NmiΔri2)αk+(∑i=1NmiΔriei)×A,{\displaystyle {\begin{aligned}\mathbf {T} ={}&\sum _{i=1}^{N}(m_{i}\Delta r_{i}\mathbf {e} _{i})\times \left(\alpha (\Delta r_{i}\mathbf {t} _{i})-\omega ^{2}(\Delta r_{i}\mathbf {e} _{i})+\mathbf {A} \right)\\{}={}&\left(\sum _{i=1}^{N}m_{i}\Delta r_{i}^{2}\right)\alpha \mathbf {k} +\left(\sum _{i=1}^{N}m_{i}\Delta r_{i}\mathbf {e} _{i}\right)\times \mathbf {A} ,\end{aligned}}}
whereei×ei=0{\textstyle \mathbf {e} _{i}\times \mathbf {e} _{i}=0}andei×ti=k{\textstyle \mathbf {e} _{i}\times \mathbf {t} _{i}=\mathbf {k} }is the unit vector perpendicular to the plane for all of the particles Pi.
Use thecenter of massCas the reference point, so these equations for Newton's laws simplify to becomeF=MA,T=ICαk,{\displaystyle \mathbf {F} =M\mathbf {A} ,\quad \mathbf {T} =I_{\textbf {C}}\alpha \mathbf {k} ,}
whereMis the total mass andICis themoment of inertiaabout an axis perpendicular to the movement of the rigid system and through the center of mass.
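As a small worked example of these planar equations, the following sketch (assuming NumPy, with a uniform rod chosen arbitrarily as the rigid body and made-up force and torque values) applies F = MA and T = I_C α k to recover the acceleration of the center of mass and the angular acceleration.

```python
import numpy as np

# Planar rigid body: a uniform rod of mass M and length L rotating about its center
M, L = 2.0, 1.0
I_C = M * L**2 / 12.0                 # moment of inertia about the center of mass

F = np.array([1.0, 0.5])              # resultant force in the plane (N)
tau = 0.3                             # resultant torque about the k axis (N*m)

A = F / M                             # F = M A  ->  acceleration of the center of mass
alpha = tau / I_C                     # T = I_C * alpha * k  ->  angular acceleration
print(A, alpha)                       # [0.5 0.25]  1.8
```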
Several methods to describe orientations of a rigid body in three dimensions have been developed. They are summarized in the following sections.
The first attempt to represent an orientation is attributed toLeonhard Euler. He imagined three reference frames that could rotate one around the other, and realized that by starting with a fixed reference frame and performing three rotations, he could get any other reference frame in the space (using two rotations to fix the vertical axis and another to fix the other two axes). The values of these three rotations are calledEuler angles. Commonly,ψ{\displaystyle \psi }is used to denote precession,θ{\displaystyle \theta }nutation, andϕ{\displaystyle \phi }intrinsic rotation.
These are three angles, also known as yaw, pitch and roll, navigation angles, and Cardan angles. Mathematically they constitute a set of six possibilities within the twelve possible sets of Euler angles, the ordering being the one best suited for describing the orientation of a vehicle such as an airplane. In aerospace engineering they are usually referred to as Euler angles.
Euler also realized that the composition of two rotations is equivalent to a single rotation about a different fixed axis (Euler's rotation theorem). Therefore, the composition of the former three angles has to be equal to only one rotation, whose axis was complicated to calculate until matrices were developed.
Based on this fact he introduced a vectorial way to describe any rotation, with a vector on the rotation axis and module equal to the value of the angle. Therefore, any orientation can be represented by a rotation vector (also called Euler vector) that leads to it from the reference frame. When used to represent an orientation, the rotation vector is commonly called orientation vector, or attitude vector.
A similar method, calledaxis-angle representation, describes a rotation or orientation using aunit vectoraligned with the rotation axis, and a separate value to indicate the angle (see figure).
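A common way to use the axis-angle representation in computation is Rodrigues' rotation formula, which turns a unit axis and an angle into a rotation matrix. The following sketch, assuming NumPy, is a minimal implementation; the example rotation of 90 degrees about the z axis is arbitrary.

```python
import numpy as np

def axis_angle_to_matrix(axis, angle):
    """Rodrigues' formula: R = I + sin(a) K + (1 - cos(a)) K^2."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)            # unit vector along the rotation axis
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])      # skew-symmetric cross-product matrix
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

R = axis_angle_to_matrix([0, 0, 1], np.pi / 2)    # 90 deg about the z axis
print(np.round(R @ np.array([1.0, 0.0, 0.0]), 3)) # -> [0. 1. 0.]
```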
With the introduction of matrices the Euler theorems were rewritten. The rotations were described byorthogonal matricesreferred to as rotation matrices or direction cosine matrices. When used to represent an orientation, a rotation matrix is commonly called orientation matrix, or attitude matrix.
The above-mentioned Euler vector is the eigenvector of a rotation matrix corresponding to its eigenvalue 1 (the rotation axis is left unchanged by the rotation).
The product of two rotation matrices is the composition of rotations. Therefore, as before, the orientation can be given as the rotation from the initial frame to achieve the frame that we want to describe.
Theconfiguration spaceof a non-symmetricalobject inn-dimensional space isSO(n)×Rn. Orientation may be visualized by attaching a basis oftangent vectorsto an object. The direction in which each vector points determines its orientation.
Another way to describe rotations is usingrotation quaternions, also called versors. They are equivalent to rotation matrices and rotation vectors. With respect to rotation vectors, they can be more easily converted to and from matrices. When used to represent orientations, rotation quaternions are typically called orientation quaternions or attitude quaternions.
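The conversion from a rotation quaternion to a rotation matrix mentioned above can be written out directly. The sketch below, assuming NumPy and the (w, x, y, z) component ordering, normalizes the quaternion and applies the standard conversion formula; the example quaternion encodes a 90-degree rotation about the z axis.

```python
import numpy as np

def quaternion_to_matrix(q):
    """Rotation matrix of a unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

# 90 deg rotation about z: q = (cos 45deg, 0, 0, sin 45deg)
q = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
print(np.round(quaternion_to_matrix(q), 3))   # [[0 -1 0], [1 0 0], [0 0 1]]
```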
To consider rigid body dynamics in three-dimensional space, Newton's second law must be extended to define the relationship between the movement of a rigid body and the system of forces and torques that act on it.
Newton formulated his second law for a particle as, "The change of motion of an object is proportional to the force impressed and is made in the direction of the straight line in which the force is impressed."[3]Because Newton generally referred to mass times velocity as the "motion" of a particle, the phrase "change of motion" refers to the mass times acceleration of the particle, and so this law is usually written asF=ma,{\displaystyle \mathbf {F} =m\mathbf {a} ,}whereFis understood to be the only external force acting on the particle,mis the mass of the particle, andais its acceleration vector. The extension of Newton's second law to rigid bodies is achieved by considering a rigid system of particles.
If a system ofNparticles, Pi, i=1,...,N, are assembled into a rigid body, then Newton's second law can be applied to each of the particles in the body. IfFiis the external force applied to particle Piwith massmi, thenFi+∑j=1NFij=miai,i=1,…,N,{\displaystyle \mathbf {F} _{i}+\sum _{j=1}^{N}\mathbf {F} _{ij}=m_{i}\mathbf {a} _{i},\quad i=1,\ldots ,N,}whereFijis the internal force of particle Pjacting on particle Pithat maintains the constant distance between these particles.
An important simplification to these force equations is obtained by introducing theresultant forceand torque that acts on the rigid system. This resultant force and torque is obtained by choosing one of the particles in the system as a reference point,R, where each of the external forces are applied with the addition of an associated torque. The resultant forceFand torqueTare given by the formulas,F=∑i=1NFi,T=∑i=1N(Ri−R)×Fi,{\displaystyle \mathbf {F} =\sum _{i=1}^{N}\mathbf {F} _{i},\quad \mathbf {T} =\sum _{i=1}^{N}(\mathbf {R} _{i}-\mathbf {R} )\times \mathbf {F} _{i},}whereRiis the vector that defines the position of particle Pi.
Newton's second law for a particle combines with these formulas for the resultant force and torque to yield,F=∑i=1Nmiai,T=∑i=1N(Ri−R)×(miai),{\displaystyle \mathbf {F} =\sum _{i=1}^{N}m_{i}\mathbf {a} _{i},\quad \mathbf {T} =\sum _{i=1}^{N}(\mathbf {R} _{i}-\mathbf {R} )\times (m_{i}\mathbf {a} _{i}),}where the internal forcesFijcancel in pairs. Thekinematicsof a rigid body yields the formula for the acceleration of the particle Piin terms of the positionRand accelerationaof the reference particle as well as the angular velocity vector ω and angular acceleration vector α of the rigid system of particles as,ai=α×(Ri−R)+ω×(ω×(Ri−R))+a.{\displaystyle \mathbf {a} _{i}=\alpha \times (\mathbf {R} _{i}-\mathbf {R} )+\omega \times (\omega \times (\mathbf {R} _{i}-\mathbf {R} ))+\mathbf {a} .}
The mass properties of the rigid body are represented by itscenter of massandinertia matrix. Choose the reference pointRso that it satisfies the condition∑i=1Nmi(Ri−R)=0,{\displaystyle \sum _{i=1}^{N}m_{i}(\mathbf {R} _{i}-\mathbf {R} )=0,}
then it is known as the center of mass of the system.
The inertia matrix [IR] of the system relative to the reference pointRis defined by[IR]=∑i=1Nmi(I(SiTSi)−SiSiT),{\displaystyle [I_{R}]=\sum _{i=1}^{N}m_{i}\left(\mathbf {I} \left(\mathbf {S} _{i}^{\textsf {T}}\mathbf {S} _{i}\right)-\mathbf {S} _{i}\mathbf {S} _{i}^{\textsf {T}}\right),}
whereSi{\displaystyle \mathbf {S} _{i}}is the column vectorRi−R;SiT{\displaystyle \mathbf {S} _{i}^{\textsf {T}}}is its transpose, andI{\displaystyle \mathbf {I} }is the 3 by 3 identity matrix.
SiTSi{\displaystyle \mathbf {S} _{i}^{\textsf {T}}\mathbf {S} _{i}}is the scalar product ofSi{\displaystyle \mathbf {S} _{i}}with itself, whileSiSiT{\displaystyle \mathbf {S} _{i}\mathbf {S} _{i}^{\textsf {T}}}is the tensor product ofSi{\displaystyle \mathbf {S} _{i}}with itself.
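The definition of the inertia matrix translates directly into code. The following sketch, assuming NumPy and an arbitrary three-particle system, uses the center of mass as the reference point R and accumulates m_i(I(S_i^T S_i) − S_i S_i^T) over the particles.

```python
import numpy as np

# Hypothetical particle system: masses m_i at positions R_i
m = np.array([1.0, 2.0, 1.5])
R_i = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 2.0]])

R = (m[:, None] * R_i).sum(axis=0) / m.sum()     # center of mass as reference point

I_R = np.zeros((3, 3))
for mi, ri in zip(m, R_i):
    S = ri - R                                   # column vector S_i = R_i - R
    I_R += mi * (np.eye(3) * (S @ S) - np.outer(S, S))
print(I_R)
```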
Using the center of mass and inertia matrix, the force and torque equations for a single rigid body take the formF=ma,T=[IR]α+ω×[IR]ω,{\displaystyle \mathbf {F} =m\mathbf {a} ,\quad \mathbf {T} =[I_{R}]\alpha +\omega \times [I_{R}]\omega ,}and are known as Newton's second law of motion for a rigid body.
The dynamics of an interconnected system of rigid bodies,Bi,j= 1, ...,M, is formulated by isolating each rigid body and introducing the interaction forces. The resultant of the external and interaction forces on each body, yields the force-torque equationsFj=mjaj,Tj=[IR]jαj+ωj×[IR]jωj,j=1,…,M.{\displaystyle \mathbf {F} _{j}=m_{j}\mathbf {a} _{j},\quad \mathbf {T} _{j}=[I_{R}]_{j}\alpha _{j}+\omega _{j}\times [I_{R}]_{j}\omega _{j},\quad j=1,\ldots ,M.}
Newton's formulation yields 6Mequations that define the dynamics of a system ofMrigid bodies.[4]
A rotating object, whether under the influence of torques or not, may exhibit the behaviours ofprecessionandnutation.
The fundamental equation describing the behavior of a rotating solid body isEuler's equation of motion:τ=DLDt=dLdt+ω×L=d(Iω)dt+ω×Iω=Iα+ω×Iω{\displaystyle {\boldsymbol {\tau }}={\frac {D\mathbf {L} }{Dt}}={\frac {d\mathbf {L} }{dt}}+{\boldsymbol {\omega }}\times \mathbf {L} ={\frac {d(I{\boldsymbol {\omega }})}{dt}}+{\boldsymbol {\omega }}\times {I{\boldsymbol {\omega }}}=I{\boldsymbol {\alpha }}+{\boldsymbol {\omega }}\times {I{\boldsymbol {\omega }}}}where thepseudovectorsτandLare, respectively, thetorqueson the body and itsangular momentum, the scalarIis itsmoment of inertia, the vectorωis its angular velocity, the vectorαis its angular acceleration, D is the differential in an inertial reference frame and d is the differential in a relative reference frame fixed with the body.
The solution to this equation when there is no applied torque is discussed in the articlesEuler's equation of motionandPoinsot's ellipsoid.
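For the torque-free case, Euler's equation can be integrated numerically in the body frame; the rotational kinetic energy ½ω·Iω should then remain (approximately) constant, which gives a simple sanity check. The sketch below assumes NumPy, an arbitrary diagonal inertia matrix, and a basic fixed-step Runge–Kutta integrator.

```python
import numpy as np

# Principal moments of inertia in the body frame (assumed diagonal, arbitrary values)
I = np.diag([1.0, 2.0, 3.0])
I_inv = np.linalg.inv(I)

def omega_dot(omega, torque=np.zeros(3)):
    # Euler's equation in the body frame: I*alpha = tau - omega x (I*omega)
    return I_inv @ (torque - np.cross(omega, I @ omega))

omega = np.array([0.1, 2.0, 0.1])     # initial angular velocity (rad/s)
dt, steps = 1e-3, 10_000              # 10 s of torque-free motion

for _ in range(steps):                # classic RK4 step
    k1 = omega_dot(omega)
    k2 = omega_dot(omega + 0.5 * dt * k1)
    k3 = omega_dot(omega + 0.5 * dt * k2)
    k4 = omega_dot(omega + dt * k3)
    omega += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Rotational kinetic energy should stay close to its initial value (about 4.02 J here)
print(0.5 * omega @ I @ omega)
```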
It follows from Euler's equation that a torqueτapplied perpendicular to the axis of rotation, and therefore perpendicular toL, results in a rotation about an axis perpendicular to bothτandL. This motion is calledprecession. The angular velocity of precessionΩPis given by thecross product:[citation needed]τ=ΩP×L.{\displaystyle {\boldsymbol {\tau }}={\boldsymbol {\Omega }}_{\mathrm {P} }\times \mathbf {L} .}
Precession can be demonstrated by placing a spinning top with its axis horizontal and loosely supported (frictionless toward precession) at one end. Instead of falling, as might be expected, the top appears to defy gravity by remaining with its axis horizontal when the other end of the axis is left unsupported, while the free end of the axis slowly describes a circle in a horizontal plane; this turning is the precession. The effect is explained by the above equations. The torque on the top is supplied by a couple of forces: gravity acting downward on the device's centre of mass, and an equal force acting upward to support one end of the device. The rotation resulting from this torque is not downward, as might be intuitively expected (causing the device to fall), but perpendicular to both the gravitational torque (horizontal and perpendicular to the axis of rotation) and the axis of rotation (horizontal and outwards from the point of support), that is, about a vertical axis, causing the device to rotate slowly about the supporting point.
Under a constant torque of magnitudeτ, the speed of precessionΩPis inversely proportional toL, the magnitude of its angular momentum:τ=ΩPLsinθ,{\displaystyle \tau ={\mathit {\Omega }}_{\mathrm {P} }L\sin \theta ,}whereθis the angle between the vectorsΩPandL. Thus, if the top's spin slows down (for example, due to friction), its angular momentum decreases and so the rate of precession increases. This continues until the device is unable to rotate fast enough to support its own weight, at which point it stops precessing and falls off its support, largely because friction against the precession induces another precession that acts to cause the fall.
By convention, these three vectors – torque, spin, and precession – are all oriented with respect to each other according to theright-hand rule.
An alternate formulation of rigid body dynamics that has a number of convenient features is obtained by considering thevirtual workof forces acting on a rigid body.
The virtual work of forces acting at various points on a single rigid body can be calculated using the velocities of their point of application and theresultant force and torque. To see this, let the forcesF1,F2...Fnact on the pointsR1,R2...Rnin a rigid body.
The trajectories ofRi,i= 1, ...,nare defined by the movement of the rigid body. The velocity of the pointsRialong their trajectories areVi=ω×(Ri−R)+V,{\displaystyle \mathbf {V} _{i}={\boldsymbol {\omega }}\times (\mathbf {R} _{i}-\mathbf {R} )+\mathbf {V} ,}whereωis the angular velocity vector of the body.
Work is computed from thedot productof each force with the displacement of its point of contactδW=∑i=1nFi⋅δri.{\displaystyle \delta W=\sum _{i=1}^{n}\mathbf {F} _{i}\cdot \delta \mathbf {r} _{i}.}If the trajectory of a rigid body is defined by a set ofgeneralized coordinatesqj,j= 1, ...,m, then the virtual displacementsδriare given byδri=∑j=1m∂ri∂qjδqj=∑j=1m∂Vi∂q˙jδqj.{\displaystyle \delta \mathbf {r} _{i}=\sum _{j=1}^{m}{\frac {\partial \mathbf {r} _{i}}{\partial q_{j}}}\delta q_{j}=\sum _{j=1}^{m}{\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}_{j}}}\delta q_{j}.}The virtual work of this system of forces acting on the body in terms of the generalized coordinates becomesδW=F1⋅(∑j=1m∂V1∂q˙jδqj)+⋯+Fn⋅(∑j=1m∂Vn∂q˙jδqj){\displaystyle \delta W=\mathbf {F} _{1}\cdot \left(\sum _{j=1}^{m}{\frac {\partial \mathbf {V} _{1}}{\partial {\dot {q}}_{j}}}\delta q_{j}\right)+\dots +\mathbf {F} _{n}\cdot \left(\sum _{j=1}^{m}{\frac {\partial \mathbf {V} _{n}}{\partial {\dot {q}}_{j}}}\delta q_{j}\right)}
or collecting the coefficients ofδqjδW=(∑i=1nFi⋅∂Vi∂q˙1)δq1+⋯+(∑1=1nFi⋅∂Vi∂q˙m)δqm.{\displaystyle \delta W=\left(\sum _{i=1}^{n}\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}_{1}}}\right)\delta q_{1}+\dots +\left(\sum _{1=1}^{n}\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}_{m}}}\right)\delta q_{m}.}
For simplicity consider a trajectory of a rigid body that is specified by a single generalized coordinate q, such as a rotation angle, then the formula becomesδW=(∑i=1nFi⋅∂Vi∂q˙)δq=(∑i=1nFi⋅∂(ω×(Ri−R)+V)∂q˙)δq.{\displaystyle \delta W=\left(\sum _{i=1}^{n}\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}}}\right)\delta q=\left(\sum _{i=1}^{n}\mathbf {F} _{i}\cdot {\frac {\partial ({\boldsymbol {\omega }}\times (\mathbf {R} _{i}-\mathbf {R} )+\mathbf {V} )}{\partial {\dot {q}}}}\right)\delta q.}
Introduce the resultant forceFand torqueTso this equation takes the formδW=(F⋅∂V∂q˙+T⋅∂ω∂q˙)δq.{\displaystyle \delta W=\left(\mathbf {F} \cdot {\frac {\partial \mathbf {V} }{\partial {\dot {q}}}}+\mathbf {T} \cdot {\frac {\partial {\boldsymbol {\omega }}}{\partial {\dot {q}}}}\right)\delta q.}
The quantityQdefined byQ=F⋅∂V∂q˙+T⋅∂ω∂q˙,{\displaystyle Q=\mathbf {F} \cdot {\frac {\partial \mathbf {V} }{\partial {\dot {q}}}}+\mathbf {T} \cdot {\frac {\partial {\boldsymbol {\omega }}}{\partial {\dot {q}}}},}
is known as thegeneralized forceassociated with the virtual displacement δq. This formula generalizes to the movement of a rigid body defined by more than one generalized coordinate, that isδW=∑j=1mQjδqj,{\displaystyle \delta W=\sum _{j=1}^{m}Q_{j}\delta q_{j},}whereQj=F⋅∂V∂q˙j+T⋅∂ω∂q˙j,j=1,…,m.{\displaystyle Q_{j}=\mathbf {F} \cdot {\frac {\partial \mathbf {V} }{\partial {\dot {q}}_{j}}}+\mathbf {T} \cdot {\frac {\partial {\boldsymbol {\omega }}}{\partial {\dot {q}}_{j}}},\quad j=1,\ldots ,m.}
It is useful to note that conservative forces such as gravity and spring forces are derivable from a potential functionV(q1, ...,qn), known as apotential energy. In this case the generalized forces are given byQj=−∂V∂qj,j=1,…,m.{\displaystyle Q_{j}=-{\frac {\partial V}{\partial q_{j}}},\quad j=1,\ldots ,m.}
The equations of motion for a mechanical system of rigid bodies can be determined using D'Alembert's form of the principle of virtual work. The principle of virtual work is used to study the static equilibrium of a system of rigid bodies, however by introducing acceleration terms in Newton's laws this approach is generalized to define dynamic equilibrium.
The static equilibrium of a mechanical system of rigid bodies is defined by the condition that the virtual work of the applied forces is zero for any virtual displacement of the system. This is known as the principle of virtual work.[5] This is equivalent to the requirement that the generalized forces for any virtual displacement are zero, that is Qi = 0.
Let a mechanical system be constructed fromnrigid bodies, Bi,i= 1, ...,n, and let the resultant of the applied forces on each body be the force-torque pairs,FiandTi,i= 1, ...,n. Notice that these applied forces do not include the reaction forces where the bodies are connected. Finally, assume that the velocityViand angular velocitiesωi,i= 1, ...,n, for each rigid body, are defined by a single generalized coordinate q. Such a system of rigid bodies is said to have onedegree of freedom.
The virtual work of the forces and torques,FiandTi, applied to this one degree of freedom system is given byδW=∑i=1n(Fi⋅∂Vi∂q˙+Ti⋅∂ωi∂q˙)δq=Qδq,{\displaystyle \delta W=\sum _{i=1}^{n}\left(\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}}}+\mathbf {T} _{i}\cdot {\frac {\partial {\boldsymbol {\omega }}_{i}}{\partial {\dot {q}}}}\right)\delta q=Q\delta q,}whereQ=∑i=1n(Fi⋅∂Vi∂q˙+Ti⋅∂ωi∂q˙),{\displaystyle Q=\sum _{i=1}^{n}\left(\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}}}+\mathbf {T} _{i}\cdot {\frac {\partial {\boldsymbol {\omega }}_{i}}{\partial {\dot {q}}}}\right),}is the generalized force acting on this one degree of freedom system.
If the mechanical system is defined by m generalized coordinates,qj,j= 1, ...,m, then the system has m degrees of freedom and the virtual work is given by,δW=∑j=1mQjδqj,{\displaystyle \delta W=\sum _{j=1}^{m}Q_{j}\delta q_{j},}whereQj=∑i=1n(Fi⋅∂Vi∂q˙j+Ti⋅∂ωi∂q˙j),j=1,…,m.{\displaystyle Q_{j}=\sum _{i=1}^{n}\left(\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}_{j}}}+\mathbf {T} _{i}\cdot {\frac {\partial {\boldsymbol {\omega }}_{i}}{\partial {\dot {q}}_{j}}}\right),\quad j=1,\ldots ,m.}is the generalized force associated with the generalized coordinateqj. The principle of virtual work states that static equilibrium occurs when these generalized forces acting on the system are zero, that isQj=0,j=1,…,m.{\displaystyle Q_{j}=0,\quad j=1,\ldots ,m.}
Thesemequations define the static equilibrium of the system of rigid bodies.
Consider a single rigid body which moves under the action of a resultant forceFand torqueT, with one degree of freedom defined by the generalized coordinateq. Assume the reference point for the resultant force and torque is the center of mass of the body, then the generalized inertia forceQ*associated with the generalized coordinateqis given byQ∗=−(MA)⋅∂V∂q˙−([IR]α+ω×[IR]ω)⋅∂ω∂q˙.{\displaystyle Q^{*}=-(M\mathbf {A} )\cdot {\frac {\partial \mathbf {V} }{\partial {\dot {q}}}}-\left([I_{R}]{\boldsymbol {\alpha }}+{\boldsymbol {\omega }}\times [I_{R}]{\boldsymbol {\omega }}\right)\cdot {\frac {\partial {\boldsymbol {\omega }}}{\partial {\dot {q}}}}.}
This inertia force can be computed from the kinetic energy of the rigid body,T=12MV⋅V+12ω⋅[IR]ω,{\displaystyle T={\tfrac {1}{2}}M\mathbf {V} \cdot \mathbf {V} +{\tfrac {1}{2}}{\boldsymbol {\omega }}\cdot [I_{R}]{\boldsymbol {\omega }},}by using the formulaQ∗=−(ddt∂T∂q˙−∂T∂q).{\displaystyle Q^{*}=-\left({\frac {d}{dt}}{\frac {\partial T}{\partial {\dot {q}}}}-{\frac {\partial T}{\partial q}}\right).}
A system ofnrigid bodies with m generalized coordinates has the kinetic energyT=∑i=1n(12MVi⋅Vi+12ωi⋅[IR]ωi),{\displaystyle T=\sum _{i=1}^{n}\left({\tfrac {1}{2}}M\mathbf {V} _{i}\cdot \mathbf {V} _{i}+{\tfrac {1}{2}}{\boldsymbol {\omega }}_{i}\cdot [I_{R}]{\boldsymbol {\omega }}_{i}\right),}which can be used to calculate the m generalized inertia forces[6]Qj∗=−(ddt∂T∂q˙j−∂T∂qj),j=1,…,m.{\displaystyle Q_{j}^{*}=-\left({\frac {d}{dt}}{\frac {\partial T}{\partial {\dot {q}}_{j}}}-{\frac {\partial T}{\partial q_{j}}}\right),\quad j=1,\ldots ,m.}
D'Alembert's form of the principle of virtual work states that a system of rigid bodies is in dynamic equilibrium when the virtual work of the sum of the applied forces and the inertial forces is zero for any virtual displacement of the system. Thus, dynamic equilibrium of a system of n rigid bodies with m generalized coordinates requires thatδW=(Q1+Q1∗)δq1+⋯+(Qm+Qm∗)δqm=0,{\displaystyle \delta W=\left(Q_{1}+Q_{1}^{*}\right)\delta q_{1}+\dots +\left(Q_{m}+Q_{m}^{*}\right)\delta q_{m}=0,}for any set of virtual displacementsδqj. This condition yieldsmequations,Qj+Qj∗=0,j=1,…,m,{\displaystyle Q_{j}+Q_{j}^{*}=0,\quad j=1,\ldots ,m,}which can also be written asddt∂T∂q˙j−∂T∂qj=Qj,j=1,…,m.{\displaystyle {\frac {d}{dt}}{\frac {\partial T}{\partial {\dot {q}}_{j}}}-{\frac {\partial T}{\partial q_{j}}}=Q_{j},\quad j=1,\ldots ,m.}The result is a set of m equations of motion that define the dynamics of the rigid body system.
If the generalized forces Qjare derivable from a potential energyV(q1, ...,qm), then these equations of motion take the formddt∂T∂q˙j−∂T∂qj=−∂V∂qj,j=1,…,m.{\displaystyle {\frac {d}{dt}}{\frac {\partial T}{\partial {\dot {q}}_{j}}}-{\frac {\partial T}{\partial q_{j}}}=-{\frac {\partial V}{\partial q_{j}}},\quad j=1,\ldots ,m.}
In this case, introduce theLagrangian,L=T−V, so these equations of motion becomeddt∂L∂q˙j−∂L∂qj=0j=1,…,m.{\displaystyle {\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {q}}_{j}}}-{\frac {\partial L}{\partial q_{j}}}=0\quad j=1,\ldots ,m.}These are known asLagrange's equations of motion.
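As a small symbolic example of Lagrange's equations, the following sketch (assuming a recent version of SymPy that supports differentiating with respect to a Derivative expression) derives the equation of motion of a simple pendulum of mass m and length l from L = T − V.

```python
import sympy as sp

t = sp.symbols('t')
m, g, l = sp.symbols('m g l', positive=True)
q = sp.Function('q')(t)                    # generalized coordinate: pendulum angle
qd = sp.Derivative(q, t)

T = sp.Rational(1, 2) * m * l**2 * qd**2   # kinetic energy of the bob
V = -m * g * l * sp.cos(q)                 # potential energy, pivot as reference
L = T - V                                  # Lagrangian

# Lagrange's equation: d/dt (dL/dq_dot) - dL/dq = 0
eom = sp.diff(L.diff(qd), t) - L.diff(q)
print(sp.simplify(eom))                    # ~ m*l**2*q'' + m*g*l*sin(q)
```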
The linear and angular momentum of a rigid system of particles is formulated by measuring the position and velocity of the particles relative to the center of mass. Let the system of particles Pi,i= 1, ...,nbe located at the coordinatesriand velocitiesvi. Select a reference pointRand compute the relative position and velocity vectors,ri=(ri−R)+R,vi=ddt(ri−R)+V.{\displaystyle \mathbf {r} _{i}=\left(\mathbf {r} _{i}-\mathbf {R} \right)+\mathbf {R} ,\quad \mathbf {v} _{i}={\frac {d}{dt}}(\mathbf {r} _{i}-\mathbf {R} )+\mathbf {V} .}
The total linear and angular momentum vectors relative to the reference pointRarep=ddt(∑i=1nmi(ri−R))+(∑i=1nmi)V,{\displaystyle \mathbf {p} ={\frac {d}{dt}}\left(\sum _{i=1}^{n}m_{i}\left(\mathbf {r} _{i}-\mathbf {R} \right)\right)+\left(\sum _{i=1}^{n}m_{i}\right)\mathbf {V} ,}andL=∑i=1nmi(ri−R)×ddt(ri−R)+(∑i=1nmi(ri−R))×V.{\displaystyle \mathbf {L} =\sum _{i=1}^{n}m_{i}\left(\mathbf {r} _{i}-\mathbf {R} \right)\times {\frac {d}{dt}}\left(\mathbf {r} _{i}-\mathbf {R} \right)+\left(\sum _{i=1}^{n}m_{i}\left(\mathbf {r} _{i}-\mathbf {R} \right)\right)\times \mathbf {V} .}
IfRis chosen as the center of mass these equations simplify top=MV,L=∑i=1nmi(ri−R)×ddt(ri−R).{\displaystyle \mathbf {p} =M\mathbf {V} ,\quad \mathbf {L} =\sum _{i=1}^{n}m_{i}\left(\mathbf {r} _{i}-\mathbf {R} \right)\times {\frac {d}{dt}}\left(\mathbf {r} _{i}-\mathbf {R} \right).}
To specialize these formulas to a rigid body, assume the particles are rigidly connected to each other so Pi, i=1,...,n are located by the coordinatesriand velocitiesvi. Select a reference pointRand compute the relative position and velocity vectors,ri=(ri−R)+R,vi=ω×(ri−R)+V,{\displaystyle \mathbf {r} _{i}=(\mathbf {r} _{i}-\mathbf {R} )+\mathbf {R} ,\quad \mathbf {v} _{i}=\omega \times (\mathbf {r} _{i}-\mathbf {R} )+\mathbf {V} ,}where ω is the angular velocity of the system.[7][8][9]
Thelinear momentumandangular momentumof this rigid system measured relative to the center of massRisp=(∑i=1nmi)V,L=∑i=1nmi(ri−R)×vi=∑i=1nmi(ri−R)×(ω×(ri−R)).{\displaystyle \mathbf {p} =\left(\sum _{i=1}^{n}m_{i}\right)\mathbf {V} ,\quad \mathbf {L} =\sum _{i=1}^{n}m_{i}(\mathbf {r} _{i}-\mathbf {R} )\times \mathbf {v} _{i}=\sum _{i=1}^{n}m_{i}(\mathbf {r} _{i}-\mathbf {R} )\times (\omega \times (\mathbf {r} _{i}-\mathbf {R} )).}
These equations simplify to become,p=MV,L=[IR]ω,{\displaystyle \mathbf {p} =M\mathbf {V} ,\quad \mathbf {L} =[I_{R}]\omega ,}where M is the total mass of the system and [IR] is themoment of inertiamatrix defined by[IR]=−∑i=1nmi[ri−R][ri−R],{\displaystyle [I_{R}]=-\sum _{i=1}^{n}m_{i}[r_{i}-R][r_{i}-R],}where [ri− R] is the skew-symmetric matrix constructed from the vectorri−R.
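The equivalence between the direct particle sums and the compact forms p = MV and L = [I_R]ω can be checked numerically. The sketch below, assuming NumPy and an arbitrary three-particle rigid system, builds [I_R] from skew-symmetric matrices as in the definition above and compares both routes.

```python
import numpy as np

def skew(v):
    # skew-symmetric matrix [v] such that skew(v) @ x == np.cross(v, x)
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

m = np.array([1.0, 2.0, 1.5])
r = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 2.0]])
omega = np.array([0.2, -0.1, 0.5])                # rigid-body angular velocity
V = np.array([0.3, 0.0, 0.1])                     # velocity of the center of mass
R = (m[:, None] * r).sum(axis=0) / m.sum()        # center of mass

# particle velocities v_i = omega x (r_i - R) + V
v = np.cross(omega, r - R) + V

p_direct = (m[:, None] * v).sum(axis=0)                       # sum of m_i v_i
L_direct = (m[:, None] * np.cross(r - R, v)).sum(axis=0)      # sum of m_i (r_i - R) x v_i

I_R = -sum(mi * skew(ri - R) @ skew(ri - R) for mi, ri in zip(m, r))
print(np.allclose(p_direct, m.sum() * V))         # p = M V      -> True
print(np.allclose(L_direct, I_R @ omega))         # L = [I_R] w  -> True
```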
|
https://en.wikipedia.org/wiki/Dynamic_equilibrium_(mechanics)
|
Documentationis any communicable material that is used to describe, explain or instruct regarding some attributes of an object, system or procedure, such as its parts, assembly, installation, maintenance, and use.[1]As a form ofknowledge managementandknowledge organization, documentation can be provided on paper, online, or ondigitaloranalog media, such asaudio tapeorCDs. Examples areuser guides,white papers,online help, and quick-reference guides. Paper or hard-copy documentation has become less common.[citation needed]Documentation is often distributed via websites, software products, and other online applications.
Documentation as a set of instructional materials shouldn't be confused withdocumentation science, the study of the recording and retrieval of information.
While associatedInternational Organization for Standardization(ISO) standards are not easily available publicly, a guide from other sources for this topic may serve the purpose.[2][3][4][5]
Documentation development may involve document drafting, formatting, submitting, reviewing, approving, distributing, reposting and tracking, etc., and is governed by associated standard operating procedures in regulated industries. It could also involve creating content from scratch. Documentation should be easy to read and understand. If it is too long and too wordy, it may be misunderstood or ignored. Clear, concise words should be used, and sentences should be limited to a maximum of 15 words. Documentation intended for a general audience should avoid gender-specific terms and cultural biases. In a series of procedures, steps should be clearly numbered.[6][7][8][9]
Technical writersand corporate communicators are professionals whose field and work is documentation. Ideally, technical writers have a background in both the subject matter and also in writing, managing content, andinformation architecture. Technical writers more commonly collaborate withsubject-matter experts, such as engineers, technical experts, medical professionals, etc. to define and then create documentation to meet the user's needs.Corporate communicationsincludes other types of written documentation, for example:
The following are typical software documentation types:
The following are typical hardware and service documentation types:
A common type of software document written in the simulation industry is the SDF. When developing software for a simulator, which can range from embedded avionics devices to 3D terrain databases and full motion control systems, the engineer keeps a notebook detailing the development ("the build") of the project or module. The document can be a wiki page, Microsoft Word document or other environment. It should contain a requirements section and an interface section detailing the communication interface of the software. Often a notes section is used to detail the proof of concept and then track errors and enhancements. Finally, a testing section documents how the software was tested, recording conformance to the client's requirements. The result is a detailed description of how the software is designed, how to build and install the software on the target device, and any known defects and workarounds. This build document enables future developers and maintainers to come up to speed on the software in a timely manner, and also provides a roadmap to modifying code or searching for bugs.
Such software tools can automatically collect data about network equipment, covering both inventory and configuration information. The Information Technology Infrastructure Library calls for the creation of such a database as a basis for all information held by those responsible for IT, and it is also the basis for IT documentation. Examples include XIA Configuration.[11]
"Documentation" is the preferred term for the process of populating criminal databases. Examples include theNational Counterterrorism Center'sTerrorist Identities Datamart Environment,sex offender registries, and gang databases.[12]
Documentation, as it pertains to the early childhood education field, is "when we notice and value children's ideas, thinking, questions, and theories about the world and then collect traces of their work (drawings, photographs of the children in action, and transcripts of their words) to share with a wider community".[13]
Thus, documentation is a process, used to link the educator's knowledge and learning of the child/children with the families, other collaborators, and even to the children themselves.
Documentation is an integral part of the cycle of inquiry - observing, reflecting, documenting, sharing and responding.[13]
Pedagogical documentation, in terms of the teacher documentation, is the "teacher's story of the movement in children's understanding".[13]According to Stephanie Cox Suarez in "Documentation - Transforming our Perspectives", "teachers are considered researchers, and documentation is a research tool to support knowledge building among children and adults".[14]
Documentation can take many different styles in the classroom. The following exemplifies ways in which documentation can make the research, or learning, visible:
Documentation is certainly a process in and of itself, and it is also a process within the educator. The following is the development of documentation as it progresses for and in the educator themselves:
|
https://en.wikipedia.org/wiki/Documentation
|
Theroot mean square deviation(RMSD) orroot mean square error(RMSE) is either one of two closely related and frequently used measures of the differences between true or predicted values on the one hand and observed values or anestimatoron the other.
The deviation is typically simply the difference of scalars; it can also be generalized to the vector lengths of a displacement, as in the bioinformatics concept of root mean square deviation of atomic positions.
The RMSD of a sample is the quadratic mean of the differences between the observed values and predicted ones. These deviations are called residuals when the calculations are performed over the data sample that was used for estimation (and are therefore always in reference to an estimate) and are called errors (or prediction errors) when computed out-of-sample (that is, on new data, referencing a true value rather than an estimate). The RMSD serves to aggregate the magnitudes of the errors in predictions for various data points into a single measure of predictive power. RMSD is a measure of accuracy, used to compare forecasting errors of different models for a particular dataset and not between datasets, as it is scale-dependent.[1]
RMSD is always non-negative, and a value of 0 (almost never achieved in practice) would indicate a perfect fit to the data. In general, a lower RMSD is better than a higher one. However, comparisons across different types of data would be invalid because the measure is dependent on the scale of the numbers used.
RMSD is the square root of the average of squared errors. The effect of each error on RMSD is proportional to the size of the squared error; thus larger errors have a disproportionately large effect on RMSD. Consequently, RMSD is sensitive tooutliers.[2][3]
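For concreteness, the RMSD of a small set of predictions against observations can be computed directly from its definition as the square root of the mean squared residual; the sketch below assumes NumPy and uses made-up numbers.

```python
import numpy as np

observed = np.array([2.0, 3.5, 4.0, 5.5, 7.0])
predicted = np.array([2.2, 3.1, 4.4, 5.0, 7.5])

residuals = predicted - observed
rmsd = np.sqrt(np.mean(residuals**2))     # square root of the mean squared residual
print(rmsd)
```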
The RMSD of anestimatorθ^{\displaystyle {\hat {\theta }}}with respect to an estimated parameterθ{\displaystyle \theta }is defined as the square root of themean squared error:RMSD⁡(θ^)=MSE⁡(θ^)=E⁡((θ^−θ)2).{\displaystyle \operatorname {RMSD} ({\hat {\theta }})={\sqrt {\operatorname {MSE} ({\hat {\theta }})}}={\sqrt {\operatorname {E} {\bigl (}({\hat {\theta }}-\theta )^{2}{\bigr )}}}.}
For anunbiased estimator, the RMSD is the square root of thevariance, known as thestandard deviation.
IfX1, ...,Xnis a sample of a population with true mean valuex0{\displaystyle x_{0}}, then the RMSD of the sample isRMSD=1n∑i=1n(Xi−x0)2.{\displaystyle \operatorname {RMSD} ={\sqrt {{\frac {1}{n}}\sum _{i=1}^{n}(X_{i}-x_{0})^{2}}}.}
The RMSD of predicted valuesy^t{\displaystyle {\hat {y}}_{t}}for timestof aregression'sdependent variableyt,{\displaystyle y_{t},}with variables observed overTtimes, is computed forTdifferent predictions as the square root of the mean of the squares of the deviations:RMSD=∑t=1T(y^t−yt)2T.{\displaystyle \operatorname {RMSD} ={\sqrt {\frac {\sum _{t=1}^{T}({\hat {y}}_{t}-y_{t})^{2}}{T}}}.}
(For regressions oncross-sectional data, the subscripttis replaced byiandTis replaced byn.)
In some disciplines, the RMSD is used to compare differences between two things that may vary, neither of which is accepted as the "standard". For example, when measuring the average difference between two time seriesx1,t{\displaystyle x_{1,t}}andx2,t{\displaystyle x_{2,t}},
the formula becomesRMSD=∑t=1T(x1,t−x2,t)2T.{\displaystyle \operatorname {RMSD} ={\sqrt {\frac {\sum _{t=1}^{T}(x_{1,t}-x_{2,t})^{2}}{T}}}.}
Normalizing the RMSD facilitates the comparison between datasets or models with different scales. Though there is no consistent means of normalization in the literature, common choices are the mean or the range (defined as the maximum value minus the minimum value) of the measured data:[4]NRMSD=RMSDy¯orNRMSD=RMSDymax−ymin.{\displaystyle \mathrm {NRMSD} ={\frac {\mathrm {RMSD} }{\bar {y}}}\quad {\text{or}}\quad \mathrm {NRMSD} ={\frac {\mathrm {RMSD} }{y_{\max }-y_{\min }}}.}
This value is commonly referred to as thenormalized root mean square deviationorerror(NRMSD or NRMSE), and is often expressed as a percentage, where lower values indicate less residual variance. It is also called thecoefficient of variationorpercent RMS. In many cases, especially for smaller samples, the sample range is likely to be affected by the sample size, which would hamper comparisons.
Another possible method to make the RMSD a more useful comparison measure is to divide the RMSD by theinterquartile range(IQR). When dividing the RMSD by the IQR, the normalized value becomes less sensitive to extreme values in the target variable:RMSDIQR=RMSDQ3−Q1,{\displaystyle \mathrm {RMSD_{IQR}} ={\frac {\mathrm {RMSD} }{Q_{3}-Q_{1}}},}
withQ1=CDF−1(0.25){\displaystyle Q_{1}={\text{CDF}}^{-1}(0.25)}andQ3=CDF−1(0.75),{\displaystyle Q_{3}={\text{CDF}}^{-1}(0.75),}where CDF−1is thequantile function.
When normalizing by the mean value of the measurements, the termcoefficient of variation of the RMSD, CV(RMSD)may be used to avoid ambiguity.[5]This is analogous to thecoefficient of variationwith the RMSD taking the place of thestandard deviation.
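The different normalizations discussed above (by the mean, by the range, and by the interquartile range) differ only in the denominator, as the following sketch illustrates (assuming NumPy; the data are made up).

```python
import numpy as np

observed = np.array([2.0, 3.5, 4.0, 5.5, 7.0])
predicted = np.array([2.2, 3.1, 4.4, 5.0, 7.5])
rmsd = np.sqrt(np.mean((predicted - observed)**2))

nrmsd_mean = rmsd / observed.mean()                         # CV(RMSD)
nrmsd_range = rmsd / (observed.max() - observed.min())      # normalized by the range
q1, q3 = np.percentile(observed, [25, 75])
nrmsd_iqr = rmsd / (q3 - q1)                                # normalized by the IQR
print(nrmsd_mean, nrmsd_range, nrmsd_iqr)
```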
Some researchers[who?]have recommended[where?]the use of themean absolute error(MAE) instead of the root mean square deviation. MAE possesses advantages in interpretability over RMSD. MAE is the average of the absolute values of the errors. MAE is fundamentally easier to understand than the square root of the average of squared errors. Furthermore, each error influences MAE in direct proportion to the absolute value of the error, which is not the case for RMSD.[2]
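The differing sensitivity to outliers can be seen with a single large error among several small ones: the MAE grows in direct proportion to that error, while the RMSE is dominated by it. A minimal numeric illustration, assuming NumPy:

```python
import numpy as np

errors = np.array([1.0, 1.0, 1.0, 1.0, 10.0])   # one outlier among small errors

mae = np.mean(np.abs(errors))                    # 2.8: grows linearly with the outlier
rmse = np.sqrt(np.mean(errors**2))               # ~4.56: dominated by the outlier
print(mae, rmse)
```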
|
https://en.wikipedia.org/wiki/Root_mean_squared_error
|
Inmathematics, a nonempty collection ofsetsis called a𝜎-ring(pronouncedsigma-ring) if it isclosedunder countableunionandrelative complementation.
Let R{\displaystyle {\mathcal {R}}} be a nonempty collection of sets. Then R{\displaystyle {\mathcal {R}}} is a 𝜎-ring if it is closed under countable unions, that is, ⋃n=1∞An∈R{\displaystyle \bigcup _{n=1}^{\infty }A_{n}\in {\mathcal {R}}} whenever A1,A2,…{\displaystyle A_{1},A_{2},\ldots } are elements of R,{\displaystyle {\mathcal {R}},} and closed under relative complementation, that is, A∖B∈R{\displaystyle A\setminus B\in {\mathcal {R}}} whenever A,B∈R.{\displaystyle A,B\in {\mathcal {R}}.}
These two properties imply:⋂n=1∞An∈R{\displaystyle \bigcap _{n=1}^{\infty }A_{n}\in {\mathcal {R}}}wheneverA1,A2,…{\displaystyle A_{1},A_{2},\ldots }are elements ofR.{\displaystyle {\mathcal {R}}.}
This is because⋂n=1∞An=A1∖⋃n=2∞(A1∖An).{\displaystyle \bigcap _{n=1}^{\infty }A_{n}=A_{1}\setminus \bigcup _{n=2}^{\infty }\left(A_{1}\setminus A_{n}\right).}
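The set identity used here can be checked on a small finite example (plain Python, with arbitrary example sets): the countable intersection is recovered from one set minus a union of relative complements, which is why closure under countable unions and relative complements suffices.

```python
from functools import reduce

# Finite check of the identity:
#   intersection(A_1, A_2, ...) == A_1 \ union(A_1 \ A_n for n >= 2)
A = [frozenset({1, 2, 3, 4}), frozenset({2, 3, 4, 5}), frozenset({0, 2, 4, 6})]

lhs = reduce(frozenset.intersection, A)
rhs = A[0] - frozenset().union(*(A[0] - An for An in A[1:]))
print(lhs == rhs, lhs)    # True frozenset({2, 4})
```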
Every 𝜎-ring is aδ-ringbut there exist δ-rings that are not 𝜎-rings.
If the first property is weakened to closure under finite union (that is,A∪B∈R{\displaystyle A\cup B\in {\mathcal {R}}}wheneverA,B∈R{\displaystyle A,B\in {\mathcal {R}}}) but not countable union, thenR{\displaystyle {\mathcal {R}}}is aringbut not a 𝜎-ring.
𝜎-rings can be used instead of𝜎-fields(𝜎-algebras) in the development ofmeasureandintegrationtheory, if one does not wish to require that theuniversal setbe measurable. Every 𝜎-field is also a 𝜎-ring, but a 𝜎-ring need not be a 𝜎-field.
A 𝜎-ringR{\displaystyle {\mathcal {R}}}that is a collection of subsets ofX{\displaystyle X}induces a𝜎-fieldforX.{\displaystyle X.}DefineA={E⊆X:E∈RorEc∈R}.{\displaystyle {\mathcal {A}}=\{E\subseteq X:E\in {\mathcal {R}}\ {\text{or}}\ E^{c}\in {\mathcal {R}}\}.}ThenA{\displaystyle {\mathcal {A}}}is a 𝜎-field over the setX{\displaystyle X}- to check closure under countable union, recall aσ{\displaystyle \sigma }-ring is closed under countable intersections. In factA{\displaystyle {\mathcal {A}}}is the minimal 𝜎-field containingR{\displaystyle {\mathcal {R}}}since it must be contained in every 𝜎-field containingR.{\displaystyle {\mathcal {R}}.}
Additionally, a semiring is a π-system where every complement B∖A{\displaystyle B\setminus A} is equal to a finite disjoint union of sets in F.{\displaystyle {\mathcal {F}}.} A semialgebra is a semiring where every complement Ω∖A{\displaystyle \Omega \setminus A} is equal to a finite disjoint union of sets in F.{\displaystyle {\mathcal {F}}.} Here A,B,A1,A2,…{\displaystyle A,B,A_{1},A_{2},\ldots } denote arbitrary elements of F,{\displaystyle {\mathcal {F}},} and it is assumed that F≠∅.{\displaystyle {\mathcal {F}}\neq \varnothing .}
|
https://en.wikipedia.org/wiki/Sigma-ring
|