Unsupervised learning is a framework in machine learning where, in contrast to supervised learning, algorithms learn patterns exclusively from unlabeled data.[1] Other frameworks in the spectrum of supervision include weak- or semi-supervision, where a small portion of the data is tagged, and self-supervision. Some researchers consider self-supervised learning a form of unsupervised learning.[2]
Conceptually, unsupervised learning divides into the aspects of data, training, algorithm, and downstream applications. Typically, the dataset is harvested cheaply "in the wild", such as a massive text corpus obtained by web crawling, with only minor filtering (such as Common Crawl). This compares favorably to supervised learning, where the dataset (such as ImageNet1000) is typically constructed manually, which is much more expensive.
Sometimes a trained model can be used as-is, but more often such models are modified for downstream applications. For example, the generative pretraining method trains a model to generate a textual dataset, before finetuning it for other applications, such as text classification.[3][4] As another example, autoencoders are trained to extract good features, which can then be used as a module for other models, such as in a latent diffusion model.
Tasks
Tasks are often categorized as discriminative (recognition) or generative (imagination). Often, but not always, discriminative tasks use supervised methods and generative tasks use unsupervised ones; however, the separation is very hazy. For example, object recognition favors supervised learning, but unsupervised learning can also cluster objects into groups. Furthermore, as progress marches onward, some tasks employ both methods, and some tasks swing from one to another. For example, image recognition started off as heavily supervised, but became hybrid by employing unsupervised pre-training, and then moved towards supervision again with the advent of dropout, ReLU, and adaptive learning rates.
A typical generative task is as follows. At each step, a datapoint is sampled from the dataset, part of the data is removed, and the model must infer the removed part. This is particularly clear in denoising autoencoders and BERT.
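As a concrete illustration, the following sketch shows only the data-corruption step of such a task; the token ids, the 15% mask rate (BERT-style), and the MASK_ID sentinel are illustrative assumptions, not taken from any particular model.

```python
import numpy as np

# Sketch of the generic generative task: remove part of a datapoint and
# ask the model to fill it in. Token ids and mask rate are made up.
rng = np.random.default_rng(0)
tokens = np.array([17, 42, 5, 99, 23, 8])   # a hypothetical datapoint
MASK_ID = 0

mask = rng.random(tokens.shape) < 0.15       # BERT-style 15% masking
corrupted = np.where(mask, MASK_ID, tokens)  # the model sees this as input
targets = tokens[mask]                       # the model must predict these
print(corrupted, targets)
```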
Neural network architectures
Training
During the learning phase, an unsupervised network tries to mimic the data it's given and uses the error in its mimicked output to correct itself (i.e. correct its weights and biases). Sometimes the error is expressed as a low probability that the erroneous output occurs, or it might be expressed as an unstable high energy state in the network.
An energy function is a macroscopic measure of a network's activation state. In Boltzmann machines, it plays the role of the cost function. This analogy with physics is inspired by Ludwig Boltzmann's analysis of a gas's macroscopic energy from the microscopic probabilities of particle motion, $p \propto e^{-E/kT}$, where $k$ is the Boltzmann constant and $T$ is temperature. In the RBM network the relation is $p = e^{-E}/Z$,[5] where $p$ and $E$ vary over every possible activation pattern and $Z = \sum_{\text{all patterns}} e^{-E(\text{pattern})}$. To be more precise, $p(a) = e^{-E(a)}/Z$, where $a$ is an activation pattern of all neurons (visible and hidden). Hence, some early neural networks bear the name Boltzmann machine. Paul Smolensky calls $-E$ the Harmony. A network seeks low energy, which is high Harmony.
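To make the energy-probability relation concrete, the following sketch enumerates every activation pattern of a hypothetical three-neuron network and computes $p(a) = e^{-E(a)}/Z$ directly. The weights, biases, and the standard Hopfield/Boltzmann energy form are illustrative assumptions.

```python
import itertools
import numpy as np

# Hypothetical 3-neuron network with symmetric weights W and biases b.
rng = np.random.default_rng(0)
W = np.triu(rng.normal(size=(3, 3)), k=1)
W = W + W.T                       # symmetric, zero diagonal
b = rng.normal(size=3)

def energy(s):
    # Standard Hopfield/Boltzmann energy: E(s) = -1/2 s^T W s - b^T s
    return -0.5 * s @ W @ s - b @ s

patterns = [np.array(p) for p in itertools.product([0, 1], repeat=3)]
unnormalized = np.array([np.exp(-energy(s)) for s in patterns])
Z = unnormalized.sum()            # partition function: sum over all patterns
p = unnormalized / Z              # p(a) = exp(-E(a)) / Z
print({tuple(s): round(pi, 3) for s, pi in zip(patterns, p)})
```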
Networks
This table shows connection diagrams of various unsupervised networks, the details of which will be given in the section Comparison of Networks. Circles are neurons and edges between them are connection weights. As network design changes, features are added on to enable new capabilities or removed to make learning faster. For instance, neurons change between deterministic (Hopfield) and stochastic (Boltzmann) to allow robust output, weights are removed within a layer (RBM) to hasten learning, or connections are allowed to become asymmetric (Helmholtz).
Of the networks bearing people's names, only Hopfield worked directly with neural networks. Boltzmann and Helmholtz came before artificial neural networks, but their work in physics and physiology inspired the analytical methods that were used.
History
1974
Ising magnetic model proposed by WA Little for cognition.
1982
Ising variant Hopfield net described as CAMs and classifiers by John Hopfield.
1983
Ising variant Boltzmann machine with probabilistic neurons described by Hinton & Sejnowski following Sherrington & Kirkpatrick's 1975 work.
1986
Paul Smolensky publishes Harmony Theory, which is an RBM with practically the same Boltzmann energy function. Smolensky did not give a practical training scheme; Hinton did in the mid-2000s.
1995
Schmidhuber introduces the LSTM neuron for languages.
1995
Dayan & Hinton introduce the Helmholtz machine.
2013
Kingma, Rezende, & co. introduce variational autoencoders as a Bayesian graphical probability network, with neural nets as components.
Specific Networks
Here, we highlight some characteristics of select networks. The details of each are given in the comparison table below.
Hopfield Network
Ferromagnetism inspired Hopfield networks. A neuron corresponds to an iron domain with binary magnetic moments Up and Down, and neural connections correspond to the domains' influence on each other. Symmetric connections enable a global energy formulation. During inference the network updates each state using the standard activation step function. Symmetric weights and the right energy functions guarantee convergence to a stable activation pattern. Asymmetric weights are difficult to analyze. Hopfield nets are used as content-addressable memories (CAMs).
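A minimal sketch of a Hopfield net used as a CAM, assuming ±1 neurons, Hebbian outer-product storage, and asynchronous step-function updates; the stored patterns are made up.

```python
import numpy as np

# Two hypothetical +/-1 patterns to store.
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
# Hebbian storage: sum of outer products, symmetric, zero diagonal.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def recall(state, steps=10):
    state = state.copy()
    for _ in range(steps):             # asynchronous sign updates
        for i in range(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

noisy = np.array([1, -1, -1, -1, 1, -1])   # corrupted copy of pattern 0
print(recall(noisy))                        # settles into a stored pattern
```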
Boltzmann Machine
These are stochastic Hopfield nets. Their state values are sampled from a probability density function as follows: suppose a binary neuron fires with Bernoulli probability p(1) = 1/3 and rests with p(0) = 2/3. One samples from it by taking a uniformly distributed random number y and plugging it into the inverted cumulative distribution function, which in this case is the step function thresholded at 2/3: the inverse function is { 0 if y <= 2/3, 1 if y > 2/3 }.
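The sampling procedure described above can be written directly; the number of samples is arbitrary.

```python
import numpy as np

# Inverse-CDF sampling for the Bernoulli neuron described above:
# p(1) = 1/3, p(0) = 2/3, so the inverse CDF is a step at 2/3.
rng = np.random.default_rng(0)
y = rng.uniform(size=10)          # uniform random numbers
samples = (y > 2/3).astype(int)   # 0 if y <= 2/3, 1 otherwise
print(samples)                    # roughly one third ones in the long run
```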
Sigmoid Belief Net
Introduced by Radford Neal in 1992, this network applies ideas from probabilistic graphical models to neural networks. A key difference is that nodes in graphical models have pre-assigned meanings, whereas Belief Net neurons' features are determined after training. The network is a sparsely connected directed acyclic graph composed of binary stochastic neurons. The learning rule comes from maximum likelihood on p(X): $\Delta w_{ij} \propto s_j (s_i - p_i)$, where $p_i = 1/(1 + e^{-\text{weighted inputs into neuron } i})$. The $s_j$'s are activations from an unbiased sample of the posterior distribution, which is problematic due to the explaining-away problem raised by Judea Pearl. Variational Bayesian methods use a surrogate posterior and blatantly disregard this complexity.
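A sketch of one application of this learning rule for a single neuron; the weights, parent states, sampled state, and learning rate are all made up for illustration.

```python
import numpy as np

# One update of the sigmoid belief net rule: Δw_ij ∝ s_j * (s_i - p_i),
# for a child neuron i with 4 hypothetical parents j.
rng = np.random.default_rng(0)
w = rng.normal(size=4)                  # weights from parents into neuron i
s_parents = rng.integers(0, 2, size=4)  # binary parent states s_j
s_i = 1                                 # sampled state of neuron i

p_i = 1.0 / (1.0 + np.exp(-w @ s_parents))  # p(s_i = 1 | parents)
lr = 0.1                                    # made-up learning rate
w += lr * s_parents * (s_i - p_i)           # Δw_ij = lr * s_j * (s_i - p_i)
print(w)
```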
Deep Belief Net
Introduced by Hinton, this network is a hybrid of RBM and sigmoid belief network. The top two layers are an RBM, and the layers from the second downwards form a sigmoid belief network. One trains it by the stacked-RBM method and then throws away the recognition weights below the top RBM. As of 2009, 3-4 layers seemed to be the optimal depth.[6]
Helmholtz Machine
These are early inspirations for variational autoencoders. Two networks are combined into one: forward weights operate recognition and backward weights implement imagination. It is perhaps the first network to do both. Helmholtz did not work in machine learning, but he inspired the view of a "statistical inference engine whose function is to infer probable causes of sensory input".[7] The stochastic binary neuron outputs a probability that its state is 0 or 1. The data input is normally not considered a layer, but in the Helmholtz machine's generation mode, the data layer receives input from the middle layer and has separate weights for this purpose, so it is considered a layer. Hence this network has 3 layers.
Variational Autoencoder
These are inspired by Helmholtz machines and combine probability networks with neural networks. An autoencoder is a 3-layer CAM network, where the middle layer is supposed to be some internal representation of input patterns. The encoder neural network is a probability distribution $q_\phi(z \mid x)$ and the decoder network is $p_\theta(x \mid z)$. The weights are named phi and theta rather than W and V as in the Helmholtz machine, a cosmetic difference. The two networks can be fully connected, or use another NN scheme.
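A minimal VAE sketch in PyTorch, assuming fully connected encoder and decoder networks. The layer sizes, the Bernoulli (binary cross-entropy) reconstruction loss, and the dummy batch are illustrative choices, not prescribed by the original formulation.

```python
import torch
import torch.nn.functional as F

# Encoder q_phi(z|x) outputs Gaussian means and log-variances;
# decoder p_theta(x|z) reconstructs x from a sampled z.
class VAE(torch.nn.Module):
    def __init__(self, x_dim=784, h_dim=200, z_dim=20):
        super().__init__()
        self.enc = torch.nn.Linear(x_dim, h_dim)
        self.mu = torch.nn.Linear(h_dim, z_dim)
        self.logvar = torch.nn.Linear(h_dim, z_dim)
        self.dec1 = torch.nn.Linear(z_dim, h_dim)
        self.dec2 = torch.nn.Linear(h_dim, x_dim)

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * epsilon.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        x_hat = torch.sigmoid(self.dec2(F.relu(self.dec1(z))))
        return x_hat, mu, logvar

def loss_fn(x, x_hat, mu, logvar):
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kld   # negative ELBO: reconstruction error plus KLD

model = VAE()
x = torch.rand(8, 784)             # dummy batch of inputs in [0, 1]
x_hat, mu, logvar = model(x)
print(loss_fn(x, x_hat, mu, logvar))
```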
Comparison of networks
| | Hopfield | Boltzmann | RBM | Stacked RBM | Helmholtz | Autoencoder | VAE |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Usage & notables | CAM, traveling salesman problem | CAM. The freedom of connections makes this network difficult to analyze. | Pattern recognition. Used on MNIST digits and speech. | Recognition & imagination. Trained with unsupervised pre-training and/or supervised fine-tuning. | | | |
| Neuron | Deterministic binary state. Activation = { 0 (or -1) if x is negative, 1 otherwise } | Stochastic binary Hopfield neuron | ← same (extended to real-valued in the mid-2000s) | ← same | ← same | Language: LSTM. Vision: local receptive fields. Usually real-valued ReLU activation. | Middle-layer neurons encode means & variances for Gaussians. In run mode (inference), the outputs of the middle layer are values sampled from the Gaussians. |
| Connections | 1 layer with symmetric weights. No self-connections. | | | Top layer is undirected, symmetric. Other layers are 2-way, asymmetric. | 3 layers: asymmetric weights. 2 networks combined into 1. | 3 layers. The input is considered a layer even though it has no inbound weights. Recurrent layers for NLP; feedforward convolutions for vision. Input & output have the same neuron counts. | 3 layers: input, encoder, distribution-sampler, decoder. The sampler is not considered a layer. |
| Inference & energy | Energy is given by the Gibbs probability measure: $E = -\tfrac{1}{2}\sum_{i,j} w_{ij} s_i s_j + \sum_i \theta_i s_i$ | ← same | ← same | | Minimize KL divergence | Inference is feed-forward only. Previous UL networks ran forwards AND backwards. | Minimize error = reconstruction error - KLD |
| Training | $\Delta w_{ij} = s_i s_j$, for +1/-1 neurons | $\Delta w_{ij} = e\,(p_{ij} - p'_{ij})$. Derived from minimizing KLD. $e$ = learning rate, $p'$ = predicted and $p$ = actual distribution. | $\Delta w_{ij} = e\,(\langle v_i h_j \rangle_{\text{data}} - \langle v_i h_j \rangle_{\text{equilibrium}})$. A form of contrastive divergence with Gibbs sampling; $\langle\cdot\rangle$ denotes expectation. | ← similar. Train 1 layer at a time. Approximate the equilibrium state with a 3-segment pass. No backpropagation. | Wake-sleep 2-phase training | Backpropagate the reconstruction error | Reparameterize the hidden state for backprop |
| Strength | Resembles physical systems, so it inherits their equations | ← same. Hidden neurons act as an internal representation of the external world. | Faster, more practical training scheme than Boltzmann machines | Trains quickly. Gives a hierarchical layer of features. | Mildly anatomical. Analyzable with information theory & statistical mechanics. | | |
| Weakness | Hard to train due to lateral connections | Equilibrium requires too many iterations | Integer- & real-valued neurons are more complicated | | | | |
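As an illustration of the RBM training rule in the table, here is a sketch of a single CD-1 (one-step contrastive divergence) update; biases are omitted, and the layer sizes, learning rate, and data vector are made up.

```python
import numpy as np

# One contrastive-divergence step for an RBM, matching
# Δw = lr * (<v h>_data - <v h>_reconstruction).
rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

n_vis, n_hid, lr = 6, 3, 0.1
W = 0.01 * rng.normal(size=(n_vis, n_hid))
v0 = rng.integers(0, 2, size=n_vis).astype(float)   # a data vector

# Positive phase: sample hidden units given the data.
ph0 = sigmoid(v0 @ W)
h0 = (rng.uniform(size=n_hid) < ph0).astype(float)
# Negative phase: one Gibbs step to get a reconstruction.
pv1 = sigmoid(W @ h0)
v1 = (rng.uniform(size=n_vis) < pv1).astype(float)
ph1 = sigmoid(v1 @ W)

W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))   # <v h>_data - <v h>_recon
```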
Hebbian Learning, ART, SOM
The classical example of unsupervised learning in the study of neural networks is Donald Hebb's principle, that is, neurons that fire together wire together.[8] In Hebbian learning, the connection is reinforced irrespective of an error; it is exclusively a function of the coincidence of action potentials in the two neurons.[9] A similar version that modifies synaptic weights takes into account the time between the action potentials (spike-timing-dependent plasticity, or STDP). Hebbian learning has been hypothesized to underlie a range of cognitive functions, such as pattern recognition and experiential learning.
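A sketch of a plain Hebbian update, where the weight change depends only on coincident pre- and postsynaptic activity, with no error term; the threshold neuron and learning rate are made up.

```python
import numpy as np

# "Fire together, wire together": Δw = lr * x * y, no error signal.
rng = np.random.default_rng(0)
w = np.zeros(5)
lr = 0.01
for _ in range(100):
    x = rng.integers(0, 2, size=5).astype(float)  # presynaptic activity
    y = float(x.sum() > 2)                        # postsynaptic firing
    w += lr * x * y                               # reinforce coincidences
print(w)
```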
Among neural network models, the self-organizing map (SOM) and adaptive resonance theory (ART) are commonly used in unsupervised learning algorithms. The SOM is a topographic organization in which nearby locations in the map represent inputs with similar properties. The ART model allows the number of clusters to vary with problem size and lets the user control the degree of similarity between members of the same clusters by means of a user-defined constant called the vigilance parameter. ART networks are used for many pattern recognition tasks, such as automatic target recognition and seismic signal processing.[10]
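A minimal sketch of a 1-D SOM update loop, showing how the best-matching unit and its map neighbors move toward each input so that nearby map locations come to represent similar inputs; the map size, Gaussian neighborhood, learning rate, and radius are made up.

```python
import numpy as np

# 1-D self-organizing map over 2-D inputs.
rng = np.random.default_rng(0)
n_units, dim = 10, 2
weights = rng.uniform(size=(n_units, dim))      # one prototype per map unit
lr, radius = 0.1, 2.0
for t in range(500):
    x = rng.uniform(size=dim)                   # an input sample
    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
    for i in range(n_units):
        h = np.exp(-((i - bmu) ** 2) / (2 * radius ** 2))  # neighborhood
        weights[i] += lr * h * (x - weights[i])
print(weights.round(2))   # neighboring units end up with similar prototypes
```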
Probabilistic methods
Two of the main methods used in unsupervised learning are principal component analysis and cluster analysis. Cluster analysis is used in unsupervised learning to group, or segment, datasets with shared attributes in order to extrapolate algorithmic relationships.[11] Cluster analysis is a branch of machine learning that groups data that has not been labelled, classified, or categorized. Instead of responding to feedback, cluster analysis identifies commonalities in the data and reacts based on the presence or absence of such commonalities in each new piece of data. This approach helps detect anomalous data points that do not fit into any group.
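A minimal k-means sketch showing grouping by shared proximity alone, with no labels; the synthetic two-blob data and the number of clusters are made up.

```python
import numpy as np

# k-means: alternate assigning points to the nearest center and
# recomputing each center as the mean of its assigned points.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
k = 2
centers = X[rng.choice(len(X), k, replace=False)]
for _ in range(20):
    labels = np.argmin(((X[:, None] - centers) ** 2).sum(axis=2), axis=1)
    centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
print(centers)   # approaches the two group means, with no labels used
```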
A central application of unsupervised learning is in the field of density estimation in statistics,[12] though unsupervised learning encompasses many other domains involving summarizing and explaining data features. It can be contrasted with supervised learning by saying that whereas supervised learning intends to infer a conditional probability distribution $p_X(x \mid y)$ conditioned on the label $y$ of input data, unsupervised learning intends to infer an a priori probability distribution $p_X(x)$.
Approaches
Some of the most common algorithms used in unsupervised learning include: (1) clustering, (2) anomaly detection, and (3) approaches for learning latent variable models. Each approach encompasses several methods.
One of the statistical approaches for unsupervised learning is the method of moments. In the method of moments, the unknown parameters (of interest) in the model are related to the moments of one or more random variables, and thus, these unknown parameters can be estimated given the moments. The moments are usually estimated from samples empirically. The basic moments are first and second order moments. For a random vector, the first order moment is the mean vector, and the second order moment is the covariance matrix (when the mean is zero). Higher order moments are usually represented using tensors which are the generalization of matrices to higher orders as multi-dimensional arrays.
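A short sketch of estimating the first two moments empirically, as described above; the Gaussian used to generate the samples is made up.

```python
import numpy as np

# Empirical first- and second-order moments of a random vector.
rng = np.random.default_rng(0)
X = rng.multivariate_normal(mean=[1.0, -2.0],
                            cov=[[2.0, 0.3], [0.3, 1.0]],
                            size=10_000)
mean_hat = X.mean(axis=0)              # first-order moment: mean vector
Xc = X - mean_hat
cov_hat = (Xc.T @ Xc) / len(X)         # second-order (central) moment
print(mean_hat.round(2), cov_hat.round(2))
```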
In particular, the method of moments is shown to be effective in learning the parameters of latent variable models. Latent variable models are statistical models where, in addition to the observed variables, a set of latent variables also exists which is not observed. A highly practical example of latent variable models in machine learning is topic modeling, which is a statistical model for generating the words (observed variables) in a document based on the topic (latent variable) of the document. In topic modeling, the words in the document are generated according to different statistical parameters when the topic of the document is changed. It is shown that the method of moments (tensor decomposition techniques) consistently recovers the parameters of a large class of latent variable models under some assumptions.[15]
The Expectation–maximization algorithm (EM) is also one of the most practical methods for learning latent variable models. However, it can get stuck in local optima, and it is not guaranteed that the algorithm will converge to the true unknown parameters of the model. In contrast, for the method of moments, global convergence is guaranteed under some conditions.
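For contrast, a sketch of EM for a hypothetical two-component 1-D Gaussian mixture; the data and initial parameters are made up, and depending on initialization the algorithm may converge only to a local optimum, as noted above.

```python
import numpy as np

# EM for a two-component 1-D Gaussian mixture.
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 300)])

mu = np.array([-1.0, 1.0])        # made-up initial means
sigma = np.array([1.0, 1.0])      # made-up initial std devs
pi = np.array([0.5, 0.5])         # made-up initial mixing weights
for _ in range(50):
    # E-step: posterior responsibility of each component for each point
    # (the 1/sqrt(2*pi) constant cancels in the normalization).
    dens = pi * np.exp(-0.5 * ((X[:, None] - mu) / sigma) ** 2) / sigma
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the responsibility-weighted data.
    Nk = resp.sum(axis=0)
    mu = (resp * X[:, None]).sum(axis=0) / Nk
    sigma = np.sqrt((resp * (X[:, None] - mu) ** 2).sum(axis=0) / Nk)
    pi = Nk / len(X)
print(mu.round(2), sigma.round(2), pi.round(2))
```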
^"Deep Belief Nets" (video). September 2009. Archived from the original on 2022-03-08. Retrieved 2022-03-27. {{cite web}}: Unknown parameter |people= ignored (help)
^Buhmann, J.; Kuhnel, H. (1992). "Unsupervised and supervised data clustering with competitive neural networks". [Proceedings 1992] IJCNN International Joint Conference on Neural Networks. Vol. 4. IEEE. pp. 796–801. doi:10.1109/ijcnn.1992.227220. ISBN0780305590. S2CID62651220.
^Comesaña-Campos, Alberto; Bouza-Rodríguez, José Benito (June 2016). "An application of Hebbian learning in the design process decision-making". Journal of Intelligent Manufacturing. 27 (3): 487–506. doi:10.1007/s10845-014-0881-z. ISSN0956-5515. S2CID207171436.