Imperatively Hidden Object Learning


Authors: Thomas Hahn, Daniel Wuttke, Philip Marseca, Richard Segall, Fusheng Tang

The key to many machine learning challenges lies in the correct answer to this main question:

How can we naively discover new and essential, but still hidden, objects (HO), which are defined by their features and required for properly training novel adaptive supervised machine learning algorithms to predict the outcome of complex phenomena, such as aging, cancer, medical recovery, tornadoes, hurricanes, floods, droughts, profits and stocks, all of which depend in part on still hidden factors/objects?

Introduction to feature discovery for training supervised machine learning algorithms for artificial intelligence (AI) applications.

Feature discovery and selection for training supervised machine learning algorithms: an analogy to building a two-story house

Imagine a building named “Aging”. It consists of two stories: the ground floor, which is called “feature selection”, and the second floor, which is called “developing, optimizing and training the machine learning algorithm”.

Before any machine learning algorithm can be trained properly, feature selection must be completed and perfected. Otherwise, the machine learning algorithm may learn irrelevant patterns because of the ambiguity caused by missing features. Little time and effort should be invested in optimizing, training and improving the algorithm until all features are properly selected. As long as feature selection is incomplete, one must focus on finding the missing features instead of tuning the algorithm.

In other words, using our building analogy, here is the most important advice: do not try to complete and perfect the second floor, called “training, tuning and optimizing the machine learning algorithm”, before you are certain that the ground floor, i.e. “feature selection”, has been fully and properly completed. If this is not the case, one must first focus on discovering the missing features of the training samples.

Much research has been dedicated to perfecting algorithms before completing feature selection. As a result, our algorithms have gradually improved, whereas our feature selection has not.

How can missing/hidden features be discovered?

If the algorithm cannot be trained to make perfect predictions, this indicates that essential input features are still missing. When the predicted values fail to match the observed measurements despite tuning the algorithm, feature selection is not yet complete. This is the case when the error between predicted and observed values approaches an asymptote that is not equal to zero. The prediction error is most likely caused by a still hidden object. This hidden object is the cause of the error, but we cannot see the hidden cause yet. We can, however, see its consequence, i.e. the error. And since every consequence must have a cause, we must start looking for it.
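
As a minimal, hypothetical sketch of this diagnostic (the error history, tolerance values and function names below are illustrative assumptions, not part of any published method), one could flag a non-zero error asymptote automatically:

```python
def error_has_plateaued(errors, window=5, flat_tol=0.01):
    """True if the last `window` errors changed by less than `flat_tol`."""
    recent = errors[-window:]
    return len(recent) == window and (max(recent) - min(recent)) < flat_tol

def diagnose(errors, acceptable_error=0.05, window=5):
    """Decide whether to keep tuning or to go hunting for hidden features."""
    if error_has_plateaued(errors, window):
        asymptote = sum(errors[-window:]) / window
        if asymptote > acceptable_error:
            return (f"Error plateaued at {asymptote:.3f} > {acceptable_error}: "
                    "look for hidden features instead of tuning further.")
        return "Error plateaued near zero: feature selection looks complete."
    return "Error is still falling: keep tuning."

# Hypothetical tuning history: the error flattens out around 0.31, far from zero.
history = [0.62, 0.48, 0.40, 0.35, 0.33, 0.318, 0.315, 0.313, 0.312, 0.311]
print(diagnose(history))
```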

Example of a speculative scenario concerning protein folding prediction, part 1

Let us take protein folding prediction as an example. Only about 30% of the predicted folding patterns are correct. We must then go back to the last step at which our prediction still matched reality. As soon as we see deviations, we must scrutinize the deviating object, because most likely it is not a single object but two or more objects that still look so similar to us that we cannot yet distinguish between them. In order for an object to no longer remain hidden, it must have at least one feature by which it differs from its background environment and from other objects.

Example of a speculative scenario about discovering the molecules we breathe

As soon as one feature is found by which two objects differ from one another, we must identify them as distinct. Let us take air as an example. Air in front of background air still looks like nothing. Even though air is a legitimate object, it remains hidden as long as the surrounding background to which it is compared is also air. Air has no feature that could distinguish it from background air. Legitimate real objects that lack any feature by which they could be distinguished from their background and/or other objects are called imperatively hidden objects (IHO), because no kind of measurement can help to distinguish them in any way as an object, i.e. as something other than their background. If objects like air, a uniform magnetic field or gravitational force are omnipresent and uniformly distributed, they remain imperatively hidden objects, because we have no phase that is not the object which would allow us to distinguish it as an object. An object in front of a background of the same object remains an imperatively hidden object unless we find an instance that varies from the object, perhaps in strength, because we need something to compare it with in order to identify a difference.

The only way an omnipresent uniform hidden object can be discovered is if there is some variation in its strength or if it can become entirely absent (e.g. gravity). Otherwise, it remains imperatively hidden, because it cannot be distinguished from itself, its environment or other objects. Therefore, in order to uncover imperatively hidden objects, we must intentionally induce variation in the surrounding environment, the measurements and the methods until new features, by which the object can be distinguished from its environment and/or other objects, can be discovered.

Even if we advance our conceptual understanding and recognize that air is not the same as its background, because wind causes a variation in resistance by which air slows down our motion, air still looks like air. But we know that air consists of at least four very different groups of objects: roughly 78% nitrogen, 21% oxygen, about 1% argon and traces of carbon dioxide. Now these four objects are no longer imperatively hidden, but they still appear to be a single object. When trying to predict molecular behavior, we will get errors, because what we think of as one object is actually at least four. By looking at air, sonar sounding it, irradiating it, magnetizing it or shining light on it, we cannot yet distinguish the four objects from one another. But if we start cooling the mixture gradually, we can suddenly distinguish them from one another by their different condensation temperatures.

Which features would be best to learn the difference between different oxygen species?

Let us assume that the element oxygen has been discovered and that the method by which atoms are distinguished from one another is to count their protons. Still, puzzling observations that cannot be predicted by relying on proton numbers alone will be encountered as soon as ozone (O3), molecular oxygen (O2) and free oxygen radicals are present in our air sample. To get the most predictive power and the highest F-score, investigators tend to optimize prediction by finding a method that predicts the most common outcome. Accordingly, in this oxygen example, researchers tend to develop an algorithm that best predicts molecular oxygen (O2), because it is the most abundant of all oxygen species (i.e. ozone (O3), molecular oxygen (O2) and the negatively charged oxygen radical (O-)). The error rate, under the assumption that there is no difference between oxygen species, would equal the ratio of ozone to molecular oxygen plus the ratio of oxygen ions to molecular oxygen.

In order to distinguish between these different kinds of oxygen, the electron/proton ratio, the different charge distributions on the molecular surface, the molecular weight, the molecular volume, the arrangement of chemical bonds, and the positions of the oxygen atoms relative to one another within the same molecule could be added to our training data. But let us assume that we are still naive and cannot yet measure the needed features; how could we go about discovering the missing/hidden features? In general, variations of the input training features, the learning steps, the environment and the methods of measurement must be selected based on intuition, for lack of any better alternative.

For AI to correctly determine the overall electrical charge of an oxygen molecule, it needs the numbers of protons and electrons as input data. Unfortunately, if the instruments for detecting protons, electrons and neutrons are lacking, we can see the effect of the still hidden factor, i.e. the electron/proton ratio, on the overall molecular charge, but its cause remains a hidden mystery. In this case, investing time in discovering electrons, neutrons and protons is much wiser than tweaking parameters after the error rate has reached its asymptote, because even if such tweaking improves prediction, there is a large risk of over-fitting: the AI would be basing its decisions on features that actually have no effect on the overall molecular charge.

Instead of using the electron/proton ratio as an input feature, the molecular size of the different oxygen species would also work for training our AI molecular charge predictor. The electron/proton ratio (a simple fraction) and molecular size (a volume measured in cubic nanometers) are different dimensions, yet both of them can express the same event, i.e. electric charge. Therefore, both could be used to train an AI to predict the molecular charge correctly. If, as in the example above, the observed outcome can be perfectly predicted in at least two different dimensions, then it is reasonable to believe that all hidden factors have been discovered. The relationship between the electron/proton ratio and the molecular volume is about the same as that between transcription factor binding sites (TFBS) and the trajectories of gene expression time series plots. Both, i.e. the TFBS distribution and the time series trajectory, also express the same thing, i.e. transcription, in different dimensions.
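
To make the charge example concrete, here is a minimal sketch in Python. The relation charge = protons - electrons is basic chemistry; the species list and the "molecular volume" numbers are purely hypothetical placeholders standing in for the second dimension mentioned above:

```python
# Net charge follows directly from proton and electron counts.
# The volume column is a made-up stand-in for an alternative feature dimension
# that could, in principle, also separate the species.

species = {
    #                         protons, electrons, hypothetical volume (nm^3)
    "O2 (molecular oxygen)": (16, 16, 0.030),
    "O3 (ozone)":            (24, 24, 0.045),
    "O- (oxygen radical)":   ( 8,  9, 0.012),
}

for name, (protons, electrons, volume) in species.items():
    charge = protons - electrons          # the formerly hidden factor, made explicit
    print(f"{name}: charge = {charge:+d}, volume = {volume} nm^3")
```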

To support our hypothesis that we are still many concepts away from understanding and manipulating aging, the example of the mesentery can be used. It took humanity until early 2017 to discover its function and to reclassify it according to the new discoveries. We had seen this organ for a long time, and yet the fact that it is indeed an organ remained a hidden factor for us, because it could not be recognized as an organ as long as its functions were still unknown [2]. This is what is meant by an imperatively hidden object (IHO).

The discovery of nitrogen

Here is a speculative example of how nitrogen could be discovered using this intuition-driven hidden object discovery procedure, which relies on randomly varying background and object features until they differ in at least one feature by which they can be told apart from one another and from other similar-looking objects.

Looking, sonar sounding, irradiating, magnetizing, shining light and changing temperature are all variations in measuring methods. If we cool air to roughly -190 degrees Celsius, oxygen and argon have already condensed into liquids, whereas nitrogen (which condenses only at about -196 degrees Celsius) is still a gas. The feature “aggregate state” of nitrogen now differs from that of the other components. Therefore, we can conclude that the still gaseous substance must be a separate object from the liquids. Thus, we have found a feature by which it differs from the rest. Hence, we removed its imperatively hidden nature by changing an environmental condition, i.e. temperature, until nitrogen looked different from the rest. There was no data that could have told us in advance that gradually lowering the temperature would expose a distinguishing feature between the objects. That is why we must become creative in finding ways to vary environmental conditions and measurement methods until hidden objects differ from one another or from their environment in at least one feature that we can measure. However, until we cool the mixture down to such cryogenic temperatures, nitrogen remains an imperatively hidden object, unless researchers can find other means to make one of nitrogen's features stick out from its background and from other visible objects. If cooling alone does not work, nitrogen could also be isolated from the other gases by boiling it off from fully liquefied air, i.e. by fractional distillation.
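
A minimal sketch of this "vary one condition until a feature sticks out" procedure, using approximate textbook condensation points and a hypothetical measure_state() helper in place of a real instrument:

```python
CONDENSATION_POINT_C = {"nitrogen": -196, "oxygen": -183, "argon": -186, "CO2": -79}

def measure_state(component, temperature_c):
    """Hypothetical measurement of the feature 'aggregate state'."""
    return "gas" if temperature_c > CONDENSATION_POINT_C[component] else "condensed"

def expose_hidden_object(target="nitrogen", start_c=20, stop_c=-200, step_c=-10):
    """Sweep the temperature until the target's aggregate state differs from
    the state of every other component, i.e. until it is no longer hidden."""
    for t in range(start_c, stop_c - 1, step_c):
        states = {c: measure_state(c, t) for c in CONDENSATION_POINT_C}
        if all(states[target] != s for c, s in states.items() if c != target):
            return t, states
    return None, {}

temperature, states = expose_hidden_object()
print(f"Nitrogen becomes distinguishable at about {temperature} degrees C: {states}")
```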

Challenges encountered in distinguishing between similar objects

Imagine a small brown ball lying in front of an equally brown wall. When looking at it, we see a brown object that looks like a wall. However, we cannot see that there is an equally colored ball lying in front of the wall as long as the light is dim and their colors match perfectly. By looking at both objects, they appear to be only one. Even though a brown wall is a legitimate visible object, it serves as a background camouflaging the equally colored brown ball in front of it. Thus, two different objects are mistaken for one.

However, a bat, which navigates very well by sonar reflection (i.e. by echolocation), has no problem distinguishing between ball and wall, no matter how similarly colored they are, as long as the ball is some distance in front of the wall. This is an example of how changing the measurement dimension, e.g. substituting visual perception with echolocation, may allow us to switch to another feature, i.e. sound reflection, to distinguish between optically indistinguishable objects.

On the other hand, this example also demonstrates the limitations of this approach of varying the environment and the measurement method/dimension. Let us assume that scientists figured out a way to make the bat more intelligent than humans. Unfortunately, no matter how clever the bat may become, it may never understand the concepts of reading, writing and the benefit of a computer screen, because when echolocating a screen, all letters, pictures, figures and graphs remain imperatively hidden objects if explored by sound reflection only. It would even be challenging to explain the concept of a screen to the bat, because it cannot imagine different colors.

This shows again that we must remain flexible with our observational measuring techniques, because if we do not vary them profoundly enough, we may fail to discover still hidden objects. The naive observer can only discover by trial and error. Luckily, we humans have developed devices to measure differences in dimensions for which we lack innate sensory perception. We must also use these different measuring devices to collect data, make observations and explore innately hidden dimensions whenever we fail, within the limits of our relatively narrow innate sensory range, to discover a difference between at least one feature of our hidden object of interest and other similar-looking objects, or between the object and its surrounding environment.

The good news is that any two distinct objects must vary from one another and from their environmental background by at least one feature, because otherwise they could not be different objects. The challenge is to discover at least one feature by which hidden objects differ, in at least one situation, from one another and from their camouflaging environmental background.

This is the conceptual foundation according to which anyone who can observe in all possible dimensions must, by systematically applying trial and error alone, eventually encounter conditions that expose a difference in at least one feature, based on which any object can be discerned from its environment and other objects under at least one environmental condition. That is why AI can play a very valuable role in systematically iterating through any number of options, as long as the total number of combinations of condition and observation variations remains finite.

Numerical calculations, evaluations, comparisons or rankings are not required as long as a qualitative distinction allows for at least a Boolean decision, i.e. whether the hidden object differs or does not differ from its environment and other objects in at least one feature under one set of experimental conditions. True or false, yes or no, is enough to succeed in uncovering previously imperatively hidden objects.
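
A minimal sketch of this Boolean, brute-force search over a finite grid of condition and measurement combinations; the condition list, measurement list and differs() callback are hypothetical placeholders:

```python
from itertools import product

def find_exposing_combination(conditions, measurements, differs):
    """Return the first (condition, measurement) pair for which the Boolean
    test differs(condition, measurement) is True, or None if none exists."""
    for condition, measurement in product(conditions, measurements):
        if differs(condition, measurement):      # yes/no is all that is needed
            return condition, measurement
    return None

# Hypothetical usage: only ("cool to -190 C", "aggregate state") exposes a difference.
conditions = ["ambient", "cool to -190 C", "strong magnetic field"]
measurements = ["color", "aggregate state", "sound reflection"]
exposing = find_exposing_combination(
    conditions, measurements,
    differs=lambda c, m: (c, m) == ("cool to -190 C", "aggregate state"),
)
print("First exposing combination:", exposing)
```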

Why are data and numbers only crutches?

Data and numbers are only crutches on which most of us depend far too much when determining our next steps. We tend to refuse to explore new variations unless data points us to them. But the naive discoverer of imperatively hidden objects has no data or information from which to infer that changing temperature would expose a distinguishing feature, while shining light, irradiating, sonar sounding, fanning and looking would not. The new feature hunter must simply use trial and error. He must follow his intuition, because directing data is lacking. It would not help him to perfect the resolution of his measuring techniques as long as he has not changed the environmental conditions such that objects differ in at least one feature from one another and/or their surrounding environment. In such cases, heuristic search options are lacking. There is no method that tells us how and what to vary in order to expose new features for novel distinctions between formerly hidden objects.

What are the great benefits of highly speculative hypothetical assumptions?

It is not justified to refuse to consider even the most speculative hypothetical theory or assumption for testing, because what is the alternative? The alternative is not to vary features at all, and without variation, no improvement in feature selection is possible. Even a 0.00001% chance that the most speculative hypothetical assumption changes the conditions such that a distinguishing feature gets exposed is better than nothing. Any hypothetical and speculative hypothesis, no matter how unlikely it is to be true, is better than the status quo, because it implies changes in feature selection, which is always better than keeping the currently selected training features unchanged. That is why even highly speculative hypothetical assumptions and theories, as long as they do not contradict themselves internally, should not be frowned upon; instead, they should be tested very seriously. Even if most of them eventually get disproven, this means progress. Any ruled-out hypothesis is progress, because it is a discovery about how aging is not regulated, and it excludes many options by giving an example of a way in which aging cannot be manipulated.

Why is diversity in research parameters, methods, features, workforce and surrounding environment essential for rapid scientific progress?

Instead of being discouraged, researchers and students should be encouraged and rewarded for developing and testing speculative hypothetical assumptions, because these require inducing variations, which have the potential to expose an until then still hidden object that could be identified by at least one distinguishable feature. If the data-driven approach directs our attention to vary specific conditions in certain directions, well and good. But if not, we must not stop varying conditions just because no data can give us directions.

What are the best research methods?

Numbers and calculations are only one of many tools to uncover imperatively hidden objects. They tend to work well when available. But that does not mean we should refuse to exploit alternative discovery methods, such as intuition, imagination, visions, dreams, trends, internal visualizations, analogies and other irrationally rejected feature discovery methods, which are completely independent of numbers. We should explore these non-data-driven, numerically independent methods of choosing directional environmental changes at least as seriously as the numeric ones. Otherwise, we unnecessarily deprive ourselves of the ability to keep making progress in feature selection in the absence of numerical data. Of course, intuition and data should not contradict one another. But no option should be discarded because it is erroneously believed to be “too hypothetical or speculative”.

Why is the discovery of the magnetic field so essential for machine learning?

For example, in order to turn the magnetic field from a hidden into an observable object, it takes lots of trial-and-error variation of the kind described above. One must have magnets and iron objects before one can observe the consequences of the initially still hidden magnetic field (object). Once we have the consequences, we can use feature and measurement variation methods analogous to those outlined above to hunt for the still hidden causes, i.e. hidden factors/objects.

Protein folding example, part 2

Let us apply these variation methods to protein folding prediction. If our prediction accuracy is only 30%, we must scrutinize the product, because most likely it is not one object but several different objects that still look the same to us.

Apparently, although they all look like the same protein, they cannot be the same objects, because they differ in one very significant function-determining feature, i.e. their overall three-dimensional folding. This makes them imperatively different objects. Objects are considered imperatively different when it has become impossible to devise a method of measurement or distinction that could erroneously still mistake them for only one object.

In the case of proteins, we are unnecessarily limiting ourselves to the dimension “protein”, because the low folding prediction accuracy actually implies that, despite sharing the same primary amino acid sequence, they must be considered different versions of a protein, which differ from one another in the feature “three-dimensional folding”. If objects differ in at least one feature, they must no longer be considered the same, but distinctly different objects.

Why are proteins not treated like RNA? For RNA it is explicit that there are many kinds with very specific functions, which therefore cannot be substituted for one another. For example, we distinguish between mRNA, tRNA, rRNA, microRNA, etc.

Similarly, assuming that there are three protein folding states, we should develop a new nomenclature that can distinguish between the alpha, beta and gamma folding states.
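
A minimal sketch of what such a nomenclature could look like in code, assuming (hypothetically) exactly three folding states; the sequence shown is made up:

```python
from enum import Enum

class FoldingState(Enum):
    ALPHA = "alpha"
    BETA = "beta"
    GAMMA = "gamma"

class ProteinVariant:
    """Treats (sequence, folding state) as one indivisible identity, so two
    differently folded copies are never mistaken for the same object."""
    def __init__(self, primary_sequence: str, folding: FoldingState):
        self.primary_sequence = primary_sequence
        self.folding = folding

    def __repr__(self):
        return f"{self.primary_sequence[:8]}...:{self.folding.value}"

a = ProteinVariant("MKTLLVLAVVAAALA", FoldingState.ALPHA)
b = ProteinVariant("MKTLLVLAVVAAALA", FoldingState.BETA)
print(a, b, "same object?",
      (a.primary_sequence, a.folding) == (b.primary_sequence, b.folding))
```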

Why could evolution favor unpredictable protein folding patterns for the same primary amino acid sequence?

As we know, protein folding affects protein function. So what could have been the evolutionary advantage that caused proteins with different folding patterns to evolve? Here, we have no data. All we have is our intuition and insights, which work surprisingly well if we stop refusing to develop and apply these insight-based methods more readily and confidently, and stop considering them inferior, less valuable and less reliable than data-driven predictions. If I can tell a difference between two objects, they can no longer be the same, but must instead be counted as two different objects that should no longer be considered one. For example, a blue and a red die are two different objects of the kind “die”, but they can never be the same object when observed by the human eye. These are then inferred to be imperatively different objects (IDO). The same applies, even more so, to proteins of different folding shapes, because they differ not only in their feature “3D folding” but also in their feature “function”. Hence, they can no longer be considered one and the same kind.

As seems to be the case for all initially hidden objects, we observe the consequence (the folding difference) before its still hidden cause. To find the cause, there must be a hidden object or transition during translation that makes identical mRNA molecules fold up in different ways after translation. Where could this be useful?

A highly speculative, but nevertheless still valuable, hypothesis

For example, too high a concentration of geronto-proteins shortens lifespan. But too low a concentration of the same gerontogenes (i.e. genes which, if knocked out, extend lifespan) could interfere with maintaining life-essential functions. That is why their concentration must remain within a very narrow window, too narrow to be maintained by transcriptional regulation alone. There are too many variables that can affect how much protein gets translated from a certain mRNA concentration. Hence, instead of a front-end (i.e. transcriptome) adjustment we need a back-end adjustment, i.e. a protein-folding-dependent functional adjustment. Such a much more sensitive and much more autonomously functioning mechanism for adjusting enzymatic reaction speed could work like this:

If the substrate concentration is high, the nascent protein can bind a substrate molecule in its active site even before it has detached from the ribosome. But while it is still being translated, no activator can bind to the protein's allosteric binding site. This time is long enough for the protein-substrate complex to behave thermodynamically like one molecule and fold to reach its lowest energetic state. After the protein detaches from the ribosome, the co-factor can bind and the protein cleaves its substrate, but it remains locked in the same folding state it was in when it still formed a molecular folding unit with its substrate.

If the protein concentration becomes toxically high, the yeast wants to turn off all the mRNA coding for the protein that is about to rise to toxic levels. Degrading mRNA takes too long. It is much easier to make the excess toxic proteins fold into an enzymatically inactive state. This can be achieved easily, because the over-abundant enzymes quickly process the remaining substrates in the cytoplasm or at the endoplasmic reticulum (ER), which depletes the substrate pool and thereby turns the enzymatic function off more quickly. This is a quick way to lower the cytotoxic protein concentration. The depleted substrate pool causes the nascent protein to fail to bind a substrate while still being translated. The missing substrate gives the protein a different lowest energy state, so it folds differently than it would have if it had bound its substrate. But this is an enzymatically non-functional folding arrangement; the co-factor cannot bind to its allosteric site. Thus, the upward trend of the toxically rising protein is already being reversed. To make this directional concentration change even stronger, the enzymatically inactive folding state allows a repressor co-factor to bind at its other allosteric binding site. This causes the protein to change its conformational shape again, so that it can use its now exposed DNA-binding domain to bind to the promoter of exactly the gene that codes for it, thus inhibiting its further transcription.

Intuition-driven vs. data-driven research approach

As you can see, I have not used any data, but only intuition, to develop a hypothesis that can be tested. And no matter how hypothetical, and hence unlikely, the crazy and highly speculative hypothesis above may seem, in order to test it, I must change the environment and the measurement methods in many ways. This increases my odds of discovering, by chance, a pair of environmental condition and measurement method that allows a totally unexpected distinctive feature to emerge. That is exactly what we want.

Different scientists have different gifts, which are great in some situations but completely worthless in others. Most researchers tend to feel much more comfortable employing data-driven, numerically reproducible analytical methods. However, a few, like me, enjoy intuition-based predictions grounded in story-telling and imagination-like visions, which are the best option we have for any domain for which we cannot yet generate numerically reproducible data.

But since we tend to favor numerical over intuition-based prediction methods, dimensions within which we can distinguish objects qualitatively, but not quantitatively, remain underexplored or even completely ignored, because no researcher dares to admit that his conclusions are not based on numbers.

What is the ideal life cycle of a new machine learning algorithm?

But I want to talk about something more important. I want to answer the question: what is the ideal life cycle of developing, training, tuning and optimizing a machine learning algorithm?

There is always a need for better algorithms. As we discover more relevant features, according to the methodology described in the previous chapter, we indeed need better and more comprehensive algorithms to account for them. So we will use trial and error, and hopefully also some intuition and parameter tuning, to improve our F-score. We will eventually again approach an error asymptote that is greater than zero. But even if we achieve perfect prediction, this should not be our final objective; it is only a means to an end, namely to unravel still hidden objects. Our work is not done when we have reached perfect prediction, even though perfect prediction implies proper feature selection. We are never satisfied. As soon as we have the ideal machine learning solution, we want to create conditions that will cause our tool to fail. Why? The reason we are interested in forcing our algorithm to fail is that we want to explore situations in which its assumptions are no longer met. For such situations, we will need more or different essential features that account for new circumstances, conceptual innovations and changes of perspective adequate for addressing a more complex situation that has not been considered before.

For example, when I started my dissertation I thought that there were only three kinds of aging-regulating genes:

1. Lifespan-extending genes (i.e. aging suppressors)
2. Lifespan-shortening genes (i.e. gerontogenes)
3. Genes which do not affect lifespan.

Dr. Matt Kaeberlein’s lab kindly gave me lifespan data for most of the possible gene knockout mutants. Caloric restriction (CR) extended lifespan in wild type (WT), but shortened it in Erg6 and Atg15 knockouts. The generalization that CR is a lifespan-extending intervention suddenly no longer held true for either of these knockouts. Tor1 and Sch9 knockouts lived about as long as wild type under CR. Hence, on normal 2% glucose media (YEPD) they function like aging-suppressor genes, but under CR they function like non-aging genes. This would inevitably have caused any machine learning algorithm that only assumes an intervention can shorten, lengthen or not change lifespan to fail, if the genotype feature is not also given as part of the training data. This makes genotype and intervention an imperative pair whose members must not be considered in isolation when training a more predictive algorithm.

Let us say I train my algorithm only on WT data to classify interventions into three broad categories, i.e. lifespan-extending, lifespan-shortening or lifespan-neutral. Then CR would always extend lifespan. But if I apply CR to the Atg15 knockout instead of WT, its lifespan is shortened by CR. Our algorithm would fail, because it was not trained on knockout data. This kind of failure is not at all a bad thing, but instead a blessing in disguise, because it teaches us that apart from the feature “intervention” there is also the feature “genotype”, which affects lifespan and which must be considered together with intervention as an indivisible unit-pair of atomic data whose components must never be evaluated in isolation. We could only notice this because our AI, trained on WT data only, imperatively failed to predict the impact of CR on Atg15 knockouts. From then onwards, we know that for correct prediction genotype and intervention must be given together as a pair to train our artificial intelligence (AI). This allows us to establish that, apart from intervention, genotype is another essential feature for correctly predicting lifespan. So far, we have only trained our AI on glucose media. Since this was the same for all training samples, this feature was not yet essential, as long as it could only take on a single value. But when testing the algorithm on galactose, tryptophan-deficient or methionine-deficient media, it will imperatively fail again, because now we need to consider a triplet as one piece of information, i.e. intervention, genotype and media. Only if we train our AI on these indivisible triplet units can it succeed. We have just shown how intentionally creating variations in the conditions can reveal new hidden objects, but only when a naively perfectly working AI suddenly starts failing. Without naive AIs having failed, we could never have discovered this new feature. Hence, causing perfectly scoring AIs to fail is a very good method of choice for discovering new features.
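
A minimal sketch of this re-encoding, using a toy lookup "model" whose lifespan labels merely mirror the cases described above (they are not real measurements):

```python
wt_only_training = {            # feature: intervention only (WT data)
    "CR": "extends",
    "none": "no change",
}

triplet_training = {            # feature: (intervention, genotype, media)
    ("CR", "WT", "2% glucose YEPD"): "extends",
    ("CR", "Atg15 knockout", "2% glucose YEPD"): "shortens",
    ("CR", "Tor1 knockout", "2% glucose YEPD"): "no change",
}

def predict_wt_only(intervention):
    return wt_only_training.get(intervention, "unknown")

def predict_triplet(intervention, genotype, media):
    return triplet_training.get((intervention, genotype, media), "unknown")

# The WT-only model wrongly predicts "extends" for the Atg15 knockout,
# while the indivisible triplet removes the ambiguity:
print("WT-only model:", predict_wt_only("CR"))
print("Triplet model:", predict_triplet("CR", "Atg15 knockout", "2% glucose YEPD"))
```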

However, if what I have written so far is all true, how come I cannot remember a single peer-reviewed paper discussing these issues from a similar perspective? For protein folding prediction I could make up plenty of regulatory scenarios, like the one a few paragraphs above, which could be tested in the wet-lab. For example, we know that the speed of translation depends on the ratios and concentrations of charged tRNAs in the cytoplasm and the endoplasmic reticulum. We also know, for example, that three tryptophans in a row cause translation to stop prematurely, because the concentration of tryptophan-charged tRNAs is too low for translation to continue on time. Using our newly derived methodical toolbox for machine learning feature selection, we would have assumed that we see a consequence, i.e. premature translation abortion, for which we must now start looking for the still hidden cause. However, in this particular case, the reason for the abortion is not even hidden, because the mRNA nucleotides coding for the three tryptophans can clearly and easily be measured and observed. But this tryptophan triplet, i.e. these three identical yet still distinct objects, forms a kind of conceptual super-object possessing completely novel properties/features that none of its three individual units possesses, even in small part, on its own. This qualitatively novel dimension, which is completely absent in any of its parts, is like a gain-of-function effect: it terminates translation. Hence, these three termination-causing tryptophans form a new, shapeless super-object on a whole different level, which cannot be accounted for by simply adding up the properties of the three tryptophans. Their mode of action in stopping translation is of a very different nature than complementary codon-based mRNA/tRNA binding during translation. The three tryptophans possess a new quality that cannot be attributed to any single tryptophan member.
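
As a small illustration, the following sketch merely detects such a run of three consecutive tryptophan codons (TGG) in a made-up coding sequence; whether such a run actually aborts translation is the speculative claim of the text, not something this code verifies:

```python
TRP_CODON = "TGG"

def find_trp_triplets(coding_sequence):
    """Return 0-based codon indices where three TGG codons occur in a row."""
    codons = [coding_sequence[i:i + 3] for i in range(0, len(coding_sequence) - 2, 3)]
    return [i for i in range(len(codons) - 2)
            if codons[i] == codons[i + 1] == codons[i + 2] == TRP_CODON]

# Hypothetical mRNA fragment containing one Trp-Trp-Trp run:
mrna = "ATGGCTTGGTGGTGGAAATAA"
print("Trp-triplet starts at codon index:", find_trp_triplets(mrna))
```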

It is kind of like us humans, who keep adding many dimensionless, shapeless and matter-independent virtual features, based on which we distinguish between each other and which may be hard for AI to grasp. For example, based on our first, middle and last names, SSN, citizenship, job, family role, etc., we make big distinctions between ourselves, which affect lifespan. Unfortunately, AI could not discover these unless it gains the ability to perceive spoken and written communication, which is the only way by which our virtual, self-imposed, physically dimensionless features can be distinguished from one another.

Why will feature discovery never stop?

The new feature discovery cycle will never end, because as soon as we think we have got it to work, we hope to succeed in creating an exception that causes our newly trained AI to fail, since this allows us to discover yet another new relevant feature.

We started out with the feature “intervention” and discovered “genotype” and “food media type”. The next Kaeberlein dataset had features like temperature, salinity, mating type, yeast strain, etc., which also affect lifespan. Now, for one knockout, we could have more than 10 different reported lifespans. According to my understanding, this would make the concept of a purely aging-suppressing gene or gerontogene obsolete. It would raise the number of components that must be considered together as an indivisible atomic unit, none of whose parts must be considered in isolation, to seven components, which must be given with every supervised input training sample for our AI. If this trend continues, then the number of components forming a data-point-like entry keeps growing by one new component for every newly discovered feature. But would this not cause our data points to become too clumsy? Even if it does, for every new feature we decide to consider, our indivisible data unit must grow by one component. This means that 10 essential features would create data points of 10 dimensions. If we drive this to the extreme and consider 100 new features, then we have 100-dimensional data points. But this would connect almost everything we can measure into a single point. It would do away with independent features, because their dimensions would all get linked together. Is there something wrong with my thinking process here? I have never heard anybody complain about this kind of problem.
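
A minimal sketch of such a growing indivisible data unit; the feature names and sample values are illustrative only:

```python
FEATURES = ["intervention", "genotype", "media", "temperature",
            "salinity", "mating_type", "strain"]          # seven components so far

def make_sample(**values):
    """Build one atomic data point; every known feature must be supplied,
    because no component may be evaluated in isolation."""
    missing = [f for f in FEATURES if f not in values]
    if missing:
        raise ValueError(f"Incomplete atomic unit, missing: {missing}")
    return tuple(values[f] for f in FEATURES)

sample = make_sample(intervention="CR", genotype="Atg15 knockout",
                     media="2% glucose YEPD", temperature=30,
                     salinity="standard", mating_type="MATa", strain="BY4741")
print(len(sample), "components:", sample)

# Discovering an eighth essential feature simply appends one more component,
# and every future sample must then supply it as well:
FEATURES.append("vacuolar_pH")
```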

From this chapter we can conclude that the best AIs are those which fail in a way that allows us to discover a new feature for subsequent feature selection.

How many still hidden concepts separate us from reversing aging?

But how many such essential key-feature breakthrough discoveries are we still away from solving and reversing aging? The lack of progress in extending lifespan by more than 50% indicates serious problems with feature selection. What does it take to make our experimental life scientists understand this essential concept of feature selection through variation and to consider it in their experimental designs? When this concept was published online in October 2017, it became the most read material from UALR and has remained the most read contribution from the Information Science Department. Since then, the contributions to this topic have engaged more than 350 readers per week on www.ResearchGate.net. Occasionally, more than 200 readers were counted on a single day. The contributions to this topic received 20 recommendations last week. Only the strong encouragement and conceptual validation by researchers with very good reputations and impressive peer-reviewed publication records, who took the time to answer conceptual questions at www.ResearchGate.net and www.Academia.edu, provided the gain in self-confidence needed to spend so much time working on and revising this decidedly non-mainstream manuscript. Its authors are convinced that this is the only way to raise our chances, as a social species, to excel scientifically and improve methodically, to accelerate our overall knowledge discovery rate and research efficiency, and to accomplish true immortality and permanent rejuvenation, i.e. feeling forever young, healthy, strong, energetic, optimistic and goal-driven (currently a still deeply imperatively hidden object), within the upcoming two decades, provided the recommendations outlined in this and other related manuscripts in preparation are not only widely considered but enthusiastically implemented, applied and further enhanced by all stakeholders. This manuscript intends to change the way research is conducted by minimizing the periods during which everyone feels confused, by providing highly effective guidance for overcoming the limitations posed by still imperatively hidden objects/features/factors/causes/elements.

What kind of datasets are needed to fully understand and reverse engineer aging?

The best scenario would be to measure, every 5 minutes throughout the yeast's entire lifespan, its transcriptome, proteome, metabolome, epigenome, lipidome, automated morphological microscope pictures, ribogenesis, ribosomal footprinting, DNA ChIP-chip and ChIP-seq analysis, speed of translation, distribution of and ratios between charged tRNAs in the cytoplasm, length of the poly-A tail, non-coding RNA binding, autonomously replicating sequences (ARS), vacuolar acidity, autophagy, endocytosis, proton pumping, chaperone folding, cofactors, suppressors, activators, etc.
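
A minimal, hypothetical sketch of how such a longitudinal measurement plan could be encoded; the assay names follow the wish-list above and the 5-minute interval comes from the text, but the schema itself is an assumption:

```python
MEASUREMENT_INTERVAL_MIN = 5

ASSAYS = [
    "transcriptome", "proteome", "metabolome", "epigenome", "lipidome",
    "morphology_images", "ribogenesis", "ribosomal_footprinting",
    "ChIP-seq", "translation_speed", "charged_tRNA_ratios",
    "polyA_tail_length", "ncRNA_binding", "ARS_activity",
    "vacuolar_acidity", "autophagy", "endocytosis", "proton_pumping",
    "chaperone_folding",
]

def sampling_schedule(lifespan_hours):
    """Yield (time_in_minutes, assay) pairs covering a whole yeast lifespan."""
    for minute in range(0, lifespan_hours * 60 + 1, MEASUREMENT_INTERVAL_MIN):
        for assay in ASSAYS:
            yield minute, assay

# e.g. the number of single measurements implied for a 72-hour lifespan:
print(sum(1 for _ in sampling_schedule(72)))
```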

