Mechanisms underlying direct programming of stem cells could eventually lead to cell-replacement therapies
A team of scientists has uncovered details of the cellular mechanisms that control the direct programming of stem cells into motor neurons. The scientists analyzed changes that occur in the cells over the course of the reprogramming process. They discovered a dynamic, multi-step process in which multiple independent changes eventually converge to change the stem cells into motor neurons.
“There is a lot of interest in generating motor neurons to study basic developmental processes as well as human diseases like ALS and spinal muscular atrophy,” said Shaun Mahony, assistant professor of biochemistry and molecular biology at Penn State and one of the lead authors of the paper. “By detailing the mechanisms underlying the direct programming of motor neurons from stem cells, our study not only informs the study of motor neuron development and its associated diseases, but also informs our understanding of the direct programming process and may help with the development of techniques to generate other cell types.”
The direct programming technique could eventually be used to regenerate missing or damaged cells by converting other cell types into the missing one. The research findings, which appear online in the journal Cell Stem Cell on December 8, 2016, show the challenges facing current cell-replacement technology, but they also outline a potential pathway to the creation of more viable methods.
“Despite having great therapeutic potential, direct programming is generally inefficient and doesn’t fully take into account molecular complexity,” said Esteban Mazzoni, an assistant professor in New York University’s Department of Biology and one of the lead authors of the study. “However, our findings point to possible new avenues for enhanced gene-therapy methods.”
The researchers had shown previously that they can transform mouse embryonic stem cells into motor neurons by expressing three transcription factors — proteins that control the expression of other genes — in the stem cells. The transformation takes about two days. In order to better understand the cellular and genetic mechanisms responsible for the transformation, the researchers analyzed how the transcription factors bound to the genome, changes in gene expression, and modifications to chromatin at 6-hour intervals during the transformation.
“We have a very efficient system in which we can transform stem cells into motor neurons with something like a 90 to 95 percent success rate by adding the cocktail of transcription factors,” said Mahony. “Because of that efficiency, we were able to use our system to tease out the details of what actually happens in the cell during this transformation.”
“A cell in an embryo develops by passing through several intermediate stages,” noted Uwe Ohler, senior researcher at the Max Delbrück Center for Molecular Medicine (MDC) in Berlin and one of the lead authors of the work. “But in direct programming we don’t have that: we replace the gene transcription network of the cell with a completely new one at once, without the progression through intermediate stages. We asked, what are the timing and kinetics of chromatin changes and transcription events that directly lead to the final cell fate?”
The research team found surprising complexity — the programming of these stem cells into neurons is the result of two independent transcriptional processes that eventually converge. Early in the process, two of the transcription factors — Isl1 and Lhx3 — work in tandem, binding to the genome and beginning a cascade of events, including changes to chromatin structure and gene expression in the cells. The third transcription factor, Ngn2, acts independently, making additional changes to gene expression. Later in the transformation process, Isl1 and Lhx3 rely on changes in the cell initiated by Ngn2 to help complete the transformation. In order for direct programming to successfully achieve cellular conversion, it must coordinate the activity of these two processes.
“Many have found direct programming to be a potentially attractive method as it can be performed either in vitro — outside of a living organism — or in vivo — inside the body and, importantly, at the site of cellular damage,” said Mazzoni. “However, questions remain about its viability to repair cells — especially given the complex nature of the biological process. Looking ahead, we think it’s reasonable to use this newly gained knowledge to, for instance, manipulate cells in the spinal cord to replace the neurons required for voluntary movement that are destroyed by afflictions such as ALS.”
A precise measurement of absolute beam intensity is essential for many areas of science. It is a key parameter to monitor any losses in a beam and to calibrate the absolute number of particles delivered to the experiments.
However, this type of measurement is very challenging with traditional beam current diagnostics when it comes to low-energy, low-intensity beams, due to the very low signal levels. Particle accelerator experts from the University of Liverpool have now experimentally demonstrated a new type of monitor in a collaboration with CERN, the GSI Helmholtz Centre for Heavy Ion Research, Friedrich Schiller University and the Helmholtz Institute Jena.
In a paper just published in the IOP journal Superconducting Science and Technology, the challenges of implementation and the first beam measurements are reported. These are the first-ever measurements of this type performed in a synchrotron, using both coasting and short bunched beams.
The Antiproton Decelerator (AD) is a synchrotron that provides low-energy antiprotons for studies of antimatter. These studies rely on creating antimatter atoms (such as anti-hydrogen) and using them as probes of the most fundamental symmetries in nature, such as CPT invariance or the equivalence of gravitational acceleration for matter and antimatter.
A precise measurement of the beam intensity in the AD is essential to monitor any losses during the deceleration and cooling phases of the AD cycle, and to calibrate the absolute number of particles delivered to the experiments. However, this is very challenging with traditional beam current diagnostics due to the low intensity of the antiproton beam, which is of the order of only 10 million particles, corresponding to beam currents as low as a few hundred nanoamperes. To cope with this, a Cryogenic Current Comparator (CCC) based on a Superconducting QUantum Interference Device (SQUID) was developed and installed in the AD, in a collaboration between accelerator experts from the University of Liverpool and CERN, the GSI Helmholtz Centre for Heavy Ion Research, Friedrich Schiller University and the Helmholtz Institute Jena.
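The quoted current scale can be sanity-checked from the particle count. In a rough sketch (the revolution frequency of ~175 kHz is an assumed illustrative value, not taken from the paper), the average current of a circulating beam is I = N · q · f_rev:

```python
# Average current of a circulating beam in a ring: I = N * q * f_rev,
# i.e. the total circulating charge times how often it passes a fixed point.

ELEMENTARY_CHARGE = 1.602e-19   # magnitude of the antiproton charge, in coulombs

def beam_current(n_particles: float, f_rev_hz: float) -> float:
    """Average beam current in amperes for N particles at revolution frequency f_rev."""
    return n_particles * ELEMENTARY_CHARGE * f_rev_hz

# ~1e7 antiprotons with an assumed revolution frequency of ~175 kHz yields
# a current in the few-hundred-nanoampere range quoted above.
print(f"{beam_current(1e7, 1.75e5) * 1e9:.0f} nA")   # ~280 nA
```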
Previous incarnations of CCCs for accelerators suffered from sensitivity to mechanical vibrations and electromagnetic perturbations. Furthermore, these setups were used for measuring slow beams, usually from transfer lines of accelerators, and were unable to measure short bunched beams presenting fast current variations. In order to measure the beam current and intensity throughout the cycle of a synchrotron machine such as the AD, the CCC needed to be adapted to cope with the fast signals of bunched beams.
In an open access paper just published in the IOP journal Superconducting Science and Technology, Miguel Fernandes and co-authors describe the challenges of implementation and the first beam measurements. These are the first-ever CCC beam current measurements performed in a synchrotron using both coasting and short bunched beams. The paper demonstrates the exciting prospects of this new type of beam diagnostics device.
Releasing molecular brake allowed damaged neurons to regenerate
Injuries to the spinal cord can cause paralysis and other permanent disabilities because severed nerve fibers do not regrow. Now, scientists of the German Center for Neurodegenerative Diseases (DZNE) have succeeded in releasing a molecular brake that prevents the regeneration of nerve connections. Treatment of mice with Pregabalin, a drug that acts upon the growth inhibiting mechanism, caused damaged nerve connections to regenerate.
Human nerve cells are interconnected in a network that extends to all parts of the body. In this way, control signals are transmitted from head to toe, while sensory inputs flow in the opposite direction. For this to happen, impulses are passed from neuron to neuron, not unlike a relay race. Damage to this wiring system can have drastic consequences – particularly if it affects the brain or the spinal cord. This is because the cells of the central nervous system are connected by long projections. When severed, these projections, which are called axons, are unable to regrow.
Reawakening a lost talent
Neural pathways that have been injured can only regenerate if new connections arise between the affected cells. In a sense, the neurons have to stretch out their arms, i.e. the axons have to grow. In fact, this happens in the early stages of embryonic development. However, this ability disappears in the adult. Can it be reactivated? This was the question Professor Bradke and co-workers asked themselves. “We started from the hypothesis that neurons actively down-regulate their growth program once they have reached other cells, so that they don’t overshoot the mark. This means, there should be a braking mechanism that is triggered as soon as a neuron connects to others,” says Dr. Andrea Tedeschi, a member of the Bradke Lab and first author of the current publication.
Searching through the genome
In mice and cell cultures, the scientists started an extensive search for genes that regulate the growth of neurons. “That was like looking for the proverbial needle in the haystack. There are hundreds of active genes in every nerve cell, depending on its stage of development. To analyze the large data set we heavily relied on bioinformatics. To this end, we cooperated closely with colleagues at the University of Bonn,” says Bradke. “Ultimately, we were able to identify a promising candidate. This gene, known as Cacna2d2, plays an important role in synapse formation and function, in other words in bridging the final gap between nerve cells.” During further experiments, the researchers modified the gene’s activity, e.g. by deactivating it. In this way, they were able to prove that Cacna2d2 does actually influence axonal growth and the regeneration of nerve fibers.
Pregabalin triggered neuronal growth
Cacna2d2 encodes the blueprint of a protein that is part of a larger molecular complex. The protein anchors ion channels in the cell membrane that regulate the flow of calcium ions into the cell. Calcium levels affect cellular processes such as the release of neurotransmitters. These ion channels are therefore essential for communication between neurons.
In further investigations, the researchers used Pregabalin (PGB), a drug that had long been known to bind to the molecular anchors of calcium channels. Over a period of several weeks, they administered PGB to mice with spinal cord injuries. As it turned out, this treatment caused new nerve connections to grow.
“Our study shows that synapse formation acts as a powerful switch that restrains axonal growth. A clinically relevant drug can manipulate this effect,” says Bradke. In fact, PGB is already being used to treat lesions of the spinal cord, though it is applied as a painkiller and relatively late after the injury has occurred. “PGB might have a regenerative effect in patients if it is given soon enough. In the long term, this could lead to a new treatment approach. However, we don’t know yet.”
A new mechanism?
In previous studies, the DZNE researchers showed that certain cancer drugs can also cause damaged nerve connections to regrow. The main protagonists in this process are the “microtubules”, long protein complexes that stabilize the cell body. When the microtubules grow, axons do as well. Is there a connection between the different findings? “We don’t know whether these mechanisms are independent or whether they are somehow related,” says Bradke. “This is something we want to examine more closely in the future.”
Nano-sized metallic wires are attracting increasing attention as conductive elements for manufacturing transparent electrodes, which are employed in solar cells and touch screen panels. In addition to high electric conductivity, excellent optical transmittance is one of the most important parameters for an electrode in photovoltaic applications. An international team headed by HZB scientist Prof. Michael Giersig has now demonstrated that, for these applications, metallic mesh networks possessing fractal-like nano-features surpass other metallic networks in utility.
These findings have now been published in the most recent edition of the renowned journal Nature Communications.
Their new development is based on what is termed quasi-fractal nano-features. These structures have similarities to the hierarchical networks of veins in leaves. Giersig’s team was able to show that metallic networks with these features optimise performance of electrodes for several applications. They combine minimized surface coverage with ultra-low total resistance while maintaining uniform current density. In addition, it was demonstrated that these networks, inspired by nature, can surpass the performance of conventional indium tin oxide (ITO) layers. In experiments on artificially constructed electrode networks of different topologies, the scientists established that non-periodic hierarchical organisation exhibited lower resistance as well as excellent optical transmittance in comparison to periodic organisation. This led to elevated output power for photovoltaic components.
“On the basis of our studies, we were able to develop an economical transparent metal electrode,” says Giersig. “We obtain this by integrating two silver networks. One silver network is applied with a broad mesh spacing between the micron-diameter main conductors, which serve as the ‘highway’ for electrons transporting electrical current over macroscopic distances.” Alongside it, additional randomly distributed nano-wire networks serve as local conductors to cover the surface between the large mesh elements. “These smaller networks act as regional roadways beside the highways to randomise the directions and strengths of the local currents, and also create refraction effects to improve transparency beyond classical shadow-limited performance,” according to Giersig. “Solar cells based upon these electrodes show exceptionally high efficiencies.”
The capacity of lithium-ion batteries might be increased by six times by using anodes made of silicon instead of graphite. A team from the Helmholtz-Zentrum Berlin (HZB) Institute of Soft Matter and Functional Materials has observed for the first time in detail how lithium ions migrate into thin films of silicon. It was shown that extremely thin layers of silicon would be sufficient to achieve the maximal load of lithium.
The team was able to show through neutron measurements made at the Institut Laue-Langevin in Grenoble, France, that lithium ions do not penetrate deeply into the silicon. During the charge cycle, a 20-nm anode layer develops containing an extremely high proportion of lithium. This means extremely thin layers of silicon would be sufficient to achieve the maximal load of lithium.
Lithium-ion batteries provide laptops, smartphones, and tablet computers with reliable energy. Electric vehicles, however, have not come as far with conventional lithium-ion batteries. This is because the electrode materials currently utilised, such as graphite, can only stably take up a limited number of lithium ions, restricting the capacity of these batteries. Semiconductor materials like silicon are therefore receiving attention as alternative electrodes for lithium batteries. Bulk silicon is able to absorb enormous quantities of lithium. However, the migration of the lithium ions destroys the crystal structure of the silicon, which can swell to three times its volume, leading to major mechanical stresses.
Observation during charging cycle
Now a team from the HZB Institute for Soft Matter and Functional Materials headed by Prof. Matthias Ballauff has directly observed for the first time a lithium-silicon half-cell during its charging and discharge cycles. “We were able to precisely track where the lithium ions adsorb in the silicon electrode using neutron reflectometry methods, and also how fast they were moving,” comments Dr. Beatrix-Kamelia Seidlhofer, who carried out the experiments using the neutron source at the Institut Laue-Langevin.
Lithium-rich layer only 20 nanometres thick
She discovered two different zones during her investigations. Near the boundary with the electrolyte, a roughly 20-nm layer formed with an extremely high lithium content: 25 lithium atoms were lodged among every 10 silicon atoms. A second, adjacent layer contained only one lithium atom per ten silicon atoms. Both layers together are less than 100 nm thick after the second charging cycle.
Theoretical maximum capacity
After discharge, about one lithium ion per silicon atom remained in the boundary layer of the silicon exposed to the electrolyte. From this, Seidlhofer calculates that the theoretical maximum capacity of these types of silicon-lithium batteries lies at about 2300 mAh/g. This is more than six times the theoretical maximum attainable capacity for a lithium-ion battery constructed with graphite (372 mAh/g).
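The capacity figures follow from basic electrode stoichiometry. As a rough cross-check (a sketch, not part of the study: the effective Li:Si ratio of 2.4 below is an illustrative value chosen to reproduce the reported figure), the gravimetric capacity for x lithium ions per host atom of molar mass M is x · F / (3.6 · M):

```python
# Gravimetric capacity of a lithiated electrode: charge stored per gram
# of host material, capacity = x * F / (3.6 * M), in mAh/g.

FARADAY = 96485.0      # Faraday constant, C/mol
M_SILICON = 28.086     # molar mass of silicon, g/mol

def capacity_mah_per_g(x_li_per_host: float, molar_mass: float = M_SILICON) -> float:
    """Theoretical capacity in mAh/g for x lithium ions per host unit."""
    return x_li_per_host * FARADAY / (3.6 * molar_mass)

# An effective ratio of ~2.4 Li per Si reproduces the reported silicon value:
print(f"silicon: {capacity_mah_per_g(2.4):.0f} mAh/g")   # ~2290 mAh/g

# Cross-check with graphite (LiC6: one lithium per six carbon atoms):
print(f"graphite: {capacity_mah_per_g(1.0, 6 * 12.011):.0f} mAh/g")   # 372 mAh/g
```

The graphite cross-check lands exactly on the 372 mAh/g quoted in the article, which is why that figure is a standard benchmark.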
Less is more
These are substantial findings that could improve the design of silicon electrodes: very thin silicon films should be sufficient for adsorbing the maximum possible amount of lithium, which in turn would save on material and especially on energy consumed during manufacture – less is more!
Finally Up and Running
Silicon-air batteries are viewed as a promising and cost-effective alternative to current energy storage technology. However, they have thus far only achieved relatively short running times. Jülich researchers have now discovered why.
In theory, silicon-air batteries have a much higher energy density and are also smaller and lighter than current lithium-ion batteries. They are also environmentally friendly and insensitive to external influences. Their most important advantage, however, is their material. Silicon is the second most abundant element in the Earth’s crust after oxygen: it is cheap and its reserves are practically inexhaustible.
Scientists delineate molecular details of a new bacterial CRISPR-Cpf1 system and open possible avenue for alternative gene editing uses like targeting several genes in parallel
Only a few years after its discovery, it is difficult to conceive of genetics without the CRISPR-Cas9 enzyme scissors, which allow for a very simple, versatile and reliable modification of DNA of various organisms. Since its discovery, scientists throughout the world have been working on ways of further improving or adjusting the CRISPR-Cas9 system to their specific needs.
Researchers from the Max Planck Institute for Infection Biology in Berlin, Umeå University in Sweden and the Helmholtz Centre for Infection Research in Braunschweig have now discovered a feature of the CRISPR-associated protein Cpf1 that had not previously been observed in this family of enzymes: Cpf1 exhibits dual, RNA and DNA, cleavage activity. In contrast to CRISPR-Cas9, Cpf1 is able to process the pre-crRNA on its own, and then uses the processed RNA to specifically target and cut DNA. Because it requires neither a host-derived RNase nor a tracrRNA, this is the most minimalistic CRISPR immune system known to date.
Combining two separate catalytic activities in one enzyme opens possible new avenues for sequence-specific genome engineering, most importantly the targeting of multiple sites at once, so-called multiplexing.
CRISPR-Cas is part of the immune system of bacteria and is used to fight viruses. In the CRISPR-Cas9 system, the enzyme Cas9 cuts the virus DNA at a location specified by an RNA molecule – known as CRISPR RNA (crRNA) in complex with another RNA, the so-called tracrRNA. This puts the pathogens out of action.