Rapid Screening of Retrieved Knee Prosthesis Components by Confocal Raman Micro-Spectroscopy

Featured Application: Application of confocal Raman spectroscopy in the search for the reason of failure in knee arthroplasty.

Abstract: Aim: To evaluate the reason of failure and the surface modifications of a retrieved knee prosthesis. Methods: Rapid confocal Raman spectroscopy screening was applied to the surface of a retrieved knee prosthesis (both the titanium and the UHMWPE (ultra-high-molecular-weight polyethylene) component) in order to determine the predominant implant damage, along with the chemical composition of the synovial fluid accumulated in the stem of the tibial component during the implantation period. Correlations between the medical records of the patient (clinical and radiographic information) and the spectroscopic results are pointed out, the parameters being interpreted in the context of the proper functioning and life span of knee prostheses. Results: The metallic tibial component did not show any modification during the implantation period, as demonstrated by the well-preserved titanium component, with the signature of the anatase phase detected in the retrieved component, compared to an unused piece. The spectral features of the polymeric component (ultra-high-molecular-weight polyethylene, UHMWPE) revealed structural modifications of the crystalline and amorphous phases accompanied by an insignificant level of oxidation (OI < 1). Scratching, pitting and persistent organic spots resulting from mechanical and chemical deterioration were noticed on the surface of the retrieved insert. Acrylic cement deterioration was also noticed. Synovial fluid collected from the stem of the tibial component demonstrated a lipidomic profile. Conclusions: Combining the clinical evidence with confocal Raman spectroscopy allowed a rapid screening with high sensitivity and nondestructive measurements in this case of failure in TKA (total knee arthroplasty). Third-body wear and the lipidomic profile of the synovial fluid are cumulative factors of failure in this case, resulting in an osteolysis that finally led to an aseptic loosening.

Introduction

The most common types of arthritis (severe degenerative joint disease) which may affect the knee joint are osteoarthritis, rheumatoid arthritis and traumatic arthritis. In osteoarthritis, the breakdown of joint cartilage involves loss of proteoglycan content and collagen degradation. Synovial fluid is primarily composed of water, proteins, proteoglycans, glycosaminoglycans (with a crucial role in joint lubrication), lipids, small inorganic salts and metabolites such as amino acids or sugars, each of these components having a particular function in maintaining viscoelastic properties and regulating the biologic activity of the cytokines and enzymes involved in osteoarthritis.

Here, a retrieved knee prosthesis (both the titanium and the UHMWPE component) was investigated after 10 years of use. In addition, the chemical composition of the synovial fluid accumulated in the stem of the tibial component during the implantation period was analyzed, aiming to determine the cause of implant failure. Using confocal micro-Raman spectroscopy, the retrieved polyethylene component was compared with a newly opened counterpart, to assess the molecular changes associated with the aging process in physiological medium. Metal-plastic aging and the possible scenario of failure due to the combined effect of mechanical and biochemical processes that may affect the longevity of knee prostheses are discussed. Correlations between the medical records of the patient (clinical and radiographic information) and our spectroscopic results are proposed.
Clinical Case Description

The present study was performed in agreement with the ethical standards of the Helsinki Declaration and approved by the Ethical Committee of the University of Oradea, Romania (ref. nr. 08/05.2020); the patient signed an informed consent agreement for the surgical protocol. The patient was a 60-year-old male, admitted to the orthopedics clinic I at the Emergency County Hospital, Oradea (Romania), complaining of right knee pain and partial functional impotence of the right pelvic limb. From the personal pathologic background of the patient, we noted that he had been the victim of a car crash and suffered a fracture of the right tibial plateau, eight years before the medical examination, for which surgery was performed. Additionally, the patient presented hydrostatic varices of the lower limbs bilaterally, lumbar spondylosis and lumbar discopathy. After clinical and paraclinical examination (radiological exam), the diagnosis was established: right gonarthrosis secondary to a tibial plateau fracture. Total cemented arthroplasty of the right knee was performed using the NexGen® complete knee solution (Zimmer, Warsaw, IN, USA) and Gentafix® 1 (Teknimed, Vic en Bigorre, France) acrylic bone cement. The patient had a favorable evolution after surgery. After 10 years, a clinical and radiological reassessment was made; a destabilization of the tibial component was noticed. As a consequence, revision of the total knee arthroplasty was performed. During the surgery, instability of the tibial component accompanied by decementation and loosening was noticed. No sign of sepsis was found. The radiological examination before and after arthroplasty and revision surgery is presented in Figure 1.
[Figure 1 caption, partial: ...and minimal osteolysis around the distal region of the tibial implant (lateral view); (d) anteroposterior and lateral views after revision of the total knee arthroplasty using a modular tibial implant with short stem and two screws for re-insertion of the patellar tendon (after tibial tuberosity osteotomy).]

Retrieved Components of Knee Prosthesis and Confocal Raman Spectroscopy

Figure 2 presents photographic images of the tibial components retrieved after 10 years, showing the surface deterioration, especially of the UHMWPE component (delamination, scratching), along with the accumulation of pelleted synovial fluid in the stem of the tibial component (Figure 2b). Confocal micro-Raman spectra were acquired using an InVia Reflex Raman spectrometer (Renishaw, New Mills, UK) equipped with an upright microscope (Leica). Raman spectra were recorded from the surface of the metallic and plastic components using a 785 nm laser diode for excitation. The retrieved plastic and metal were placed under different orientations on the Raman microscope stage so that distinct Raman signals could be collected from the surfaces exposed to bone and metal, through a 20× objective (NA 0.35), with 30 mW excitation power, 1 accumulation and 1 to 10 s exposure, depending on the collection mode (short range from 700 to 1800 cm−1 or extended mode from 100 to 3200 cm−1). Different data collection modes were applied to control for any spectral change associated with longer laser exposure of the raw, freshly retrieved biological material. The spectral resolution was 1 cm−1. Spectral calibration was achieved using an internal silicon standard. The Raman spectral characterization comprised at least 10 spectral acquisitions from different active sites of the knee prosthesis (i.e., the tibial metallic plateau in direct contact with UHMWPE or in contact with biologic tissue, the interface between UHMWPE and the femoral component, the interface between the metallic component and the acrylic cement, synovial fluid collected from the stem, and acrylic cement detached from the tibial component). A total of 159 data files and micrographs were acquired via Raman micro-spectroscopy.
In the case of the UHMWPE component, a computational analysis of the relevant spectral regions was applied in the 1000-1200, 1250-1350 and 1400-1500 cm−1 intervals, respectively. The second-derivative spectra of each interval were smoothed with a nine-point Savitzky-Golay function and the resulting components were used to locate the position of each relevant band component. The curve-fitting analysis was performed by means of Origin 8 software, and the spectra were baseline corrected according to the algorithm of Dong et al., assuming a Gaussian profile. The intensity of each spectral band component was further used to calculate the percentage of crystalline (C%) and amorphous (A%) contributions, along with the oxidation index (OI), and interpreted in terms of the influence of aging on the structure of UHMWPE.
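The band-location and curve-fitting workflow described above can also be reproduced outside Origin. The sketch below is a minimal Python illustration of the same steps on a synthetic spectrum; it is not the authors' actual script, and the band positions, widths and noise level are illustrative assumptions:

```python
# Minimal sketch of the band-location and curve-fitting steps on a
# synthetic spectrum (illustrative band parameters, not the study's data).
import numpy as np
from scipy.signal import savgol_filter
from scipy.optimize import curve_fit

def gaussian(x, amp, center, width):
    return amp * np.exp(-((x - center) ** 2) / (2.0 * width ** 2))

def two_gaussians(x, a1, c1, w1, a2, c2, w2):
    return gaussian(x, a1, c1, w1) + gaussian(x, a2, c2, w2)

# Synthetic 1250-1350 cm-1 region: a narrow "crystalline" band near
# 1294 cm-1 plus a broad "amorphous" band near 1305 cm-1, with noise.
wavenumber = np.linspace(1250, 1350, 500)
rng = np.random.default_rng(0)
spectrum = (two_gaussians(wavenumber, 1.0, 1294.0, 4.0, 0.35, 1305.0, 9.0)
            + 0.01 * rng.standard_normal(wavenumber.size))

# Nine-point Savitzky-Golay smoothed second derivative; its local minima
# mark candidate band centers.
d2 = savgol_filter(spectrum, window_length=9, polyorder=3, deriv=2)
is_min = (d2[1:-1] < d2[:-2]) & (d2[1:-1] < d2[2:])
candidates = wavenumber[np.r_[False, is_min, False]]

# Gaussian curve fitting, seeded near the expected band positions.
p0 = [1.0, 1294.0, 4.0, 0.3, 1305.0, 8.0]
popt, _ = curve_fit(two_gaussians, wavenumber, spectrum, p0=p0)
a1, c1, w1, a2, c2, w2 = popt
area = lambda a, w: a * abs(w) * np.sqrt(2.0 * np.pi)  # Gaussian band area
print(f"{len(candidates)} candidate bands; fitted centers {c1:.1f}, {c2:.1f} cm-1; "
      f"areas {area(a1, w1):.3f}, {area(a2, w2):.3f}")
```

The fitted band areas would then feed the C%, A% and OI calculations described in the text.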
The Raman spectra of the acrylic cement pieces detached from the metallic tibial component were analyzed in the range 600-3200 cm−1 and compared with the abundant literature data on polymethyl methacrylate (PMMA)-based acrylic cement, while the synovial fluid collected from the stem of the tibial component was examined in terms of the major Raman bands describing its chemical composition, such as those of proteins and lipids. Spectra were baseline corrected by a multi-point baseline fitting routine and, depending on the comparative purpose, normalized to unity.

Tibial Metallic Component

The micro-Raman spectra and corresponding images recorded on the surface of the new (unused) and retrieved tibial components are presented in Figure 3, showing that the spectral features of the metallic component (titanium) are well preserved during the implantation interval. Both spectra revealed an intense, broad band at 144 cm−1, which is typical for titanium oxide polymorphs. Raman scattering is sensitive to material phase and polymorphism; thus, even though two materials may have identical chemical formulae, a different crystal structure or phase will often result in distinct spectra. According to the literature, the strongest vibrational mode of the anatase polymorph is the Eg mode at about 144 cm−1. Titanium anodization spectra reported by Uttiya et al. using acid media indicated the formation of a titanium oxide layer (crystalline anatase) with typical bands at 144, 396, 515 and 638 cm−1. Rutile, another TiO2 polymorph, exhibits characteristic peaks at 144 (very weak), 430 and 590 cm−1 (very strong), corresponding to the B1g, Eg and A1g symmetries, respectively. In our case, even though the intense band at 144 cm−1 may indicate polymorphism, we noted the absence of all the other Raman active modes of the TiO2 polymorphs. On the other hand, passive film formation on pure titanium under different anodic potential treatment conditions has been investigated by Zhang et al. In that case, the overall Raman feature of the oxide layer showed decreased intensity of the anatase Raman modes and broadening of the main band (144 cm−1) with decreasing applied potential. Comparing our results with these reports, we may conclude that the spectra collected from the metallic component resembled a poorly crystalline anatase film on pure titanium, whether newly opened or after prosthetic retrieval. Thus, the overall vibrational features (Figure 3) of the titanium surface remained unchanged.

Aged UHMWPE Component

Multiple Raman spectra were recorded on the surface of the UHMWPE component of the retrieved prosthesis and compared with those recorded from a new one (unused, as received from the producer), as presented in Figure 4, highlighting the vibrational details in the main spectral regions. Spectral acquisition was performed from multiple points so as to comprise the contact interfaces with the femoral and tibial sites. The corresponding micrographs are presented in Figure 5, showing deterioration signs due to mechanical contact (scratching and pitting), but also due to interaction with the biologic tissue, as we noticed the presence of persistent organic spots on the surface of the retrieved insert. The main vibrational Raman bands are characteristic of polyethylene: 1060, 1128 and 1168 cm−1, assigned to the C-C symmetric and asymmetric stretching modes; 1294 cm−1, assigned to CH2 twisting vibrations; and the triplet centered at about 1440 cm−1, assigned to CH2 bending modes.
In the high wavenumber region, the characteristic peaks of the symmetric and asymmetric CH2 stretching vibrations are located at 2846 and 2881 cm−1, respectively. Comparing the spectra of the new and old plastic, two main characteristics are observed: (i) a systematic slight shift (2-3 cm−1) toward higher wavenumbers in the old polyethylene and (ii) a significant decrease of the relative intensities of the main vibrational bands due to the aging process. According to the literature, the vibrational bands most sensitive to the structural and morphologic modifications caused by the oxidation process, and consequently to the degree of crystallinity, are those at 1440, 1294, 1080 and 1310 cm−1, which can be used for the calculation of the crystalline and amorphous phase content. For this purpose, computational analysis by spectral deconvolution and curve fitting was performed for the intervals 1000-1200, 1250-1350 and 1400-1500 cm−1, in order to calculate the exact band intensities, as presented in Figure 6, for both the unused and the old (retrieved) UHMWPE component. Moreover, since the spectral region between 1250 and 1350 cm−1 is considered to be independent of the polymer chain conformation, the area of the band at 1294 cm−1 is accepted as a standard to which all the other bands can be related and compared.
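For reference, one widely used convention for turning the fitted band intensities into phase fractions (after Strobl and Hagedorn) normalizes against the conformation-independent twisting intensity. The exact band choices, coefficients and oxidation-index definition used in this study follow its own cited literature, so the relations below are a representative assumption rather than the paper's formulas:

```latex
% Representative phase-fraction relations for polyethylene (Strobl--Hagedorn
% convention); I_T is the total twisting intensity in the 1250--1350 cm^{-1}
% region, used as the internal standard (here, the 1294 cm^{-1} band area).
\alpha_c = \frac{I_{1416}}{0.46\, I_T}, \qquad
\alpha_a = \frac{I_{1303}}{I_T}, \qquad
\alpha_{inter} = 1 - \alpha_c - \alpha_a
% A Raman oxidation index is then commonly taken as a carbonyl-to-reference
% band ratio, e.g. OI = I_{1740}/I_T (definitions vary between studies).
```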
[Figure 4 caption, partial: significant intervals are shown in the (a) 960-1200 cm−1, (b) 1280-1310 cm−1, (c) 1400-1500 cm−1 and (d) 2820-2960 cm−1 spectral ranges. Spectra were acquired under identical excitation and optical collection conditions (785 nm excitation, 30 mW, 10 s acquisition, 1 accumulation, 20× objective). Multiple spectra collected from single points of the old plastic are shown. Black spectra refer to color-spotted plastic, supposed to be due to organic micro-deposits; however, only a decreased polyethylene signal was observed.]

The results of the crystallinity, amorphous phase and oxidation index calculations based on the curve fitting of the main Raman band intensities, for both the new and the retrieved components, are summarized in Table 1. An increase of the degree of crystallinity accompanied by a decrease of the amorphous phase can be noticed in the retrieved component. As the crystalline and amorphous percentages do not add up to 100%, we can assume the presence of a minor fraction of an intermediate phase. On the other hand, a small increase of the oxidation index was noticed in the retrieved component, which is not surprising, as enhanced crystallinity is often accompanied by oxidation during in vivo exposure. It is considered that a value of OI = 3 represents a critical step in the loss of mechanical properties due to fatigue damage in vivo.

Acrylic Cement Debris and Synovial Fluid

The Raman spectra acquired from the surface of the acrylic cement detached from the metallic stem are presented in Figure 7A,B in the low and high wavenumber regions. The fingerprints of the acrylic cement are the strongest band at 2953 cm−1, the stretching mode attributable to C-H, CH2 and CH3 in PMMA; 1726 cm−1, assigned to the (C=O) group stretching mode; 1450 cm−1 (strong intensity), due to CH3 and CH2 group deformation; and 812 cm−1 (strong and sharp), due to the deformation of the C-O-C group. The ten-year-old acrylic cement exhibited several bands that are slightly different from those of pristine PMMA, as shown in Table 2, in comparison with previous Raman data reported on PMMA. The intense band at 986 cm−1, which is absent in pristine PMMA, represents the signature of gentamicin (C-C stretching vibration), while the characteristic band at 1648 cm−1 reported by several authors was not observed here. However, this latter band was also not reported in the first comprehensive vibrational IR and Raman characterization of PMMA available from Willis et al.
The slight changes in the main band positions and the occurrence of new bands, such as that observed at 1602 cm−1 (not observed in the synovial fluid either), correlated with the yellow color of the retrieved acrylic cement, may suggest alterations in the polymer composition and structural properties. To assess whether the cement was loaded with synovial components, we comparatively show its Raman signature (Figure 7C,D), while the corresponding micrographs taken with the Raman video camera are shown in Figures 8 and 9. The assignment of the vibrational bands related to the synovial fluid is summarized in Table 3, based on the literature. Table 3. Assignment of the vibrational bands in the Raman spectra of the synovial fluid accumulated as pellet deposition in the ten-year-old retrieved prosthesis (Figure 7C,D).
It is well known that the physiological composition of normal synovial fluid includes a high level of albumin (6-10 g/L of a total of ~25 g/L of proteins), along with γ-globulin and hyaluronic acid. As there is some overlap of the characteristic vibrational bands of lipids and proteins in the high wavenumber region and also at about 1656 cm−1, we sought to clarify the main features of the synovial fluid by comparing its Raman signature with those of proteins and lipids. As reference spectra, human serum albumin (HSA, Sigma-Aldrich, St. Louis, MO, USA), collagen from bovine Achilles tendon (Sigma-Aldrich) and vegetal oil (rapeseed) were used from our Raman lab database, as presented in Figure 10. The lipid profile of the spectra collected from the synovial fluid is evidenced by the intense and sharp band at 2852 cm−1 accompanied by broad and weak bands at 2875 and 2934 cm−1, very similar to the spectral features of oils, fats and fatty acids. By comparison, the main Raman feature of albumin and collagen in the high wavenumber region is the very intense band at 2930 cm−1. Regarding the assignment of the band at 1656 cm−1, an overlap of vibrational contributions from proteins and lipids can be assumed.

Discussion

Although the design and surgical fixation techniques in arthroplasty have significantly improved in the last decade, progression of degenerative disease to the adjacent compartment has commonly been reported as a reason for revision of TKA. The presence of an implant alters alignment and cartilage loading, thus favoring degenerative changes or the production of wear debris (metallic, polymeric, and/or cement). Many clinical as well as retrieval studies have evaluated the presence or absence of the damage modes: dishing, embedding, pitting, scratching and extruded cement (cement overhanging the margins of the component), while polyethylene components were also assessed for abrasion, burnishing and delamination.
In general, joint simulators allow preclinical evaluation of the wear of artificial joints in a controlled environment, the results being similar to those found in retrieved prostheses. Specifically, extruded cement may be associated with TKA revision and should be minimized during the index surgery. Numerous studies have pinpointed various factors influencing the extent of wear of the tibial (and patellar) polyethylene components, including knee alignment, polyethylene thickness, surface geometry, quality of the polyethylene, manufacturing processes (such as heat treatment of the articular surfaces) and gamma irradiation in air used for sterilizing the components [6]. From the material point of view, despite the large number of works in recent years that seem to indicate a satisfactory behavior of UHMWPE for bearing applications, further research needs to be carried out in order to properly predict the lifetime of these prostheses when implanted.

Vibrational spectroscopies such as FTIR (Fourier transform infrared) and FT-Raman are advanced analytical methods, highly precise and nondestructive, fast and very useful in assessing the oxidation range of retrieved UHMWPE components. Oxidation is an inherent state of the UHMWPE components used in total joint replacements. The degree of oxidation of UHMWPE components has been linked to changes in the mechanical properties of the material, such as decreased fatigue strength and the production of wear particles around the implant site. A higher crystalline fraction corresponds to a more pronounced oxidative behavior, attributable to the formation of free radicals during in vivo exposure. Hence, Raman spectroscopy may represent a useful tool for monitoring and controlling the quality of UHMWPE components and would be a feasible choice for investigating the behavior of novel UHMWPE-derived materials with incorporated antioxidant agents like α-tocopherol and others.

In our study, confocal Raman spectroscopy was employed for rapid screening of the retrieved tibial components of a knee prosthesis, evaluating the modifications related to the aging process as possible reasons for failure. The metallic tibial component did not show any modification during the implantation period, as demonstrated by the well-preserved anatase phase detected in the retrieved component, compared to the unused piece. Generally, metals are subject to corrosion when in contact with body fluid, as the body environment is very aggressive due to the presence of chloride ions and proteins. Specifically, titanium and titanium alloys that come into contact with biologic systems may undergo some degree of corrosion, while metal ions released intra-articularly may form complexes with native proteins. These metal-protein complexes may act as antigens or allergens and cause an immunologic response in the body or the synovial joint. In this case, we found no evidence of corrosion, and hence released metal ions cannot be a reason for failure. Even if the protective and stable oxides on titanium surfaces are able to provide favorable osseointegration, in this case an oxide-free surface is desired to ensure the best contact with the UHMWPE component. In the retrieved titanium component, the eventually expected oxidation (TiO2) layer, with the characteristic Raman bands specific to each polymorph (anatase, rutile, brookite), was not observed.
To clarify the complex phenomena occurring in vivo in the UHMWPE component, accurate structural analyses were performed in the present study by assessing the crystallinity degree and oxidation behavior of the UHMWPE component due to the aging process. Polyethylene is a very transparent material to visible lasers, and hence the penetration depth of the laser can be significant. A careful consideration of the main spectral regions was made by comparing a new UHMWPE component with the retrieved one, highlighting important details such as slight shifts accompanied by a reduction of band intensities due to the aging process. Moreover, based on spectral deconvolution and a curve-fitting procedure, the crystalline (C%) and amorphous (A%) percentages in each sample, along with the oxidation index (OI), were calculated. An insignificant level of oxidation was demonstrated (OI less than one), while small modifications of the crystalline and amorphous phases were noticed. It is considered that OI values of less than one and degrees of crystallinity in the range of 45-50% do not affect the mechanical properties, but only the wear lifetime with respect to degradation. According to previous studies, both the degree of crystallinity and the amorphous phase are altered by oxidation, but mechanical strain is considered to be the main factor in the alteration of crystallinity; conversely, the chemical process (oxidation) is the only one affecting the amorphous phase. An interesting study applying the confocal Raman spectroscopic technique to quantitatively assess the structural features of two kinds of UHMWPE acetabular cups belonging to different generations, and thus manufactured by different procedures, demonstrated that the oxidation profiles of polyethylene cups belonging to different generations greatly differed after wear testing. In a similar approach, micro-Raman spectroscopy was used to investigate the effects of the sterilization method (gamma irradiation and ethylene oxide treatment) on the crystallinity degree of UHMWPE acetabular cups, demonstrating that unworn gamma-sterilized cups were significantly more crystalline than ethylene oxide-sterilized ones. All these previous works demonstrated the superiority of confocal Raman spectroscopy over other available techniques (such as differential scanning calorimetry or X-ray diffraction) used for the quantitative evaluation of polymer crystallinity.

In the present study, the micrographs recorded at the interfaces of the UHMWPE with both the metallic component and the biologic tissue evidenced mechanical damage such as scratches and pitting, along with persistent organic spots. These aspects could be correlated with the decementation observed on the radiographic examination. PMMA-based acrylic bone cements have traditionally been used for the fixation of total joint replacement prostheses to periprosthetic bone, based on radical polymerization of the MMA, an exothermic reaction resulting in a temperature increase in the curing bone cement. It has been demonstrated that biodegradation of polymeric bone cements can result from the in vivo environment. For example, acrylic cement specimens retrieved six years after total hip arthroplasty showed a significant decrease in fatigue strength compared with unused (control) specimens. On the other hand, some previous works investigating antibiotic release from acrylic bone cements have postulated that water penetrates into the cement through surface cracks and voids created by the release of gentamicin sulfate.
The involvement of cement particles in the wear process of knee prostheses has been demonstrated in previous works. For example, Wasielewski et al. reported severe articular and third-body wear from cement debris in a retrieval analysis of 55 polyethylene tibial inserts, with deep prismatic scratching of the cobalt-chromium alloy condylar components. A more recent study, developed in a knee wear simulator, evaluated the influence of third bodies (bone and PMMA particles) on the number, size and shape of the wear debris generated. These previous studies demonstrated that free cement debris can significantly increase the generation of wear particles in TKA. Hence, it is generally accepted that particle size, number and morphology affect the biologic response, resulting in an osteolysis that finally leads to an aseptic loosening in TKA. In our case, the deterioration of the acrylic cement due to aging, and third-body wear of cement particles, could be a reason for failure, which is also supported by the radiographic examination and the mineralization process evidenced by the Raman spectra of the acrylic cement debris. Pre-revision radiographs were evaluated for implant alignment as well as for the presence or absence of extruded cement and periprosthetic osteolysis. It is well known that wear is very difficult to assess on conventional X-rays; in heavy patients, it can appear suddenly before the tenth year after implantation, in the form of early osteolysis.

It was previously demonstrated that resonance Raman (RR) and surface-enhanced Raman scattering (SERS) analyses are very useful tools for estimating knee osteoarthritis grading, based on the signature of proteins in the synovial fluid. An accurate discrimination between low-grade and high-grade osteoarthritis was possible based on the molecular changes taking place in the synovial fluid of patients. Moreover, in situ techniques and real-time measurements using epidural needle Raman sensors were recently described, testing the ability of Raman spectroscopy to distinguish each tissue type. In the present work, confocal Raman spectroscopy applied to the synovial fluid collected from the stem of the tibial component demonstrated a lipidomic profile, based upon a comparison with commercial lipid, human serum albumin and collagen reference samples. According to the literature, several clinical studies have revealed correlations between various lipid classes (mainly triglycerides and cholesterols) in serum and synovial fluid and different stages of osteoarthritis, synovitis and wound repair. Hence, in the present case, we may assume that a high lipid level could be a cumulative factor involved in the failure, complementary to the third-body wear, which drastically affected the surrounding tissue. By corroborating the clinical and radiological examination with the confocal Raman results, we were able to evaluate the reason for failure of the knee implant upon TKA, based on rapid screening of the retrieved tibial components.

Conclusions

In the present study, a failure situation upon TKA was investigated in a rapid screening, by combining the clinical evidence with confocal Raman spectroscopy, with high sensitivity and nondestructive measurements. The retrieved components of the knee prosthesis (both the titanium and the UHMWPE component), after 10 years of use, were investigated jointly with the assessment of the chemical composition of the synovial fluid accumulated in the stem of the tibial component during the implantation period.
From the material point of view, the metallic component (titanium) did not show any structural modification, as revealed by the Raman spectra. Using confocal micro-Raman spectroscopy, the ten-year-aged polyethylene showed slight structural changes in terms of its Raman signature. Additional changes were noted in the Raman spectra of the aged acrylic cement, based on the comparison with data from pristine PMMA. Metal-plastic aging and the possible scenario of failure due to the combined effect of mechanical and biochemical processes that may affect the longevity of knee prostheses were discussed and interpreted in the context of the proper functioning and life span of the knee prosthesis. The UHMWPE component presented an insignificant level of oxidation (OI < 1), while small modifications of the crystalline and amorphous phases were noticed, as demonstrated by the spectral deconvolution and curve-fitting procedure. By analyzing the acrylic cement debris and the synovial fluid composition, we may assume that the lipidomic profile and third-body wear were cumulative factors of failure in this case. The results are also supported by the clinical and radiographic examination, which evidenced a drastic deterioration of the surrounding tissue. The cement debris affected the biologic response, resulting in an osteolysis and finally an aseptic loosening.
Seasonal dynamics of internal waves governed by stratification stability and wind: Analysis of high-resolution observations from the Dead Sea. Internal waves in stratified lakes are affected by the seasonally varying stratification and by wind forcing. We studied the seasonal dynamics of internal waves by means of high-resolution observations and model simulations for the Dead Sea. A two-layer hydrostatic model provided high correlations between the measured thermocline depth and the lake level oscillations. Seasonally, the amplitude of the thermocline fluctuations was anticorrelated with the density difference between the water layers; the largest fluctuations were observed when stratification was weak in spring/fall, and moderate to weak fluctuations occurred in midsummer when stratification was fully developed. The surface and internal waves propagated counterclockwise along the coasts at a speed of ~0.5 m s−1. Power spectra of the observed wind, as well as of the measured and simulated lake level and thermocline depth, show a pronounced diurnal period during summer, suggesting forcing by the diurnally varying wind. During spring and fall, when the water column stability diminishes, a hint of longer wind periods appears in addition to the diurnal mode. Accordingly, the lake level and thermocline depth fluctuations respond at lower frequencies. In the fall, the longer wind periods are close to the lake's first vertical normal mode, suggesting that resonant amplification of the internal waves may explain the observed lower frequency response of the level and thermocline oscillations. Reduction of the stratification stability originating from anthropogenic water diversion over the past four decades, associated with lake level decline and salinity increase, has led to increases in internal wave amplitudes and periods.
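The link between stratification strength and wave behavior in a two-layer model can be made explicit. The relations below are standard textbook results, not equations quoted from the paper: the interfacial long-wave speed scales with the reduced gravity g', so a smaller density difference both slows the internal waves and, for a given wind stress, permits larger thermocline excursions.

```latex
% Interfacial long-wave phase speed in a two-layer fluid with layer depths
% h_1, h_2 and densities rho_1 < rho_2 (standard result):
c_i = \sqrt{\, g' \, \frac{h_1 h_2}{h_1 + h_2} \,}, \qquad
g' = g \, \frac{\rho_2 - \rho_1}{\rho_2}
% Wind-forced interface tilt over a basin of length L scales roughly as
% \Delta h \sim u_*^2 L / (g' h_1): a weaker density difference (smaller g')
% implies slower waves and larger thermocline displacements.
```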
Toward a new transcendental aesthetic: Merleau-Ponty's appraisal of Kant's philosophical method. ABSTRACT In light of the central role scientific research plays in Merleau-Ponty's phenomenology, the question has arisen whether his phenomenology involves some sort of commitment to naturalism or whether it is better understood along transcendental lines. In order to make headway on this issue, I focus specifically on Merleau-Ponty's method and its relationship to Kant's transcendental method. On the one hand, I argue that Merleau-Ponty rejects Kant's method, the method-without-which, which seeks the a priori conditions of the possibility of experience. On the other hand, I show that this does not amount to a methodological rejection of the transcendental altogether. To the contrary, I claim that Merleau-Ponty offers a new account of the transcendental and a priori that he takes to be the proper subject matter of his phenomenological method, the method of radical reflection. And I submit that this method has important affinities with aesthetic themes in Kant's philosophy.
Effect of Roux-en-Y surgery and medical intervention on Barrett's-type changes: an in vivo model. In animal models, mixed acid and bile reflux into the lower esophagus induces histological changes comparable to Barrett's metaplasia (BM) and neoplasia. The aim of this study was to compare the effects of Roux-en-Y (REY) surgery and medical therapy on BM in animals before the development of neoplasia. A vagus-preserving esophagojejunostomy operation was performed on Sprague-Dawley rats to achieve gastroduodenal reflux (GDR) into the esophagus in 30 animals. After 3 months, the changes were reversed in 10 animals (Group REY) by REY operation, 10 animals (Group PPI) were given a proton pump inhibitor during the postoperative period, and 10 animals (Group GDR) had no further intervention. At 4 months, histological examination of the lower esophagus was performed by an experienced pathologist. Physiological parameters were also analyzed in all animals preoperatively and at 4 months postoperatively. The length of columnar mucosa, degree of acute inflammation, degree of metaplasia, and composite BM score were significantly reduced by REY surgery compared with medical therapy and with control (columnar mucosa in cm: Group REY 0.44 +/- 0.06 vs. Group PPI 0.92 +/- 0.08, P < 0.001; vs. Group GDR 1.17 +/- 0.31, P < 0.03). There was no neoplasia seen in any specimen. At 4 months postoperatively, Group REY showed significantly greater normalization of physiological parameters to preoperative levels than Group PPI (P < 0.05). REY surgery is potentially more beneficial than medical therapy in reversing the histological and biochemical changes of Barrett's esophagus due to GDR.
Medical legal complications of cutaneous surgery. Complications in surgery are unfavorable outcomes resulting from a procedure. They can occur intraoperatively, immediately after surgery, or in the distant future. Minimizing risk and promptly treating complications are important to avoid potentially disastrous outcomes. This article reviews the more common complications of cutaneous surgery and then analyzes the legal consequences of these complications.
Adjuvant activity of 6-O-acyl-muramyldipeptides to enhance primary cellular and humoral immune responses in guinea pigs: adaptability to various vehicles and pyrogenicity Thirteen 6-O-acyl-N-acetylmuramyl-L-alanyl-D-isoglutamines (6-O-acyl-MDPs), including four inactive D-isoasparagine and L-isoglutamine analogs, were tested for their pyrogenicity and immunopotentiating activity to stimulate primary humoral and cellular immune responses in guinea pigs to a model protein antigen, ovalbumin, when administered in various vehicles. Among them, derivatives whose muramic acid residue was substituted by alpha-branched (and beta-hydroxylated) higher fatty acids at the carbon-6 position, especially 6-O-(2-tetradecylhexadecanoyl)-MDP (B3O-MDP) and, to a lesser extent, 6-O-(3-hydroxy-2-docosylhexacosanoyl)-MDP (BH48-MDP) and its L-serine analog, were found to exert strong adjuvant activity in both the induction of delayed-type hypersensitivity and the stimulation of circulating precipitating antibody levels when combined with nonirritating vehicles (liposomes, squalene-in-water emulsion, and phosphate-buffered saline). These vehicles did not efficiently support the adjuvant activity of MDP, the parent molecule of the above lipophilic derivatives. Pyrogenicity tests showed that introduction of alpha-branched higher fatty acid groups but not of straight, long-chain fatty acids at the 6-position of the muramic acid residue resulted in marked decrease of the pyrogenicity inherent to MDP via intravenous administration. |
Residential yard management and landscape cover affect urban bird community diversity across the continental USA. Urbanization has a homogenizing effect on biodiversity and leads to communities with fewer native species and lower conservation value. However, few studies have explored whether and how land management by urban residents can ameliorate the deleterious effects of this homogenization on species composition. We tested the effects of local (land management) and neighborhood-scale (impervious surface and tree canopy cover) features on breeding bird diversity in six US metropolitan areas that differ in regional species pools and climate. We used a Bayesian multi-region community model to assess differences in species richness, functional guild richness, community turnover, population vulnerability, and public interest in the bird community across six land management types: two natural-area park types (separate from and adjacent to residential areas), two yard types with conservation features (wildlife-certified and water-conservation) and two lawn-dominated yard types (high- and low-fertilizer application), together with the surrounding neighborhood-scale features. Species richness was higher in yards compared with parks; however, parks supported communities with high conservation scores while yards supported species of high public interest. Bird communities in all land management types were composed primarily of native species. Within yard types, species richness was strongly and positively associated with neighborhood-scale tree canopy cover and negatively associated with impervious surface. At a continental scale, community turnover between cities was lowest in yards and highest in parks. Within cities, however, turnover was lowest in high-fertilizer yards and highest in wildlife-certified yards and parks. Our results demonstrate that across regions, preserving natural areas, minimizing impervious surfaces, and increasing tree canopy are essential strategies to conserve regionally important species. However, yards, especially those managed for wildlife, support diverse, heterogeneous bird communities with high public interest and the potential to support species of conservation concern. Management approaches that preserve protected parks, encourage wildlife-friendly yards, and acknowledge public interest in local birds can advance successful conservation in American residential landscapes.
Longitudinal improvements in communication and socialization of deaf children with cochlear implants and hearing aids: evidence from parental reports. BACKGROUND Research has shown that the cochlear implant may improve deaf children's speech and communication skills. However, little is known about its effect on children's ability to socialize with hearing peers. METHODS Using a standardized psychological measure completed by parents and a longitudinal design, this study examined the development of communication, socialization, and daily living skills of children who used hearing aids or cochlear implants for an average of 11 and 6 years, respectively. RESULTS Results show that children with cochlear implants, who were more delayed than children with hearing aids at the outset, made significant progress over time. Children with both devices achieved age-appropriate development after years of hearing aid or cochlear implant use. CONCLUSIONS The pattern of results suggests that cochlear implants may be effective in improving deaf children's communication and social skills. |
Subcortical and Cortical Electrophysiological Measures in Children With Speech-in-Noise Deficits Associated With Auditory Processing Disorders. PURPOSE The aim of this study was to analyze the subcortical and cortical auditory evoked potentials for speech stimuli in children with speech-in-noise (SIN) deficits associated with auditory processing disorder (APD) without any reading or language deficits. METHOD The study included 20 children in the age range of 9-13 years. Ten children were recruited to the APD group; they had below-normal scores on the speech-perception-in-noise test and were diagnosed as having APD. The remaining 10 were typically developing (TD) children and were recruited to the TD group. Speech-evoked subcortical (brainstem) and cortical (auditory late latency) responses were recorded and compared across both groups. RESULTS The results showed a statistically significant reduction in the amplitudes of the subcortical potentials (both for stimulus in quiet and in noise) and the magnitudes of the spectral components (fundamental frequency and the second formant) in children with SIN deficits in the APD group compared to the TD group. In addition, the APD group displayed enhanced amplitudes of the cortical potentials compared to the TD group. CONCLUSION Children with SIN deficits associated with APD exhibited impaired coding/processing of the auditory information at the level of the brainstem and the auditory cortex. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.21357735. |
Identification of the Tobacco Blue Mold Pathogen, Peronospora tabacina, by Polymerase Chain Reaction. Tobacco blue mold, caused by the oomycete pathogen Peronospora tabacina, is a highly destructive pathogen of tobacco (Nicotiana tabacum) seed beds, transplants, and production fields in the United States. The pathogen also causes systemic infection in transplants. We used polymerase chain reaction (PCR) with the primers ITS4 and ITS5, sequencing, and restriction digestion to differentiate P. tabacina from other important tobacco pathogens, including Alternaria alternata, Cercospora nicotianae, Phytophthora glovera, P. parasitica, Pythium aphanidermatum, P. dissotocum, P. myriotylum, P. ultimum, Rhizoctonia solani, Sclerotinia sclerotiorum, Sclerotium rolfsii, Thielaviopsis basicola, and related Peronospora spp. A specific PCR primer, called PTAB, was developed and used with ITS4 to amplify a 764-bp region of DNA that was diagnostic for P. tabacina. The PTAB/ITS4 primers did not amplify host DNA or the other tobacco pathogens and were specific for P. tabacina on tobacco. DNA was detected at levels as low as 0.0125 ng. The PTAB primer was useful for detection of the pathogen in fresh, air-dried, and cured tobacco leaves. This primer will be useful for disease diagnosis, epidemiology, and regulatory work to reduce disease spread among fields.
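The logic of a diagnostic primer pair can be illustrated with a toy in-silico PCR check: find the forward primer on the plus strand and the reverse complement of the reverse primer downstream, then report the predicted product length. In the sketch below, the forward site is an invented placeholder standing in for PTAB; the reverse primer uses the well-known universal ITS4 sequence:

```python
# Toy in-silico PCR check. The template and forward primer are invented;
# only the ITS4 sequence is a real, published universal primer.
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    """Reverse complement of a DNA string."""
    return seq.translate(COMP)[::-1]

def amplicon_length(template, fwd, rev):
    """Predicted PCR product length, or None if a primer site is missing."""
    start = template.find(fwd)                 # forward primer on the + strand
    end = template.find(revcomp(rev))          # reverse primer binds the - strand
    if start == -1 or end == -1 or end < start:
        return None
    return end + len(rev) - start              # product spans both primer sites

# Hypothetical 755-bp template flanked by the two primer sites.
template = "AAGTCGTAACAAGGT" + "ATCG" * 180 + "GCATATCAATAAGCGGAGGA"
fwd_primer = "AAGTCGTAACAAGGT"                 # hypothetical forward primer
rev_primer = "TCCTCCGCTTATTGATATGC"            # universal ITS4 primer
print(amplicon_length(template, fwd_primer, rev_primer))  # -> 755 for this toy
```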
Simulating Term Structure of Interest Rates with Arbitrary Marginals. Decision models under uncertainty base their analysis on scenarios of the economic factors. A key economic factor is the term structure of interest rates (yields). Simulation models of the yield curve usually assume that the joint distribution of the interest rates is lognormal. Dynamic models, like vector auto-regressions, implicitly postulate that the logarithm of the interest rates is normally distributed. Statistical analyses have, however, shown that stationary transformations (yield changes) of the interest rates are substantially leptokurtic, thus casting serious doubt on the reliability of the available models. We propose in this paper a VARTA model (Biller and Nelson, 2003) to simulate term structures of interest rates with arbitrary marginals. We show that such an approach is able to simulate paths of the entire yield curve with distributional properties very close to those found in empirical data.
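A minimal sketch of the VARTA idea, under stated assumptions: a hand-picked stable Gaussian VAR(1) drives the base process, and each series is then pushed through the standard normal CDF and the inverse CDF of an arbitrary leptokurtic marginal (Student-t here). The autocorrelation-matching step that Biller and Nelson use to calibrate the base process to the target marginals is omitted for brevity:

```python
# Minimal VARTA-style simulation sketch (assumptions: a hand-picked stable
# VAR(1) base process; the Biller & Nelson (2003) correlation-matching
# calibration is omitted). Target marginals here are leptokurtic Student-t.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_steps, n_tenors = 1000, 3            # time steps, yield-curve tenors
A = np.array([[0.8, 0.1, 0.0],         # hand-picked stable VAR(1) matrix
              [0.0, 0.7, 0.1],
              [0.0, 0.0, 0.9]])
sigma_eps = 0.3                        # innovation scale of the base process

# 1) Simulate the Gaussian base VAR(1): z_t = A z_{t-1} + eps_t
z = np.zeros((n_steps, n_tenors))
for t in range(1, n_steps):
    z[t] = A @ z[t - 1] + sigma_eps * rng.standard_normal(n_tenors)

# 2) Standardize, map to uniforms via the normal CDF, then through the
#    inverse CDF of the target (leptokurtic) marginal distribution.
z_std = (z - z.mean(axis=0)) / z.std(axis=0)
u = stats.norm.cdf(z_std)
yield_changes = stats.t.ppf(u, df=4) * 0.05   # simulated yield changes, in %

print("excess kurtosis per tenor:", stats.kurtosis(yield_changes, axis=0))
```

The transformation preserves the autocorrelation structure of the base process approximately while delivering exactly the chosen heavy-tailed marginals.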
1st Forum of the Southern Cone End-of-Life Study Group: proposal for the care of patients bearing terminal disease staying in the ICU. Withholding of treatment in patients with terminal disease is increasingly common in intensive care units throughout the world. Notwithstanding, Brazilian intensivists still have great difficulty offering the best treatment to patients who have not benefited from curative care. The objective of this comment is to suggest an algorithm for the care of terminally ill patients. It was formulated based upon the literature and the experience of experts, by members of the ethics and end-of-life committee of AMIB, the Brazilian Association of Intensive Care.
Causal machine learning for healthcare and precision medicine
Causal machine learning (CML) has experienced increasing popularity in healthcare. Beyond its inherent capability of adding domain knowledge into learning systems, CML provides a complete toolset for investigating how a system would react to an intervention (e.g. the outcome given a treatment). Quantifying the effects of interventions allows actionable decisions to be made while maintaining robustness in the presence of confounders. Here, we explore how causal inference can be incorporated into different aspects of clinical decision support systems by using recent advances in machine learning. Throughout this paper, we use Alzheimer's disease to create examples illustrating how CML can be advantageous in clinical scenarios. Furthermore, we discuss important challenges present in healthcare applications, such as processing high-dimensional and unstructured data, generalization to out-of-distribution samples and temporal relationships, which, despite great effort from the research community, remain to be solved. Finally, we review lines of research within causal representation learning, causal discovery and causal reasoning which offer potential routes towards addressing these challenges.

Introduction
Considerable progress has been made in predictive systems for healthcare following the advent of powerful machine learning (ML) approaches such as deep learning. In healthcare, clinical decision support (CDS) tools make predictions for tasks such as detection, classification and/or segmentation from electronic health record (EHR) data such as medical images, clinical free-text notes, blood tests and genetic data. These systems are usually trained with supervised learning techniques. However, most CDS systems powered by ML techniques learn only associations between variables in the data, without distinguishing between causal relationships and (spurious) correlations. CDS systems targeted at precision medicine (also known as personalized medicine) need to answer complex queries about how individuals would respond to interventions. A precision CDS system for Alzheimer's disease (AD), for instance, should be able to quantify the effect of treating a patient with a given drug on the final outcome, e.g. predict the subsequent cognitive test score. Even with the appropriate data and perfect performance, current ML systems would predict the best treatment based only on previous correlations in the data, which may not represent actionable information. Information is defined as actionable when it enables treatment (interventional) decisions to be based on a comparison between different scenarios (e.g. outcomes for treated versus not treated) for a given patient. Such systems need causal inference (CI) in order to make actionable and individualized treatment effect predictions. A major upstream challenge in healthcare is how to acquire the necessary information to causally reason about treatments and outcomes. Modern healthcare data are multi-modal, high-dimensional and often unstructured. Information from medical images, genomics, clinical assessments and demographics must be taken into account when making predictions. A multi-modal approach better emulates how human experts use information to make predictions. In addition, many diseases are progressive over time, thus necessitating that time (the temporal dimension) be taken into account.
Finally, any system must ensure that these predictions will be generalizable across deployment environments, such as different hospitals, cities or countries. Interestingly, it is the connection between CI and ML that can help alleviate these challenges. ML allows causal models to process high-dimensional and unstructured data by learning complex nonlinear relations between variables. CI adds an extra layer of understanding about a system with expert knowledge, which improves the merging of information from multi-modal data, the generalization and the explainability of current ML systems. The causal machine learning (CML) literature offers several directions for addressing the aforementioned challenges when using observational data. Here, we categorize CML into three directions: (i) causal representation learning: given high-dimensional data, learn to extract low-dimensional informative (causal) variables and their causal relations; (ii) causal discovery: given a set of variables, learn the causal relationships between them; and (iii) causal reasoning: given a set of variables and their causal relationships, analyse how a system will react to interventions. We illustrate in figure 1 how these CML directions can be incorporated into healthcare.

Figure 1. CML in healthcare helps with understanding biases and formalizing reasoning about the effect of interventions. We illustrate, with a hypothetical example, that high-level features (causal representations) can be extracted from low-level data (e.g. I1 might correspond to the brain volume derived from a medical image) into a graph corresponding to the data generation process. CML can be used to discover which relationships between variables are spurious and which are causal, illustrated with dashed and solid lines, respectively. Finally, CML offers tools for reasoning about the effect of interventions (shown with the do() operator). For instance, an intervention on D1 would only affect the downstream variables in the graph, while other relationships are either not relevant (due to graph mutilation) or remain unchanged.

In this paper, we discuss how CML can improve personalized decision-making as well as help to mitigate pressing challenges in CDS systems. We review representative methods for CML, explaining how they can be used in a healthcare context. In particular, we (i) present the concept of causality and causal models; (ii) show how they can be useful in healthcare settings; (iii) discuss pressing challenges such as dealing with high-dimensional and unstructured data, out-of-distribution generalization and temporal information; and (iv) review potential research directions from CML.

What is causality?
We use a broad definition of causality: if A is a cause and B is an effect, then B relies on A for its value. As causal relations are directional, the reverse is not true; A does not rely on B for its value. The notion of causality thus enables analysis of how a system would respond to an intervention. Questions such as 'How will this disease progress if a patient is given treatment X?' or 'Would this patient still have experienced outcome Z if treatment Y had been received?' require methods from causality to understand how an intervention would affect a specific individual. In a clinical environment, causal reasoning can be useful for deciding which treatment will result in the best outcome.
For instance, in an AD scenario, causality can answer queries such as 'Which of drug A or drug B would best minimize the patient's expected cognitive decline within a 5-year time span?'. Ideally, we would compare the outcomes of alternative treatments using observational (historical) data. However, the 'fundamental problem of CI' is that for each unit (i.e. patient) we can observe either the result of treatment A or of treatment B, but never both at the same time. This is because, after making a choice of treatment, we cannot turn back time to undo the treatment. Queries that entertain such hypothetical scenarios about individuals are phrased in terms of potential outcomes. We can observe only one of the potential consequences of an action; the unobserved quantity becomes a counterfactual. The mathematical formalisms of causality, pioneered by Pearl and by Imbens and Rubin, allow these more challenging queries to be answered. Most ML approaches are not (currently) able to identify cause and effect, because CI is fundamentally impossible to achieve without making assumptions. Several of these assumptions can be satisfied through study design or external contextual knowledge, but none can be discovered solely from observational data. Next, we introduce the reader to two ways of defining and reasoning about causal relationships: with structural causal models (SCMs) and with potential outcomes. We wrap up this section with an introduction to determining causal relationships, including the use of randomized controlled trials (RCTs).

Structural causal models
The mathematical formalism around the so-called do-calculus and SCMs, pioneered by the Turing Award winner Pearl, has allowed a graphical perspective on reasoning with data which relies heavily on domain knowledge. This formalism can model the data generation process and incorporate assumptions about a given problem. An intuitive and historical description of causality can be found in Pearl & Mackenzie's recent book The Book of Why. An SCM $\mathfrak{G} := (S, P_N)$ consists of a collection $S = (f_1, \ldots, f_K)$ of structural assignments (called mechanisms) $X_k := f_k(\mathrm{PA}_k, N_k)$, where $\mathrm{PA}_k$ is the set of parent variables of $X_k$ (its direct causes) and $N_k$ is a noise variable for modelling uncertainty. $N = \{N_1, N_2, \ldots, N_d\}$ is also referred to as exogenous noise, because it represents variables that were not included in the causal model, as opposed to the endogenous variables $X = \{X_1, X_2, \ldots, X_d\}$, which are considered known, or at least intended by design to be considered, and from which the sets of parents $\mathrm{PA}_k$ are drawn. This model can be represented as a directed acyclic graph (DAG) in which the nodes are the variables and the edges are the causal mechanisms. One might consider other graphical structures which incorporate cycles and latent variables, depending on the nature of the data. It is important to note that the causal mechanisms are representations of physical mechanisms that are present in the real world. Therefore, according to the principle of independent causal mechanisms (ICM), we assume that the causal generative process of a system's variables is composed of autonomous modules that do not inform or influence each other. This means that the exogenous variables $N$ are mutually independent, with joint distribution $P_N = \prod_{k=1}^{d} P_{N_k}$.
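Before continuing with the formalism, here is a minimal sketch in Python of an SCM and of an intervention implemented as graph mutilation; all variable names, functional forms and numbers below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_scm(n, do_treatment=None):
    """Sample a toy SCM: age -> treatment, age -> brain_volume,
    treatment -> brain_volume. Each line is one structural assignment
    X_k := f_k(PA_k, N_k) with exogenous noise N_k."""
    age = rng.uniform(50, 90, n)                      # exogenous cause
    if do_treatment is None:
        # observational regime: treatment depends on its parent (age)
        treatment = (age + rng.normal(0, 5, n) > 70).astype(float)
    else:
        # do(treatment := t): mutilate the graph, ignore the parents
        treatment = np.full(n, float(do_treatment))
    brain_volume = 100 - 0.5 * age + 5 * treatment + rng.normal(0, 2, n)
    return age, treatment, brain_volume

# Observational and interventional distributions differ here because age
# confounds treatment and outcome.
_, t_obs, y_obs = sample_scm(100_000)
_, _, y_do1 = sample_scm(100_000, do_treatment=1)
_, _, y_do0 = sample_scm(100_000, do_treatment=0)
print("E[Y|T=1] - E[Y|T=0]          =", y_obs[t_obs == 1].mean() - y_obs[t_obs == 0].mean())
print("E[Y|do(T=1)] - E[Y|do(T=0)]  =", y_do1.mean() - y_do0.mean())
```

In this toy model the conditional contrast is even of the opposite sign to the interventional one, because treated patients are systematically older.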
Moreover, the joint distribution over the endogenous variables $X$ can be factorized as a product of independent conditional mechanisms, $P^{\mathfrak{G}}(X_1, X_2, \ldots, X_d) = \prod_{k=1}^{d} P(X_k \mid \mathrm{PA}_k)$. The causal framework now allows us to go beyond (i) associative predictions and begin to answer (ii) interventional and (iii) counterfactual queries. These three tasks are also known as Pearl's causal hierarchy. The do-calculus introduces the notation $do(A)$ to denote a system where we have intervened to fix the value of A. This allows us to sample from an interventional distribution $P^{\mathfrak{G};do(A)}(X)$, which has the advantage over an observational distribution $P^{\mathfrak{G}}(X)$ that the causal structure enforces that only the descendants of the variable intervened upon will be modified by a given action. As illustrated in figure 1, after an intervention, the edges between the intervened variable and its parents are no longer relevant, resulting in a mutilated graph.

Potential outcomes
An alternative approach to CI is the potential outcomes framework proposed by Rubin. In this framework, a response variable $Y$ is used to measure the effect of some cause or treatment for a patient $i$. The value of $Y$ may be affected by the treatment assigned to $i$. To enable the treatment effect to be modelled, we represent the response with two variables, $Y_i^0$ and $Y_i^1$. As a patient may potentially be untreated or treated, we refer to $Y_i^0$ and $Y_i^1$ as potential outcomes. It is, however, impossible to observe both simultaneously, according to the previously mentioned fundamental problem of CI. This does not mean that CI itself is impossible, but it does bring challenges. Causal reasoning in the potential outcomes framework depends on obtaining an estimate of the joint probability distribution $P(Y^0, Y^1)$. Both the SCM and potential outcomes approaches have useful applications, and each is used where appropriate throughout this article. In practice, while graphical SCMs are powerful for modelling assumptions or identifying whether an intervention is even possible, the potential outcomes literature is more focused on quantifying the effect of interventions. We note that single-world intervention graphs have been proposed as a way to unify them.

Determining cause and effect
Determining causal relationships often requires carefully designed experiments. There is a limit to how much can be learned using purely observational data. The effects of causes can be determined through prospective experiments in which an effect E is observed after a cause C is tried or withheld, keeping constant all other possible factors. It is hard, and in most cases impossible, to control for all possible confounders of C and E. The gold standard for discovering a true causal effect is an RCT, where the choice of C is randomized, thus removing confounding. For example, by randomly assigning a drug or a placebo to patients participating in an interventional study, we can measure the effect of the treatment, eliminating any bias that may have arisen in an observational study due to other confounding variables, such as lifestyle factors, that influence both the choice of using the drug and cognitive decline. Note that the conditional probability P(E|C) of observing E after observing C can be different from the interventional probability P(E|do(C)) of observing E after doing/intervening on C. Under P(E|do(C)), only the descendants of C (in a causal graph) change after the intervention; all other variables maintain their values.
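As a toy illustration of this distinction (a sketch under the assumption that the single confounder is measured, with hypothetical variable names and numbers), the backdoor adjustment formula $P(E \mid do(C)) = \sum_z P(E \mid C, z)\,P(z)$ recovers the interventional quantity from purely observational data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Toy observational data: z confounds both C (drug) and E (good outcome).
z = rng.binomial(1, 0.5, n)                       # e.g. healthy lifestyle
c = rng.binomial(1, np.where(z == 1, 0.8, 0.2))   # lifestyle -> drug uptake
e = rng.binomial(1, 0.3 + 0.2 * c + 0.3 * z)      # true causal effect of C: +0.2

# Naive conditional contrast P(E|C=1) - P(E|C=0): biased by confounding.
naive = e[c == 1].mean() - e[c == 0].mean()

# Backdoor adjustment applied to each arm: sum_z P(E|C,z) P(z).
adjusted = sum(
    (e[(c == 1) & (z == v)].mean() - e[(c == 0) & (z == v)].mean()) * (z == v).mean()
    for v in (0, 1)
)
print(f"naive: {naive:.3f}, backdoor-adjusted: {adjusted:.3f} (truth: 0.200)")
```

Randomizing C, as in an RCT, would cut the z -> C edge and make the naive contrast unbiased by design.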
In RCTs, 'do' is guaranteed and unconditioned, while with observational data such as historical EHRs it is not, due to the presence of confounders. Determining the causes of effects (the aetiology of diseases) requires hypotheses and experimentation, where interventions are performed and studied to determine the necessary and sufficient conditions for an effect or disease to occur.

Why should we consider a causal framework in healthcare?
CI has made several contributions over the last few decades to fields such as social sciences, econometrics, epidemiology and aetiology, and it has recently spread to other healthcare fields such as medical imaging and pharmacology. In this section, we will elaborate on how causality can be used for improving medical decision-making. Even though data from EHRs, for example, are usually observational, they have already been successfully leveraged in several ML applications, such as modelling disease progression, predicting disease deterioration and discovering risk factors, as well as for predicting treatment responses. Further, we now have evidence of algorithms which achieve superhuman performance in imaging tasks such as segmentation, detection of pathologies and classification. However, predicting a disease with almost perfect accuracy for a given patient is not what precision medicine is trying to achieve. Rather, we aim to build ML methods which extract actionable information from observational patient data in order to make interventional (treatment) decisions. This requires CI, which goes beyond standard supervised learning methods for prediction, as detailed below. In order to make actionable decisions at the patient level, one needs to estimate the treatment effect. The treatment effect is the difference between two potential outcomes: the factual outcome and the counterfactual outcome. For actionable predictions, we need algorithms that learn how to reason about hypothetical scenarios in which different actions could have been taken, creating, therefore, a decision boundary that can be navigated in order to improve patient outcomes. There is recent evidence that humans use counterfactual reasoning to make causal judgements, lending support to this reasoning hypothesis. This is what makes the problem of inferring the treatment effect fundamentally different from standard supervised learning, as defined by the potential outcomes framework. When using observational datasets, by definition, we never observe the counterfactual outcome. Therefore, the best treatment for an individual, the main goal of precision medicine, can only be identified with a model that is capable of causal reasoning, as will be detailed in §3.3.

Alzheimer's disease practical example
We now illustrate the notion of CML for healthcare with an example from Alzheimer's disease (AD). A recent attempt to understand AD from a causal perspective takes into account many biomarkers and uses domain knowledge (as opposed to RCTs) for deriving ground-truth causal relationships. In this section, we present a simpler view with only three variables: chronological age (which can otherwise be measured in biological terms using, for instance, DNA methylation), magnetic resonance (MR) images of the brain, and AD diagnosis. The diagnosis of AD is made by a clinician who takes into account all available clinical information, including images.
We are particularly interested in MR images because analysing the relationships of high-dimensional data, such as medical images, is a task that can be more easily handled with ML techniques, the main focus of this paper. AD is a type of cognitive decline that generally appears later in life. AD is associated with brain atrophy, i.e. volumetric reduction of grey matter. We consider that AD causes the symptom of brain morphology change, following Richens et al., by arguing that a high-dimensional variable such as the MR image is caused by the factors that generated it; this modelling choice has been previously used in the causality literature. Further, it is well established that atrophy also occurs during normal ageing. Time does not depend on any biological variable; therefore, chronological age cannot be caused by AD nor by any change in brain morphology. In this scenario, we can assume that age is a confounder of brain morphology, measured by the MR image, and AD diagnosis. These relationships are illustrated in the causal graph in figure 2. To model the effect of having age as a confounder of brain morphology and AD, we use a conditional generative model from Xia et al. (we take their model and run new demonstrative experiments for illustration in this paper), in which we condition on age and AD diagnosis for brain MR image generation. We then synthesize images of a patient at different ages and with different AD status, as depicted in figure 2. In particular, we control for (i.e. condition on) one variable while intervening on the other. That is, we synthesize images based on a patient who is cognitively normal (CN) at the age of 64 years. We then fix the Alzheimer's status at CN and increase the age by 3 years for three steps, resulting in images of the same CN patient at ages 64, 67, 70 and 73. At the same time, we synthesize images with different Alzheimer's status by fixing the age at 64 and changing the Alzheimer's status from mild cognitive impairment (MCI) to a clinical diagnosis of AD.

This example illustrates the effect of confounding bias. By qualitatively observing the difference between the baseline and synthesized images, we see that ageing and AD have similar effects on the brain (see Xia et al. for quantitative results confirming this observation). That is, both variables change the volume of the brain when intervened on independently. Throughout the paper, we will further add variables and causal links to this example to illustrate how healthcare problems can become more complex and how a causal approach might mitigate some of the main challenges. In particular, we will build on this example by explaining some consequences of causal modelling for dealing with high-dimensional and unstructured data, generalization and temporal information.

Modelling the data generation process
The AD example illustrates the importance of considering causal relationships in an ML scenario. Namely, causality gives the ability to model and identify types and sources of bias (see https://catalogofbias.org/biases for a catalogue of bias types). To correctly identify which variables to control for (as a means to mitigate confounding bias), causal diagrams offer a direct means of visual exploration and, consequently, explanation. Castro et al. detail further how understanding the causal generating process can be useful in medical imaging.
By representing the variables of a particular problem and their causal relationships as a causal graph, one can model domain shifts, such as population shift (different cohorts), acquisition shift (different sites or scanners) and annotation shift (different annotators), as well as data scarcity (imbalanced classes). A benefit of reasoning causally about a problem domain is transparency, since it offers a clear and precise language for communicating assumptions about the collected data. In a similar vein, models whose architecture mirrors an assumed causal graph can be desirable in applications where interpretability is important.

Figure 2. Images synthesized with the conditional generative model. The images with grey background are difference images obtained by subtracting the synthesized image from the baseline. The upper sequence of images is generated by fixing Alzheimer's status at CN and increasing age by 3 years. The bottom images are generated by fixing the age at 64 and increasing Alzheimer's status to MCI and AD, as discussed in the main text.

In the AD setting above, a classifier naively trained to perform diagnosis from MR images of the brain might focus on the brain atrophy alone. This classifier may show reduced performance in younger adults with AD or for CN older adults, leading to potentially incorrect diagnoses. To illustrate this, we report the results of a convolutional neural network classifier trained and tested on the ADNI dataset, following the same setting as Xia et al. Table 1 shows that, as feared, healthy older patients (80-90 years old) are less accurately predicted, because ageing itself causes the brain to have Alzheimer's-like patterns. Indeed, using augmented data based on causal knowledge is a solution discussed in Xia et al., whereby the training data are augmented with counterfactual images of a patient when intervening on age. That is, images of a patient at different ages (while controlling for Alzheimer's status) are synthesized so the classifier learns how to differentiate the effects of ageing versus AD in brain images. This causal knowledge enables the formulation of the best strategies for mitigating data bias(es) and improving generalization (further detailed in §4.3). For example, if after modelling the data distribution an acquisition shift becomes apparent (e.g. training data were obtained with a specific MR sequence but the model will be evaluated on data from a different sequence), then data augmentation strategies can be designed to increase the robustness of the learned representation. The acquisition shift, e.g. different intensities due to different scanners, might be modelled according to the physics of the (sensing) systems. Ultimately, creating a diagram of the data generation process helps rationalize/visualize the best strategies for solving the problem.

Treatment effect and precision medicine
Beyond diagnosis, a major challenge in healthcare is ascertaining whether a given treatment influences an outcome. For a binary treatment decision, for instance, the aim is to estimate the average treatment effect $\mathrm{ATE} = E[Y^1] - E[Y^0]$, where $Y^1$ is the outcome given the treatment and $Y^0$ is the outcome without it (control). As it is impossible to observe both potential outcomes $Y^1$ and $Y^0$ for the same patient, the ATE must be estimated from comparisons across patients. The treatment assignment and outcomes, however, both depend on the patient's condition in normal clinical conditions.
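As a hedged toy sketch (all names and numbers illustrative, not from the paper), inverse propensity weighting is one standard estimator that recovers the ATE from observational data when the identifiability assumptions discussed in the next paragraph (ignorability, positivity, consistency) hold:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 50_000

# Toy observational data: sicker patients (higher severity) are treated more
# often and have worse outcomes; the true ATE is +2.0 by construction.
severity = rng.normal(0, 1, n)
p_treat = 1 / (1 + np.exp(-2 * severity))            # confounded assignment
t = rng.binomial(1, p_treat)
y = 2.0 * t - 3.0 * severity + rng.normal(0, 1, n)

naive = y[t == 1].mean() - y[t == 0].mean()          # biased by severity

# IPW: weight each patient by the inverse of the estimated propensity e(x).
X = severity.reshape(-1, 1)
e = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]
ipw = np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))
print(f"naive: {naive:.2f}, IPW estimate: {ipw:.2f} (true ATE: 2.00)")
```

The sketch assumes the confounder is fully measured (ignorability); with an unmeasured confounder the weighted estimate would remain biased.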
This confounding is best mitigated by the use of an RCT (§2.3). Performing an RCT as detailed in §2.3, however, is not always feasible, and CI techniques can be used to estimate the causal effect of a treatment from observational data. A number of assumptions need to hold in order for the treatment effect to be identifiable from observational data. Conditional exchangeability (ignorability) assumes there are no unmeasured confounders. Positivity (overlap) is the assumption that every patient has a chance of receiving each treatment. Consistency assumes that the treatment is defined unambiguously. Continuing the Alzheimer's example, Charpignon et al. explore drug re-purposing by emulating an RCT with a target trial and find indications that metformin (a drug classically used for diabetes) might prevent dementia. Note that even if the treatment effect is estimated using data from a well-designed RCT, $E[Y^1] - E[Y^0]$ is the average treatment effect across the study population. However, there is evidence that, for any given treatment, it is likely that only a small proportion of subjects will actually respond in a manner that resembles the 'average' patient, as illustrated in figure 3. In other words, the treatment effect is heterogeneous across the population.

Table 1. Illustration of how a naively trained classifier (a neural network) fails when the data generation process and causal structure are not identified. We report the precision and recall on the test set when training a classifier for diagnosing AD, stratified by age. The group with the worst performance is the older cognitively normal patients, due to the confounding bias described in the main text. After training with counterfactually augmented data, the classifier's precision for the worst-performing age group improved. These results were replicated from our previous work, Xia et al.

A causal view also helps reason about generalization across environments (Env). When making a prediction from a high-dimensional, unstructured variable X (e.g. a brain image), one is usually interested in extracting and/or categorizing one of its true generating factors Y (e.g. grey matter volume); such prediction is anti-causal (we note that other seminal works consider prediction a causal task, because prediction should copy the cognitive human process of generating labels given the data). P(X|Y), which represents the causal mechanism Y → X, is independent of P(Y|Env); however, P(Y|X) is not, as P(Y|X) = P(X|Y)P(Y|Env)/P(X). Thus P(Y|X) changes as the environment changes. Secondly, another generating factor W (or many others) is often correlated with Y, which might cause the predictor to learn the relationship between X and W instead of P(Y|X). This is known as shortcut learning, as it may be easier to learn the spurious correlation than the required relationship. For example, suppose an imaging dataset X is collected from two hospitals, Env 1 and Env 2. Hospital Env 1 has a large neurological disorder unit, hence a higher prevalence of AD status (denoted by Y), and uses a 3T MRI scanner (scanner type denoted by W). Hospital Env 2, with no specialist unit, hence a lower prevalence of AD, happens to use a more common 1.5T MRI scanner. A model trained on these pooled data may learn the spurious correlation between W (scanner type) and Y (AD status).
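A minimal simulation of this shortcut (hypothetical features and numbers, for illustration only): a classifier trained on data pooled from the two hospitals exploits the scanner-driven feature and fails when the W-Y correlation is broken at test time.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def make_env(n, p_ad, scanner):
    """One hospital: AD prevalence p_ad, fixed scanner type (W)."""
    y = rng.binomial(1, p_ad, n)                     # AD status (Y)
    w = np.full(n, scanner)                          # scanner type (W)
    atrophy = y * 1.0 + rng.normal(0, 1.5, n)        # weak true signal in X
    intensity = w * 2.0 + rng.normal(0, 0.5, n)      # scanner-driven feature
    return np.column_stack([atrophy, intensity]), y

# Training data: W and Y are spuriously correlated across environments.
X1, y1 = make_env(5000, p_ad=0.8, scanner=1)         # Env 1: 3T, high AD
X2, y2 = make_env(5000, p_ad=0.2, scanner=0)         # Env 2: 1.5T, low AD
clf = LogisticRegression().fit(np.vstack([X1, X2]), np.concatenate([y1, y2]))

# Test: a new hospital where the correlation is reversed (3T, low AD).
Xt, yt = make_env(5000, p_ad=0.2, scanner=1)
print("test accuracy under shifted W-Y correlation:",
      (clf.predict(Xt) == yt).mean())
```

Because the scanner feature separates the training environments perfectly while the true atrophy signal is noisy, the learned decision rule leans on W and degrades sharply under the shift.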
We can now describe several ML settings based on this causal perspective by comparing data availability at train and test time. Classical supervised learning (or empirical risk minimization) makes the strong assumption that the data from the train and test sets are independent and identically distributed (i.i.d.); therefore, we assign the same environment to both sets. Semi-supervised learning is a case where part of the training samples are not paired with annotations. Continual (or lifelong) learning considers the case where data from different environments are added after training, and the challenge is to learn new environments without forgetting what was initially learned. In domain adaptation, only unpaired data from the test environment are available during training. Domain generalization aims at learning how to become invariant to changes of environment, such that a new environment (unseen in the training data) can be used for the test set. Enforcing fairness is important when W is a sensitive variable and the train set has Y and W spuriously correlated (we use the term spurious for features that correlate but do not have a causal relationship with each other) due to a choice of environment. Finally, learning from imbalanced datasets can be seen under this causal framework when specific values Y = y have different numbers of samples because of the environment, while the test environment might not contain the same bias towards a specific value of Y.

Figure 4. Reasoning about the generalization of a prediction task with a causal graph over W, X, Y and Env. Anti-causal prediction and a spurious association that may lead to shortcut learning are illustrated.

Research directions in causal machine learning
Having discussed the utility of CML for healthcare, including complex multi-modal, temporal and unstructured data, the final section of this paper discusses some future research directions in causal representation learning, causal discovery and causal reasoning.

Causal representations
Representation learning refers to a compositional view of ML. Instead of a mapping between input and output domains, we consider an intermediate representation that captures concepts about the world. This notion is essential when considering learning and reasoning with real healthcare data. High-dimensional and unstructured data, as considered in §4.3, are not organized in units that can be directly used in current causal models. In most situations, the variable of interest is not, for instance, the image itself, but one of its generating factors, such as grey matter volume in the AD example. Causal representation learning extends the notion of learning factors about the world to modelling the relationships between variables with causal models. In other words, the goal is to model the representation domain Z as an SCM, as in §2.1. Causal representation learning builds on top of the disentangled representation learning literature, enforcing a stronger inductive bias than the assumptions of factor independence commonly pursued by disentangled representations. The idea is to enforce a hierarchy of latent variables following the causal model, which in turn should follow the real data generation process.

Causal discovery
Performing RCTs is very expensive and sometimes unethical or even impossible. For instance, to understand the impact of smoking on lung cancer, it would be necessary to force random individuals to smoke or not smoke. Most real data are observational, and discovering causal relationships between the variables is more challenging. In a setting where the causal variables are known, causal discovery is the task of learning the direction of the causal relationships between the variables. In some settings, we have many input variables and the goal is to construct the graph structure that best describes the data generation process.
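A minimal constraint-based sketch of this idea (simulated linear-Gaussian data, in the spirit of PC-type algorithms but not any specific published implementation): conditional independence tests prune edges of a candidate graph.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000

# Ground-truth chain: X -> Y -> Z (linear Gaussian mechanisms).
x = rng.normal(0, 1, n)
y = 0.8 * x + rng.normal(0, 1, n)
z = 0.8 * y + rng.normal(0, 1, n)

def partial_corr(a, b, c):
    """Correlation of a and b after linearly regressing out c."""
    ra = a - np.polyval(np.polyfit(c, a, 1), c)
    rb = b - np.polyval(np.polyfit(c, b, 1), c)
    return np.corrcoef(ra, rb)[0, 1]

# Marginal dependence suggests an edge or a path between X and Z ...
print("corr(X, Z)     =", round(np.corrcoef(x, z)[0, 1], 3))
# ... but conditioning on Y renders them independent: no direct X-Z edge.
print("corr(X, Z | Y) =", round(partial_corr(x, z, y), 3))
# Conditioning on Z does NOT remove the X-Y dependence: keep the X-Y edge.
print("corr(X, Y | Z) =", round(partial_corr(x, y, z), 3))
```

This recovers the skeleton X - Y - Z; orienting the remaining edges generally needs further assumptions or interventional data, which is one reason the area remains open.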
An extensive body of work has been developed over the last three decades around discovering causal structures from observational data, as described in recent reviews of the subject. Most methods rely on conditional independence tests, combinatorial exploration over possible DAGs, and/or assumptions about the function class and noise distribution of the data generation process (e.g. the true causal relationships are assumed to be linear with additive noise, or the exogenous noise is assumed to have a Gaussian distribution) for finding the causal relations of given causal variables. In healthcare, Huang et al. and Sanchez-Romero et al. use causal discovery for learning how different physiological processes in the brain causally influence each other using functional MRI data. Causal discovery is still an open area of research, and some of the major challenges in discovering causal effects from observational data are the inability to (i) identify all potential sources of bias (unobserved confounders); (ii) select an appropriate functional form for all variables (model misspecification); and (iii) model temporal causal relationships.

Causal reasoning
It has been conjectured that humans internally build generative causal models for imagining approximate physical mechanisms through intuitive theories. Similarly, the development of models that leverage the power of causal models around interventions would be useful. Causal models can be formally manipulated to measure the effects of interventions. Using causal models for quantifying the effect of interventions and deliberating about the best decision is known as causal reasoning. As previously discussed in §3.3, one of the key benefits of causal reasoning in healthcare concerns personalized decision-making. In SCMs (§2.1), personalized decision-making usually refers to the ability to answer counterfactual queries about historical situations, such as 'What would have happened if the patient had received alternative treatment X?'. Counterfactuals can be estimated with (i) a three-step procedure (abduction-action-prediction), which has recently been enhanced with deep learning using generative models such as normalizing flows, variational autoencoders and diffusion probabilistic models, or (ii) twin networks, which augment the original SCM so that both factual and counterfactual variables are represented simultaneously. Deep twin networks leverage neural networks to further improve the flexibility of the causal mechanisms. We note that quantifying the effect of interventions usually assumes that causal models are given either explicitly or learned via causal discovery. Aglietti et al. evaluate their method using a model of the causal effect of statin drugs on the levels of prostate-specific antigen, while Pawlowski et al. and Wang et al. model the data generation process of MR images of the brain. Reinhold et al. extend Pawlowski et al. by adding pathological information about multiple sclerosis lesions.

In the potential outcomes framework (§2.2), a number of approaches have been proposed to estimate the personalized (also called individualized or conditional average) treatment effect (CATE) from observational data. These techniques include Bayesian additive regression trees, double ML, regularization of neural networks with integral probability metrics or orthogonality constraints, Gaussian processes, generative adversarial networks and energy-based models.
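To make the three-step abduction-action-prediction procedure mentioned above concrete, here is a minimal sketch on a linear SCM with known mechanisms (an illustrative assumption; in practice the mechanisms would be learned, e.g. with the deep generative models cited above):

```python
import numpy as np

# Toy linear SCM with known mechanisms:
#   treatment T := N_t            (exogenous, here binary)
#   outcome   Y := 2*T + N_y      (N_y captures patient-specific factors)
def f_y(t, n_y):
    return 2.0 * t + n_y

# Observed (factual) patient: untreated, with outcome y_obs.
t_obs, y_obs = 0.0, 1.5

# Step 1 -- abduction: infer the exogenous noise consistent with the evidence.
n_y = y_obs - 2.0 * t_obs            # invert the mechanism => N_y = 1.5

# Step 2 -- action: intervene, do(T := 1), mutilating the graph.
t_cf = 1.0

# Step 3 -- prediction: push the inferred noise through the modified model.
y_cf = f_y(t_cf, n_y)
print(f"Factual Y = {y_obs}, counterfactual Y under do(T=1) = {y_cf}")  # 3.5
```

The abduction step is what distinguishes a counterfactual from a plain intervention: the patient's own exogenous noise is retained, so the answer is individualized rather than population-level.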
Returning to the potential outcomes framework, another trend for estimating the CATE is based on meta-learners. In the meta-learning setting, traditional (supervised) ML is used to predict the conditional expectations of the potential outcomes and the propensity score. The CATE is then computed by taking the difference between the estimated potential outcomes, or by using a two-step procedure with regression adjustment, propensity weighting or doubly robust learning.

Conclusion
We have described the importance of considering CML in healthcare systems. We highlighted the need to design systems that take into account the data generation process. A causal perspective on ML contributes to the goal of building systems that not only perform better (e.g. achieve higher accuracy) but are also able to reason about the potential effects of interventions at the population and individual levels, closing the gap towards realizing precision medicine. We have discussed key pressing challenges in precision medicine and healthcare, namely using multi-modal, high-dimensional and unstructured data to make decisions that are generalizable across environments and take into account temporal information. Finally, we proposed opportunities drawing inspiration from causal representation learning, causal discovery and causal reasoning towards addressing these challenges. |
Identification of Nepro, a gene required for the maintenance of neocortex neural progenitor cells downstream of Notch In the developing neocortex, neural progenitor cells (NPCs) produce projection neurons of the six cortical layers in a temporal order. Over the course of cortical neurogenesis, maintenance of NPCs is essential for the generation of distinct types of neurons at the required time. Notch signaling plays a pivotal role in the maintenance of NPCs by inhibiting neuronal differentiation. Although Hairy and Enhancer-of-split (Hes)-type proteins are central to Notch signaling, it remains unclear whether other essential effectors take part in the pathway. In this study, we identify Nepro, a gene expressed in the developing mouse neocortex at early stages that encodes a 63 kDa protein that has no known structural motif except a nuclear localization signal. Misexpression of Nepro inhibits neuronal differentiation only in the early neocortex. Furthermore, knockdown of Nepro by siRNA causes precocious differentiation of neurons. Expression of Nepro is activated by the constitutively active form of Notch but not by Hes genes. Nepro represses expression of proneural genes without affecting the expression of Hes genes. Finally, we show that the combination of Nepro and Hes maintains NPCs even when Notch signaling is blocked. These results indicate that Nepro is involved in the maintenance of NPCs in the early neocortex downstream of Notch. |
Socio-community practices as bridges of encounter between forms of knowledge. Within the framework of Community Practices and the Field of Practices, we created learning spaces among brickmakers, students and teachers in the search for practical and theoretical, academic and empirical knowledge, with concrete actions for the recovery, appreciation and visibility of the work done in artisan brick kilns. Likewise, we coordinated this meeting space with the Technical Secondary School to generate a meaningful educational exchange for students, in which academic and popular knowledge are combined with the interdisciplinary knowledge of secondary education in order to express its practical application in the community. |
OBJECTIVE To explore the association and interaction between the components of metabolic syndrome (MS) and cardiovascular disease (CVD). METHOD In this cohort study, participants (3598 in total, 1451 male) were recruited and followed up for five years within the program "Prevention of multiple metabolic disorders and MS in Jiangsu province". We used the modified Asian criteria of the National Cholesterol Education Program Adult Treatment Panel III (NCEP ATP III) to define the presence of MS. Cox regression was used to analyze the association of MS and its components with CVD; the multiplicative interaction of blood pressure (BP) with 2, 3, or 4 other components of MS in the logistic regression model, as well as the relative excess risk due to interaction (RERI), the attributable proportion due to interaction (AP), and the synergy index (S) with 95% confidence intervals (95%CI), were used to evaluate the interactions between the components of MS. RESULTS After adjustment for traditional CVD risk factors, the adjusted risk ratio (aRR) of CVD was 2.49 (95%CI: 1.59 - 3.90) in the MS group compared with the non-MS group at baseline. The aRRs of the MS components for CVD were as follows: 1.44 (95%CI: 0.88 - 2.37) for waist circumference; 2.84 (95%CI: 1.73 - 4.68) for BP; 1.31 (95%CI: 0.83 - 2.07) for low high-density lipoprotein; 1.84 (95%CI: 1.19 - 2.85) for triglyceride; and 1.55 (95%CI: 0.98 - 2.45) for fasting plasma glucose. BP was the single component significantly related to CVD (aRR = 2.58, 95%CI: 1.55 - 4.29). The risk of CVD was significantly increased (aOR = 4.47, 95%CI: 2.35 - 8.51) when BP was combined with 2, 3 or 4 other components of MS in the participants. CONCLUSIONS Only BP is an independent CVD risk factor among the components of MS; the risk of CVD was significantly increased when BP was combined with other components of MS in this cohort. |
Carving of protein crystal by high-speed micro-bubble jet using a micro-fluidic platform. This paper reports a novel micro-fluidic platform for carving protein crystals with an electrically driven, high-speed, mono-dispersed micro-bubble jet. This minimally invasive micro-processing method overcomes the difficulties of processing, holding and positioning fragile materials such as protein crystals underwater. The combination of an electrically induced micro-bubble knife and a microfluidic channel provides effective carving of protein crystals by crystal ablation, with the resulting chips drained by free vortex flow in the microchannel. Three-dimensional positioning of the crystal was achieved by the configuration of the micro-fluidic channels. The protein crystal can be carved to the desired shape to fit X-ray analysis effectively. This method has the potential to contribute to more precise protein analysis. |
Managerial Strengths and Weaknesses as Functions of the Development of Personal Meaning Based on a theory of lifelong development proposed by Robert Kegan, this article explores the idea that certain important and typical managerial strengths and weaknesses arise from the personal meaning systems of managers. This article asserts that strengths arising from a stage in the development of personal meaning characterized by self-differentiation and identity formation are highly prized by most organizations. In seeking to be effective and successful in such organizations, managers are consequently fixed in this stage of development. The strengths of this stage have concomitant weaknesses that successful managers cannot avoid. The author concludes that further development in the meaning systems underlying organizations and managers will be necessary in the future. |
Late Paleozoic to Early Mesozoic provenance record of Paleo-Pacific subduction beneath South China. The northeast-trending Yong'an Basin, southeast South China Craton, preserves a Permian-Jurassic, marine to continental, siliciclastic-dominated, retroarc foreland basin succession. Modal and detrital zircon data, along with published paleocurrent data, sedimentary facies, and euhedral to subhedral detrital zircon shapes, indicate derivation from multicomponent, nearby sources, with input from both the interior of the craton to the northwest and from an inferred arc-accretionary complex to the southeast. The detrital zircon U-Pb age spectra range in age from Archean to early Mesozoic, with major age groups at 2000-1700 Ma, 1200-900 Ma, 400-340 Ma, and 300-240 Ma. In addition, Early Jurassic strata include zircon detritus with ages of 200-170 Ma. Regional geological relations suggest that Precambrian and Early Paleozoic detritus was derived from the inland Wuyi Mountain region and Yunkai Massif of the South China Craton. Sources for Middle Paleozoic to early Mesozoic detrital zircons include input from beyond the currently exposed China mainland. Paleogeographic reconstruction in East Asia suggests derivation from an active convergent plate margin along the southeastern rim of the craton that incorporated part of Southwest Japan and is related to the subduction of the Paleo-Pacific Ocean. Integration of the geologic and provenance records of the Yong'an Basin with the time-equivalent Yongjiang and Shiwandashan basins, which lie to the southwest and south, respectively, provides an integrated record of the subduction of the Paleo-Pacific Ocean along the southeast margin of the South China Craton and of the termination of subduction of the Paleo-Tethys beneath its southwest margin in the Permo-Triassic. |
Secure Authentication using Zero-Knowledge Proof. Zero-Knowledge Proof (ZKP) is a cryptographic protocol used to provide privacy and data security by protecting the identity of users and allowing services to be used anonymously. It finds numerous applications; authentication is one of them. A Zero-Knowledge-Proof-based authentication system is discussed in this paper. The Advanced Encryption Standard (AES) and the Secure Remote Password (SRP) protocol have been used to design and build the ZKP-based authentication system. SRP is a widely used Password-Authenticated Key Exchange (PAKE) protocol. The proposed method overcomes several drawbacks of traditional and commonly used authentication systems, such as simple username and plaintext password-based systems, multi-factor authentication systems and others. |
Water-related IR characteristics in natural fibrous diamonds. Abstract: Water-related features were studied using infrared absorption spectroscopy in fibrous diamonds with micro-inclusions. Both OH-stretching and HOH-bending vibrations were observed. The lack of correlation between the intensities of the HOH and OH bands in different samples, and the complexity of the OH-stretching band, indicate that a large fraction of the water is present as hydroxyl groups in minerals. Heating and cooling experiments were performed to elucidate the properties of fluids in micro-inclusions in diamonds. The results of experiments at various temperatures support the presence of several water-related components in individual diamonds. These spectroscopic investigations of fibrous diamonds revealed two or more interrelated water-related components preserved in micro-inclusions. The existence of several water phases, or water solutions with different salinity and solutes, within one diamond crystal is possible. |
The term crude fibre according to the Weende analysis method is insufficient for the nutrition of pigs, as it does not comprise the pentosanes. During the boiling process they are hydrolysed by the diluted acid and do not remain in the crude fibre fraction. As stomach HCl can also hydrolyse pentosanes (probably into shorter chains), they are well utilized by the microorganisms in the digestive tract (production of volatile fatty acids). Cereal bran and straw meal contain a particularly high proportion of pentosanes in their fibre. The fibre fraction of plant materials fulfils several functions in the digestive tract: absorption of water at the hydroxyl groups of cellulose and hemicellulose (higher absorption capacity of the digesta and improved passage rate); formation of volatile fatty acids (VFA) by the intestinal bacteria through the fermentation of pentosanes and cellulose (positive influence of VFA on the mucosa of the intestinal walls); absorption of protein decomposition products (including amines) in the cavities of native plant scaffold substances; and absorption of aromatic toxic substances (tyramine, phenol, cresol, tryptamine, indole, skatole, histamine, etc.) in the lignin by means of van der Waals forces, with further transport of the toxic substances until they are excreted in the faeces. HCl-treated straw meal is a mixture of HCl and straw meal at a ratio of 20 kg half-concentrated HCl (17% HCl) to 100 kg straw meal, with or without heat treatment (steaming for ca. 30 min). The unsteamed product is called HCl straw meal; the steamed product, partly hydrolysed straw meal (PHS). 5-10% HCl straw meal was successfully used in the rearing of piglets after weaning. In addition to the above-mentioned significance of the scaffold substances for the digestive tract, the HCl improved the pH status in the stomach and the upper region of the small intestine. PHS neutralized with CaCO3 (up to pH 6-7) is suitable for breeding sows, boars, young sows and fattening pigs. PHS contains 25% reducing substances (reducing sugars) in the DM and serves the intestinal bacteria in the production of VFA. In the feeding of breeding sows during gestation, 20-30% of the DM intake could be covered by PHS. The number of viable and rearable piglets was significantly higher than after conventional feeding. HCl straw meal and PHS also provide advantages with regard to hygiene: they are not infested by fungi and can be stored well without neutralization. |
Color dipole cross section in the DGLAP improved saturation model
We show that the geometric scaling of the dipole cross section can be explained using standard DGLAP perturbative evolution. The DGLAP-improved saturation model obtained via the Laplace transform method is considered at the LO and NNLO approximations, using experimental data through a Froissart-bounded parametrization of $F_2(x,Q^2)$. These results are comparable with the Golec-Biernat-Wüsthoff (GBW) model in a wide kinematic region of $rQ_s$ and take into account the charm mass. A successful description of $\sigma_{\mathrm{dip}}(x,r)/\sigma_0$ and $\sigma_{\mathrm{dip}}(rQ_s)/\sigma_0$ is presented.

Introduction
An update on the saturation model of deep inelastic scattering (DIS) was recently presented by Golec-Biernat et al., who introduced new fits to the extracted Hadron-Electron Ring Accelerator (HERA) data on the proton structure function at small x with the Golec-Biernat-Wüsthoff (GBW) saturation model and its modification to cover high values of $Q^2$. When $x \ll 1$, the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) or the Balitsky-Fadin-Kuraev-Lipatov (BFKL) evolution equations predict that the small-x structure of the proton is dominated by a strongly increasing gluon density, which drives a similar increase in the sea quark densities. In this region, gluons in the proton form a dense system with mutual interaction and recombination, which leads to the saturation of the total cross section. For $x \approx Q^2/W^2 \ll 1$, the virtual spacelike photon incident on the proton fluctuates into an on-shell quark-antiquark ($q\bar{q}$) vector state. Here, $Q^2$ refers to the photon virtuality, and W to the photon-proton center-of-mass energy. In this process, the photon interacts with the proton via the coupling of two gluons to the $q\bar{q}$ color dipole, which is called the color dipole model (CDM). The mass of the $q\bar{q}$ dipole is defined in terms of the transverse momentum $\vec{k}_\perp$, defined with respect to the photon direction, and the variable z, which characterizes the distribution of the momenta between quark and antiquark.
The lifetime of the $q\bar{q}$ dipole is given by $\tau = \frac{W^2}{Q^2 + M^2_{q\bar{q}}}\,\frac{1}{M_p}$, which is much longer than its typical interaction time with the target at small x. This condition not only restricts the kinematic range of the color dipole model to $x \ll 1$, but also saturates the $\gamma^*$-proton cross section for x < 0.1. Some years ago, the saturation model was introduced by Golec-Biernat and Wüsthoff; it gives an elegant and accurate account of DIS at small x and has been developed into new models in recent years. This type of saturation occurs when the photon wavelength 1/Q reaches the size of the proton. It is well known that the dipole picture is a factorization scheme for DIS which is particularly convenient for the inclusion of unitarity corrections at small x. In the mixed representation, the scattering between the virtual photon $\gamma^*$ and the proton is seen via the color dipole, where the transverse dipole size r and the longitudinal momentum fraction z are defined with respect to the photon momentum. The amplitude for the complete process is simply the product of these subprocess amplitudes, as the DIS cross section is factorized into a light-cone wave function and a dipole cross section. Using the optical theorem, this leads to expressions for the $\gamma^* p$ cross sections $\sigma_{L,T}$, from which the $F_2$ structure function is obtained. The subscripts L and T refer to the longitudinal and transverse polarization states of the exchanged boson. Here, $\Psi_{L,T}$ are the appropriate spin-averaged light-cone wave functions of the photon, $\sigma_{\mathrm{dip}}(x_f, r)$ is the dipole cross section, which is related to the imaginary part of the $(q\bar{q})p$ forward scattering amplitude, and $x_f \equiv x(1 + 4m_f^2/Q^2)$ is equivalent to the Bjorken variable and provides an interpolation to the $Q^2 \to 0$ limit; $m_f$ is the mass of the quark of flavor f. The variable z, with 0 ≤ z ≤ 1, characterizes the distribution of the momenta between the quark and antiquark. The square of the photon wave function describes the probability for the occurrence of a $(q\bar{q})$ fluctuation of transverse size r with respect to the photon polarization. The dipole-hadron cross section $\sigma_{\mathrm{dip}}$ contains all information about the target and the strong interaction physics. There are several phenomenological implementations of this quantity, the main feature being the ability to match the soft (low $Q^2$) and hard (large $Q^2$) regimes in a unified way. In Ref., the dipole cross section was proposed to have the eikonal-like form $\sigma_{\mathrm{dip}}(x, r) = \sigma_0 \left[ 1 - \exp\!\left(-\frac{r^2 Q_s^2(x)}{4}\right) \right]$, where $Q_s(x)$ plays the role of the saturation momentum, parametrized as $Q_s^2(x) = Q_0^2 \,(x_0/x)^{\lambda}$. The parameters $Q_0$ and $x_0$ set the dimension and absolute value of the saturation scale, and the exponent $\lambda$ governs the x behavior of $Q_s^2$. The saturation scale is energy-dependent and marks the transition between the linear (leading twist) perturbative QCD regime and the saturation domain. The resulting dipole cross section exhibits the color transparency property, i.e., $\sigma_{\mathrm{dip}} \sim r^2$ when $r \to 0$, which is a purely perturbative QCD (pQCD) phenomenon, and the saturation property, i.e., $\sigma_{\mathrm{dip}} \sim \sigma_0$ at large r, which imposes the unitarity condition. The GBW model was updated to improve the large-$Q^2$ description of $F_2$ by a modification of the small-r behavior of the dipole cross section to include the DGLAP-evolved gluon distribution. A parameterization similar in spirit for the dipole scattering amplitude, based on the solution of the Balitsky-Kovchegov (BK) equation, was also proposed.
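A short numerical sketch of the GBW form just quoted; the parameter values below are the commonly quoted GBW fit numbers and are an assumption here, not the fit of this paper. The ratio $\sigma_{\mathrm{dip}}/\sigma_0$ collapses onto a single curve in the scaling variable $rQ_s$:

```python
import numpy as np

# GBW eikonal dipole cross section: sigma_dip/sigma_0 = 1 - exp(-(r*Qs)^2/4),
# with saturation scale Qs^2(x) = Q0^2 * (x0/x)^lam.
# Parameter values are the often-quoted GBW fit numbers (assumed here).
Q0, x0, lam = 1.0, 3.04e-4, 0.288    # Q0 in GeV

def Qs(x):
    return Q0 * (x0 / x) ** (lam / 2)

def sigma_ratio(r, x):
    return 1.0 - np.exp(-(r * Qs(x)) ** 2 / 4.0)

# Geometric scaling: at fixed r*Qs the ratio is the same for every x.
for x in (1e-6, 1e-4, 1e-2):
    rQs = 1.0                         # pick one value of the scaling variable
    r = rQs / Qs(x)                   # dipole size in GeV^-1
    print(f"x={x:.0e}: sigma_dip/sigma_0 at rQs=1 -> {sigma_ratio(r, x):.4f}")
```

All three printed values coincide, which is exactly the statement that the GBW dipole cross section depends on the single dimensionless variable $rQ_s$.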
The BK equation for the dipole scattering amplitude was derived in terms of the hierarchy of equations for Wilson line operators in the limit of a large number of colors $N_c$. Geometric scaling (GS) in the high-energy limit of pQCD is obtained from the BK equation and the color glass condensate (CGC) formalism. Geometric scaling is connected to the existence of the saturation scale and is defined as the dependence of the dipole cross section on only one dimensionless variable. In the limit of large $Q^2$ values, the structure function does not exactly match the DGLAP formula for $F_2$, i.e., the saturation model does not include logarithmic scaling violations. Since the energy dependence in the large-$Q^2$ region is mainly due to the behavior of the dipole cross section at small dipole size r, the authors in Refs. investigated the DGLAP evolution for small dipoles. Bartels-Golec-Biernat-Kowalski (BGBK) improved the dipole cross section by adding the collinear DGLAP effects. Indeed, the BGBK model is an implementation of QCD evolution in the dipole cross section which depends on the gluon distribution. The proposed modification of the DGLAP improved saturation model for the dipole cross section is $\sigma_{\mathrm{dip}}(x, r) = \sigma_0 \left[ 1 - \exp\!\left(-\frac{\pi^2 r^2 \alpha_s(\mu^2)\, x g(x, \mu^2)}{3 \sigma_0}\right) \right]$, where the hard scale is assumed to have the form $\mu^2 = C/r^2 + \mu_0^2$, and the parameters C and $\mu_0^2$ are obtained from the fit to the DIS data. The gluon distribution $G(x, \mu^2)$ obeys the DGLAP evolution equation truncated to the gluonic sector, as reported in Refs., where $g(x, \mu^2)$ is the gluon density and $G(x, \mu^2) = x g(x, \mu^2)$. The splitting function $P_{gg}$ enters at the leading-order (LO) approximation, with $T_f = \frac{1}{2} n_f$, where $n_f$ is the number of active quark flavors. The convolution integrals which contain a plus prescription, $(\cdot)_+$, can be calculated straightforwardly. The initial gluon distribution is defined at the scale $\mu_0^2$ in the form $G(x, \mu_0^2) = A_g\, x^{-\lambda_g} (1 - x)^{5.6}$. The choice of the power 5.6, which regulates the large-x behavior, and of the other parameters (i.e., $A_g$ and $\lambda_g$) is motivated by global fits to DIS data with the LO DGLAP equation in the literature. Although the BGBK model is successful in describing the dipole cross section at large values of r, where the two models (GBW and BGBK) overlap, they differ in the small-r region, where the running of the gluon distribution starts to play a significant role. Indeed, the DGLAP improved model of $\sigma_{\mathrm{dip}}$ significantly improves the agreement at large values of $Q^2$ without affecting the physics of saturation responsible for the transition to small $Q^2$. As expected, GS holds for the DGLAP improved model curve for the scaling variable $rQ_s \geq 1$ and for the GBW model curve over the whole region. It is well known that the color dipole cross sections can be determined from the measured structure functions via a parametrization of the deep inelastic structure function for electromagnetic scattering on protons, as in Ref. The authors in Ref. presented the dipole cross section from an approximate form of the presumed dipole cross section convoluted with the perturbative photon wave function for virtual photon splitting into a color dipole with massless quarks. Several approximate analytical solutions in the color dipole model have been reported in recent years with considerable phenomenological success. Analytical methods for the unpolarized DGLAP evolution equations have been discussed extensively using Mellin and Laplace transforms.
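Since the model in the next section is built on the Laplace transform technique, it may help to recall the key property it exploits; this is a standard identity, stated here in variables ($\nu$, s) that are an assumption about the paper's conventions. With $\nu \equiv \ln(1/x)$, a Mellin-type convolution in x becomes an ordinary product in Laplace space s:

```latex
% Convolution theorem underlying the Laplace-transform method:
% with \nu = \ln(1/x), define \hat{f}(\nu) \equiv f(e^{-\nu}).
\mathcal{L}\!\left[\int_x^1 \frac{dy}{y}\, f\!\left(\frac{x}{y}\right) g(y);\, s\right]
  = \mathcal{L}\!\left[\hat{f};\, s\right]\, \mathcal{L}\!\left[\hat{g};\, s\right],
\qquad
\mathcal{L}\!\left[\hat{f};\, s\right] \equiv \int_0^\infty \hat{f}(\nu)\, e^{-s\nu}\, d\nu .
```

This is what turns the integro-differential DGLAP evolution into algebraic relations in s-space, which can then be inverted back to x-space.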
We present a modification of the DGLAP-improved saturation model based on the Laplace transform technique, employing the parametrization of the proton structure function at LO up to the next-to-next-to-leading-order (NNLO) approximation, which preserves its success in the low- and high-$Q^2$ regions. We show that GS holds for the DGLAP-improved model in a wide kinematic region of $rQ_s$. In the next section, we introduce the theoretical details of the model with regard to the Laplace transform technique and discuss its qualitative features. We then derive the dipole cross section with respect to the parametrization of $F_2$ at LO up to NNLO approximations. In Sect. 3, we describe our results and discuss their physical implications in comparison with the GBW model. Section 4 contains conclusions. The model An analytical expression for $F_2(x,Q^2)$ was suggested by the authors in Ref., which describes fairly well the available experimental data on the reduced cross section in full accordance with the Froissart predictions. This parameterization provides a reliable structure function $F_2(x,Q^2)$, obtained from a combined fit of the H1 and ZEUS Collaborations' data in the kinematic range $x \leq 0.1$ and $0.15\ \mathrm{GeV}^2 < Q^2 < 3000\ \mathrm{GeV}^2$, as a Froissart-bounded expression polynomial in logarithmic terms L; it can be applied as well in analyses of ultrahigh-energy processes with cosmic neutrinos. The effective parameters of the fit are defined through these logarithmic terms L, where M and $\mu^2$ are an effective mass and a scale factor, respectively. The additional parameters, with their statistical errors, are given in Table 1. According to the DGLAP $Q^2$-evolution equation, the singlet and gluon distribution functions are related through convolutions with the splitting-function kernels $P^{(n)}_{ab}(x)$ (for further discussion, please refer to Appendix A). Here, n denotes the order in the running coupling $a_s(Q^2)=\alpha_s(Q^2)/4\pi$. The Laplace transforms of the corresponding kernels $H(a_s(Q^2),\nu)$, in the variable $\nu=\ln(1/x)$, define the kernels in Laplace space s. We know that the Laplace transform of a convolution is simply the ordinary product of the Laplace transforms of its factors. Therefore, the evolution relation in Laplace space s becomes algebraic, and the gluon distribution follows in s-space from the parametrization of the proton structure function and its derivative with respect to $\ln Q^2$. The coefficient functions in Laplace space s are given accordingly. The explicit expressions for the NLO and NNLO kernels in s-space are rather cumbersome; we therefore recall that we are interested in the behavior of the kernels at small x, at both the NLO and the NNLO approximation. The standard representation of the QCD coupling from LO up to NNLO (within the $\overline{\mathrm{MS}}$ scheme) is defined in terms of $\beta_0$, $\beta_1$, and $\beta_2$, the one-, two-, and three-loop coefficients of the QCD $\beta$-function, respectively, with $t=\ln(Q^2/\Lambda^2)$, where $\Lambda$ is the QCD cutoff parameter. The inverse Laplace transforms can then be performed straightforwardly, the inverse transform of a product being the convolution of the original functions; the explicit expressions for the resulting functions $J(\nu)$ and $M(\nu)$ are defined in Appendix B. We therefore obtain an explicit solution for the color dipole cross section $\sigma_{dip}(x,r)$ in terms of the parametrization of $F_2(x,\mu^2)$ and its derivative with respect to $\ln\mu^2$, at LO up to NNLO approximations, due to the form of the kernels. Numerical results The effective parameters in the GBW model have been extracted from a fit of the HERA data according to Ref.
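The manipulation that makes the method tractable is the convolution theorem in the variable $\nu = \ln(1/x)$: a convolution in $\nu$ becomes an ordinary product in Laplace space s. A minimal symbolic check of this property, with toy kernels rather than the QCD kernels themselves, is shown below.

```python
# Verify L[(f*g)](s) = L[f](s) * L[g](s) for a toy kernel and distribution.
import sympy as sp

nu, w, s = sp.symbols('nu w s', positive=True)
f = sp.exp(-2 * nu)     # toy "kernel" in nu = ln(1/x)
g = nu                  # toy "distribution"

# Laplace-type convolution (f*g)(nu) = int_0^nu f(w) g(nu - w) dw
conv = sp.integrate(f.subs(nu, w) * g.subs(nu, nu - w), (w, 0, nu))

lhs = sp.laplace_transform(conv, nu, s, noconds=True)
rhs = (sp.laplace_transform(f, nu, s, noconds=True) *
       sp.laplace_transform(g, nu, s, noconds=True))
print(sp.simplify(lhs - rhs))   # prints 0: the convolution theorem holds
```

This is exactly the step that turns the coupled evolution relations into algebraic ones in s-space, after which the inverse transform restores the x dependence.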
Calculations were performed for the Bjorken variable x in the interval $x = 10^{-6}$ to $10^{-2}$. The DGLAP-improved model based on the parameterization of $F_2(x,Q^2)$ gives a good description of the ratio $\sigma_{dip}/\sigma_0$ in comparison with the GBW saturation model at low x, over a wide range of the momentum transfer $Q^2$. Figures 1, 2, and 3 clearly demonstrate that the extraction procedure provides the correct behavior of the extracted $\sigma_{dip}/\sigma_0$ within the LO up to NNLO approximations. At low and high $Q^2$, the extracted values of $\sigma_{dip}/\sigma_0$ are in good agreement with the GBW saturation model. We observe that the higher-order corrections are in very good agreement with the GBW model, in comparison with the LO approximation, over a wide range of r. We see that the two results (the GBW and DGLAP-improved models) overlap at small and large values of r, where the gluon distribution obtained from the parametrization of the proton structure function plays a significant role in the evolution of the gluon distribution. To emphasize the size of the higher-order corrections, we show the ratio "order/GBW" for $\sigma_{dip}/\sigma_0$ at the LO up to NNLO approximations in Fig. 4. As can be seen, these corrections are determined in the interval $10^{-3}\ \mathrm{fm} < r < 5\ \mathrm{fm}$ for $x = 10^{-6}$. In Fig. 4, the results for the NLO and NNLO approximations are very similar. It is seen that the NLO corrections are smaller than the NNLO corrections in the interval $0.1\ \mathrm{fm} < r < 1\ \mathrm{fm}$, and larger in the interval $10^{-3}\ \mathrm{fm} < r < 0.1\ \mathrm{fm}$. The LO up to NNLO corrections are completely equivalent for r > 1 fm. Indeed, the results with the NLO and NNLO corrections are comparable to the GBW model over a wide range of the domain. Of particular interest is the ratio $\sigma_{dip}/\sigma_0$ expressed through the scaling variable $rQ_s$, where all the curves in the GBW model merge into one solid line. In Figs. 5, 6, and 7, we show that the ratio $\sigma_{dip}(x,r)/\sigma_0$ has the property of geometric scaling, $\sigma_{dip}(x,r) = \sigma_{dip}(rQ_s(x))$. The results of the DGLAP-improved saturation model based on the parametrization of the proton structure function become a function of the single variable $rQ_s$, for all values of r and x, at the LO up to NNLO approximations (cf. Figs. 1, 2, and 3, respectively). From Fig. 7, one can infer that the NNLO results essentially improve the agreement with the geometric scaling of the GBW model, in comparison with the LO and NLO calculations. The geometric scaling of the dipole cross sections in these calculations is visible over a wide range of $rQ_s$ at the LO up to NNLO approximations. In these figures we observe that the violation of geometric scaling between our results and the GBW model at low $rQ_s$ is clearly visible. The violations in this region are rather small and can be covered by the statistical errors in the parametrization of the proton structure function and its derivative. [Fig. 5. The extracted ratio $\sigma_{dip}(rQ_s(x))/\sigma_0$ as a function of $rQ_s$ for $x = 10^{-6}$ to $10^{-2}$ from the parameterization of $F_2$ within the LO approximation (circle and dot curves), merging into one line due to the geometric scaling, compared with the GBW model (solid curve); parameters as in Table 1.] As can be seen from the related figures, the ratio obtained with the Laplace transform method is consistent with the geometric scaling at low and large values of $rQ_s$.
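A sketch of how the geometric-scaling collapse of Figs. 5-7 can be visualized follows; for the GBW curve the collapse onto a single line is exact by construction, and the parameter values are again illustrative placeholders.

```python
# Geometric scaling: sigma_dip/sigma_0 collapses when plotted against r*Qs(x).
import numpy as np
import matplotlib.pyplot as plt

x0, lam = 3.0e-4, 0.29                     # assumed GBW-like parameters
Qs = lambda x: (x0 / x)**(lam / 2.0)       # GeV

r = np.logspace(-3, 1, 300)                # dipole sizes in GeV^-1
for x in (1e-6, 1e-5, 1e-4, 1e-3, 1e-2):
    tau = r * Qs(x)                        # scaling variable r*Qs
    plt.loglog(tau, 1.0 - np.exp(-tau**2 / 4.0), label=f"x = {x:g}")

plt.xlabel(r"$rQ_s$")
plt.ylabel(r"$\sigma_{dip}/\sigma_0$")
plt.legend()
plt.show()                                 # all x-curves fall on one line
```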
To summarize, the essential elements of the GBW model, the saturation scale and geometric scaling, are preserved in the DGLAP-improved dipole cross section when the gluon distribution function is derived from the parametrization of the proton structure function and its derivative via the Laplace transform method, over a wide range of the variables r and $rQ_s$, respectively. Conclusions In conclusion, we have presented a theoretical model, at LO up to NNLO approximations, to describe the color dipole cross section based on the Laplace transform method at small values of x. Indeed, there are various methods of treating the color dipole model to obtain $\sigma_{dip}$, and in this paper we have shown that the Laplace transform technique is a reliable alternative scheme for obtaining the color dipole cross section analytically. A detailed analysis has been performed to find an analytical solution for the color dipole cross section in terms of the parametrization of $F_2(x,Q^2)$ and the derivative of the proton structure function with respect to $\ln Q^2$, at LO up to NNLO approximations. We used the DGLAP-improved model of the dipole cross section with saturation, in which the parameterization of the proton structure function is employed. The results for the saturation scale and geometric scaling are consistent with the GBW saturation model over a wide range of r and $rQ_s$, respectively. With regard to the statistical errors of the effective parameters, the NNLO results give a reasonable description of the data in comparison with the other models. Indeed, the small-size behavior of the dipole cross section is improved in the DGLAP-improved model, which is based on the evolution of the gluon density in this region. In summary, we have analyzed the dipole cross section at low values of x and shown that geometric scaling holds for the DGLAP-improved model if the gluon distribution is defined by the parameterization of the proton structure function, and that this is comparable to the GBW model curve over the whole region of $rQ_s$. Data Availability Statement This manuscript has no associated data or the data will not be deposited. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The evaluation of the higher-order coefficients is straightforward but too lengthy to be included in this note, and will be given in the future when we numerically evaluate $G(x,Q^2)$ in the NLO and NNLO approximations. Finally, the gluon distribution function is obtained directly from the parameterization of the structure function $F_2(x,Q^2)$ and its derivative at the LO approximation, where $DF_2(x,Q^2) \equiv \partial F_2(x,Q^2)/\partial\ln Q^2$. |
The Mosella of Ausonius A prodigious memory, a facile talent for versification, a cheerful and kindly optimism, and an avoidance of all that was serious or profound or disquieting: so Professor Robert Browning, not unjustly, sums up the poetic character of Decimus Magnus Ausonius. He was a professor of Latin, and it shows in his work. That is not surprising, since it was on his professional and professorial skill as teacher and rhetorician that his whole career was founded. It was by displaying proficiency in Latin Prose and Verse Composition that he rose to provincial governorships and to the consulate, and to the post of tutor to the future Emperor Gratian. It is to this last circumstance that we owe his best poem, the Mosella, a celebration of the beauties of the river Moselle. In A.D. 367 the Imperial court had been established at Trier in Gaul, and Ausonius was required to accompany the Emperor Valentinian, with his pupil, on his German campaigns. These, continuing a policy inaugurated by Valentinian's predecessors, were directed towards the consolidation of a Roman presence on the German bank of the Rhine, with the ultimate object of incorporating the Germans in the Empire and setting up a bulwark against further barbarian encroachments. It is against this political and military background that the Mosella must be read. It was clearly intended as propaganda. The purpose of the poem was to inspire the Gauls with confidence in the renewed peace and security. They stood in need of such reassurance, for the recent past in Gaul had been far from secure. |
A −793 G to A transition in the factor IX gene promoter is polymorphic in the Caucasian population We report the incidence of a prevalent polymorphism at position −793 in the promoter region of the factor IX gene in Caucasian individuals. This DNA change was originally reported as one of two changes in the factor IX gene of a severely affected haemophilia B patient from Japan. We confirm the neutral nature of this change and demonstrate that, despite showing linkage disequilibrium with the previously reported MseI RFLP < 100 bp distant, the use of these two loci together in a carrier screening strategy significantly increases the level of informativity over that achieved using either polymorphism alone. |
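To illustrate the kind of gain at stake, the toy calculation below compares the fraction of informative carrier analyses (i.e., heterozygous females) for each marker alone versus the two markers combined, under Hardy-Weinberg and random-mating assumptions. All allele and haplotype frequencies here are invented for the illustration, not the measured population values.

```python
# Two linked biallelic markers: fraction of individuals heterozygous at
# either locus. For biallelic loci, homozygosity at both loci requires two
# identical haplotypes, so P(informative) = 1 - sum(haplotype_freq^2).
p_A = 0.30     # minor-allele frequency of the -793 G/A change (assumed)
p_B = 0.35     # minor-allele frequency of the nearby MseI RFLP (assumed)
D = 0.08       # linkage-disequilibrium coefficient (assumed, partial LD)

# Haplotype frequencies h(ab) = f(a)*f(b) +/- D
h = [(1-p_A)*(1-p_B) + D,
     (1-p_A)*p_B     - D,
     p_A*(1-p_B)     - D,
     p_A*p_B         + D]

het_A = 2*p_A*(1-p_A)                    # informative with marker A alone
het_B = 2*p_B*(1-p_B)                    # informative with marker B alone
either = 1 - sum(f*f for f in h)         # heterozygous at A or B (or both)
print(het_A, het_B, either)              # combined fraction exceeds each alone
```

With complete LD the second marker would be redundant; the gain reported in the abstract reflects the fact that the disequilibrium between the two loci is only partial.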
Passive Janus Particles Are Self-propelled in Active Nematics While active systems possess notable potential to form the foundation of new classes of autonomous materials, designing systems that can extract functional work from active surroundings has proven challenging. In this work, we extend these efforts to the realm of designed active liquid crystal/colloidal composites. We propose suspending colloidal particles with Janus anchoring conditions in an active nematic medium. These passive Janus particles become effectively self-propelled once immersed into an active nematic bath. The self-propulsion of passive Janus particles arises from the effective $+1/2$ topological charge their surface enforces on the surrounding active fluid. We analytically study their dynamics and the orientational dependence on the position of a companion $-1/2$ defect. We predict that at sufficiently small activity, the colloid and companion defect remain bound to each other, with the defect strongly orienting the colloid to propel either parallel or perpendicular to the nematic. At sufficiently high activity, we predict an unbinding of the colloid/defect pair. This work demonstrates how suspending engineered colloids in active liquid crystals may present a path to extracting activity to drive functionality. INTRODUCTION Active matter extracts energy from internal or external sources and transforms it into persistent motion. Although many physical realizations occur in living systems, such as bacterial colonies, cellular tissues, flocking animals and human crowds, there has also been considerable success developing and understanding synthetic active systems. Examples include swarming bots, vibrated granular matter, self-propelled nanorods and active colloidal particles. In each of these examples, the work needed to achieve motility comes from either chemical reactions or responses to external fields. An alternative route is to immerse a passive object into an out-of-equilibrium system, extract work out of the environment, and transform it into a driving force. In previous work, passive particles were suspended into motile bacterial baths. However, only rarely has it been explored how active suspensions can power motility of passive objects [1]. Active nematics [30] possess many appealing properties for this task. Active nematic films are characterized by the presence of point disclinations at which the orientational order vanishes and around which the nematic director rotates by an angle $\pm\pi$. Because this winding cannot be untangled by smooth variations of the director, these defects are topological in nature, carrying a topological charge of $\pm 1/2$ (higher-charge defects are energetically unfavorable). However, in contrast with passive nematics, a $+1/2$ defect in an active nematic system generates spontaneous flows that yield non-zero advection at the defect core. This makes $+1/2$ defects motile. The self-propulsion of these defects is sufficient to overcome the attractive interaction exerted by neighbouring non-motile $-1/2$ defects, allowing unbinding of defect pairs, which drives the system into a chaotic state. FIG. 1. The $\theta_{BC}$ contribution to the anchoring condition for a Janus colloid of radius $R_0$, see Eq.. The red semicircle represents homeotropic anchoring, while the blue pole is planar, with continuous transition zones of width $2\alpha$ at the equator that soften the $\pi/2$ jump between homeotropic and planar anchoring (purple wedges).
Notice the finite jump of $\theta_{BC}$ between $\phi = 0$ and $\phi = 2\pi$, which provides the colloid with an effective topological charge of $+1/2$. In the inset we show, with matching colors, the different regions of anchoring over the colloid. At a distance $R_D$ sits the companion $-1/2$ defect, here depicted in orange. Another important property of nematics is that they can anchor to surfaces with a prescribed orientation. If the anchoring is strong enough, this prescribed orientation leads to a non-zero winding of the nematic director and thus to an effective topological charge localized in the interior of the colloid. Colloids with strong, uniform homeotropic or planar anchoring have an effective charge of +1, which must be balanced by associated defects in the nematic surroundings or at the colloid surface. Given both the motility of $+1/2$ disclinations in active nematics and the possibility of creating effective topological defects through the inclusion of colloids, it is natural to ask whether it is possible to design a two-dimensional colloid with topological charge $+1/2$, and whether such a colloid acts as an effectively self-propelled particle. We focus on 2D because most active nematics occur in 2D films. In this paper, we demonstrate that such self-propulsion is possible by designing a colloid with a Janus structure. Our Janus colloid possesses homeotropic anchoring on one semicircle (red pole in figures), planar anchoring on the other (blue pole), and continuous transition zones at the equator, resulting in an effective $+1/2$ topological charge in the system. In analogy with positive $+1/2$ topological defects, the Janus colloid is self-propelled. The resulting colloidal self-propulsion is similar to that of active Janus particles or Quincke rollers, in that our passive colloid extracts energy from its surroundings to achieve propulsive motion. Conservation of topological charge requires that each Janus colloid is accompanied by a $-1/2$ topological defect in the surrounding nematic. We employ analytical models to construct such a colloid, estimate its self-propulsion and characterize its dynamics. By estimating and integrating the total stress at the colloid's surface, we find that the surrounding activity does indeed drive a net non-zero propulsive force. Thus, the passive Janus particle effectively behaves as a self-propelled particle. Crucially, we show that the self-propulsion of the colloid, in the low-activity limit, is primarily parallel or perpendicular to the direction of local nematic order. On the other hand, high activity leads to spontaneous decoupling of the colloid from its companion defect. This novel activity-driven method of self-propulsion sheds light on how to extract work from active nematics and may serve as a new direction for collective phenomena. As these colloids are topologically charged, they experience long-range elasticity-mediated dipolar interactions, which could lead to flocking or other collective behaviours. Furthermore, combining these systems with light-activated molecular motors could open the door to activity gradients and a possible control mechanism for colloid self-propulsion through active materials. MODEL We consider a single colloid of radius $R_0$ centered at the origin, immersed in a two-dimensional nematic that is globally aligned along the x-axis far from the colloid. For the moment, the nematic can be treated as passive, but we will consider non-zero activity further below. For simplicity, we work in the far-field approximation.
This implies our model is a long-length-scale theory in which the size of a defect core acts as a natural cut-off. As such, the alignment of the nematic liquid crystal is entirely described through its director $\hat{n} = \cos(\theta)\,\hat{x} + \sin(\theta)\,\hat{y}$ via the nematic orientation $\theta$. Furthermore, we will work in the one-Frank-constant approximation, and thus the energetic cost of bending or splaying the director is given by $F = \frac{K}{2}\int |\nabla\theta|^2\, d^2r$, where K denotes the Frank constant. The colloid imposes an anchoring condition for the nematic at its surface, denoted by $\theta_{BC}(\phi)$. [FIG. 3. (a) The dotted line denotes the curve of stationary radii $r_s$ (i.e., points at which $\partial_r F = 0$). Its shape is a result of the competition between the long-range attractive force between opposite topological charges, the short-range repulsion due to the strong anchoring condition, and the repulsion between the negative topological charge density distributed along the transition zones. Notice how the landscape has its minima at the poles, which are the furthest away from the transition zones. In contrast, on the equator the landscape only has a local maximum. In agreement with the above, we also see that the stationary radii at the equator are further away from the colloid than at the poles, leading to a lemon-shaped curve. (b) Curve of stationary radii for different values of $\alpha$; the coloring of the curves corresponds to the local value of F. In agreement with our interpretation of a locally distributed negative charge density on the transition zones, as these become wider and occupy a larger sector of the colloid, their repulsion becomes less pronounced, leading to a more isotropic curve, and finally a circle for $\alpha = \pi/2$.] This function is: i) not constant, and ii) not continuous, since the nematic director is defined modulo $\pi$. We discuss the physical consequences of such discontinuities further below. Moreover, we work in the strong-anchoring limit, in which it is energetically prohibitive for the director to deviate from the prescribed anchoring. Under these conditions, the anchoring can be treated as a boundary condition for the nematic orientation, $\theta(r = R_0, \phi) = \theta_{BC}(\phi)$. This results in the nematic director winding around the colloid surface $k = (\theta_{BC}(2\pi) - \theta_{BC}(0))/(2\pi)$ times, producing an effective topological charge k. Because the system is globally aligned, the total topological charge of the system must remain zero; therefore, the colloid must induce one or more companion topological defects whose topological charges must sum to $-k$. In polar coordinates centered on the colloid, the position of the i-th defect is $(R_D^i, \phi_D^i)$. Their angular positions $\phi_D$ match the angular positions of the discontinuities in the anchoring boundary condition $\theta_{BC}$, with the defects' topological charge being proportional to the size of the jump at the discontinuity. Here, we consider a colloidal design which necessitates only a single $-1/2$ defect. Moreover, we work in an adiabatic approximation in which the relaxation time for the nematic director is assumed microscopic. As such, the system is invariably at its energy minimum. Minimizing the free energy, $\delta F/\delta\theta = 0$, we obtain Laplace's equation, $\nabla^2\theta = 0$, contingent on $\theta(r = R_0, \phi) = \theta_{BC}(\phi; \phi_D, \psi_c)$ and $\theta(r \to \infty, \phi) = 0$, where $\psi_c$ is the orientation of the colloid. Our description does not explicitly account for contributions due to the orientation of the companion defect; the defect orientation is that which corresponds to the minimized free energy. Likewise, if the angular position $\phi_D$ of the defect is set, then the colloidal orientation $\psi_c$ must minimize the free energy.
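Since minimizing the one-constant free energy reduces to Laplace's equation with Dirichlet data at the colloid surface, the field can also be obtained numerically. The sketch below relaxes the periodic part f of $\theta = \phi/2 + f(r,\phi)$ on an annulus, so the $\pi$ winding is carried analytically; the anchoring profile used here is only a simplified guess at the Janus profile of Fig. 1 (the exact form is given in the text), so the output is qualitative.

```python
import numpy as np

R0, Rmax, alpha = 1.0, 20.0, 0.3          # colloid radius, box size, zone half-width
nr, nphi = 120, 180
r = np.linspace(R0, Rmax, nr); dr = r[1] - r[0]
phi = np.linspace(0, 2*np.pi, nphi, endpoint=False); dphi = phi[1] - phi[0]

def theta_bc(p):
    """Illustrative anchoring: planar near p=0, homeotropic near p=pi, with
    linear transition zones of width 2*alpha (an assumed stand-in profile)."""
    return np.interp(p, [0, np.pi/2 - alpha, np.pi/2 + alpha,
                         3*np.pi/2 - alpha, 3*np.pi/2 + alpha, 2*np.pi],
                        [0, 0, np.pi/2, np.pi/2, np.pi, np.pi])

f = np.zeros((nr, nphi))
f[0] = theta_bc(phi) - phi/2.0            # Dirichlet data at the surface (periodic)
for _ in range(4000):                      # Jacobi sweeps of laplace(f) = 0
    up, dn = np.roll(f, -1, axis=1), np.roll(f, 1, axis=1)
    f[1:-1] = (
        (f[2:] + f[:-2]) / dr**2
        + (f[2:] - f[:-2]) / (2*dr*r[1:-1, None])
        + (up[1:-1] + dn[1:-1]) / (r[1:-1, None]*dphi)**2
    ) / (2/dr**2 + 2/(r[1:-1, None]*dphi)**2)   # f = 0 held at the outer edge

theta = phi[None, :]/2.0 + f               # full director angle theta(r, phi)
```

A few thousand sweeps give a qualitative solution; far from the surface the field approaches the isolated $+1/2$-defect form, as expected.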
Further details on the boundary conditions are provided in Appendix A. JANUS BOUNDARY CONDITION Inspired by the motility of $+1/2$ defects, we choose a boundary condition with an effective topological charge of $+1/2$. We divide the colloid into two semicircles, one with homeotropic anchoring (red) and the other with planar anchoring (blue), with two continuous transition zones, each of angular width $2\alpha$, at the equator (see Fig. 1). The role of such transition zones is to smooth the $\pi/2$ jump between homeotropic and planar anchoring, which cannot happen discontinuously, since such a jump would violate nematic symmetry. In the reference frame of the colloid, defined by $\psi_c = 0$, this boundary condition takes the form given in Eq., where $\phi'$ denotes angles in the colloid reference frame with respect to the colloidal equator. The connection between the colloid reference frame and the lab frame is illustrated in Appendix A, Fig. 8. We define $\Theta(x)$ to denote the Heaviside function. The $\theta_{BC}$ term sets the director across all regions of the colloid (Fig. 1), while the Heaviside function in Eq. introduces a $-\pi$ jump at $\phi_D$, which is physically allowed under head-tail symmetry. Mathematically, however, this introduces a branch cut which identifies the position of the companion defect. In the limit $\alpha \to \pi/2$ (i.e., when the entire colloid is covered by transition zones), $\theta_{BC}(\phi') = \pi/4 + \phi'/2$, identical to the form that $\theta$ acquires around an isolated $+1/2$ defect. As such, our choice for the boundary condition also allows us to explore the properties of a colloid that perfectly imitates a $+1/2$ defect. We solve Eq. for this choice of $\theta_{BC}$ in Appendix A. The solutions are obtained by writing general forms of the interior and exterior solutions to the Laplace equation and imposing appropriate boundary conditions to obtain explicit forms for the coefficients of the solution. Crucially, since the total topological charge enclosed by a concentric circle of radius r changes from $+1/2$ to 0 at the location of the companion defect, we must separate our domain into two regions: region I, from the surface to the companion defect ($R_0 \leq r < R_D$), and region II, beyond the companion defect ($r > R_D$); see Fig. 7(d). Our solution reveals that the polarization of the colloid, $\psi_c$, and the position of the companion defect are linked in a one-to-one relationship, $\psi_c = \pi/4 - \phi'_D/2$ (see Eq. (20) and Fig. 8 in Appendix A). This one-to-one relationship implies that re-orienting the companion defect's position induces a change in the colloid orientation so as to minimize the free energy. As we have not set the colloid or the companion defect orientation a priori, our solution selects the combination that minimizes the free energy for each given configuration. The colloid's orientation dependence on the lab-frame defect angular position, $\phi_D$, is given by $\psi_c = (\pi - 2\phi_D)/2$, which tells us that as the set defect position is moved, the equilibrium colloid orientation will counter-rotate. Although this effect is apparently independent of the separation, the colloid must move a distance comparable to $R_D$ in order for $\phi_D$ to change considerably. This shows that the effect actually weakens with distance. Having established this connection, we can write the solution to Eq. in the lab frame (see Appendix A), in which $\theta_I$ and $\theta_{II}$ denote the solution in regions I ($R_0 \leq r < R_D$) and II ($r > R_D$), respectively. The first two terms in Eq. for $\theta_I(r, \phi)$ provide both the orientation of the colloid and the behaviour of the nematic in the intermediate regime $R_0 \ll r \ll R_D$. In this region, all other terms are negligible.
Not surprisingly, this corresponds to the nematic surrounding an isolated $+1/2$ defect. The first term inside the sum in Eq. describes the behaviour of the nematic in the close vicinity of the colloid and directly reflects the prescribed anchoring. The second term inside the sum describes the deformation of the nematic field due to the presence of the companion defect. Regarding the form of $\theta_{II}$, its terms play the same role as their counterparts in $\theta_I$, with the distinction that the first term in the sum, which goes as $\sim (R_0/r)^n$ with $r > R_D$, only becomes relevant if $R_D$ is comparable to $R_0$, i.e., when the companion defect resides in the vicinity of the colloid surface. Figure 2 depicts the solution of Eqs. for representative companion defect positions when $R_D = 3R_0$. In Fig. 2(a), $\phi_D = 0$, such that the colloid/defect orientation is aligned parallel to the global nematic orientation. In this configuration, the colloid is also oriented parallel to the far field. Because the planar-anchored pole (blue) is closest to the defect, the elastic deformation between the surface and the defect is primarily bend. As the defect position is moved to $\phi_D = \pi/4$, the equilibrium colloid orientation must turn clockwise to compensate (Fig. 2(b)). Because of this, the defect is closest to the equator at this angular position. This process continues as we move the defect position to $\phi_D = \pi/2$. By this point, the equilibrium colloid orientation has rotated by $-\pi/2$; thus, the colloid/defect complex is now perpendicular to the far-field director. The companion defect is now closest to the homeotropic pole (red), and so the principal deformation mode between the colloid and the surface is splay. COLLOID-DEFECT INTERACTION The interaction energy between the colloid and its companion defect can be obtained by inserting Eqs. and into Eq. (see Appendix B). The result, in the colloid reference frame, is an expression in which we have defined the dimensionless coordinate $\rho = R_D/R_0$, and where $\mathrm{Li}_2$ denotes the dilogarithm. The term $\sim \ln(\rho)$ corresponds to the attractive interaction between $+1/2$ and $-1/2$ defects. Since this term is identical to that of a $+1/2$ defect in the far field, it can be thought of as the leading term in a multipole expansion of the interaction. The second term, which goes as $\sim \ln(1 - \rho^{-2})$, is a short-range repulsion between the colloid and the defect, and is logarithmically divergent. It arises because of our assumption of strong anchoring: in contrast to an "actual" $+1/2$ singularity, the anchoring at the surface of the colloid does not change regardless of how close the defect is, leading to an increase of elastic energy. As such, at close distances the colloid behaves as a wall, strongly repelling the defect. The third term corresponds to the remaining contributions of the multipole expansion and contains the details of the interaction at intermediate distances. As can be seen in Fig. 3(a), F is highly anisotropic. The colloid repels the defect more strongly at the transition zones. This can be seen in Fig. 3(b), which depicts the curves of radial stationary points at which $\partial_r F = 0$. The repulsion weakens and becomes more uniform as the transition zone covers a larger sector of the colloid's surface. If the transition zone covers the entire colloid, then the repulsion is completely uniform. This behaviour originates from the presence of a negative topological charge density distributed along the surface of the transition zones (see Appendix B). Indeed, the interaction energy of Eq.
can be recast as an integral over the transition zones, in which the integrand involves the distance between the defect and an element of topological charge $-d\phi/(4\pi)$ at the surface of the transition zone, in units of $R_0$. This negative charge density integrates to a total of $-1$, and is superposed with a positive charge of $+3/2$, which is uniformly distributed along the entire colloid surface, as can be seen from the first term in Eq.. These two sources of charge conspire to give a total colloidal topological charge of $+1/2$, as expected. Finally, notice from Fig. 3(a) and (b) that the minima of the free energy landscape lie on lines along the poles of the colloid, which are farthest from the transition zones. As such, the defect prefers to lie near the poles of the homeotropic (red) and planar (blue) zones. It follows that the colloid prefers to align its orientation either parallel or perpendicular to the global orientation of the nematic, as depicted in Fig. 2. Since both minima are identical, there is no energetic preference between one state and the other. As can be seen in Fig. 2, by comparing when the defect lies near the pole of the planar zone (Fig. 2(a); blue) versus when it lies near the homeotropic pole (Fig. 2(c); red), bend and splay are exchanged. Since we make a one-Frank-constant approximation, both deformations have the same energetic cost, leading to equivalent wells. In contrast, on the colloid's equator there are only local maxima, since the defect has to overcome the repulsion of the transition zones in order to approach it. However, as can be seen in Fig. 3, the difference between the minima and the maxima becomes shallower as the transition zones become wider, consistent with a more distributed negative charge. We therefore expect it to become easier for the defect to cross the equator with increased $\alpha$. [FIG. 5. (a) Effective potential of the defect at the homeotropic pole (red semicircle in Fig. 2) as a function of its distance to the colloid, Eq., for different values of activity (parametrized through $z = 4\zeta R_0^2\bar{\gamma}/(K\gamma_c)$), for $\alpha = 0.3$. As activity increases, the defect's preferred position (filled points) moves to larger radii until we reach a critical activity $z_c$ at which there is no longer a local energy minimum. (b) Same as in (a), but for a defect at the planar pole (blue semicircle in Fig. 2); with increasing activity, the defect moves closer to the colloid, and there is always an energy minimum. (c) Curve of stationary radii at a finite activity; defects near the planar pole are closer to the colloid than those near the homeotropic pole. (d) Critical activity $z_c$ at which defects unbind from the colloid, as a function of the transition zone width $\alpha$. The orange dot marks the value of $z_c$ in (a).] COLLOID PROPULSION Having established the colloid-defect interaction and the form of the nematic field in the vicinity of the Janus particle, we now seek to understand the dynamics of the colloid and its companion defect in an active nematic fluid. In order to analytically approximate the propulsive force acting on the Janus colloid, we make two key assumptions. Firstly, we assume sufficiently low activity to neglect couplings between the director and the flow, which implies that the passive solutions derived in the previous section remain accurate. This approximation requires that the active nematic length scale $\ell_a = \sqrt{K/\zeta}$ be much larger than any other length scale in the system. Independently, we make a second assumption that the incompressible active film (of viscosity $\eta$) sits above a thin underlying oil layer which dissipates momentum and imposes an effective friction.
This produces a dissipative length scale $\ell_H = \sqrt{\eta/\gamma}$, which we take to be smaller than any other length scale in the system. This assumption amounts to taking the overdamped limit, in which frictional forces dominate over all non-active stress contributions. This has proven a useful limit for developing theoretical predictions for 2D active nematic dynamics. As discussed in Appendix C, the total stress is dominated by the active contribution, and passive pressure contributions are negligible. The active stress is proportional to the nematic tensor, $\sigma^A = -\zeta\mathbf{Q}$, where $\zeta$ quantifies the activity and $\mathbf{Q}$ denotes the nematic tensor. Positive values $\zeta > 0$ correspond to extensile stress, whereas $\zeta < 0$ corresponds to contractile stress. We compute the force exerted by the active nematic on the colloid by using the total stress to compute the traction force $\mathbf{t} = \sigma\cdot\hat{\nu}$ at the colloid's surface, where $\hat{\nu}$ denotes the normal to the colloid surface. As seen in Appendix C, the dominant term in the total stress is its active part $\sigma^A$. Neglecting all other passive stresses, we find an expression in which $\phi_L$ is the polar angle with respect to the horizontal in the colloid's reference frame and $\theta_{BC}$ is given by Eq.. The total force $\mathbf{F}$ is obtained by integrating $\mathbf{t}$ over the surface of the colloid, and points along $\hat{p}$, where $\hat{p}$ denotes the colloid's orientation, which points from the colloid center to the planar (blue) pole. Hence, for extensile activity the colloid travels towards its planar pole. In a contractile system, the colloid travels towards the homeotropic (red) pole. There is no net torque from the traction. While the defect position reorients the colloid (as in Fig. 2), the propulsion direction is always predicted to be parallel to $\hat{p}$. Crucially, this shows that passive Janus particles are subject to a net force, and thus are effectively self-motile, when suspended in an active nematic. As we can see in Fig. 4, the magnitude of the propulsive force increases with $\alpha$ and approaches a plateau as $\alpha \to \pi/2$. Since this value of $\alpha$ corresponds to the boundary condition of an isolated $+1/2$ defect, we see that the self-propulsion of the Janus colloid with a vanishing transition zone can reach at least 63% of the magnitude obtained for the "perfect" $+1/2$ defect condition (Fig. 4). COLLOID-DEFECT PAIR DYNAMICS Similar to what occurs with a pair of nematic topological defects, the colloid's active propulsive force can have strong effects on the dynamics of a colloid-defect pair. In this section, we explore such phenomena by modelling both the colloid and the defect as interacting point particles located at $\mathbf{R}_c$ and $\mathbf{R}_D$, respectively, in the lab frame (Fig. 8). Assuming overdamped dynamics with drag coefficients $\gamma_c$ and $\gamma_D$ for the colloid and the defect, respectively, their dynamics are governed by coupled Langevin equations, where $\xi$ denotes random forces acting on the defect, which we assume to be negligible on the larger colloid. As such, we see that the relative coordinate $\mathbf{r} = \mathbf{R}_D - \mathbf{R}_c$ satisfies an equation in which $\bar{\gamma} = \gamma_c\gamma_D/(\gamma_c + \gamma_D)$ is a "reduced friction" coefficient. The radial component of the above equation has the same structure in the colloid's co-rotational reference frame, with the exception that, for an extensile system, $\mathbf{F}$ always points downwards in this frame (Fig. 8). Thus, we write the radial dynamics in terms of the effective potential $V_{eff}(r) = F(r) + (\bar{\gamma}/\gamma_c)Fr$ and the radial component of the noise, $\xi_r$.
Explicitly, this amounts to a tilted radial potential in which we have defined the parameter $z = 4\zeta R_0^2\bar{\gamma}/(K\gamma_c)$, which acts as a dimensionless activity number that balances the colloid size against the active nematic length scale $\ell_a = \sqrt{K/\zeta}$ and the ratio of drag coefficients $\bar{\gamma}/\gamma_c$. As such, activity introduces a radial potential whose slope depends on the angular position of the colloid. This angular dependence breaks the symmetry between the homeotropic and planar poles. Configurations with defects at the poles of the homeotropic (red) or planar (blue) sides carry active contributions to the effective potential with opposite signs. Hence, the symmetry between the colloid being oriented parallel or perpendicular to the global nematic orientation is also broken. The reason behind this symmetry breaking is entirely due to the polarity of the self-propulsive force: in an extensile system, when the defect sits on the homeotropic side (red), the colloidal propulsive force pulls away from the defect. This widens the separation between the two, which can be seen in Fig. 5(a), depicting the stationary radii shifting to larger values with increasing activity (Fig. 5(a); closed circles). In contrast, if the defect sits on the planar side (blue), the propulsive force on the colloid pushes it towards the defect. This reduces their separation, as can be seen from the shift of the stationary radii to smaller values with increasing activity (Fig. 5(b)). These effects, and their variation with the angular position of the companion defect, can be observed in the curve of stationary radii deformed by activity illustrated in Fig. 5(c). While we have focused on extensile stresses, with both $\zeta$ and z > 0, the consequence of considering contractile forces in Fig. 5 is straightforward. In contractile active nematics, $+1/2$ defects move in the opposite direction compared to extensile activity, and the same is to be expected of our Janus colloid. Thus, the situations in which the colloid moves towards or away from the companion defect are flipped. The ultimate effect is simply that Fig. 5(a) and Fig. 5(b) are exchanged for contractile activity. Interestingly, the effective potential for the defect at the homeotropic pole ($\phi_D = \pi/2$) is very similar to the one discussed in Ref., in that the negative linear potential creates a local maximum, or energy barrier, which the defect can overcome through noise and thus unbind from the colloid. However, in contrast to Ref., our colloid has a divergent short-range repulsion, which creates an additional possibility: for a sufficiently large activity, the effective potential no longer possesses a local minimum (i.e., the energy barrier completely disappears), becoming effectively completely repulsive (Fig. 5(a); z = {0.76, 0.9}). As a consequence, the defect will spontaneously unbind from the colloid, even in the absence of noise. The critical activity necessary for such unbinding to occur gives a $z_c$ between 0.3 and 1 as a function of the transition zone width $\alpha$ (Fig. 5(d)). The critical activity number $z_c$ corresponds to an active length $\ell_a$ comparable to the colloidal diameter when the drag coefficients $\gamma_c$ and $\gamma_D$ are taken to be similar. Thus, defect unbinding is a high-activity phenomenon. [FIG. 6. Effective potential difference $\Delta V_{eff}(r_s) = V_{eff}(r_s, \phi_D) - V_{eff}(\phi_D = \pi/2)$ along the curve of stationary radii, for different angular widths of the transition zone ($\alpha$). Notice how the energy barriers become shallower and narrower as $\alpha$ increases.]
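The unbinding scenario can be reproduced qualitatively with a one-dimensional potential of the structure described above: a logarithmic defect attraction, the divergent anchoring repulsion $\sim -\ln(1-\rho^{-2})$, and a linear active tilt $\sim -z\rho$ for the homeotropic pole. The elastic prefactors below are placeholders, not the paper's exact coefficients, so only the qualitative picture (bound state at small z, barrier at intermediate z, no minimum above a critical z) should be trusted.

```python
import numpy as np

a, b = 0.5, 0.5                                  # assumed elastic prefactors (units of K)

def V_eff(rho, z):
    """Attraction ~ a*ln(rho), short-range repulsion ~ -b*ln(1 - rho^-2),
    and the active tilt -z*rho for a defect at the homeotropic pole."""
    return a*np.log(rho) - b*np.log(1.0 - rho**-2) - z*rho

rho = np.linspace(1.001, 40.0, 20000)            # rho = R_D/R_0 > 1
for z in (0.0, 0.05, 0.1, 0.2):
    dV = np.gradient(V_eff(rho, z), rho)
    n_stat = np.count_nonzero(np.diff(np.sign(dV)))
    # 1 stationary point: pure bound state; 2: minimum + barrier;
    # 0: purely repulsive, i.e., spontaneous unbinding above z_c.
    print(f"z = {z}: {n_stat} stationary point(s)")
```

With these placeholder coefficients the local minimum disappears near z of order 0.1; the paper's quoted $z_c$ between 0.3 and 1 corresponds to its actual prefactors.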
At high values of activity, many of the approximations we have assumed are no longer valid: the coupling of the nematic with the flow cannot be disregarded, and the length scales over which we can assume the nematic to be aligned become comparable with the size of the colloid. Although we do not expect our analytical results to hold in such a high-activity regime, we note that the first three terms of the effective colloid-defect potential do not depend on the nematic global orientation and are sufficient to create the repulsive potential at high activity. As such, we can expect this phenomenological prediction to persist, if not quantitatively, at least qualitatively. To test this, future work will require full dynamical simulations of both the colloid and the active nematic. In the small-activity limit, in which $\ell_a \gg R_0$, or for a large discrepancy between the defect and colloidal drag coefficients, the defect should remain attached to the colloid, wiggling in the vicinity of a slightly perturbed lemon-shaped "orbit", such as the ones in Fig. 3. Although the probability of unbinding through noise is not zero, it is exponentially small, and such events are thus likely to be rare. As such, colloid and defect remain close to each other, enacting strong orientational effects on one another. In particular, we saw in the previous section that the colloid propulsion is parallel to $\hat{p}$, which in the colloid reference frame corresponds, for an extensile system, to the orientation $\phi_{p'} = -\pi/2$. Relative to the global nematic alignment, we find $\phi_p = \phi_{p'} + \psi_c = -\phi_D$. As such, the direction of self-propulsion is directly dictated by the angular position of the companion defect. However, not all angular positions have the same probability of being occupied. On the one hand, the free energy is minimal when the position vector of the companion defect lies in the middle of either the homeotropic or the planar region. On the other hand, the maxima of the free energy occur when the position vector of the companion defect lies along the equator of the colloid (Fig. 6). As a result, the companion defect sits at one of the poles of the colloid, although fluctuations may allow it to jump the energy barrier. Such jump events would be expected to sharply change the colloid's direction of self-propulsion. The barrier height determines the resilience of the colloid's persistence to fluctuations. As the transition zones become wider, the potential well becomes flatter and, in the case in which the entire colloid is a transition zone (i.e., $\alpha \to \pi/2$), we observe a flat potential (Fig. 6). The width of the potential well, on the other hand, sets how collimated the colloid trajectories are, i.e., how much they deviate from travelling parallel or perpendicular to the nematic director. CONCLUSIONS We have designed a 2D Janus boundary condition for a colloidal particle, which becomes effectively self-propelled when immersed in an active nematic. This boundary condition sets homeotropic anchoring on the surface of one semicircle and planar anchoring on the opposite one, with continuous transition zones at the equator. This boundary condition amounts to a net effective $+1/2$ topological charge. Moreover, balancing the total stress and traction on the colloid surface reveals a non-zero net force caused by the fluid activity, which propels the colloid forward. Conservation of topological charge induces a companion passive $-1/2$ defect. We analytically solve the nematic field for an arbitrary position of this companion defect and calculate its interaction energy with the colloid.
Moreover, we show that the angular position of this companion defect and the orientation of the colloid are linked in a one-to-one relationship. This prediction relies on an adiabatic approximation in which the nematic's director and flows relax infinitely fast. However, we also show that the transition zones at the equator of the colloid carry an effective topological charge density which induces an energy landscape for the position of the companion defect, with valleys at the poles and peaks at the equator. As a result, the defect tends to sit near one of the colloid poles, which results in the colloid orientation pointing either along the far-field nematic director or perpendicular to it. In the small-activity regime, this leads to an anisotropic run-and-tumble-type process. The colloid's propulsive force further affects the separation between colloid and defect, breaking the symmetry between homeotropic and planar sides (closer on the planar side, further away on the homeotropic side). In the high-activity regime, the colloid-defect interaction becomes fully repulsive on the homeotropic side, leading to spontaneous unbinding of the companion defect. Although in such a high-activity regime the nematic would be highly turbulent, we expect this phenomenon to persist, leading to potentially interesting dynamics. We hope this work will serve to motivate research on predefined nematic anchoring as a method of self-propulsion in active fluids. Similarly, we hope that it can also shed light on how to extract work from active nematics, as well as introduce a new playground for collective phenomena. For example, as these colloids are topologically charged, they exert long-range elasticity-mediated dipolar interactions which, similar to motile disclinations, can lead to flocking. Furthermore, combining these systems with light-activated molecular motors opens the door to activity gradients and possible control of colloidal self-propulsion. [FIG. 7. (a) Non-periodic boundary conditions for an isolated colloid: because of its topological charge, $\theta$ must jump by $\pi$ at some arbitrary radial line. (b) Solution for the nematic director anchoring to the Janus colloid with its prescribed anchoring. (c) Same as in (b) but from further away; at long length scales the colloid appears exactly as a $+1/2$ defect. (d) Boundary conditions for a Janus colloid immersed in an ordered nematic: conservation of topological charge implies the presence of a $-1/2$ defect, which lies at a radial distance $R_D$ from the colloid. For $r < R_D$, $\theta$ must jump by $\pi$ at a line whose polar angle sets the position of the defect. For $r > R_D$ the solution must be periodic. The remaining conditions are continuity and smoothness of $\theta$ at $R_D$.] Appendix A: Director Field Solution We focus first on the case of a Janus colloid immersed in a free nematic. As described in the main text, we work in the strong-anchoring limit, in which the preferred colloid anchoring, Eq., becomes a boundary condition for the nematic. Because this boundary condition rotates the nematic around the colloid by $\pi$, the colloid has an effective $+1/2$ topological charge, which entails that after a full rotation around the colloid, $\theta$ must jump by $\pi$. Notice that this differs from the traditional boundary condition for the Laplace equation in polar coordinates, which demands periodicity in the polar angle. Here, in contrast, we must have an arbitrary line, defined by $\phi_0$, along which we have this discontinuity in $\theta$, i.e., $\theta(\phi_0^-) = \theta(\phi_0^+) + \pi$; see Fig. 7(a).
Notice that a particular solution satisfying this boundary condition is $\theta = \phi/2 - \pi\Theta(\phi - \phi_0) + f(r, \phi)$, where $f(r, \phi)$ is a harmonic function, periodic in $\phi$, whose role is to make sure $\theta$ satisfies the anchoring condition at the colloid's surface. Since $\theta$ must not diverge as $r \to \infty$, $f(r, \phi)$ must take the form $f = \sum_{n=0}^{\infty}(a_n\sin(n\phi) + b_n\cos(n\phi))r^{-n}$. By demanding $\theta(R_0, \phi) = \theta_{BC}(\phi)$ we then obtain the solution displayed in Figs. 7(b) and (c), which show, respectively, how the nematic follows the prescribed anchoring and how, viewed from afar, the colloid behaves as a $+1/2$ topological defect. Next, we discuss the case of a Janus colloid immersed in an ordered nematic, i.e., a nematic that, without loss of generality, satisfies the asymptotic condition $\theta(r \to \infty) = 0$ in the lab reference frame. For simplicity, we work in the reference frame of the colloid, in which the homeotropic semicircle (red) points towards the y axis (Fig. 8). In this reference frame, denoted by primed coordinates, the asymptotic condition becomes simply $\theta'(r \to \infty) = \theta'_0$, for some constant $\theta'_0$. In order to go back to the lab frame, we simply rotate the solution by $-\theta'_0$. Fig. 8 presents a diagram of the frame transformation. More importantly, imposing an ordered nematic far away from the colloid implies that the net topological charge in the system must be zero. However, because we know that the colloid itself has topological charge $+1/2$, this necessarily implies that there must be a companion $-1/2$ defect somewhere in the system, say at a radial distance $R_D$ and a polar angle $\phi_D$. This naturally divides our system into two regions; see Fig. 7(d). On the one hand, for $R_0 \leq r < R_D$ we have a situation very similar to the colloid in the free nematic: any loop around the colloid must lead to a jump in $\theta$ of $\pi$. As such, we use the same boundary condition as before, with the caveat that in this case $\phi_0 = \phi_D$. That is, the line that in the previous case was arbitrary here determines the angular position of the defect. Therefore, we look for a solution of the form $\theta_I(r, \phi) = \phi/2 - \pi\Theta(\phi - \phi_D) + \sum_{n=0}^{\infty}(a_n\sin(n\phi)r^n + b_n\sin(n\phi)r^{-n} + c_n\cos(n\phi)r^n + d_n\cos(n\phi)r^{-n})$, where now, because this region is finite, we also allow for positive powers of r. On the other hand, for $r > R_D$ the total topological charge encircled by a loop is zero, which implies that our solution should be periodic in $\phi$; i.e., $\theta(r > R_D, \phi_D^-) = \theta(r > R_D, \phi_D^+)$. As such, the solution in this region must have the form $\theta_{II}(r, \phi) = \sum_{n=0}^{\infty}(A_n\sin(n\phi) + B_n\cos(n\phi))r^{-n}$. The coefficients in both $\theta_I$ and $\theta_{II}$ are obtained by imposing the colloid anchoring ($\theta_I(R_0, \phi) = \theta_{BC}(\phi)$), continuity ($\theta_I(R_D, \phi) = \theta_{II}(R_D, \phi)$) and smoothness ($\partial_r\theta_I(r, \phi)|_{r=R_D} = \partial_r\theta_{II}(r, \phi)|_{r=R_D}$). With this, we obtain the solution quoted in the main text. [FIG. 8. Relationship between the colloid frame and the lab frame: in the colloid frame, the colloid is always oriented with the homeotropic (red) semicircle pointing up. As such, the global orientation of the nematic, $\theta'_0$, changes when we change the position of the defect. Since in the lab frame the nematic is oriented along the horizontal, in order to change frames we just need to rotate clockwise by $\theta'_0$. This sets the colloid orientation as $\psi_c = -\theta'_0$. Notice also how the angular position of the defect changes between the two frames.] Appendix B: Interaction Energy The resulting integral expression allows us to isolate the divergent part of the integral in the $\ln(\Lambda/R_0)$ term.
We now proceed to renormalize by subtracting the bare energy, $2F_0/(\pi K) = -\ln(\ldots)$. We then change the variable of integration to $\xi$, defined via $t = e^{2i(\phi_D + \xi)}/\rho^2$, to find the expression from which Eq. naturally follows. Appendix C: Neglecting hydrodynamic stresses and pressure In the main text, leading up to Eq., we neglected all hydrodynamic stresses, effectively considering overdamped active stresses. In this limit, the viscous and passive nemato-hydrodynamic stresses can be neglected in the limit of a vanishingly small hydrodynamic length $\ell_H$. Yet the hydrostatic pressure may not necessarily be small; however, we demonstrate here that it is indeed negligible. The passive pressure P contributes to the force on the colloid through $\mathbf{F}_P = -\oint P\,\hat{\nu}\, ds$. Unfortunately, the pressure contribution cannot be obtained analytically. To numerically estimate the pressure, we write the relevant Stokes equation for the fluid velocity, complemented by the incompressibility condition $\nabla\cdot\mathbf{v} = 0$. Using incompressibility in the Stokes equation leads to a Poisson equation for the pressure. The boundary condition associated with this equation comes from demanding an impenetrable colloid, i.e., $\hat{r}\cdot\mathbf{v} = 0$. This leads us to the following Neumann boundary condition for the pressure: $\hat{r}\cdot\nabla P = -\zeta\,\hat{r}\cdot(\nabla\cdot\mathbf{Q})$. We solve these equations using finite elements and use the solution to obtain an estimate for $\mathbf{F}_P$. The results show that, although this force is different from zero, it is much smaller than the active contribution, as it satisfies $|\mathbf{F}_P|/|\mathbf{F}_A| \sim 10^{-2}$, allowing us to neglect it. |
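As a complementary numerical check of the main-text claim that the active traction integrates to a net propulsive force, one can quadrature $\mathbf{t} = -\zeta\mathbf{Q}\cdot\hat{\nu}$ around the surface for any anchoring profile carrying a $\pi$ winding. The profile below is the illustrative stand-in used earlier, and $\zeta$ and the scalar order S are set to 1; only the qualitative result (nonzero force along the symmetry axis, zero perpendicular component) matters here.

```python
import numpy as np

R0, zeta, S = 1.0, 1.0, 1.0
phi = np.linspace(0.0, 2.0*np.pi, 2000, endpoint=False)
dphi = phi[1] - phi[0]

# Illustrative anchoring angle with a pi winding over one full turn
theta = np.interp(phi, [0, np.pi/2-0.3, np.pi/2+0.3,
                        3*np.pi/2-0.3, 3*np.pi/2+0.3, 2*np.pi],
                       [0, 0, np.pi/2, np.pi/2, np.pi, np.pi])

# Q = S (n n - I/2): Qxx = (S/2) cos(2 theta), Qxy = (S/2) sin(2 theta)
Qxx, Qxy = 0.5*S*np.cos(2*theta), 0.5*S*np.sin(2*theta)
nux, nuy = np.cos(phi), np.sin(phi)          # outward surface normal

tx = -zeta*(Qxx*nux + Qxy*nuy)               # traction components, t = -zeta Q.nu
ty = -zeta*(Qxy*nux - Qxx*nuy)               # uses Qyy = -Qxx (traceless Q)

Fx = R0*np.sum(tx)*dphi                      # F = R0 * integral of t dphi
Fy = R0*np.sum(ty)*dphi
print(Fx, Fy)  # Fx is finite, Fy vanishes by symmetry: net self-propulsion
```

For the pure $+1/2$-defect profile $\theta = \phi/2 + \theta_0$, the integrand is constant and the net force is exactly $-\pi\zeta S R_0(\cos 2\theta_0, \sin 2\theta_0)$, which is the standard statement that $+1/2$ structures are motile.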
Development of a Wearable Haptic Device that Presents the Haptic Sensation Corresponding to Three Fingers on the Forearm Numerous methods have been proposed for presenting tactile sensations from objects in virtual environments. In particular, wearable tactile displays for the fingers, such as fingertip-type and glove-type displays, have been intensely studied. However, the weight and size of these devices typically hinder the free movement of the fingers, especially in a multi-finger scenario. To cope with this issue, we have proposed a method of presenting the haptic sensation of the fingertip to the forearm, including the direction of force. In this study, we extended the method to three fingertips (thumb, index finger and middle finger) and three locations on the forearm using a five-bar linkage mechanism. We tested whether all of the tactile information presented by the device could be discriminated, and confirmed that the discrimination ability was about 90%. Then we conducted an experiment to present the grasping force in a virtual environment, confirming that the realism of the experience was improved by our device, compared with the conditions with no haptic or with vibration cues. |
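The abstract above names a five-bar linkage as the actuation mechanism. As a hedged illustration of how such a linkage maps two motor angles to one contact point, the following sketch solves the forward kinematics of a symmetric planar five-bar by circle intersection; all link lengths and angles are invented for the example and are not the device's actual dimensions.

```python
import numpy as np

L1, L2 = 0.05, 0.07   # proximal and distal link lengths [m] (assumed, symmetric)
BASE = 0.06           # distance between the two motor axes [m] (assumed)

def five_bar_fk(q1, q2):
    """End-effector position of a symmetric five-bar given motor angles [rad]."""
    p1 = np.array([0.0, 0.0]) + L1*np.array([np.cos(q1), np.sin(q1)])  # elbow 1
    p2 = np.array([BASE, 0.0]) + L1*np.array([np.cos(q2), np.sin(q2)]) # elbow 2
    d = np.linalg.norm(p2 - p1)
    if d > 2*L2 or d == 0.0:
        raise ValueError("unreachable configuration")
    mid = 0.5*(p1 + p2)
    h = np.sqrt(L2**2 - (d/2)**2)                 # distance from midpoint to tip
    perp = np.array([-(p2 - p1)[1], (p2 - p1)[0]]) / d
    return mid + h*perp                           # take the 'elbow-up' branch

print(five_bar_fk(np.deg2rad(100), np.deg2rad(80)))
```

Inverting this map (and adding a force model) is what such a device's controller would use to place the contactor at a commanded point on the forearm.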
Stable isotopic evidence of nitrogen sources and C4 metabolism driving the world's largest macroalgal green tides in the Yellow Sea During recent years, rapid seasonal growth of macroalgae covered extensive areas within the Yellow Sea, developing the world's most spatially extensive green tide. The remarkably fast accumulation of macroalgal biomass is the joint result of high nitrogen supplies in Yellow Sea waters, plus the ability of the macroalgae to facultatively use C4 photosynthetic pathways that facilitate rapid growth. Stable isotopic evidence shows that the high nitrogen supply is derived from anthropogenic sources, conveyed from watersheds via river discharges, and by direct atmospheric deposition. Wastewater and manures supply about half the nitrogen used by the macroalgae; fertiliser and atmospheric deposition each furnish about a quarter of the nitrogen in macroalgae. The massive green tides affecting the Yellow Sea are likely to increase, with significant current and future environmental and human consequences. Addressing these changing trajectories will demand concerted investment in new basic and applied research as the basis for developing management policies. Concentrations of ammonium and nitrate in coastal waters surrounding China, and the Yellow Sea region in particular, have increased dramatically during recent decades; for example, mean nitrate concentrations increased 7-fold between 1985 and 2010 17. Concentrations of ammonium and nitrate discharged by rivers into the Yellow Sea are quite high (Table 1). Even in the open Yellow Sea, concentrations of dissolved inorganic nitrogen are 10-80 µM near the coast, and 0.7-5.8 µM offshore 7. Other reports confirm that concentrations of coastal dissolved inorganic nitrogen range from 7.4-95 µM 18, and 1-15 µM offshore 10. Such high concentrations of nitrogen must stimulate the onset, and maintain the development, of the Ulva bloom. N/P values are high, ranging from 48/1 to 259/1. These values are considerably above the 16:1 Redfield ratio, suggesting that there is sufficient nitrogen that development of green tides in the Yellow Sea later in the season may be limited by phosphorus supply 10,14,17,19, or not be nutrient-limited at all 20. The alarming rise of eutrophication of Chinese coastal waters follows from remarkable increases in nitrogen loads, transported by rivers and by direct atmospheric deposition. Increasingly, watersheds discharge nitrogen from wastewater disposal, fertiliser and manure use, and atmospheric deposition on land into rivers 17. The increased concentrations and loads borne by rivers then translate into increased discharges to the Yellow Sea. For instance, discharges of nitrogen from the Yangtze River increased by 135% between 1980 and 2010 26. Direct atmospheric deposition on coastal waters has also increased 19,27,28, and may be involved in a tripling of the nitrate concentration in the Yellow Sea west of Korea 19. Such recent increases in fluvial and atmospheric contributions have skewed coastal nutrient concentrations in Chinese coastal waters toward larger values than those found in seawater across other coastal regions of the world, particularly in the case of nitrate (Fig. 3).
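The Redfield comparison above is simple stoichiometric arithmetic; a small worked example follows, using DIN values within the coastal range quoted in the text and an assumed DIP concentration, since no phosphate measurements are given here.

```python
# Molar DIN:DIP against the 16:1 Redfield benchmark. DIN values are within
# the 10-80 uM coastal range quoted above; DIP is an assumed illustration.
din_umol = [10, 40, 80]   # dissolved inorganic nitrogen, uM
dip_umol = 0.5            # dissolved inorganic phosphorus, uM (assumed)

for din in din_umol:
    ratio = din / dip_umol
    side = "P-limited side" if ratio > 16 else "N-limited side"
    print(f"DIN {din} uM -> N:P = {ratio:.0f}:1 ({side} of Redfield)")
```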
The literature 7,19,25 reflects broad agreement that, in general, in the Yellow Sea and its watersheds, human activities have increased atmospheric nitrogen deposition; that applications of agricultural fertiliser are excessive (more than half of the added fertiliser nitrogen is not used by crops, and is released into the environment); that rivers transport nitrogen from human and animal wastes, fertiliser use, and atmospheric deposition on watersheds to coastal waters; that direct atmospheric deposition on the Yellow Sea is significant; and that natural biological sources, such as fixation of nitrogen within the coastal environment, are much smaller than anthropogenic contributions. There is also little doubt that human and animal waste materials, fertilisers, and atmospheric nitrogen deposition all support the remarkable green tides of the Yellow Sea. In contrast, there are substantial disparities in published estimates of the magnitudes of contributions by different sources to nitrogen budgets of Chinese coastal waters. For example, some references conclude that rivers contribute 52% of nitrogen inputs to the Yellow Sea, while direct atmospheric deposition on the sea surface adds 42% 35. Other references provide yet more differing estimates of atmospheric deposition to the Bohai Sea (Fig. 4), but focus on high deposition of NH4 and lower deposition of NO3 36,37. Others argue that river discharge of nitrogen is much smaller than atmospheric deposition 23, while still others aver that rivers may carry perhaps 62% of nitrogen loading into the Yellow Sea, with direct atmospheric deposition adding about 36%, and mariculture adding only about 2% 17. The latter is much smaller than an estimate that mariculture activity may be of the same magnitude as atmospheric deposition 27. Others conclude that increasing discharge of animal manure to rivers is the major cause of eutrophication 24. Such disparities among published assessments of the relative magnitudes of different terms emerge from differences in the processes and inputs considered, the forms of nitrogen included, and contrasts in the area, estuary, or region studied. Calculated magnitudes of contributions by different sources differ enough to challenge comprehensive synthesis of nitrogen budgets, and to yield confusing interpretations as to the relative importance of the drivers governing the green tide phenomenon. We add, in passing, that nitrogen load estimates for the region largely omit consideration of two features that have been found to be important elsewhere. First, contributions via groundwater flow for Chinese coastal sites are significant 38,39, and may reach nutrient discharges equivalent to river fluxes 40. Second, inputs and dynamics of dissolved organic nitrogen are large for Chinese river discharges 9,41. Both of these aspects merit more attention, as they are likely to be of a quantitative magnitude that changes perspectives on nutrient loading and transformations. While it is difficult to make mass-flow comparisons based on the published data, stable isotope analyses can furnish an empirical check on the relative magnitudes of contributions from the various sources of nitrogen. Measuring stable isotopic values of macroalgae has been widely used to partition such contributions. In this paper, we first use stable carbon isotopic signatures of the macroalgae to see if they furnish insight into the remarkably fast growth of macroalgae in the Yellow Sea.
The carbon isotope signatures were also used to examine the relationship between C3 and C4 metabolic pathways in the macroalgae. Stable isotopic nitrogen signatures of macroalgae collected from a series of Yellow Sea stations during a green tide event (Fig. 4) were examined to ascertain the degree to which different sources (human and animal wastes, fertilisers, and atmospheric deposition) are responsible for the high supply of available nitrogen, and hence for the green tides in the Yellow Sea. Results and Discussion The δ13C values we measured in the macroalgae span a considerable range (Fig. 5). This is an unusual δ13C range for macroalgae, but similar values have been reported by others (Table 2). In general, producers-plants and algae-found in coastal aquatic environments carry out carbon fixation via the C4 or the C3 metabolic pathways 47. These pathways are characterised by differences in biochemistry and architecture 48, particularly internal air spaces where CO2 and carbonate may be re-used. Presence of C3 or C4 metabolism is associated with relatively constrained values of δ13C (Table 2). Most algae use the C3 pathways for fixing carbon and have corresponding δ13C values; the contrasting range of δ13C found in Ulva in the Yellow Sea and elsewhere (Table 2) is therefore in need of explanation. It turns out that C3 and C4 photosynthetic pathways co-occur in Ulva, as demonstrated by transcriptome sequencing that revealed the presence of C4 and C3 genes, as well as by the presence and activity of enzymes involved in C4 metabolism 49,50. These results confirm earlier findings that C4 and C3 characteristics co-occur in certain producers 51, and that C4 metabolism evolved in different plant groups, and at different geological periods 52, rather than exclusively in grasses in the late Miocene. The aspect of C4 metabolism most relevant here is that this pathway minimises photorespiration, increases photosynthetic efficiency, raises nutrient uptake efficiency, and favours high rates of photosynthesis-even where CO2 concentrations are low 53,54. These features potentially confer high rates of growth and productivity, which can in turn lead to the ability to accumulate biomass at much faster rates than possible with C3 pathways. (Figure caption: for comparison, we added the range of δ13C values for algae with C3 and C4 photosynthesis (along the x axis) 76, and ranges of δ15N values derived from human wastewater and livestock manures, fertilisers, and atmospheric deposition; stable isotopic range data from sources in Table 4.) It has been speculated that the remarkably fast development of green tides in the Yellow Sea may be supported by sustained release of propagules from maricultural rafts 55 and a high supply of nutrients 14,17. We conjecture that the fast growth associated with the ability to carry out C4 metabolism-revealed by the isotopic evidence in Fig. 5-might be an additional and important explanation, particularly adapted for fast growth in waters where there is a high supply of available nitrogen. It seems likely that macroalgae other than Ulva might be able to carry out combinations of C3 and C4 metabolism. The brown macroalga Sargassum showed δ13C values ranging from −24 to −14‰ (Fig. 5), and there are many reports of ranges of δ13C that span values between those typical of C3 and C4 metabolism. These results suggest that the co-occurrence of C3 and C4 carbon fixation pathways might be widespread among macroalgae (Table 2). There is a further implication of the confirmation that C4 and C3 metabolism co-occur in certain macroalgae.
As atmospheric CO2 rises in coming decades, more CO2 will be stored in the oceans. If nutrient supply increases, macroalgae metabolically pre-adapted to efficiently fix carbon seem therefore likely to proliferate, much as they have done in the Yellow Sea, across other nutrient- and carbon-enriched coastal waters of the world 56,57. A number of potential biogeochemical and ecological consequences might ensue. There are also concerns about the potential for green tides to foster wholesale shifts in the composition of producers (and of food webs) in the water column 58. Competition for nutrients with other producers, such as phytoplankton, seems implausible, as concentrations of dissolved inorganic nitrogen remain high in the Yellow Sea through the growing season. It seems more likely that in the Yellow Sea competition with phytoplankton might be mediated by shading by macroalgal canopies. There may be increased delivery of carbon to deeper layers of the sea as green tides senesce and biomass sinks. The shifts in metabolism in the producers, associated with high nitrogen supply, might therefore extend to altering food webs in the Yellow Sea and other similarly affected ecosystems. Testing of such possibilities will be of interest. (Table 2 caption: δ13C (mean ± se) in C3 and C4 producers, and ranges of δ13C in fronds of selected macroalgal species collected from different locations.) The δ15N values of macroalgae collected from the stations in Fig. 4 span a range of 4 to 11‰ (y axis in Fig. 5). The range we measured in Ulva reasonably overlaps values in other reports for green, brown, and red macroalgae (Table 3). The δ15N values we measured in the Yellow Sea U. prolifera reflect uptake of inputs of nitrogen entering these coastal waters, plus within-estuary biochemical nitrogen cycle transformations. For the Yellow Sea, the nitrogen inputs are, to a degree, better known than the internal biogeochemical transformations. Here we therefore focus on the nitrogen inputs. To interpret the distribution of points in Fig. 5, we compare the position of measurements from the samples in relation to reasonably well-established bounds reported for stable isotopic values of nitrogen in nitrate (Table 4). These bounds are shown in Fig. 5. We highlight major external anthropogenic sources of N, including human wastewater and animal manures (the latter is likely a smaller contribution, since in our unpublished review of the fate of manure N in watersheds, we found that only 3.7% of manure N reaches receiving coastal waters), fertilisers, and atmospheric deposition. Other sources of nitrogen, which are likely to make lesser contributions to the Yellow Sea, include river transport of soils and sediments, nitrogen fixation, inputs from the extensive mariculture industry in the region, and contributions of "natural" nitrogen from upwelled deeper water or wandering Kuroshio Stream sources. The nitrogen brought into the Yellow Sea by sediments and soils holds a mix of what was introduced by use of fertilisers, disposal of wastes, and atmospheric deposition, all on watersheds. The intermediate values of δ15N for soils (Table 4) reflect that mix of sources. To avoid possible double-accounting of these sources, we did not consider nitrogen in soil particles brought into the sea by river transport. There is evidence of some nitrogen fixation from the detection of diazotrophic microorganisms in the Yellow Sea water column 59. In earlier studies, nitrogen fixation only amounted to 6% of the nitrogen inputs to the Yellow Sea.
Since then, available ammonium has increased in the water column, which should further depress the contribution by nitrogen fixation 60; hence, we ignored fixation here. Nitrogen inputs from mariculture activities need to be considered in the context that there is a countering removal of nitrogen inherent in the industrial-level macroalgal culture in the Yellow Sea 17,27,61. For the Chinese coast as a whole, by 2010, shell- and fin-fish culture may have released 0.2 × 10^6 tons of nitrogen per year 61. The nitrogen removed in harvests of macroalgal crops reached 2 × 10^6 tons in 2014 60. Clearly, maricultural efforts are a net remover of nitrogen from the Yellow Sea and might be an effective management counter to increased nitrogen loadings. In regard to upwelled N sources, we found no measurements of δ15N in deep-layer nitrate within the region, but it has been reported that δ15N of particulate organic matter (POM) from deeper layers ranged from 3.1 to 5.8‰ 65, which should be somewhat heavier than that of the nitrate taken up by the POM. δ15N signatures of nitrate upwelled from deeper layers should therefore be lighter than those of nitrate derived from wastewater discharged from watersheds into the sea. Possible nitrate inputs from Kuroshio wanderings have been discussed 66, but river flow seemed to be the dominant source of nitrate in the localities where the green tide bloom started during the growing season (Fig. 2). The overwhelming dominance of fluvial N transport to the Yellow Sea is corroborated by many papers 18. Moreover, during warmer seasons in the Yellow Sea, vertical stratification is marked, as evident in sigma-t and salinity profiles, constraining upwelling of deeper layers so that upward transport of heavy N may not be a dominant mechanism 66. In addition, the macroalgae discussed here float in roughly the upper half meter of the water column. This is also the layer most affected by river flow, with the lowest salinities. It is therefore not at all surprising that isotopic signatures of the floating green tides reflect fluvial inputs rather than inputs from deeper Yellow Sea layers or Kuroshio sources. The distribution of δ15N values measured in samples of macroalgae from the Yellow Sea ranges from about 2 to 11‰ (Fig. 5). For both U. prolifera and S. horneri, the range of isotopic values reasonably matches values of δ15N of nitrate contributed by rivers (Table 1), which seems reasonable because fractionation during uptake by algae is minimal, perhaps adding only 1‰ to the signature of the source. The lower isotopic values appear to be a result of uptake of nitrogen originally from fertiliser use and atmospheric deposition. The upper ranges of isotopic values in the macroalgae were considerably higher than would be expected if nitrogen from fertiliser and atmospheric deposition had been the main sources. In fact, most points in Fig. 5 fall in an intermediate region between those found in human and animal waste, and those characteristic of fertiliser and atmospheric deposition values (Table 4). This implies that a mix of these nitrogen sources was taken up by the macroalgae. To obtain approximate estimates of the relative contributions from the most likely and distinguishable sources (human and animal wastes, fertilisers, and atmospheric deposition), we used IsoSource, a stable isotope mixing model 72.
To simplify the calculation, we entered the mid-point of the range of δ15N for each of the three sources (Table 4) and calculated the % contribution of nitrogen in Ulva and Sargassum for the samples included in Fig. 5. The values for wastewater (and, to a rather smaller extent, manures) differ clearly from those of fertiliser and atmospheric deposition. The latter two sources bear similar signatures, which impairs their partitioning. The salient result from the IsoSource partition, however, is that about half the N taken up by both species of macroalgae derived from anthropogenic wastewater in the Yellow Sea as a whole. Fertilisers and atmospheric deposition each may have added a quarter of the N found in the macroalgae (Fig. 6). The proportions of waste N uptake were larger (about 60%) in the more coastal regions of Qingdao and Subei (Fig. 4), perhaps suggesting that near-shore environments are more subject to waste disposal effects. The substantial degree of eutrophication in Chinese rivers and coastal waters, made strikingly evident by the massive green tides, suggests two high priorities. First, developing management policies to address the issues will need greater understanding of basic aspects. While we show above that certain sources of nitrogen seem important, examination of the literature shows that actual estimates of mass fluxes from different sources, and of flows through rivers, groundwater, and atmospheric deposition, differ from one publication to another. Concerted and critical synthesis of published and new work is needed to constrain the estimates into a comprehensive context. Better quantified knowledge of the sources, and transport routes, will go a long way to suggest how best to target approaches to manage nitrogen loads. Second, while work on developing approaches to manage N loads is taking place, it seems also important to assess the various basic and applied effects of the green tides. One obvious aspect is to understand the economic, social, and infrastructural costs; another might be to develop work aiming at understanding the environmental effects on the Yellow Sea ecosystem. These might include major current and future changes in biodiversity, food webs, economically important shell- and fin-fish stocks, and biogeochemistry, exerted by the remarkable re-routing of nutrient fluxes, carbon production and sinking, shading, and most likely other still-unknown changes. (Table 4 note, compiled from 44,70,80: these compilations included some of the same sources of information. To quantify inorganic fertilisers, as available, we mainly used isotopic values for ammonium-based fertilisers, because Chinese farmers primarily use ammonium and urea rather than nitrate fertilisers 29, and urea released into aquatic environments is rapidly hydrolysed to ammonium. The ranges of isotopic signature for human waste and animal manures overlap so closely that we combined both into a single range. It would be of interest to find ways to separate these two sources, since mass balance estimates of the relative magnitude of these sources do not agree 24,25,29. The soils and sediments transported by rivers are not a clear example of an input, since they carry nitrogen that was delivered by atmospheric deposition, fertilisers, wastewater and manures, and other inputs. This item is included here to show that its δ15N range is intermediate, as befits a bearer of a mix of nitrogen from different sources.)
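The IsoSource partitioning step lends itself to a compact illustration. Below is a minimal sketch of the feasible-mixture search that IsoSource performs, assuming three hypothetical source δ15N mid-points and a made-up sample value; none of the numbers are the paper's Table 4 values.

```python
# Toy three-source mixing sketch in the spirit of IsoSource: grid-search
# source fractions summing to 1 whose mixture matches a measured d15N.
# Source mid-points, sample value, and tolerance are all placeholders.
import itertools

sources = {"waste": 12.0, "fertiliser": 0.0, "deposition": -2.0}  # d15N, permil
measured = 6.0   # d15N of a macroalgal sample, permil
tol = 0.5        # acceptance tolerance, permil
step = 0.01

feasible = []
for fw, ff in itertools.product([i * step for i in range(101)], repeat=2):
    fd = 1.0 - fw - ff
    if fd < 0:
        continue
    mix = (fw * sources["waste"] + ff * sources["fertiliser"]
           + fd * sources["deposition"])
    if abs(mix - measured) <= tol:
        feasible.append((fw, ff, fd))

# Summarise the feasible combinations, as IsoSource does.
n = len(feasible)
means = [sum(f[i] for f in feasible) / n for i in range(3)]
print(dict(zip(sources, means)))
```

IsoSource reports statistics over all feasible source combinations rather than a single solution, which is what the final summary mimics.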
Methods Field surveys from a small craft were carried out during May-June 2017, sampling macroalgal biomass at a series of stations in Jiangsu and Qingdao coastal waters, and aboard RV Su88 across the western Yellow Sea, covering a region of 32.67-33.67°N and 120.50-122.70°E (Fig. 4). Samples of the macroalgae collected were dominated by a green (Ulva prolifera) and a brown (Sargassum horneri) species. Macroalgal samples were briefly rinsed in filtered seawater and frozen. The frozen macroalgal samples were dried in a Christ ALPHA 1-4 LSC freeze dryer and ground in the East China Normal University laboratory in Shanghai. Samples of ground macroalgae were then shipped to the Stable Isotope Laboratory at the Marine Biological Laboratory in Woods Hole, MA to be analysed for carbon and nitrogen content, as well as carbon, nitrogen, and sulphur stable isotope signatures. The stable isotopic values were determined using a Europa 20-20 Continuous-Flow Isotope Ratio Mass Spectrometer system. All data analysed during this study are included in this article. Any further inquiries can be directed to the authors.
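For readers unfamiliar with the delta notation used throughout, δ15N expresses a sample's 15N/14N ratio relative to atmospheric N2 in parts per thousand. A small helper, assuming a hypothetical measured isotope ratio:

```python
# Delta notation: d15N (permil) relative to atmospheric N2.
R_AIR = 0.0036765  # 15N/14N of atmospheric N2 (the standard)

def delta15N(R_sample):
    """Convert a measured 15N/14N ratio to d15N in permil."""
    return (R_sample / R_AIR - 1.0) * 1000.0

print(delta15N(0.0036950))  # hypothetical ratio -> roughly +5 permil
```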
One Nationality or Two? The Strange Case of Oppenheimer v. Cattermole OVER fifty years ago Russell J. laid down the basic principle governing questions of foreign nationality in English law, when he said in Stoeck v. Public Trustee: "Whether a person is a national of a country must be determined by the municipal law of that country." Many cases have applied this principle, but whether, and under what circumstances, an English court can refuse to give effect to a foreign law affecting nationality is a question on which authority in English law is scanty. No less obscure is the answer to the question whether, if a foreign nationality law is denied effect in English law, the English court may also decline to recognise its effect within the foreign system. The interest of the case of Oppenheimer v. Cattermole lies in its posing of these two questions, and the very different answers supplied by the learned judge at first instance and the members of the Court of Appeal.
Performance Analysis and Improvements of TCP Downstream in a Heterogeneous Network In an infrastructure network based on IEEE 802.11a/b/g wireless LAN, heterogeneous hosts such as desktop PCs and PDAs can be connected. We measured the performance of data transmission between these heterogeneous hosts. When a PDA as a mobile host is used for downloading data from its stationary server, i.e., a desktop PC, the PC and the PDA act as a fast sender and a slow receiver, respectively, due to substantial differences in their computational capabilities. Our experimental results show that the transmission time during downstream is up to 20% longer than that during upstream. To mitigate this, we present two distinct approaches. First, increasing the size of the PDA's receive buffer makes the TCP congestion window size more stable. Second, a pre-determined delay can be imposed between packets transmitted at the sender. From the performance point of view, buffer sizing is preferable to adjusting the inter-packet delay; however, such a delay remarkably reduces the number of erroneous packets.
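As a rough illustration of the two mitigations, the sketch below enlarges a receiver's TCP buffer and paces a sender with a fixed inter-packet delay; the buffer size, delay value, and helper name are illustrative choices, not figures from the paper.

```python
import socket
import time

# Receiver side (e.g., a PDA-class slow host): enlarge the TCP receive
# buffer. The 256 KiB value is illustrative, not taken from the paper.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 256 * 1024)
print("receive buffer now:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))

# Sender-side alternative: pace packets with a fixed inter-packet delay.
def paced_send(conn, chunks, delay_s=0.001):
    """Send chunks, sleeping a pre-determined delay between packets."""
    for chunk in chunks:
        conn.sendall(chunk)
        time.sleep(delay_s)  # illustrative pacing interval
```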
Hypogonadotropic hypogonadism due to GnRH receptor mutation in siblings. Hypogonadotropic hypogonadism (HH) is characterised by delayed puberty and infertility. Congenital HH comprises Kallmann syndrome with hypo-/anosmia and idiopathic HH (IHH). The genetic origin remains unknown in most cases, but defects in the GnRH receptor gene (GNRHR) account for a considerable proportion of IHH. Here we describe a pair of siblings diagnosed with IHH. Aged 17 years, the boy was referred because of short stature (162 cm) and overweight (62.5 kg). He presented no signs of puberty, a bone age of 14.5 years and insulin resistance. His sister, aged 16 years, also displayed delayed puberty. She was 166 cm tall and weighed 52 kg; her bone age was 12.5 years. Pelvic ultrasonography showed an infantile uterus and fibrous ovaries. In both siblings, serum gonadotropins were extremely low and non-responsive to GnRH. Testosterone (1.38 nmol/l) and IGF1 (273 ng/ml) were decreased in the boy, although the girl did not present IGF1 deficiency. Her serum oestradiol was 10 pg/ml. MRIs of the hypothalamo-pituitary region and olfactory bulbs were normal. The patients' sense of smell was unaltered. Their parents appeared to be first-degree cousins. Considering the clinical data and potentially autosomal recessive HH transmission, the GNRHR gene was screened. The siblings turned out to be homozygous for the G416A transition, which had previously been identified in other HH individuals. The parents were heterozygous mutation carriers. The proband, moderately responding to LH, was started on low-dose testosterone replacement, and his sister on transdermal oestradiol. Molecular data indicative of GnRH resistance could guide their future therapy should they desire fertility restoration. Further observations of the male patient may provide insights into androgen's influence on body mass, growth and insulin sensitivity.
Chemical Fractionation and Phytoavailability of Heavy Metals in a Soil Amended with Metal Salts or Metal-Spiked Poultry Manure Chemical fractionation patterns and plant tissue concentrations were used to assess nickel, copper, zinc, cadmium, and lead phytoavailability to maize in a soil amended with metal salts or poultry manure. A sandy loam was treated with 80-400 mg kg−1 doses of a quinternary mixture of the metal nitrates, either directly or as spiked poultry manure. The European Communities Bureau of Reference sequential extraction procedure partitioned the metals among three operationally defined pools in the soil. Metal mobilities were lower in the poultry manure-amended than in the metal salt-treated soil, indicating the manure's ability to fix the metals in soil. Pot experiments revealed high metal transferabilities with no apparent phytotoxic symptoms in maize at the doses applied, suggesting some degree of tolerance to the metals. Heavy-metal concentrations in maize increased linearly with metal doses in the metal salt-treated soil, but the metals were less phytoavailable in soil amended with poultry manure. Heavy-metal concentrations in maize were reasonably predicted from soil parameters using stepwise multivariate regression models. The findings are useful in the assessment and remediation of heavy metal-contaminated soils.
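A hedged sketch of the kind of stepwise multivariate regression the abstract describes, using forward selection over synthetic soil variables; the data and variable names are placeholders, not the study's measurements.

```python
# Forward-stepwise regression of a tissue metal concentration on soil
# parameters; variables are added while cross-validated R^2 improves.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 4))          # e.g., pH, OM, CEC, extractable Zn (hypothetical)
y = 2.0 * X[:, 3] - 0.5 * X[:, 0] + rng.normal(scale=0.3, size=40)

selected, remaining, best_score = [], list(range(X.shape[1])), -np.inf
while remaining:
    scores = {j: cross_val_score(LinearRegression(),
                                 X[:, selected + [j]], y, cv=5).mean()
              for j in remaining}
    j_best = max(scores, key=scores.get)
    if scores[j_best] <= best_score:
        break                          # no remaining variable improves the fit
    best_score = scores[j_best]
    selected.append(j_best)
    remaining.remove(j_best)
print("selected soil variables:", selected, "CV R^2:", round(best_score, 2))
```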
Effect of Sampling Without Replacement on Isolated Populations of Endemic Aquatic Invertebrates in Central Arizona ABSTRACT We measured population size and density of Pyrgulopsis morrisoni and Heterelmis sp. within a single spring in central Arizona over four sampling periods in 2001 to evaluate the effect of sampling without replacement. Our analysis detected significant differences in total population size across sampling periods. Sampling without replacement caused a transitory decline in the total population size of each organism, though P. morrisoni was again locally abundant the following year. Spring ecosystems are affected by several anthropogenic stressors, and many endemic aquatic invertebrates have been afforded Federal protection. Studies themselves should not contribute additional stress. Until more is known about fecundity, recruitment, and population fluctuations, researchers should employ sampling methods that do not remove significant numbers of individuals.
SMT-Based Cost Optimization Approach for the Integration of Avionic Functions in IMA and TTEthernet Architectures The design of avionic systems is a complex engineering activity. The iterative integration approach helps in controlling the complexity of this activity. On the other hand, using such an approach to design evolving systems requires the reconfiguration of scheduling parameters of already integrated parts. This reconfiguration results in a recertification process whose cost depends on the criticality level of the affected application. We propose a new approach which helps the system designer, at each integration step, in establishing the new scheduling parameters that minimize this cost. In this work, we focus on the Integrated Modular Avionics (IMA) architecture connected through a Time-Triggered Ethernet (TTEthernet) network. We present a formal model for such systems and use this model to define a set of constraints that ensure the real-time requirements. These constraints are expressed in an SMT-based language, and we used the SMT solver YICES to automatically find feasible scheduling parameters that minimize the cost of integration. We show our framework at work by analyzing the iterative integration of some functionalities of the Flight Management System.
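A minimal sketch of the constraint-plus-cost encoding, written with Z3's Python bindings for readability rather than YICES, which the paper actually uses; the frame length, window durations, and recertification costs are hypothetical.

```python
# Tiny partition-scheduling problem with an SMT optimizer: two windows
# must fit in a major frame without overlapping, and changing a
# previously certified offset incurs a criticality-weighted cost.
from z3 import Ints, Optimize, If, sat

MAJOR_FRAME = 100  # hypothetical major frame length (ms)
o1, o2 = Ints("offset_app1 offset_app2")  # window start offsets
old_o1, old_o2 = 10, 40                   # previously certified offsets
d1, d2 = 20, 30                           # window durations (ms)

opt = Optimize()
# Real-time requirements: windows fit in the frame and do not overlap.
opt.add(o1 >= 0, o1 + d1 <= MAJOR_FRAME)
opt.add(o2 >= 0, o2 + d2 <= MAJOR_FRAME)
opt.add(If(o1 < o2, o1 + d1 <= o2, o2 + d2 <= o1))
# Recertification is paid only for applications whose parameters change.
cost = If(o1 != old_o1, 5, 0) + If(o2 != old_o2, 2, 0)
opt.minimize(cost)
if opt.check() == sat:
    print(opt.model())
```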
Adjustment of women and their husbands to recurrent breast cancer. The psychosocial adjustment of women with recurrent breast cancer (N = 81) and their husbands (N = 74) was compared to determine if they report different levels of adjustment, support, symptom distress, hopelessness, and uncertainty. Women with recurrent breast cancer reported more emotional distress than their husbands, but both had a similar number of psychosocial role problems. Women and husbands differed in the amount of support and uncertainty they reported, but not in the levels of symptom distress or hopelessness they perceived. Women, in contrast to their husbands, expressed more surprise that their cancer recurred and found the recurrent phase of illness more distressing than the initial diagnosis.
The use of a Mechatronic Systems Simulator in Engineering Courses This Innovative Practice Work in Progress presents the proposal and the development of a simulator for an Evolvable Production System that aims to represent a complete mechatronic system, simulate its operation, and support the learning of associated subjects. A mechatronic system is composed of mechanical and electronic modules that in turn may be associated with a software layer that is responsible for the intelligence of the entire system. This intelligence comes through interactions among software agents belonging to the software layer. The mechatronic devices are described through Finite State Machines, whose transitions are triggered by mechatronic software agents. The output of the simulator is a list with the set of skill calls, the time at which such calls are made, the respective system's answer, and the set of communication messages exchanged among the agents' peers. By using this simulator in engineering classrooms, it is possible to construct several proposals of mechatronic systems, represent those systems as complete manufacturing processes, and use intelligent software agents to simulate the complete functionality of the designed system. Afterward, with the results obtained from the simulation, students are prepared to implement real systems. The proposed simulator has been used in engineering courses at the Federal University of Amazonas, and the goal of this paper is to describe the characteristics of the simulator and the results of its use in engineering disciplines.
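A minimal sketch of the simulator's core abstraction, assuming hypothetical device, state, and skill names: a finite state machine whose transitions are triggered by skill calls and which logs the call time and the system's answer.

```python
# Device as a finite state machine; agents trigger transitions via
# "skill calls", and the simulator records (time, skill, answer).
class DeviceFSM:
    def __init__(self, name, transitions, initial):
        self.name = name
        self.state = initial
        self.transitions = transitions  # (state, skill) -> next state
        self.log = []                   # list of (time, skill, answer)

    def call_skill(self, time, skill):
        key = (self.state, skill)
        if key in self.transitions:
            self.state = self.transitions[key]
            answer = "ok:" + self.state
        else:
            answer = "rejected"
        self.log.append((time, skill, answer))
        return answer

conveyor = DeviceFSM(
    "conveyor",
    {("idle", "start"): "moving", ("moving", "stop"): "idle"},
    "idle",
)
# An agent triggering transitions:
print(conveyor.call_skill(0.0, "start"))   # ok:moving
print(conveyor.call_skill(1.5, "start"))   # rejected (no such transition)
```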
Cluster Based Mobile Key Management Scheme to Improve Scalability and Mobility in Wireless Sensor Networks The growing demands on Wireless Sensor Networks (WSN) increase the challenges in terms of scalability and mobility. Scalability is important for improving energy efficiency and network lifetime, while mobility helps to improve the reachability of the network. In this paper a new Cluster-based Mobile Key Management Scheme (CMKMS) is proposed. The CMKMS algorithm is used for the management and maintenance of keys in a cluster-based mobile WSN. In this scheme a cluster is formed and a Cluster Head (CH) is selected to act as key manager. The work assumes that sensor nodes and the CH can move from one position to another. The CH manages and maintains the private keys of the sensor nodes. The algorithm also shows low computational overhead and energy consumption. This paper improves the scalability of WSN using mobility-supported key management algorithms.
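A toy sketch of a cluster head acting as key manager for mobile nodes; the key derivation and handoff shown are illustrative, not the CMKMS protocol's actual messages.

```python
# Each cluster head derives and revokes per-node keys; a node that moves
# between clusters is re-keyed by the new cluster head.
import os
import hashlib

class ClusterHead:
    def __init__(self, cluster_id):
        self.cluster_id = cluster_id
        self.master = os.urandom(32)   # per-cluster master secret
        self.node_keys = {}            # node_id -> private key

    def admit(self, node_id):
        # Derive a per-node key from the cluster master secret.
        key = hashlib.sha256(self.master + node_id.encode()).digest()
        self.node_keys[node_id] = key
        return key

    def release(self, node_id):
        # Node moved away: revoke its key in this cluster.
        return self.node_keys.pop(node_id, None)

ch_a, ch_b = ClusterHead("A"), ClusterHead("B")
ch_a.admit("node7")
ch_a.release("node7")   # node7 moves from cluster A...
ch_b.admit("node7")     # ...and is re-keyed by cluster B's head
```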
The influence of the duration of chronic unpredictable mild stress on the behavioural responses of C57BL/6J mice The chronic unpredictable mild stress (CUMS) model of depression in mice is a model commonly used to investigate stress-induced depressive-like behaviours. The duration of the stress-inducing procedure is variable, thus making it difficult to compare results and draw general conclusions from different protocols. Here, we decided to investigate how the duration of the CUMS procedure affects behavioural changes, body weight, as well as the level of plasma corticosterone in stressed and nonstressed C57BL/6J mice subjected to CUMS for 18 or 36 days. We found that 18 days of CUMS induced a robust decrease in grooming time in the splash test and a significant increase in the immobility time in the tail suspension test (TST) and the forced swim test (FST). All of these stress-induced depression-related behavioural effects diminished or even disappeared after 36 days of CUMS. Plasma corticosterone levels were increased in the CUMS mice compared to those in the nonstressed mice. However, this effect was more pronounced in mice stressed for 18 days. On the other hand, a gradual decline in weight loss in the stressed animals was observed as the duration of the CUMS procedure increased. Altogether, the results indicate that 18 days of CUMS did not affect body weight but caused significant behavioural effects as well as a robust increase in corticosterone levels, while 36 days of CUMS induced a significant reduction in weight gain but only slight or even non-significant behavioural effects. These results may indicate the presence of adaptive changes to the long-term CUMS procedure in C57BL/6J mice.
Retrieval from software libraries for bug localization This retrospective on our 2011 MSR publication starts with the research milieu that led to the work reported in our paper. We briefly review the competing ideas of a decade ago that could be applied to solving the problem of identifying the files in a software library related to a query. We were especially interested in finding out if the more complex text retrieval methods of that time would be effective in the software context. A surprising conclusion of our paper was that the reality was exactly the opposite: the more traditional simpler methods outperformed the complex methods. In addition to this surprising result, our paper was also the first to report what was considered at that time a large-scale quantitative evaluation of the IR-based approaches to automatic bug localization. Over the years, such quantitative evaluations have become the norm. We believe that these contributions were largely responsible for the popularity of this paper in the research literature.
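A sketch of the kind of "simpler" retrieval the retrospective refers to: ranking files against a bug report with plain tf-idf and cosine similarity. The file contents and query are placeholders.

```python
# Rank source files against a bug-report query with tf-idf vectors.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

files = {
    "ui/Button.java": "button click handler render label",
    "net/Http.java": "socket timeout retry connection pool",
    "db/Cache.java": "cache eviction stale entry timeout",
}
bug_report = "connection timeout when retrying failed requests"

vec = TfidfVectorizer()
doc_matrix = vec.fit_transform(files.values())
query_vec = vec.transform([bug_report])
scores = cosine_similarity(query_vec, doc_matrix).ravel()
ranked = sorted(zip(files, scores), key=lambda p: -p[1])
print(ranked)  # files most likely related to the bug, best first
```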
Size reproducibility of gadolinium oxide-based nanomagnetic particles for cellular magnetic resonance imaging: effects of functionalization, chemisorption and reaction conditions. We developed biofunctionalized nanoparticles with magnetic properties by immobilizing diethyleneglycol (DEG) on Gd2O3, and by PEGylation of small particulate gadolinium oxide (SPGO) with two methoxy-polyethyleneglycol-silanes (mPEG-silane 550 and 2000 Da), using a new supervised polyol route described recently. In conjunction with the previous study, which aimed at a high-quality synthesis and an increase in the product yield of nanoparticles, the goals of this study were to assess the effects of functionalization, chemisorption and altered reaction conditions, such as NaOH concentration, temperature, reaction time and solubility, on size reproducibility. Moreover, the effects of centrifugation, filtration and dialysis of the solution on the nanomagnetic particle sizes and their stability against aggregation were evaluated. Optimization of reaction parameters led to strong coating of the magnetic nanoparticles with the ligands, which increases the reproducibility of particle size measurements. Furthermore, the ligand-coated nanoparticles showed enhanced colloidal stability as a result of the steric stabilization function of the ligands grafted on the surface of the particles. The experiments showed that DEG and mPEG-silane (550 and 2000 Da) are chemisorbed on the particle surfaces of Gd2O3 and SPGO, which led to particle sizes of 5.9 ± 0.13 nm, 51.3 ± 1.46 nm and 194.2 ± 22.1 nm, respectively. The small size of DEG-Gd2O3 is acceptably below the cutoff of 6 nm, enabling easy diffusion through lymphatics and filtration by the kidney, and thus provides a great deal of potential for further in-vivo and in-vitro application. Introduction Magnetic resonance imaging (MRI) is one of the various medical techniques used in the diagnosis of diseases. The MRI signal depends on T1 (the spin-lattice relaxation time) and T2 (the spin-spin relaxation time). The relaxation times can be manipulated using magnetic compounds/chelates of gadolinium or iron oxides. Complexes of Gd3+ or Mn2+ ions are paramagnetic contrast agents (CA). Coatings are often applied to alter the surface characteristics of magnetite: the core-shell synthesis of magnetic nanoparticles protects their surface from chemical reactions and the magnetic core from oxidation, affects hydrophobic effects and magnetic attractions, increases the cellular uptake rate, and enables the attachment of various therapeutics. Also, the biocompatibility of magnetic nanoparticles depends on the type of surface covering them, as well as on their size. Likewise, coating of nanoparticles can increase the relaxivity and half-life of the CA and protect them from aggregation. The total size of magnetic nanoparticles depends on the thickness of the coating, such that nanoparticles coated with inorganic materials are generally smaller than 100 nm, whereas polymer coating results in larger particles, above 100 nm. As a result, the specific surface of the nanoparticles and the type of coating determine their lipophilicity, surface charge and hydrophilicity. In magnetic liquids prepared predominantly for biomedical applications, the surface charge, established by surface groups or by the charge of the surrounding liquid medium, results in a potential layer for physicochemical interactions.
We developed biofunctionalized nanoparticles with magnetic properties by coating gadolinium oxide with diethyleneglycol (DEG), and by PEGylation of small particulate gadolinium oxide (SPGO) with two different molecular weights of methoxy-polyethyleneglycol-silane (mPEG-silane 550 Da and mPEG-silane 2000 Da), through a new supervised polyol route introduced recently by this group. These types of NMPs are important in biosystems and are expected to show higher contrast enhancement than commercially available CAs for MRI, such as Gd-DTPA. In conjunction with the previous observations, aimed at a high-quality synthesis and an increase in the product yield of NMPs, assessment of the effects of functionalization, chemisorption and altered reaction conditions on size reproducibility, for increasing stability against aggregation, was determined as the goal of this study. Also, a thorough description of the synthesis methods along with their chemical schemes is presented. Paramagnetic Gd-based agents cause brightening of MR images (positive contrast agents). On the other hand, iron oxide particles (superparamagnetic contrast agents) darken MR images and are known as negative contrast agents. Gd3+-based agents, with seven unpaired f-electrons, are widely chosen for their large magnetic moment and applicability to different organs such as liver, spleen and lungs, whereas iron oxide is specific to the liver. Despite its good magnetic properties, the free Gd3+ ion is extremely toxic. To reduce the toxicity, Gd3+ is usually complexed with strong organic chelators, e.g. diethylenetriaminepentaacetic acid (DTPA), which is used conventionally in daily MRI examinations. Due to the intrinsically low sensitivity of MRI, high local concentrations of the CA at the target site are required to generate higher image contrast. In addition, targeted MRI CAs should recognize targeted cells with high sensitivity; materials such as nanomagnetic particles (NMPs) can fill this role. These particles should be biocompatible and have proper size and surface properties for optimum effect (nanoparticles used in MRI are about 3 to 350 nm in size). Therefore, the physicochemical properties of nanoparticles, whether polymeric or lipidic, determine their efficiency. Nanoparticle surface modification with various coating materials is of utmost importance to prevent nanoparticle aggregation, decrease toxicity, and increase solubility and biocompatibility. In recent years, an extensive amount of experimental work has focused on the synthesis and surface modification of nanoparticles with high-sensitivity characteristics. However, the mechanisms of chemical synthesis, particle growth during formation, stability and reproducibility are still a challenge and require repetitive, accurate and cumbersome measurements. For this reason, it is quite important to develop methods to increase the product yield of nanomagnetic particles, control the shell thickness, and eliminate large particles. Many of these coating materials typically involve some kind of polyethylene glycol (PEG) molecule or DEG. In the case of PEG, an intervening silane layer is often used for attaching the molecule to the nanoparticle.
Furthermore, PEG with various chain lengths has been used. After optimization of reaction parameters, we evaluated the magnetic properties by relaxometric measurements of the three contrast agents with different core-shells and molecular weights, in comparison with previous reports of PEGylation with larger molecular weights of 6000 Da and with conventional Gd-DTPA. Physicochemical characterization The particle sizes of the nanocrystals were determined by dynamic light scattering (DLS, Brookhaven Instruments, USA), and the measurements were repeated at different time intervals. Additionally, in the study of the chemical reaction between DEG and Gd2O3, the sizes of the nanoparticles were measured as functions of the OH− concentration of the solution, temperature and reflux time. The morphology of the NMPs was examined by transmission electron microscopy (TEM, CM120 model, Koninklijke Philips Electronics, Netherlands). The chemical characteristics and reaction completeness of the Gd2O3-DEG nanoparticles prepared by the supervised polyol method, before and after dialysis and centrifugation, and the chemical binding of mPEG-silane to the SPGO, were investigated using a Fourier transform infrared (FTIR) spectrophotometer (Tensor 27, Bruker Corp., Germany). FTIR spectra were obtained within the range of 4000-400 cm−1 at room temperature (26 ± 1 °C). The saturation magnetization and superparamagnetic characteristics of SPGO, Gd2O3-DEG and SPGO-mPEG-silane (550 and 2000 Da) were measured by vibrating sample magnetometer (VSM, 7400 model, Lakeshore Cryotronics Inc., OH, USA). Finally, signal intensity and relaxivity measurements were performed using a GE 1.5-T MRI scanner (General Electric, WI, USA). Synthesis of Gd2O3-DEG nanocrystals NaOH solutions were prepared by dissolving different amounts (0.3, 0.5 and 0.7 mM) of solid NaOH in 5 mL DEG, sonicated and/or shaken for 4 hours to obtain a clear solution, and kept in proper condition for the subsequent steps. GdCl3·6H2O was prepared by dissolving 2 mmol Gd2O3 (not nano-sized) in 1 mL HCl. On the day of the experiment, in a small reaction balloon, 0.9 mmol of GdCl3·6H2O was dissolved in 5 mL DEG by heating the mixture to 140 °C. After obtaining a clear solution, 5 mL of NaOH solution was added, and the temperature was raised to 180 °C for 4 h under reflux and magnetic stirring, leading to a dark yellow colloid. The solution was cooled, and the formed nanocrystals were separated and filtered using centrifuge filtration at 2000 rpm (filters: polyethersulfone, 0.2 μm, Vivascience Sartorius, Hannover, Germany) for 30 min at 40 °C to remove agglomerations or large particles. The colloidal DEG-Gd2O3 was dialyzed against deionized water for 24 h using a dialysis membrane (1000 MW, benzoylated dialysis tubing, Sigma-Aldrich, USA) to eliminate free Gd3+ ions and excess DEG. These parts of the work were not included in our previous study. The reaction scheme of capping Gd2O3 nanoparticles with DEG is shown in Figures 1 and 2. Synthesis of SPGO-mPEG nanocrystals To synthesize SPGO-mPEG nanocrystals, 1 g of Gd2O3 (<40 nm) and 15 mg mL−1 mPEG-silane (MW 550 or MW 2000) were mixed in 10 mL of deionized water and sonicated for 2 h at 40 °C. Agglomerations or large particles which remained were eliminated in a similar fashion as above (i.e. centrifuged for 30 min at 2000 rpm and 40 °C).
The PEG-SPGO colloidal suspension was dialyzed in two separate steps: first, using cylindrical dialysis membranes (1000 MW) as described for the DEG-Gd2O3 synthesis, free Gd3+ ions were eliminated. Second, a cellulose membrane (12000 MW, cellulose dialysis tubing, Sigma-Aldrich, USA) was employed in a separate tank for another 24 h to remove the excess of free ligand. Magnetic stirring was applied to increase the circulation through the dialysis membrane, ensuring the efficiency of the dialysis process. The capping reaction of Gd2O3 nanoparticles with mPEG-silane is shown in Figure 3. Figure 4 compares the FTIR spectra of Gd2O3 powder, pure DEG, and the prepared Gd2O3-DEG nanocrystals before and after centrifugation and dialysis, in which characteristically different bands of the ligands were detected. The bands in pure DEG (spectrum b) at 2876 and 1460 cm−1 correspond to symmetric stretching and bending of CH2. The band at 1127 cm−1 corresponds to the C-O stretch, and the broad band of the O-H stretch was observed in the 3100-3500 cm−1 range. FTIR spectra b and c showed similar bands before the supervised polyol route was applied. After centrifugation and dialysis, however, the FTIR band corresponding to DEG in the range of 1060-1130 cm−1 diminished and shifted from 1127 to 1120 cm−1 (C-O). Furthermore, the bands at 1460 and 2876 cm−1 (CH2) diminished as well (Figure 4d). Figure 5 compares the results of FTIR spectroscopy for the two different mPEG-silane polymers, SPGO and the PEGylated SPGO nanoparticles (550 and 2000 Da). The spectrum of the PEG 550 Da (Figure 5a) showed characteristic peaks at 1284, 1627, 1107, 2876, 1458 and 3100-3500 cm−1. Some of the strong absorptions of PEG are assigned to the -CH2CH2- symmetric stretching and bending around 2876 and 1458 cm−1, which demonstrate the presence of saturated carbons -(CH2CH2)n-. The peak at 1284 cm−1 corresponds to the Si-C stretching vibration. The bands at 1627 and 1107 cm−1 correspond to the C=O stretching vibration and the C-O ether stretching vibration, respectively. The band at 1551 cm−1 corresponds to the -NH bending vibration in the amide located between the silane and the PEG. Noticeably broad bands in the 3100-3600 cm−1 region indicate exchangeable protons in N-H. Spectra d and e (Figure 5) belong to the SPGO-PEG nanoparticles, whereas spectrum c (Figure 5) belongs to SPGO before adding PEG. As can be seen, pure SPGO possesses characteristic peaks at 850 and 1500 cm−1. In addition, two shifts of the PEG-silane 550 Da band peaks, from 1284 to 1247.21 and from 2876 to 2925 cm−1 (Figure 5d), were exhibited. It should be noted that the PEG-coated SPGO particles were dialyzed before the measurements, to remove the excess of PEG polymers (i.e. PEG polymers that were physically absorbed onto the surface of the particles); the observed signals thus belong only to the PEG polymers that are chemically attached. There are small differences in the FTIR spectrum of SPGO-mPEG-silane 2000 Da compared to that of SPGO-mPEG-silane MW 550 (Figure 5e). Particle size measurements Dynamic light scattering (DLS) was used for estimating the hydrodynamic radius of the nanoparticles. (Figure caption: panels (c) and (e) show that for PEG-coated Gd2O3 nanoparticles, a silane linker molecule is used to couple the PEG to the nanoparticle.) Figure 6 shows the relationship between particle size and the concentration of OH−; increasing the concentration of OH− increases the particle size and leads to rapid precipitation of the nanoparticles.
Figure 7 shows the recorded Gd2O3-DEG nanocrystal sizes as a function of refluxing time in the reaction. The results indicated that increasing the reaction time decreases the size of the nanoparticles (i.e. the smallest particle sizes were obtained after 4 h of reflux). Figure 8 shows the size of the particles relative to the reaction temperature (NaOH 0.3 mM, 4 h). The smallest size was obtained at 180 °C, while increasing the temperature from 180 to 190 °C caused aggregation of the particles. The nanoparticles in the optimization process reached nearly 20 nm in size; they still needed to undergo filtration and dialysis. Thereby, under the optimum reaction conditions, and after filtration and dialysis, a suitably small particle size of 5.9 ± 0.13 nm (PdI of 0.387) was obtained (Table 1), compared to the larger size in our previous study. Figure 9 shows the TEM images of the Gd2O3-DEG nanocrystals, used for hydrostatic size measurements. The Gd nanomagnetic particles are clearly formed in uniform spherical or ellipsoidal shapes and visualized separately as nanoscale grains. These findings show that the main nucleus (the Gd2O3 core) is coated by DEG molecules through a strong interaction between DEG and the Gd2O3 nanoparticle surface. Figure 10 shows TEM images of the two other, PEGylated, nanoparticles, which, in contrast to the Gd2O3-DEG nanoparticles, were not visualized as clearly, owing to agglomeration and their large molecular weights. Analysis of magnetic properties Measurements of magnetic properties were performed using VSM at room temperature. Figures 11 and 12 show the relative magnetization curves versus the applied field. (Figure 8 caption: at 180 °C, the smallest particle size was obtained.) Removing the applied magnetic field does not lead to coercivity and remanence in paramagnetic, diamagnetic and superparamagnetic materials. Paramagnetic materials also have a linear relationship between their magnetization (M) and the applied field (H), with a positive slope. Figure 11a clearly shows the paramagnetic properties of the SPGO particles, whereas the Gd2O3-DEG nanoparticles exhibited the S-shaped (sigmoidal) magnetization curve of superparamagnetic materials in Figure 11b. Figure 12 shows the magnetometry of the PEGylated nanoparticles. A linear relationship is apparent between the magnetization (M) and the applied field in this figure, indicating that these two PEGylated nanoparticles are paramagnetic materials. Note that the susceptibility (the slope of the curve) for SPGO-mPEG-silane2000 is less than that of SPGO-mPEG-silane550 (χ2000 = 9.20 × 10−5 < χ550 = 3.28 × 10−4). Maximum signal intensities at different concentrations and relaxivity measurements Signal intensity images and curves for Gd-DTPA, Gd2O3-DEG, SPGO-mPEG-silane550 and SPGO-mPEG-silane2000, using standard spin-echo imaging with TR/TE = 600/15 ms, are presented in Figure 13. The quantitative variation of signal intensities in Figure 13(b) is in complete accordance with the image visualization in Figure 13(a) for the in-vitro dilutions of the four materials. Concentrations of 0.6, 0.6 and 0.9 mM corresponded to the maximum signal intensities for Gd2O3-DEG, SPGO-mPEG-silane550 and SPGO-mPEG-silane2000, respectively. Table 2 shows r1 and r2, the slopes of the R1 and R2 relaxation rates versus concentration, for Gd-DTPA and the three nanoparticles in water.
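As a worked illustration of how the relaxivities in Table 2 are defined, r_i is the slope of the relaxation rate R_i = R_i,0 + r_i·C against concentration C; the numbers below are invented for the sketch, not the study's measurements.

```python
# Fit relaxation rate vs. concentration; the slope is the relaxivity.
import numpy as np

conc = np.array([0.0, 0.1, 0.3, 0.6, 0.9])      # mM (illustrative)
R1 = np.array([0.40, 0.95, 2.05, 3.70, 5.35])   # s^-1 (illustrative)
r1, R1_0 = np.polyfit(conc, R1, 1)              # slope = relaxivity r1
print(f"r1 = {r1:.2f} mM^-1 s^-1 (intercept {R1_0:.2f} s^-1)")
```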
Discussion In this study, Gd2O3 nanoparticles with three different core-shells (DEG, mPEG-silane 550 and 2000 Da) were synthesized, functionalized and dialyzed for further in-vitro and in-vivo applications in biological systems (Figures 1-3). This was done through the supervised polyol route newly described in our previous experiment. Using this method, we were able to obtain a substantially small size of about 6 nm for DEG-coated Gd2O3. The FTIR results in Figure 4 showed no significant differences between spectra b and c, which is accounted for by the overabundance of DEG molecules. After purification of the DEG-coated Gd2O3 nanoparticles by centrifugation and dialysis, unreacted DEG was removed. Consequently, in the FTIR spectrum of the Gd2O3-DEG nanoparticles, diminishing and shifting of the DEG band peaks to lower frequencies were observed, especially for the positions of the CH2 and C-O stretching bands of DEG, which could be due to surface interaction and chemisorption with the Gd2O3 particles. The shift in the C-O peak from 1127 to 1120 cm−1 can suggest a new configuration for the DEG molecules, in which the oxygen binds to two Gd atoms (Figure 5). This has also been reported by Pedersen. In the SPGO-mPEG-silane nanoparticles (550 and 2000 MW), Gd2O3 nucleates grow to form Gd2O3 nanocrystals, which are subsequently capped and stabilized by mPEG-silane. A silane molecule can act as a linker to help chemisorption of the PEG polymer onto the nanoparticle surface (Figure 3), which is in good agreement with the FTIR results (Figure 5). The shifts of the characteristic peaks of the PEG-silane 550 Da, from 1284 to 1247.21 and from 2876 to 2925 cm−1 (Figure 5, spectrum d), are strong evidence that PEG is bonded to the surface of SPGO through a reaction of PEG-silane 550 Da with the nanoparticles, as also reported by Wu. Spectrum e in Figure 5 showed very similar FTIR results for the SPGO particles PEGylated with mPEG-silane 2000 Da to those of SPGO-mPEG-silane MW 550. The small differences observed between them are due to the size effect or molecular weight. Different sizes of DEG-coated Gd nanoparticles, in the range of 20 nm, were obtained by investigating altered reaction conditions. A high yield of DEG coating, however, may be achieved by adjusting the NaOH concentration, temperature and reaction time of the solution to 0.3 mM, 180 °C and 4 h, respectively. Considering the thermodynamic instability of the products over prolonged periods, due to aggregation, fusion and precipitation, size measurements were repeatedly performed at intervals of a month, a year and even longer, to verify the physical stability of the NMPs. The polydispersity index (PdI) obtained by DLS, indicative of the hydrodynamic diameter distributions (Table 1), along with the morphology and hydrostatic diameter distributions presented by TEM (Figures 9-10), were employed as the stability measures. The results of these serial measurements revealed no significant changes in the Gd2O3-DEG/PEG sizes, and repeated relaxometry measurements showed no significant differences (Table 2), implying chemical stability of the products during the experimental period. These findings are evidence of the absence of degradation and oxidation during the imaging protocols. Eventually, the particles obtained under the optimum reaction conditions were filtered and dialyzed, after which we achieved the above-mentioned hydrodynamic size of 5.9 nm.
In order to evaluate the influence of the chain lengths of the coating agents, two PEG molecules with different molecular weights (PEG550-OCH3, PEG2000-OCH3) were selected. These PEG chains carry a reactive group at one end, for grafting onto the surface of the particles, and a methoxy group at the other extremity, which influences the colloidal stability and biodistribution. Thus, they have similar terminal groups but differ in their chain lengths. The effect of the molecular weight of PEG on the colloidal stability and size of the nanoparticles was also investigated. (Table 2 note: the longitudinal relaxivity (r1) of Gd-DTPA and of the PEGylated nanoparticles (SPGO-mPEG-silane 550 and 2000) was smaller than that of Gd2O3-DEG.) Our findings in Table 1 showed that molecular weight can affect particle size. The difference between the size distributions of the Gd nanoparticles coated with mPEG-silane 550 and 2000 polymers was observed by the light scattering method. This difference in sizes for the PEG 550 and 2000 nanoparticles can be deduced from the different steric stabilization effects of these two polymers on the nanoparticles (Figure 10). PEG 2000 extends into the medium in the form of a thicker layer on the particle surface as a result of its longer side chains, compared to PEG 550. These larger side chains, causing steric instability in the medium, made PEG 2000 a less effective steric stabilizer than PEG 550. Thus the particles aggregated, and as a result larger sizes were observed. Several studies have examined the effect of particle size on magnetic properties and relaxivities. Some of these investigations have also shown that the relaxation ratios increase with larger nanoparticle sizes. In this endeavor, after obtaining NMPs of proper size by altering the reaction conditions, the magnetic properties and relaxivities were investigated. The relaxometric measurements showed significant magnetic properties for Gd2O3-DEG compared to the conventional Gd-DTPA, with relaxivity ratios of 0.89 and 1.13 (Table 2), which in part is due to the small size of the nanoparticles. Figure 11a showed that Gd2O3-DEG reached its maximum signal intensity at a concentration close to the daily clinical concentration of Gd-DTPA (i.e. 0.1 mM). Also, SPGO-mPEG-silane (550 and 2000 Da), with lower relaxivity ratios (37.62 and 33.72, respectively) compared to previous reports of PEGylation with larger molecular weights (6000 Da), showed more promising results as a negative contrast agent. Last but not least, surface properties and particle size are crucial factors for cell internalization through the plasma membrane. Studies have shown that nanoparticles smaller than 50 nm in size, or with lipophilic polymer coatings, diffuse across cell membranes easily. The size of approximately 6 nm which we acquired is exceptionally important, in that particles up to 6 nm diffuse easily through lymphatics and are typically filtered through the glomerular capillaries. Another advantage of surface modification is the increased circulation time of the nanoparticles in the blood stream, achieved by avoiding agglomeration and protein adsorption. Thus, they can reach target cells without being phagocytosed. All these characteristics can potentially hold for Gd2O3-DEG as well.
Conclusion Optimization of reaction parameters leads to strong coating of the nanoparticles with the ligands, which in turn increases the reproducibility of particle size measurements. Besides, the ligand-coated nanoparticles can show enhanced colloidal stability as a result of the steric stabilization function of the ligands grafted on the surface of the particles. Thus, the relaxometric measurements indicate significant positive magnetic properties for Gd2O3-DEG compared to conventional Gd-DTPA, and also better results as a negative contrast agent for SPGO-mPEG-silane (550 and 2000 Da), with lower r2/r1 (relaxometry ratios) compared to higher-MW polymers. Moreover, optimal clearance resulting in less toxicity, lymphatic diffusion and increased circulation, all of which can be attributed to the Gd2O3-DEG nanoparticles, altogether hold promise for further investigation of these nanomagnetic particles in-vitro and in-vivo, for cellular and molecular imaging in cancer and other diagnostic applications.
Amino acid encoding schemes from protein structure alignments: multi-dimensional vectors to describe residue types. Bioinformatic software has used various numerical encoding schemes to describe amino acid sequences. Orthogonal encoding, employing 20 numbers to describe the amino acid type of one protein residue, is often used with artificial neural network (ANN) models. However, this can increase model complexity, leading to difficulty in implementation and poor performance. Here, we use ANNs to derive encoding schemes for the amino acid types from protein three-dimensional structure alignments. Each of the 20 amino acid types is characterized with a few real numbers. Our schemes are tested on the simulation of amino acid substitution matrices. These simplified schemes outperform orthogonal encoding on small data sets. Using one of these encoding schemes, we generate a colouring scheme for the amino acids in which comparable amino acids are given similar colours. We expect it to be useful for visual inspection and manual editing of protein multiple sequence alignments.
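A sketch contrasting the two representations discussed above, with a random 3-dimensional embedding standing in for the values an ANN would learn from structure alignments:

```python
# Orthogonal (one-hot, 20-dimensional) encoding vs. a low-dimensional
# embedding of amino acid types. The 3-D vectors are placeholders.
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def one_hot(aa):
    v = np.zeros(len(AMINO_ACIDS))
    v[AMINO_ACIDS.index(aa)] = 1.0
    return v

rng = np.random.default_rng(0)
embedding = {aa: rng.normal(size=3) for aa in AMINO_ACIDS}  # placeholder values

print(one_hot("W"))     # 20 numbers, one per residue type
print(embedding["W"])   # a few real numbers characterizing the same residue
```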
INTRODUCTION The Girona Dementia Registry (ReDeGi, from the Spanish Registro de Demencias de Girona) is a population-based epidemiological surveillance mechanism that registers the cases of dementia diagnosed by the reference centres in the Girona Health District. AIM To report on the frequency of the diagnoses and their clinical and sociodemographic characteristics, as well as to compare differences across the different subtypes of dementia. PATIENTS AND METHODS The method used consisted of a consecutive standardised register of the diagnoses of dementia made in specialised care in the Girona Health District between 2007 and 2010. RESULTS A total of 2814 cases were registered, which represents a clinical incidence of 6.6 cases per 1000 persons/year. Of this total, 69.2% were primary degenerative dementias, 18.9% were dementias secondary to vascular pathology, 5.4% were other secondary dementias and 6.5% were non-specific dementias. The mean age was 79.2 ± 7.6 years (range: 33-99 years) and 59.3% were females. The mean time elapsed between the onset of symptoms and clinical diagnosis was 2.5 ± 1.7 years. The mean score on the Blessed dementia scale was 7.7 ± 4.5 points, and on the mini-mental test it was 17.6 ± 5.4 points. A family history of dementia was present in 26.6% of cases, and 69.6% presented one or more cardiovascular risk factors. 60.6% were cases of mild dementia, 28.5% were moderate and 10.9% were severe. CONCLUSIONS The epidemiological surveillance activity carried out by the ReDeGi throughout the period 2007-2010 has made it possible to record information that is extremely valuable for the planning and management of health care resources.
A State-aware Proof of Stake Consensus Protocol for Power System Resilience The resilience of the power grid is critical for the energy delivery system and for assurance of the energy distribution process. The integration of blockchain technology with the power grid can provide a consistent view of the system state and constant validation. However, there are still significant challenges in integrating blockchain into current power system architectures. From the cyber-physical perspective, the physical components in the power system are heterogeneous, and there are limited communication capabilities among these components. In this work, we propose a two-layer architecture that enables a state-aware blockchain integration with power grid state estimation and still scales in performance when dealing with high-volume data.
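A heavily simplified sketch of the stake-weighted proposer selection at the core of proof-of-stake; a state-aware variant such as the paper's could additionally weight validators by the quality of their reported grid state. All names and stakes are hypothetical.

```python
# Stake-weighted proposer selection: probability proportional to stake.
import random

validators = {"meter_a": 40, "substation_b": 35, "plant_c": 25}  # hypothetical stakes

def select_proposer(stakes, seed):
    random.seed(seed)  # in practice, a shared verifiable random beacon
    names, weights = zip(*stakes.items())
    return random.choices(names, weights=weights, k=1)[0]

print(select_proposer(validators, seed=12345))
```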
A Study of the Hazardous Glare Potential to Aviators from Utility-Scale Flat-Plate Photovoltaic Systems The potential flash glare a pilot could experience from a proposed 25-degree fixed-tilt flat-plate polycrystalline PV system located outside of Las Vegas, Nevada, was modeled for the purpose of hazard quantification. Hourly insolation data measured via satellite for the years 1998 to 2004 were used to perform the modeling. The theoretical glare was estimated using published ocular safety metrics which quantify the potential for a postflash glare after-image. This was then compared to the postflash glare after-image potential caused by smooth water. The results show that the potential for hazardous glare from flat-plate PV systems is similar to that of smooth water and is not expected to be a hazard to air navigation. Introduction Before construction of utility-scale photovoltaic (PV) power plants near airports or within known flight corridors in the United States, the Federal Aviation Administration (FAA) requires that the glare from the proposed plant not be a hazard to navigable airspace. The purpose of this paper is to demonstrate that glare from flat-plate PV power plants is similar to that of water and therefore does not pose a hazard to navigable airspace. This was done by calculating the glare potential from a theoretical flat-plate PV power plant located near Las Vegas, Nevada, and comparing that glare to the glare potential of smooth water. To estimate potential glare from flat surfaces, a model was developed which used conservative assumptions. This model is a generalization of work done by Ho et al. The model calculated glare hourly from 1998 to 2004 to find the times when the possibility for glare would be the greatest. The potential for after-image (hazardous glare) was then compared to the potential for hazardous glare from smooth water, which pilots often view while on approach to land. Method A review of published literature on modeling glare was conducted. The effects of glare on humans have been quantified by Metcalf and Horn, Saur and Dobrash, Severin et al., and Sliney and Freasier. In other studies, Brumleve, Chiabrando et al., and Ho et al. developed mathematical methods to quantify the potential danger of glare causing flash blindness. Flash blindness is defined by Ho as a "temporary disability or distraction" that can cause an after-image and is understood to be comparable to what a human experiences when viewing the flash of a camera. Ho explains in detail various methods for modeling glare from concentrating solar systems, which use mirrors and lenses to concentrate light onto a central receiver. This technology is different from flat-plate PV modules, which directly convert solar energy to electricity. However, the after-image estimation method Ho outlines for concentrating solar systems is easily generalized to flat-plate PV modules. The flow diagram in Figure 1 shows the general method implemented to translate solar radiation to the after-image potential caused by energy received on an observer's retina. The subsections below provide more detail for each step of the process. Insolation. The SUNY-Perez satellite dataset was used for modeling glare. The National Renewable Energy Laboratory (NREL) compiled this dataset for the years 1998 to 2005 on an hourly basis for a 10 km × 10 km nationwide grid. Solar radiation in the visible spectrum can be broken up into two primary components, diffuse and direct.
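The modeling step described next multiplies the direct beam by the module reflectivity at the incidence angle. For readers who want to reproduce the geometry, the sketch below applies the standard incidence-angle relation for a tilted plane, cos θ_i = cos θ_z cos β + sin θ_z sin β cos(γ_s − γ_m); this is textbook solar geometry, not the authors' code, and the sun-position inputs are placeholders.

```python
# Minimal sketch of the incidence angle on a fixed-tilt module, given a
# sun position (which the paper derives from established sun-position
# equations). Standard geometry; not taken from the paper.
import math

def incidence_angle(zenith_deg, sun_azimuth_deg, tilt_deg, module_azimuth_deg):
    """Angle between the direct beam and the module normal, in degrees."""
    z = math.radians(zenith_deg)
    b = math.radians(tilt_deg)
    da = math.radians(sun_azimuth_deg - module_azimuth_deg)
    cos_i = math.cos(z) * math.cos(b) + math.sin(z) * math.sin(b) * math.cos(da)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_i))))

# Example: sun at 60-deg zenith due south (azimuth 180), 25-deg tilt module
# facing south.
print(round(incidence_angle(60, 180, 25, 180), 1))  # 35.0
```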
Diffuse radiation is defined as radiation that has been scattered by the atmosphere. Direct radiation, also commonly referred to as beam radiation, is radiation which moves from the source to the observer via the shortest distance possible without scattering. For example, on a heavily overcast day when the sun is highest in the sky (solar noon), it is probable that all insolation is diffuse. On a clear day at solar noon, most of the insolation reaching earth's surface would be direct. Direct radiation is the component of solar radiation that causes visible glare from flat-plate PV systems. PV Module. The next step in the modeling process was to quantify the amount of visible radiation that would be reflected off of a PV module for every hour from 1998 to 2004. The year 2005 was omitted for computational reasons. This was done by multiplying the power (in W/cm²) of direct radiation by the reflectivity of the PV module at the average incidence angle for each hour evaluated. Incidence angle is defined as the angle between the direct component of insolation and a ray perpendicular to the module. If the incidence angle is zero, the angle between the surface of the module and the direct component of radiation is 90°. The reflectance at 633 nm of a polycrystalline silicon (p-Si) PV module is a function of the incidence angle, as seen below in Figure 2, developed by Parretta et al. This reflectance as a function of incidence angle was used to determine how much of the direct insolation in the visible spectrum would be reflected off of the PV module and thus reach the observer. The data shown above are for a glass-encapsulated p-Si solar cell. The use of these data is a conservative assumption, as the glass used to encapsulate the cell was not solar glass and no antireflective coating was applied to the p-Si cell. Actual p-Si modules would likely have lower reflectance values, as textured glass and antireflective coatings are often used to reduce reflected irradiance and increase module efficiency. The power of the reflected direct radiation was calculated hourly from 1998 to 2004 using the reflectivity in Figure 2, satellite data from NREL, and established sun position equations. The use of hourly data allows quantification of how the power of the reflected direct radiation varies as the sun moves across the sky. Energy at the Cornea. An assumption was made that the power of the direct radiation reflected off of the PV module was equal to the power incident on the cornea of the pilot. This is a conservative assumption as it ignores atmospheric attenuation, refraction, and further reflection. While it is likely that there will be energy diffusion or absorption due to the atmosphere, cockpit glass, or shielding, these effects were ignored during this initial estimation. Later calculations took these potential mitigation efforts into account, as can be seen in Figure 7. Retinal Irradiance. The last step in the modeling process was to calculate retinal irradiance hourly from 1998 to 2004. Retinal irradiance can be calculated, using a derivation provided by Sliney, from the energy incident on the cornea as E_r = E_c · d_p² · τ / (f · ω)², where E_r is retinal irradiance, E_c is irradiance at a plane in front of the cornea, f is the focal length of the eye (∼1.7 cm), d_p is the diameter of the human pupil adjusted to sunlight (∼0.2 cm), ω is the subtended angle of the image (the apparent size of the image, which in the case of the sun is 0.0093 radians), and τ is the transmission coefficient of the eye (∼0.5).
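A minimal sketch of that relation, using the constants quoted above (it is not the authors' code, and the input corneal irradiance is a placeholder):

```python
# Retinal irradiance from corneal irradiance, E_r = E_c * d_p^2 * tau
# / (f * omega)^2, with the typical values stated in the text.
def retinal_irradiance(e_cornea, d_pupil=0.2, focal=1.7, omega=0.0093, tau=0.5):
    """e_cornea in W/cm^2 at the cornea -> retinal irradiance in W/cm^2."""
    return e_cornea * d_pupil**2 * tau / (focal * omega)**2

# Example: 0.01 W/cm^2 of reflected direct beam at the cornea.
print(round(retinal_irradiance(0.01), 3))  # ~0.8 W/cm^2
```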
This equation assumes that the arc of a circle subtending the angle ω is equal to its chord, which is a good approximation for small angles such as these. Ocular Safety Metrics Next, the calculated values of retinal irradiance were compared to known ocular safety metrics. Extensive research has been done on ocular safety metrics and how to calculate the potential for after-image or retinal burns from radiation in the visible wavelengths. The threshold for retinal irradiance corresponding to the potential for retinal burns has been defined as E_r,burn = 0.118/ω W/cm², where E_r,burn is the retinal burn threshold and ω is the subtended angle of the sun, 0.0093 radians (Ho et al.; Sliney and Freasier). Ho also compiled data from Metcalf and Horn, Severin et al., and Saur and Dobrash to find a fit corresponding to the minimal retinal irradiances that caused an after-image (glare). This is calculated as E_r,flash = 3.59 × 10⁻⁵/ω^1.77 W/cm², where E_r,flash is the threshold for a potential after-image. Ho then plotted both of these thresholds and the three regions these thresholds define (potential for retinal burn, potential for after-image, and low potential for after-image), which are illustrated in Figure 3. The subtended source angle is a function of the size of the image viewed. For the purposes of this report, the image is a reflection of the sun, which causes the subtended angle to be constant at 0.0093 radians, or roughly 10 mrad. Results Retinal irradiance was calculated hourly for the years 1998 to 2004 for a fixed-tilt polycrystalline system under the assumptions listed in Table 1. These results were then compared to the same results for smooth water. The assumption of a fixed-tilt system is conservative because, as seen in Figure 2, the reflected component of irradiance increases as incidence angle increases. Holding the system at a fixed tilt increases the average incidence angle and therefore the average reflected irradiance. The results of the calculations are displayed in Figure 4 and Table 2. Figure 4 shows retinal irradiances for all hours in the seven-year period when direct radiation was present. For example, the blue bar furthest to the left in Figure 4 represents the number of hours in the years 1998 to 2004 where retinal irradiance was between 0 and 0.02 W/cm² (approximately 2250 hours). The potential for an after-image corresponding to the different retinal irradiance powers is shown based on the zones defined in Figure 3. The ranges of these zones are quantified in Table 2, showing that a potential for an after-image exists for both PV panels and smooth water but is slight. Table 2 shows that the median values of both distributions reside in the region "potential for an after-image." The histogram in Figure 4 shows that 79 to 88 percent of hourly retinal irradiances from smooth water and fixed PV modules fall in this region. However, all calculated retinal irradiances fall in the bottom 5% of the region, indicating that although the glare hazard exists, it is relatively low. Figure 5 illustrates this point by expanding the x-axis to the entire range of retinal irradiances that would be classified as "potential for an after-image." The major difference between this figure and the one developed by Ho in Figure 3 is the use of a linear rather than logarithmic scale. Figure 6 displays the maximum value of hourly glare (highest retinal irradiance) from smooth water and fixed-tilt p-Si PV modules plotted onto Figure 3.
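The two thresholds lend themselves to a simple classification; in the sketch below the constants are as reconstructed from Ho et al. and should be treated as assumptions of this sketch rather than quotations from the paper. Note that at ω = 0.0093 rad the after-image threshold evaluates to roughly 0.14 W/cm², consistent with the mitigation figure cited later.

```python
# Classify a computed retinal irradiance against the burn and after-image
# thresholds (constants reconstructed from Ho et al.; treat as assumptions).
def glare_zone(e_r, omega=0.0093):
    e_burn = 0.118 / omega                # retinal-burn threshold, W/cm^2
    e_flash = 3.59e-5 / omega ** 1.77     # after-image threshold, W/cm^2
    if e_r >= e_burn:
        return "potential for retinal burn"
    if e_r >= e_flash:
        return "potential for after-image"
    return "low potential for after-image"

print(glare_zone(0.8))   # potential for after-image
print(glare_zone(1e-4))  # low potential for after-image
```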
As can be seen from Figure 6, the maximum glare from a solar PV array using conservative assumptions is expected to be comparable to that of smooth water. This maximum value is in the region defined as "potential for after-image," where a potential exists, but the potential is on the low end of the range. The nuisance of glare for pilots cannot be completely avoided. Therefore, it is typically mitigated using darkened visors, sunglasses, and glare shields. If these objects are manufactured to meet American National Standards Institute (ANSI) Standard Z80.3-2001, they will reduce the intensity of retinal irradiance by roughly 70 percent. A 70 percent reduction of retinal irradiances from radiation reflected off of water and PV modules moves all retinal irradiance values below 0.14 W/cm², as displayed below in Figure 7. Under these conditions, 92 percent of the hours over the seven-year period investigated for solar PV would now be in the "low potential" zone in Las Vegas. Conclusions The potential flash glare a pilot could experience was modeled from a proposed 25-degree fixed-tilt flat-plate polycrystalline PV array installed outside of Las Vegas, Nevada. Hourly insolation data measured via satellite for the years 1998 to 2004 were used to perform this modeling. These results were then compared to the potential glare from smooth water under the same assumptions. The comparison of the results showed that the potential for glare from flat-plate PV systems is comparable to that of smooth water and is not expected to be a hazard to air navigation. Glare from ground-based objects can be a nuisance to pilots if proper mitigation procedures are not implemented. Portland white cement concrete (a common concrete for runways), snow, and structural glass all have reflectivities greater than water and flat-plate PV modules, as shown by Levinson and Akbari, Nakamura et al., and Hutchins et al. Pilots viewing these objects under specific conditions may experience a distracting level of glare. The nuisance of glare cannot be completely avoided. Therefore, it is typically mitigated using darkened visors, sunglasses, and glare shields. If these objects are manufactured to meet ANSI Standard Z80.3-2001, they will reduce the intensity of retinal irradiance by roughly 70 percent, moving all retinal irradiance values below 0.14 W/cm². Under these conditions, 92 percent of the hours over the seven-year period investigated for solar PV would now be in the "low potential" zone in Las Vegas. Highlights (i) Ocular safety metrics were used to quantify the potential for hazardous glare from a photovoltaic system on an hourly basis. (ii) The results show that the glare hazards from smooth water and flat-plate photovoltaic systems are similar. (iii) Glare mitigation is common and significantly reduces glare hazards. Abbreviations ANSI: American National Standards Institute; NREL: National Renewable Energy Laboratory; PV: Photovoltaic; p-Si: Polycrystalline silicon. |
A Retrospective Evaluation of Pregnancy Outcomes Following Bariatric Surgery: A Single-Center Experience Background Bariatric and metabolic surgery (BMS) is an effective treatment for obesity and its complications, but its effect on pregnancy outcomes is inconclusive. The present study aimed to investigate women's pregnancy status and outcomes, as well as the impact of pregnancy intervals, after BMS. Methods The menstrual cycle and fertility status of women who underwent BMS in our center between July 2010 and January 2021 were retrospectively analyzed and followed up until one year post-delivery. The pregnancy outcomes after BMS were observed, including changes in weight, pregnancy interval, pregnancy complications, and the weight and health status of the newborn (premature birth, admission to neonatology, or deformity). Results We identified 31 women who conceived successfully after BMS. There were statistical differences in weight and menstrual status before and after the operation (P < 0.05), and 77.97% of the patients had remission or recovery of obesity-related comorbidities. Eighteen patients delivered successfully after BMS, but there were still 12 cases of spontaneous abortion and 1 case of induced abortion. The abortion rate for pregnancy intervals of less than 2 years was higher than for intervals ≥2 years (P = 0.045). Of the women who delivered successfully, 5 had pregnancy-specific complications, including gestational diabetes mellitus and hypertensive disorder of pregnancy. However, the growth and development of the newborns were normal at birth follow-up. Conclusion The present results suggest that the abortion rate for pregnancy intervals of less than 2 years was higher than for intervals ≥2 years. It is recommended that postoperative patients avoid pregnancy until their weight is stable to reduce the risk of adverse pregnancy outcomes. Introduction The prevalence of obesity is increasing worldwide. More than one-third of adults in China are overweight or obese, and the overweight and obesity rates of Chinese women of reproductive age are 25.4% and 9.2%, respectively. However, the prevalence of severe obesity in China remains unclear. The variety of diseases caused by obesity has become a worldwide public health problem. Obesity can not only increase the risk of a series of noninfectious chronic diseases but also lead to many pregnancy complications and adverse pregnancy outcomes, such as miscarriage, premature delivery, hypertensive disorder complicating pregnancy, gestational diabetes mellitus, fetal abnormalities, postpartum haemorrhage, thrombosis and puerperal infection. 10,11 Therefore, it is important for pregnant women to control their weight. Many patients with severe obesity have received bariatric and metabolic surgery (BMS), including women of childbearing age. In China, the number of BMS procedures performed in 2020 was 12,837, but the number performed in women of childbearing age is unclear. 15 BMS for women of childbearing age also poses clinical challenges for subsequent pregnancies, requiring multidisciplinary collaboration for rigorous prenatal management. BMS before pregnancy has been reported to reduce the incidence of preeclampsia, gestational diabetes, and large-for-gestational-age infants. Still, it may increase the risk of fetal growth restriction, preterm birth, neonatal intensive care unit hospitalization, and perinatal mortality. 16,17 Short pregnancy intervals may also increase the risk of maternal morbidity and mortality.
18 However, some studies have shown that pregnancy after BMS is not associated with adverse perinatal outcomes. 19,20 There were also no significant differences in pregnancy complications and neonatal outcomes between women who conceived within the first 12 months after surgery and those who conceived later. 21 Thus, the effect of BMS on women's pregnancy outcomes could not be fully determined. The existing evidence mainly comes from populations in western developed countries, which is not fully applicable to the Chinese population, and further research and discussion are needed. Therefore, in the present study, we reviewed the pregnancy and delivery outcomes of women undergoing BMS in our center, aiming to evaluate the impact of BMS on pregnancy. Materials and Methods Patients Participants aged ≥ 20 who had a spontaneous singleton pregnancy after BMS in the First Affiliated Hospital of Jinan University from July 2010 to January 2021 were included. A retrospective analysis of pregnancy outcomes and pregnancy complications after BMS was performed using the prospectively managed database in our center. The follow-up of these patients after surgery and their relevant clinical information were obtained through the electronic medical record, questionnaire surveys, and telephone interviews. Information on BMS and the 6 months after surgery was obtained from the electronic medical record, and information on postoperative pregnancy was obtained from the questionnaire surveys and telephone interviews. WeChat, Questionnaire Star, and telephone were used for the surveys and interviews. All patients met the criteria for BMS according to the Chinese Surgical Guidelines for Obesity and Type 2 Diabetes by the Chinese Society for Metabolic and Bariatric Surgery (CSMBS). 22 Since the present study is retrospective in nature, involved no interventions that might affect the patients' interests, and did not recontact the included patients specifically for the study, it was orally approved by the Scientific and Ethics Review Committees of the First Affiliated Hospital of Jinan University, and the informed consent and approval number for the study were waived. Notably, the guidelines of the Declaration of Helsinki were followed, and all patient data were de-identified to ensure confidentiality. Surgical Procedures Standard laparoscopic sleeve gastrectomy (LSG) and laparoscopic Roux-en-Y gastric bypass (LRYGB) were performed by a single surgeon and managed by the same surgical team. The surgical techniques were described previously. 23 Clinical Parameters Clinical data of the patients included preoperative weight, preoperative body mass index (BMI), types of surgeries, postoperative weight change and BMI, the interval from surgery to pregnancy, weight gain during pregnancy, post-delivery weight, postpartum one-year weight and BMI, miscarriage events, complications of pregnancy (including gestational diabetes mellitus, preeclampsia, intrahepatic cholestasis of pregnancy, etc.), and adverse birth outcomes (including preterm birth, stillbirth, small for gestational age, and birth defects). Notably, Questionnaire Star was used to directly ask patients whether their menstrual cycle was regular. For pregnant women with a pre-pregnancy BMI < 18.5, the weight-gain range should be 12.5-18.0 kg; for women with a pre-pregnancy BMI of 18.5-24.9, the
weight-gain range should be 11.5-16.0 kg; for women with a pre-pregnancy BMI of 25.0-29.9, the weight-gain range should be 7.0-11.5 kg; for pregnant women with a pre-pregnancy BMI over 30.0, the weight gain should range from 5.0 to 9.0 kg. Weight gain below or above these ranges was defined as inadequate or excessive, respectively (a sketch of this classification follows below). For statistical convenience, cases with two or more births were included in the statistics using the first child. Statistical Analysis SPSS 23.0 statistical software was used to process and analyze the measured data. Missing data were replaced by the median of the other non-zero data. Fisher's exact test was used for comparing count data (total number of cases < 40). Student's t-test (for normally distributed data) or a nonparametric test (for non-normally distributed data) was used for comparing measured data. The Friedman test was used for comparing preoperative and postoperative data that did not follow a normal distribution. The significance level was set at α = 0.05. Basic Characteristics of the Patients Forty women of childbearing age who underwent BMS in our center conceived spontaneously, without receiving ovulation stimulation therapy or other medical treatment. Among them, four women who were unwilling to be followed up (consent rate: 90.0%, attrition rate: 10.0%), four women who were still pregnant, and one woman who was less than one year postpartum were excluded; 31 women were finally eligible for inclusion. The characteristics of these patients are shown in Table 1. Changes in Weight and BMI The weight and BMI of the patients after the operation were significantly lower than before, and the mean difference at each observation time point was statistically significant (P < 0.05). Of those who delivered after the operation (n = 18), 9, 7, and 2 gained excessive, adequate, and inadequate weight during gestation, respectively. The BMI at conception averaged about 25 kg/m² (see Table 1 and Figure 1). Menstrual Changes Pre- and Post-BMS Seventeen patients had menstrual disorders before surgery, an overall menstrual disorder rate of 54.84%, while 11 of these patients had normal menstruation after BMS, an improvement rate of 64.71%. After the operation, the total menstrual disorder rate was 19.35%, significantly lower than before. The difference in menstrual status before and after BMS was statistically significant (P < 0.001), as shown in Table 2. In addition, we found that three patients with irregular menstruation had polycystic ovary syndrome (PCOS) before BMS, and two of them had normalized menstrual function postoperatively. Comorbidities and Complications Pre- and Post-BMS In our cohort, 96.77% of patients undergoing BMS had one or more obesity-related diseases before surgery, including dyslipidemia, uric acid abnormality, diabetes, sleep apnea syndrome and hypertension. Most of these comorbidities were relieved or even cured after the operation (77.97%) at the follow-up point in December 2021. However, two patients with diabetes were still controlled with drugs, and two patients with thyroid diseases continued to receive medical treatment. In addition, two patients developed symptoms of anaemia after the operation, and one patient developed dumping syndrome after the operation, as detailed in Table 3. Complications of Pregnancy and Pregnancy Outcomes Twenty-two patients had never had a pregnancy before surgery, and nine had previously conceived spontaneously.
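Returning to the weight-gain criteria defined in the Methods above, here is a minimal sketch of that classification. The ranges follow the text, with the cut-offs for the first two BMI categories completed from IOM-style recommendations; this is not the study's code.

```python
# Classify gestational weight gain by pre-pregnancy BMI category,
# using the ranges quoted in the Methods (IOM-style; treat the exact
# category boundaries as assumptions of this sketch).
def classify_weight_gain(pre_bmi, gain_kg):
    if pre_bmi < 18.5:
        lo, hi = 12.5, 18.0
    elif pre_bmi < 25.0:
        lo, hi = 11.5, 16.0
    elif pre_bmi < 30.0:
        lo, hi = 7.0, 11.5
    else:
        lo, hi = 5.0, 9.0
    if gain_kg < lo:
        return "inadequate"
    if gain_kg > hi:
        return "excessive"
    return "adequate"

print(classify_weight_gain(27.0, 13.0))  # excessive
print(classify_weight_gain(31.0, 7.5))   # adequate
```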
Eighteen cases were delivered successfully, but 12 cases ended in spontaneous abortion, and one patient had an induced abortion at her own request. We compared the pregnancy outcomes for pregnancy intervals of less than 2 years (n = 10) and ≥2 years (n = 20) and found that the abortion rate of pregnant women with pregnancy intervals of less than 2 years was higher (P = 0.045), as detailed in Table 4. Table 5 shows the pregnancy characteristics of the women who delivered successfully after BMS. Among them, five patients (27.78%) had pregnancy-specific complications, including gestational diabetes mellitus and hypertensive disorder of pregnancy. None of the patients had complications related to delivery. The growth and development of the newborns were normal at birth follow-up. Comparison of Results Based on Pregnancy Intervals In our cohort of successful postoperative deliveries, we compared the results of pregnancies occurring ≤ 24 months after BMS (group A1) and > 24 months after BMS (group A2). Among them, 50% (n = 9) of patients became pregnant less than 24 months after the weight-loss operation, and 50% (n = 9) became pregnant more than 24 months after the operation. In group A1, 33.3% underwent LSG and 66.7% underwent LRYGB. In group A2, 55.6% underwent LSG and 44.4% underwent LRYGB. The study showed no significant difference in pregnancy weight, pregnancy weight gain, late-pregnancy weight, postpartum weight, one-year postpartum weight, newborn birth weight, gestational age, delivery mode, or pregnancy complications between the two groups (P > 0.05). See Table 6 for details. Comparison of Results Based on BMI at Conception In our cohort of successful postoperative deliveries, we compared the differences between patients with BMI < 30 kg/m² (group B1) and BMI ≥ 30 kg/m² (group B2) at conception. In group B1, 7 (53.8%) patients had LSG and 6 (46.2%) had LRYGB. In group B2, 1 (20%) had LSG and 4 (80%) had LRYGB. The study showed differences between the two groups in weight in the third trimester of pregnancy, weight post-delivery, and weight in the first year post-delivery. At the same time, there were no significant differences in weight gain during pregnancy, birth weight of the newborn, gestational age, mode of delivery, or complications of pregnancy (P > 0.05), indicating that weight gain during pregnancy and pregnancy safety were similar regardless of BMI at conception. See Table 7 for details. Discussion The present study aimed to observe women's pregnancy status and outcomes after BMS. Our results indicate that pregnancy after BMS appears to carry certain risks, and it is recommended that postoperative patients avoid pregnancy until their weight is stable to reduce the risk of adverse pregnancy outcomes. The incidence of overweight and obesity is increasing rapidly worldwide, and BMS is still the most effective way to treat severe obesity and its related complications. 24 Studies have shown that BMS can contribute to dramatic weight loss, sustained at least 4 years after surgery, as well as help to achieve glycemic control in obese patients with uncontrolled type 2 diabetes. 25,26 In the present study, patients' average weight and BMI decreased by 25% within six months after the operation. The body weight of 72.2% of patients had returned to normal at conception (BMI < 30 kg/m²). Thus, BMS has a significant effect on weight loss, and the impact of surgery is evident within a short period after the operation, which is consistent with the conclusions of previous studies.
27 Additionally, BMS can also improve the symptoms of menstrual disorders in obese patients of childbearing age and improve women's fertility. One study showed that 38.6% of women had menstrual dysfunction before the operation, and about 35.4% of patients returned to normal menstruation after BMS. 28 In the present study, among the women with menstrual disorders before the operation, about 65% recovered regular menstruation after the operation. However, there are still some patients whose menstruation has not recovered after the operation. The relationship between bariatric surgery and menstrual disorders remains to be further explored, and patients should continue to strengthen lifestyle management after BMS, since menstruation may be affected by various factors. 29 Studies have shown that more than half of the patients undergoing BMS are women of childbearing age, most of whom have reproductive needs. 30 Overall, BMS helps to reduce the incidence of preeclampsia, gestational diabetes, and large-for-gestational-age infants but increases the risk of small-for-gestational-age (SGA) infants. 16,31,32 Gastric bypass is associated with a higher risk of SGA than other procedures, which may be related to malabsorption of nutrients during postoperative pregnancy. 33 In particular, with rapid weight loss during the first 1-2 years after surgery, pregnancy in this period carries a higher risk of nutrient deficiency, leading to increased rates of fetal malnutrition and obstetric complications. 18 Therefore, most scholars recommend delaying pregnancy after BMS for at least one year, 34-37 but the evidence supporting this recommendation is insufficient. In addition, several studies have shown no association between the interval time and adverse pregnancy or neonatal outcomes, and delaying pregnancy does not appear to confer any greater benefit. 21 The present study compared the results of pregnancies within two years after surgery with those of pregnancies after two years. We also found that the incidence of pregnancy complications and the newborns' birth weight were not statistically different between the two groups. However, we observed spontaneous abortion in up to 38% of pregnancies in the study cohort. The time interval from surgery to pregnancy was shorter in these patients than in those with live births after surgery. We further found that the abortion rate of pregnant women with pregnancy intervals of less than 2 years was higher compared with those with pregnancy intervals ≥2 years. A previous study indicated that abortion occurred more often after RYGB (OR=9.81, 95% CI: 1.12-85.71), 33 but other studies observed no change in the abortion rate (38.7% vs 56.5%, P = 0.256). 41 Therefore, the relationship between BMS and abortion remains inconsistent and needs more evidence. In addition, during the follow-up, we found that anemia during pregnancy is a common problem. After BMS, changes in intestinal anatomy, poor absorption of trace elements, and increased nutritional requirements during pregnancy make pregnant women more prone to anemia. A recent systematic review also confirmed an increased risk of anemia and decreased ferritin levels in pregnancy after BMS. 42 However, there is a close correlation between anemia in pregnancy and adverse pregnancy outcomes. 43 Therefore, the guidelines also recommend that nutritional supplementation be optimized 3 to 6 months before conception, and that iron, ferritin, and transferrin levels be regularly monitored to prevent and treat anemia during pregnancy.
34,44 The present study had some limitations: it was a retrospective, single-institution study, which may limit the generalizability of the results, and the sample size was relatively small, resulting in insufficient statistical power. Additionally, potential confounding variables could not be controlled. However, to the best of our knowledge, this is the first study on pregnancy after BMS in the Chinese population, which may provide preliminary evidence for the effect of BMS on pregnancy. It adds evidence on pregnancy safety after BMS in specific populations in China under the "one-child policy". Now that the policy has been relaxed in recent years so that people can have more than one child, future multicenter research can be conducted to further clarify the impact of BMS on pregnancy outcomes in female patients and their offspring. Conclusions The present study showed that BMS has a substantial impact on weight control and management in women with obesity and can significantly improve most obesity-related comorbidities. The abortion rate for pregnancy intervals of less than 2 years was higher than for intervals ≥2 years. In addition, it is necessary to strengthen the intake of nutrients during postoperative pregnancy. For example, increasing iron supplementation can reduce gestational anemia. Therefore, postoperative trace element supplementation and monitoring are also essential. Ethical Approval The present study was performed based on retrospective data and was orally approved by the Scientific and Ethics Review Committees of the First Affiliated Hospital of Jinan University. Notably, the Scientific and Ethics Review Committees waived the informed consent and approval number for the study given its retrospective nature. |
The Role of Twitter in the World of Business This paper examines the services people seek out on Twitter and the integration of Twitter into businesses. Twitter has experienced tremendous growth in users over the past few years, with users sharing everything from what they had for lunch to their opinions on world events. As a social media website, Twitter has become the third most popular, behind only Facebook and YouTube. Its user base statistics ensure a wide audience for businesses to engage with. However, many find this a daunting prospect, as there are no set guidelines as to how businesses might use the service. The ability to post quick, short messages for the whole of the social network to see has encouraged people to use this microblogging platform to comment on and share attitudes about company brands and products. The authors present how the business world is using the social network site as a new communication channel to reach customers and examine other possible uses for Twitter in a business context. This paper also discusses how Twitter plans to move forward and evolve with its service, ensuring that the best interests of personal users, businesses and third-party developers are catered to. |
Tibial spine fractures in children. Fractures of the intercondylar eminence of the tibia are not uncommon in the pediatric age group. This eminence consists of two projecting tibial spines, with the anterior cruciate ligament attached to the medial one. In spite of the avulsion of this bony fragment and its attached ligament, cruciate laxity does not appear to be a significant clinical finding on follow-up examination, regardless of the type of fracture, method of treatment, or mode of injury. Computed instrumental testing of these knees, however, revealed measurable degrees of residual cruciate laxity despite the absence of patient symptoms. In the pediatric age group, treatment by closed or open methods is directed toward reattachment of the loose fragment and achievement of some degree of joint congruity rather than restoration of cruciate integrity. |
Crystallization and preliminary X-ray studies of the V-ATPase of Thermus thermophilus HB8 complexed with Mg-ADP. Crystals have been grown of the V-ATPase sector of the V-type ATP synthase complex (VV) from the thermophilic eubacterium Thermus thermophilus HB8. These crystals are grown by the vapor diffusion method in the presence of 5 mM Mg-ADP, from solutions containing 100 mM sodium acetate and 2 M sodium formate, pH 5.5. The crystals diffracted X-rays to beyond 3.4 Å resolution at a synchrotron radiation source. The crystals belong to the trigonal space group P3, with unit cell dimensions of a = b = 89.0 Å, c = 179.2 Å, and γ = 120°. The unit cell presumably contains one molecule of V-ATPase, and the V_M value is calculated as 3.0 Å³/Da. |
Outcomes of Laryngeal Reinnervation for Unilateral Vocal Fold Paralysis in Children Objective: Outcomes of laryngeal reinnervation with the ansa cervicalis for unilateral vocal fold paralysis (UVFP) may be influenced by the age of the patient and the time interval between laryngeal nerve injury and reinnervation, with less favorable outcomes suggested in older patients and at time intervals greater than 2 years after injury. This study examines these issues in the pediatric population. Method: Review of a prospectively collected data set of 35 children and adolescents (1-21 years) who underwent ansa-recurrent laryngeal nerve (RLN) laryngeal reinnervation for UVFP. Results: The time from RLN injury to reinnervation averaged 5.0 years (range, 0.8-15.2 years). No correlation was found between age at reinnervation (r = 0.15) and patient- or parent-reported global percentage voice outcome or perceptual ratings. There was a slight negative correlation between the duration from RLN injury to reinnervation and voice outcomes (r = −0.31). The postoperative self/surrogate global percentage voice rating averaged 80.5% (range, 50%-100%), and the perceptual rating GRBAS sum score averaged 2.9 (range, 0-7). Conclusion: In pediatric ansa-RLN reinnervation for UVFP, no correlation between age at surgery and postoperative outcome was found. Denervation duration showed a slight negative correlation, similar to what has been reported in adults, though voice improvement was seen in all patients. |
BMO Teichmüller spaces and their quotients with complex and metric structures The paper presents some recent results on the BMO Teichmüller space, its subspaces and quotient spaces. We first consider the chord-arc curve subspace and prove that every element of the BMO Teichmüller space is represented by a finite composition of its elements. Moreover, we show that these BMO Teichmüller spaces have affine foliated structures induced by the VMO Teichmüller space. By this, their quotient spaces have natural complex structures modeled on the quotient Banach space. Then, a complete translation-invariant metric is introduced on the BMO Teichmüller space and is shown to be a continuous Finsler metric in a special case. Introduction The Teichmüller space is originally a universal classification space of the complex structures on a surface of given quasiconformal type, but depending on the complex analytic objects we focus on, we can also consider various kinds of Teichmüller spaces. The universal Teichmüller space plays the role of their ambient space, and its intrinsic natures (complex structures and invariant metrics) dominate any included Teichmüller spaces. For instance, the Teichmüller space of a Riemann surface can be represented in the universal Teichmüller space as the fixed point locus of the Fuchsian group. In a different direction, the Teichmüller spaces in our study are obtained by adding a certain regularity to the ingredients of the space. Recently, this type of Teichmüller space has become more popular as a branch of infinite-dimensional Teichmüller theory. The Bers model of the universal Teichmüller space T is defined by the Schwarzian derivative S(f|_{D*}) of the conformal homeomorphism f of the exterior of the unit disk D* that is quasiconformal on the unit disk D. In this way, T is embedded in a certain Banach space as a bounded domain. The image Γ of the unit circle S under f is called a quasicircle. The universal Teichmüller space T can also be characterized as the set of all quasicircles up to Möbius transformations of the Riemann sphere C. Let Ω denote the inner domain of Γ, and moreover let g be a Riemann map of D onto Ω. We define the conformal welding homeomorphism h with respect to Γ by h = (g|_S)^{-1} ∘ (f|_S), which is quasisymmetric. The universal Teichmüller space T is identified with the group QS of quasisymmetric self-homeomorphisms h of S modulo the group Möb(S) of Möbius transformations of S, i.e., T = Möb(S)\QS. In general, a quasisymmetric homeomorphism h does not satisfy any regularity conditions such as absolute continuity. As well, the quasicircle Γ might not even be rectifiable. In fact, its Hausdorff dimension, though less than 2, can be arbitrarily close to 2 (see ). The BMO theory has often been studied in the framework of Teichmüller theory. The corresponding subspaces of T have generally satisfactory characteristics in terms of the quasicircle Γ and the quasisymmetric homeomorphism h (see ). In this paper, we shall continue to study the BMO theory of the universal Teichmüller space, because of its great importance in applications to harmonic analysis (see ) and also because of its own interest. We will especially focus on BMO Teichmüller spaces, the subspaces of the universal Teichmüller space T closely related to BMO functions, Carleson measures and A_∞ weights. In Section 2, we survey the standard theory of the universal Teichmüller space and BMO Teichmüller spaces.
Basic problems are considered by going back and forth between the quasicircle Γ and the conformal welding homeomorphism h corresponding to Γ. It is known that the set SQS of all strongly quasisymmetric homeomorphisms of S, which correspond to Bishop-Jones quasicircles, forms a partial topological group under the BMO topology; the neighborhood base is given at the identity by using the BMO norm and is distributed at every point h ∈ SQS by the right translation. It has been proved that its characteristic topological subgroup SS consists of strongly symmetric homeomorphisms, which correspond precisely to asymptotically smooth curves in the sense of Pommerenke. We consider intermediately the set CQS of conformal welding homeomorphisms with respect to chord-arc curves. In Section 3, we prove that every element of SQS can be represented as a finite composition of elements in CQS (Theorem 3.3). As a consequence, we see that CQS does not carry a group structure under composition (Corollary 3.4). The Bers embedding of the universal Teichmüller space T is a map into the Banach space of bounded holomorphic quadratic differentials. Affine foliated structures of T and the quotient Bers embeddings are induced by its subspaces. This was first investigated by Gardiner and Sullivan for the little subspace T_0, which consists of the asymptotically conformal elements of T. Later, it was proved that the Bers embedding is compatible with the coset decomposition T_0\T and the quotient Banach space. By this, the complex structure modeled on the quotient Banach space is provided for T_0\T through the quotient Bers embedding. In Section 6, a new metric m_C, invariant under the right translation, is introduced on the BMO Teichmüller space by using the Carleson norm. We call this the Carleson metric. It is shown to be a continuous Finsler metric in the special case of T_v (Theorem 6.2). Moreover, the Carleson metric m_C induces a quotient metric on the quotient BMO Teichmüller space. Then, a list of intended results is presented in this section, following the work on the asymptotic Teichmüller space T_0\T by Earle, Gardiner and Lakic. In the following Section 7, we show that the Carleson distance induced by m_C is complete in the BMO Teichmüller spaces (Theorem 7.4). We also compare the Carleson distance with the Teichmüller distance and the Kobayashi distance. One of our motivations for studying these structures of BMO Teichmüller spaces is to consider an open problem on the connectivity of the chord-arc curve subspace (see ). The topology on this space is induced by the BMO norm of the conformal welding homeomorphisms. The distribution of the chord-arc curve subspace T_c in the BMO Teichmüller space T_b (Theorem 4.1) translates the problem of connectivity to the quotient T_v\T_c. By introducing the (quotient) Carleson metric on this space, we can investigate a certain convexity of the chord-arc curve subspace to consider the problem. The universal Teichmüller space T is defined as the group QS of all quasisymmetric homeomorphisms of the unit circle S = {z | |z| = 1} modulo the left action of the group Möb(S) of all Möbius transformations of S, i.e., T = Möb(S)\QS. A topology of T can be defined by the quasisymmetry constants of quasisymmetric homeomorphisms. The universal Teichmüller space T can also be defined by using quasiconformal homeomorphisms of the unit disk D = {z | |z| < 1} with complex dilatations in the space of Beltrami coefficients M(D). Then, T is the quotient space of M(D) under the Teichmüller equivalence.
The topology of T coincides with the quotient topology induced by the projection π: M(D) → T. The universal Teichmüller space T is identified with a domain in the Banach space by the Bers projection Φ, i.e., β(T) = Φ(M(D)). Here, for every μ ∈ M(D), Φ(μ) is defined by the Schwarzian derivative S(f^μ|_{D*}) of the conformal homeomorphism f^μ of D* that is quasiconformal on D with complex dilatation μ. The Bers embedding β: T → B(D*) is a homeomorphism onto the image β(T) = Φ(M(D)), and it defines a complex structure of T as a domain in the Banach space B(D*). It is proved that Φ (and so is π) is a holomorphic split submersion from M(D) onto its image. The space M(D) of Beltrami coefficients and the universal Teichmüller space T are equipped with a group structure. This can easily be seen by normalizing the elements of quasiconformal self-homeomorphisms of D and quasisymmetric self-homeomorphisms of S. The normalization is defined by fixing three distinct points (e.g., 1, i, −1) of S. Then, M(D) is identified with the group of all normalized quasiconformal self-homeomorphisms of D, and T is identified with the group of all normalized quasisymmetric self-homeomorphisms of S. The operation on the groups M(D) and T is denoted by *. For every μ ∈ M(D), the normalized quasiconformal self-homeomorphism of D with complex dilatation μ (and its quasisymmetric extension to S) is denoted by f^μ. Then, the operation * is defined by the relation f^{ν*μ} = f^ν ∘ f^μ. For every μ ∈ M(D), the right translation r_μ: M(D) → M(D) is given by r_μ(ν) = ν * μ, and it descends to the right translation R_[μ] of T. Both r_μ and R_[μ] are biholomorphic automorphisms of M(D) and T, respectively. Moreover, for [μ] = π(μ), we have R_[μ] ∘ π = π ∘ r_μ. A quasisymmetric homeomorphism h ∈ QS is called strongly quasisymmetric if for any ε > 0 there is some δ > 0 such that for any arc I ⊂ S and any Borel set E ⊂ I, |E| ≤ δ|I| implies that |h(E)| ≤ ε|h(I)|. It should be noted that each such h is absolutely continuous and log h′ is in BMO(S). Here, a locally integrable function φ on S belongs to BMO(S) if its mean oscillation over arcs is uniformly bounded, where the supremum is taken over all arcs I on S, |I| denotes the length of I, and φ_I denotes the average of φ over I. We denote by SQS the group of all strongly quasisymmetric homeomorphisms, and assign to SQS the BMO distance induced by the BMO norm of log h′. The BMO Teichmüller space is defined by T_b = Möb(S)\SQS, which is equipped with the topology induced by the BMO distance. As in the case of the universal Teichmüller space, the BMO Teichmüller space T_b has a corresponding space of Beltrami coefficients. For a simply connected domain Ω in the Riemann sphere C with ∞ ∉ ∂Ω, a measure λ = λ(z)dxdy on Ω is called a Carleson measure if its Carleson norm ‖λ‖_c is finite. We denote the set of all Carleson measures on Ω by CM(Ω). For any μ ∈ L^∞(D) and for the Poincaré density ρ_D(z) = (1 − |z|²)^{-1} (with curvature constant equal to −4) on D, we set λ_μ(z)dxdy = |μ(z)|² ρ_D(z)dxdy. Then, the linear subspace L(D) ⊂ L^∞(D) consisting of all μ with λ_μ ∈ CM(D) is a Banach space with the norm ‖μ‖_* = ‖μ‖_∞ + ‖λ_μ‖_c^{1/2}. There is also a subspace of bounded quadratic differentials corresponding to T_b: for φ ∈ B(D*), another norm is given by adding to the norm of B(D*) the square root of the Carleson norm of the measure |φ(z)|² ρ_{D*}^{-3}(z)dxdy. 3. Chord-arc curves do not have the group structure Let Γ be a Jordan curve in the Riemann sphere C, let Ω and Ω* denote its inner and outer domains in C, respectively, and let g and f be conformal maps of D and D* onto Ω and Ω*, respectively. We define the conformal welding homeomorphism h with respect to Γ by h = (g|_S)^{-1} ∘ (f|_S). A rectifiable Jordan curve Γ in the complex plane C is called a chord-arc curve if l_Γ(z_1, z_2) ≤ K|z_1 − z_2| for any z_1, z_2 ∈ Γ, where l_Γ(z_1, z_2) denotes the Euclidean length of the shorter arc of Γ between z_1 and z_2.
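For reference, one common way to write the BMO and Carleson conditions used above is the following; the normalizations vary across references, so treat this as a standard formulation rather than a quotation from the paper.

```latex
% Standard formulations (normalizations vary by reference; this is one
% common convention, not a quotation from the paper).
\[
\|\varphi\|_{\mathrm{BMO}}
  = \sup_{I \subset S} \frac{1}{|I|} \int_I |\varphi - \varphi_I|\,|dz|,
\qquad
\varphi_I = \frac{1}{|I|} \int_I \varphi\,|dz|,
\]
\[
\|\lambda\|_{c}
  = \sup_{I \subset S} \frac{1}{|I|} \iint_{Q_I} \lambda(z)\,dx\,dy,
\qquad
Q_I = \{\, r\zeta \mid \zeta \in I,\ 1-|I| < r < 1 \,\},
\]
% Here the suprema run over all arcs I of the unit circle S, and Q_I is
% the Carleson box over I in the unit disk D.
```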
The smallest such K is called the chord-arc constant for Γ. It is a well-known fact that a chord-arc curve is the image of S under a bi-Lipschitz homeomorphism f of C. That is, there exists a homeomorphism f: C → C with a constant C ≥ 1 such that f(S) = Γ and C^{-1}|z − w| ≤ |f(z) − f(w)| ≤ C|z − w| for all z, w ∈ C. When Γ is a Jordan curve passing through ∞, we may replace the Euclidean distance in the definition above with the spherical distance in order to define Γ to be a chord-arc curve. Bi-Lipschitz homeomorphisms preserve the Hausdorff dimension, and hence the Hausdorff dimension of a chord-arc curve is one. Although chord-arc curves form a very special class of quasicircles, no characterization has been found in terms of their conformal welding homeomorphisms of S. We denote the set of all these conformal welding homeomorphisms by CQS. It is known that if h ∈ CQS then h ∈ SQS (see ), that is, h is strongly quasisymmetric, and in particular, ‖log h′‖_BMO < ∞. Conversely, there exists some constant c > 0 such that if ‖log h′‖_BMO < c then h ∈ CQS and the corresponding Γ is a chord-arc curve with chord-arc constant K sufficiently close to π/2. In this section, we prove that every element of SQS can be represented as a finite composition of elements in CQS. As a consequence, we see that CQS does not carry a group structure under composition. We state our results in the framework of Teichmüller theory. The chord-arc curve space is identified with a subspace T_c of the BMO Teichmüller space T_b, which is given by the set CQS modulo Möb(S), i.e., T_c = Möb(S)\CQS ⊂ T_b. By regarding T_c as a subset of the group (T_b, *), we can think of the inverse and the composition of elements of T_c. For the proof of the main result in this section, we first claim that T_c (or CQS) is preserved under taking the inverse, with the BMO norm of log (h^{-1})′ bounded by some constant C > 0 depending only on the strongly quasisymmetric constant of h. Thus, the inverse mapping is bounded under the BMO topology. However, this correspondence should not be continuous except at the origin. We now prove the claim mentioned above as follows. This is essentially shown by Zinsmeister (see also ). We see that o ∈ V, and hence V is nonempty. By the definition of V, V is open. Now we prove that V is closed. Let {τ_n} ⊂ V be a sequence such that τ_n → τ as n → ∞. We will show that τ ∈ V. Let U be an open neighborhood of o in T_c. Then, there is a neighborhood W of τ such that each element of W can be represented as a finite composition of elements in T_c. This completes the proof. As T_c ⊊ T_b by definition, we have the following immediate consequence of this theorem. Corollary 3.4. T_c is not a subgroup of (T, *). Foliated structure of the chord-arc curve subspace We have mentioned that the chord-arc curve subspace T_c is an open subset of T_b. There is a long-standing open question about whether T_c is connected or not. For a recent account of a related result, see Astala and González. In this section, we prove a result concerning the distribution of T_c in T_b. In the universal Teichmüller space T, there is a closed subspace T_0 defined by T_0 = π(M_0(D)), where M_0(D) is the space of Beltrami coefficients vanishing at the boundary. The subspace T_0 can also be defined to be Möb(S)\Sym by the subgroup Sym ⊂ QS consisting of symmetric homeomorphisms of S, which are the boundary extensions of asymptotically conformal homeomorphisms of D whose complex dilatations belong to M_0(D). We denote by B_0(D*) the Banach subspace of B(D*) consisting of all elements φ such that ρ_{D*}^{-2}(z)|φ(z)| → 0 as |z| → 1+.
Under the Bers embedding, T_0 corresponds to B_0(D*). Similarly, there is a closed subspace in T_b that can be given by vanishing Carleson measures on D. Here, we say that a Carleson measure λ(z)dxdy on a simply connected domain Ω is vanishing if its averages over Carleson boxes tend to 0 as the size of the boxes tends to 0. The set of all such vanishing Carleson measures on Ω is denoted by CM_0(Ω). Let M_0(D) be the subspace of M(D) consisting of all Beltrami coefficients μ such that λ_μ(z)dxdy ∈ CM_0(D). Then, the VMO Teichmüller space is T_v = π(M_0(D)). The VMO Teichmüller space T_v can also be defined to be T_v = Möb(S)\SS by the characteristic topological subgroup SS of the partial topological group SQS consisting of all strongly symmetric homeomorphisms. Here, we say that h ∈ SQS is strongly symmetric if log h′ belongs to VMO(S). In fact, VMO(S) is the closed subspace of BMO(S) which is precisely the closure of the space of all continuous functions on S under the BMO topology. The inclusion relation SS ⊂ Sym is known. We prove that T_c is distributed in T_b entirely in all directions of T_v in the following sense (Theorem 4.1). We decompose f into f_0 ∘ f_1 as follows. The quasiconformal homeomorphism f_1: C → C is chosen so that its complex dilatation μ_1 coincides with μ on Ω − Ω_0 for some compact subset Ω_0 of Ω and is zero elsewhere. Then f_0 is defined to be f ∘ f_1^{-1}. We have the following commutative diagram. Here, the compact subset Ω_0 ⊂ Ω is chosen so that the measure associated with μ_1 belongs to CM_0(Ω) and has a sufficiently small Carleson norm. It follows from [26, Lemma 4] that this norm can be made small according to that of the Carleson measure |S(f_1)|² ρ^{-3}. Combined with the facts that Γ is a chord-arc curve and that the subspace T_c is open, this implies that ∂f_1(Ω) is also a chord-arc curve. Since the complex dilatation μ_0 of f_0 has the compact support f_1(Ω_0) ⊂ f_1(Ω), we conclude that Γ_1 is the image of ∂f_1(Ω) under the conformal mapping f_0 defined on C − f_1(Ω_0), which is bi-Lipschitz in a neighborhood of ∂f_1(Ω). Thus, we see that Γ_1 is again a chord-arc curve, which implies that τ ∈ T_c. We consider the projection p: T_b → T_v\T_b. The quotient space T_v\T_b is endowed with the quotient topology. We apply this quotient map to the subspace T_c. Then, Theorem 4.1 is equivalent to saying that T_c = p^{-1}(p(T_c)). Concerning the topology of T_c and p(T_c), we immediately see the following. Noting the fact that T_v is contractible, the connectedness problem on T_c can also be passed to this quotient. The quotient Bers embedding from T_v\T_c = p(T_c) into B_0(D*)\B(D*) has been shown to be well-defined and injective. We also generalize this theorem to the entire space T_v\T_b = p(T_b) in the next section. Combining the claim for T_c with Theorem 4.1, we naturally have the following result. By this result, we have a corresponding decomposition of the Bers embedding. We call this decomposition the affine foliated structure of T_c induced by T_v. The quotient Bers embedding of the BMO Teichmüller space In this section, we prove the affine foliated structure of the BMO Teichmüller space T_b and the injectivity of the quotient Bers embedding induced by the VMO Teichmüller space T_v. From this result, we provide the quotient BMO Teichmüller space with a complex structure modeled on the quotient Banach space B_0(D*)\B(D*). Proof. For every τ ∈ T_b = Möb(S)\SQS, let f: D → D be a normalized quasiconformal extension of τ with complex dilatation μ ∈ M(D) (i.e., π(μ) = τ) that is bi-Lipschitz under the Poincaré metric on D (for instance, the Douady-Earle extension of τ; see ), and let φ = β(τ) ∈ B(D*). For one inclusion ⊂, we divide the argument into two steps. We first deal with the special case that ν ∈ M_0(D) has compact support.
Then, we extend this to the general case by means of an approximation process. We take a Beltrami coefficient ν on D with compact support. Clearly, ν ∈ M_0(D). We will show that Φ(ν * μ) − Φ(μ) ∈ B_0(D*); the inclusion ⊂ then follows. The resulting map is a quasiconformal homeomorphism whose complex dilatation on Ω = f(D) has compact support contained in a Jordan domain Ω_0 with closure in Ω, and it is conformal on the exterior of Ω_0 with a distortion estimate there (see ). Combined with the monotonicity of Poincaré densities, this inequality implies that there exists a constant C bounding the relevant Carleson norm. By this and the well-definedness of the pull-back operator from CM_0(Ω*) into CM_0(D*) (see [26, Theorem 3]), the claim of the first step follows. For any σ ∈ T_v = Möb(S)\SS, the complex dilatation of the Douady-Earle extension of σ is denoted by ν. Then, ν ∈ M_0(D) (see also ). We take an increasing sequence of positive numbers r_n < 1 (n = 1, 2, ...) tending to 1. Let ∆_n = D(0, r_n), the disk of radius r_n centered at the origin, and let A_n = D − ∆_n. We define ν_n to be ν on ∆_n and zero on A_n. Then, {ν_n} is a sequence of complex dilatations with compact support approximating ν as n → ∞. Indeed, it was proved that the complex dilatation of the Douady-Earle extension of a symmetric homeomorphism is in M_0(D). Combined with the inclusion relation SS ⊂ Sym, we see that ν belongs to M_0(D), which yields that the first term tends to 0. By the definition of M_0(D), we have that the second term tends to 0. Since f is bi-Lipschitz under the Poincaré metric, it induces a biholomorphic automorphism r_μ^{-1}: M(D) → M(D) (see ). Then, we obtain the required convergence as n → ∞. We have proved that Φ(ν_n * μ) − Φ(μ) ∈ B_0(D*) in the first step. Then, it follows from the fact that B_0(D*) is closed in B(D*) that Φ(ν * μ) − Φ(μ) ∈ B_0(D*). This proves the inclusion ⊂. The other inclusion ⊃ can be proved by using the following claim, which is shown in the literature. Claim. Let f: C → C be a quasiconformal homeomorphism with complex dilatation μ ∈ M(D) that is bi-Lipschitz between D and Ω = f(D) under their Poincaré metrics, and conformal on D* with S(f|_{D*}) = φ. Then, for every ψ ∈ B_0(D*), there exists a quasiconformal homeomorphism f̃: C → C with complex dilatation on Ω vanishing at the boundary that is conformal on Ω* = f(D*) with S(f̃ ∘ f|_{D*}) = φ + ψ, such that the following statements are valid: f̃ is decomposed into two quasiconformal homeomorphisms f̃ = f̃_0 ∘ f̃_1 satisfying the following properties: (i) the complex dilatation of f̃_1 on Ω satisfies a pointwise bound for some ε > 0 and for every z ∈ D; (ii) the support of the complex dilatation ν_0 of the normalized quasiconformal homeomorphism f_0: D → D, which is conformally conjugate to f̃_0: f̃_1(Ω) → f̃(Ω), is contained in a compact subset of D; (iii) for the complex dilatation ν_1 of the normalized quasiconformal homeomorphism f_1: D → D, which is conformally conjugate to f̃_1: Ω → f̃_1(Ω), we have the corresponding estimate. Combining all those maps in the claim above, we have a commutative diagram in which the conjugating conformal maps appear; in particular, there is a quasiconformal homeomorphism f̃: C → C conformal on Ω* and asymptotically conformal on Ω such that S(f̃ ∘ f|_{D*}) = φ + ψ. According to the claim above, we consider the decomposition f̃ = f̃_0 ∘ f̃_1 together with the other maps that appear in it, and apply the properties shown there. Since ψ ∈ B_0(D*), if ψ − ψ_1 ∈ B_0(D*), then ψ_1 ∈ B_0(D*). By property (ii), ν_0 in particular belongs to M_0(D), and property (iii) asserts that ψ − ψ_1 = Φ(ν_0 * ν_1 * μ) − Φ(ν_1 * μ). By the previous arguments showing the inclusion ⊂, we see that ψ − ψ_1 ∈ B_0(D*). Hence, ψ_1 ∈ B_0(D*).
By this theorem, we obtain the corresponding decomposition of the Bers embedding; this is the affine foliated structure of T_b induced by T_v. From Theorem 5.1, we also see that the quotient space T_v\T_b can be identified with a domain in the quotient Banach space B_0(D*)\B(D*). Corollary 5.2. The quotient Bers embedding β̂ is well-defined and injective. Moreover, β̂ is a homeomorphism of T_v\T_b onto its image. Consequently, T_v\T_b possesses a complex structure such that β̂ is a biholomorphic map from T_v\T_b onto its image. Proof. The well-definedness and injectivity of the map β̂ are direct consequences of Theorem 5.1. We show that the quotient Bers embedding β̂ is also a homeomorphism from T_v\T_b onto its image. For the quotient maps p: T_b → T_v\T_b and P: B(D*) → B_0(D*)\B(D*), the commutative relation P ∘ β = β̂ ∘ p holds. For an arbitrary open subset V ⊂ T_b, the saturation p^{−1}(p(V)) is open; this shows that p is an open map. In the same way, for an arbitrary open subset U ⊂ B(D*), the saturation P^{−1}(P(U)) is open; this shows that P is an open map. Moreover, the Bers embedding β: T_b → B(D*) is a homeomorphism from T_b onto its image. Thus, β̂ is open and continuous; combined with the injectivity of β̂, this implies that β̂ is a homeomorphism of T_v\T_b onto its image. Concerning biholomorphic automorphisms of p(T_b) = T_v\T_b with respect to its complex structure, we have the following. This kind of argument is well known in the theory of asymptotic Teichmüller spaces. Proof. For each ν with [ν] ∈ T_b, the right translation R_ν maps the coset T_v ∗ μ onto the coset T_v ∗ R_ν(μ). This shows that the correspondence is well-defined as a map R̂_ν: p(T_b) → p(T_b) satisfying p ∘ R_ν = R̂_ν ∘ p. By considering the inverse mapping R̂_ν^{−1} = R̂_{ν^{−1}}, we see that R̂_ν is bijective. In the same way as in the proof of Corollary 5.2, R̂_ν is shown to be a homeomorphism. For the statement, it suffices to prove that R̂_ν is holomorphic. We may identify T_b with the domain β(T_b) in B(D*). The conjugate R̃_ν = β ∘ R_ν ∘ β^{−1} is a biholomorphic automorphism of β(T_b) ⊂ B(D*). We use its projection to P(β(T_b)) = β̂(p(T_b)) as a replacement for R̂_ν, which satisfies the analogous commutative relation, where the limits defining derivatives refer to convergence in the norm. From this, we see that the induced derivative is bounded, with operator norm dominated by that of dR̃_ν. Indeed, for every element of B_0(D*)\B(D*) and every ε > 0, we choose a representative ψ ∈ B(D*) whose norm exceeds the quotient norm by at most ε; moreover, since we may assume in this choice that the norm of ψ is at most twice the quotient norm, the required bound follows. The Carleson metric and its quotient. In this section, we consider translation-invariant metrics on the BMO Teichmüller space T_b and its quotient space T_v\T_b. We define the following translation-invariant metric on T_b in a canonical way; for simplicity, the metric is given in the Bers embedding β(T_b). As before, for a point of T_b we use the conjugate R̃ of the right translation by the Bers embedding, which is a biholomorphic automorphism of β(T_b). Definition. A translation-invariant metric m_C at any point φ ∈ β(T_b) ⊂ B(D*) and for any tangent vector ν ∈ B(D*) is defined to be m_C(φ, ν) = ‖d_φ R̃(ν)‖_B, where R̃ is the conjugated right translation moving φ to the origin. We call this metric m_C the Carleson metric on the BMO Teichmüller space T_b ≅ β(T_b). The pseudo-distance induced by this metric is denoted by d_C(·, ·), which we call the Carleson distance. We note that for a smooth curve γ = γ(t) in β(T_b) ⊂ B(D*), its length l_C(γ) is defined by the upper integral of m_C along γ, and the Carleson distance d_C(φ_1, φ_2) is the infimum of l_C(γ) taken over all smooth curves γ connecting φ_1 and φ_2. Here is a list of intended results on the Carleson metric.
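The displays defining the curve length and the induced distance are elided above; in the standard Finsler formulation (our reconstruction under the stated notation), for a smooth curve γ: [0, 1] → β(T_b):

```latex
l_C(\gamma) \;=\; \overline{\int_0^1} m_C\bigl(\gamma(t),\, \dot\gamma(t)\bigr)\, dt ,
\qquad
d_C(\varphi_1, \varphi_2) \;=\; \inf_{\gamma:\ \varphi_1 \to \varphi_2} l_C(\gamma) .
```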
Concerning the classical case of the Teichmüller metric, we refer to the work by Earle, Gardiner, and Lakic. In this section, we only prove that the Carleson metric restricted to T_v is continuous; in the next section, we prove that T_b is complete with respect to the Carleson distance, together with certain relations between the Carleson distance and the Teichmüller–Kobayashi distance. The continuity of the Carleson metric in the special case is obtained as follows. Theorem 6.2. The Carleson metric m_C is continuous on the VMO Teichmüller space T_v. By this theorem, we can say that the VMO Teichmüller space T_v has a continuous Finsler structure with the Carleson metric. We close this section by mentioning the quotient metric on p(T_b) = T_v\T_b induced by m_C. We note that m_C is invariant under the group structure of T_b (the transitive group action of T_b is isometric with respect to m_C) and that the projection p is given by taking the quotient by the subgroup T_v ⊂ T_b. Then, the quotient metric m̂_C on p(T_b) is defined for any point of β̂(p(T_b)) and any tangent vector in B_0(D*)\B(D*). Moreover, we see that m̂_C is invariant under every biholomorphic automorphism R̂_ν of p(T_b) verified in Corollary 5.3. The pseudo-distance induced by m̂_C on p(T_b) coincides with the quotient of d_C, and this is in fact a distance; see and Remark 7.3 in the next section. Properties of the Carleson distance. In this section, we prove further properties of the Carleson distance mentioned in the previous section. First, we give an explicit estimate of the operator norm of the derivative d_0β: L^∞(D) → B(D*). This can be used alternatively in the proof of Theorem 6.2 to show the convergence of the Carleson norm of |d_φ R̃(ν)(z) − ν(z)|² ρ_{D*}(z)^{−3}. We remark that this explicit estimate is not necessary for the other arguments in this section, but it might serve as a refinement of the results. Proof. The derivative d_0β can be represented by an integral kernel. Applying the Cauchy–Schwarz inequality to this representation, we obtain a pointwise bound showing that, for every ζ ∈ S, the integral over the Carleson box splits into two terms, where Δ(ζ, r) denotes the disk with center ζ and radius r. For the first term I_1 on the right-hand side of the inequality above, we note that |w − z| ≥ 2r/3 and that D \ Δ(ζ, 5r/3) lies in the half-plane cut off by a line passing through a given z ∈ Δ(ζ, r) ∩ D*. Moreover, Δ(ζ, r) ∩ D* is included in S_0 ∩ D*, where S_0 is a sector with center 0, radius 1 + r, and central angle at most r; hence we obtain the estimate for I_1. For the second term I_2, we argue similarly with a sector with center w and central angle at most 3/2. From these estimates, we obtain the bound ‖d_0β(ν)‖_B ≤ 24‖ν‖_∞, which implies that ‖d_0β‖ ≤ 24. In the following result, we obtain a locally uniform estimate for the operator norm ‖dR̃‖ when φ ∈ β(T_b) is around the origin. Proof. For the upper estimate, we decompose the map into the Ahlfors–Weill section, the right translation, and the Bers embedding. Here, the Ahlfors–Weill section is linear with derivative norm bounded as before, and d_0β is a bounded linear operator with ‖d_0β‖ ≤ 24; hence, it suffices to consider the derivative of the right translation r_μ. For the lower estimate of the operator norm ‖dR̃‖, we consider the corresponding upper estimate for the inverse. We know the bound on the derivative of the Ahlfors–Weill section as before. Moreover, we have the derivative d_0 r_μ^{−1} in the direction ν ∈ L^∞(D) in explicit form. Then, by a similar argument as before, we can prove that the operator norm of d_0 r_μ^{−1} is uniformly bounded for every φ ∈ U(ε_1) (with φ = β(μ)), by replacing ε_1 with a smaller constant if necessary. The locally uniform boundedness of the operator norm ‖dβ‖ is a consequence of the holomorphy of β. This is a general argument, but for completeness, we review it here.
The second-order derivative of β is the derivative of dβ: M(D) → L(L^∞(D), B(D*)) given by the correspondence μ ↦ d_μβ, where L(L^∞(D), B(D*)) is the Banach space of bounded linear operators from L^∞(D) to B(D*) with respect to the operator norm. Then, the property that dβ is differentiable at 0 is equivalent to the existence of a bounded linear operator A: L^∞(D) → L(L^∞(D), B(D*)) giving the first-order expansion of dβ at 0, which yields the locally uniform boundedness of ‖dβ‖. Remark 7.3. To make the arguments in this section precise, we should note here that the estimate of ‖dR̃‖ as in Proposition 7.2 guarantees a locally uniform comparison of the metric with the norm of the Banach space. Then, the pseudo-distance induced by the Carleson metric is a distance, and it defines the same topology as the original one on T_b. We consider any Cauchy sequence in (T_b, d_C). It suffices to consider its tail, whose diameter can be arbitrarily small. As the group of right translations {R_ν} acts isometrically and transitively on T_b, we may assume that the tail of the Cauchy sequence is contained in β^{−1}(U(ε_1)). From the lower estimate of the derivative as in Proposition 7.2, we see that the Bers embedding of the Cauchy sequence is a Cauchy sequence with respect to the norm ‖·‖_B, hence convergent; then (∗∗) implies that the original Cauchy sequence also converges with respect to d_C. We compare the Teichmüller metric and the Carleson metric. The Teichmüller metric is defined through the pairing with A_1(D*), the Banach space of integrable holomorphic quadratic differentials on D*. The operator norm H(φ) is comparable with the hyperbolic supremum norm, and H(φ) is clearly dominated by it. At any point φ ∈ β(T), the Teichmüller metric is given by m_T(φ, ν) = H(d_φ R̃(ν)). The distance induced by this metric is the Teichmüller distance d_T. We consider the restriction of d_T to the BMO Teichmüller space T_b. Then, the infimum of l_T(γ) taken over all smooth curves in T_b ≅ β(T_b) connecting two points defines an inner distance d_T^i between them, which clearly satisfies d_T ≤ d_T^i. Proposition 7.6. There exists a constant L > 0 such that m_T ≤ L m_C on T_b; hence, d_T ≤ L d_C on T_b. Proof. It was proved in that there is some constant L such that the hyperbolic supremum norm of φ is at most L‖φ‖_B for every φ ∈ B(D*). Combined with the bound of H(φ) by the hyperbolic supremum norm, it follows that H(φ) ≤ L‖φ‖_B, and the assertion follows. It was shown by Fan and Hu that the Kobayashi distance d_K defined on the complex manifold T_v coincides with the restriction of the Teichmüller distance d_T; in fact, d_K = d_T^i = d_T on T_v. Then, by Proposition 7.6, we have d_K ≤ L d_C on the VMO Teichmüller space T_v. However, d_T and d_C are not comparable; that is, there is no inequality in the opposite direction either for T_b or for T_v. This is because the Carleson distance d_C is complete on T_b by Theorem 7.4, hence also on the closed subspace T_v, whereas d_T is complete neither on T_b nor on T_v. In fact, the closure of T_v in the universal Teichmüller space (T, d_T) is T_0, the little subspace given by vanishing Beltrami coefficients (asymptotically conformal maps), which contains an element not belonging to T_b.
State estimation in fractional-order systems with coloured measurement noise This paper presents new estimation methods for discrete fractional-order state-space systems with coloured measurement noise. A novel approach is proposed to convert a fractional system with coloured measurement noise into a system with white measurement noise in which the process and measurement noises are correlated with each other. Two new Kalman filter algorithms for fractional-order linear state-space systems with coloured measurement noise, as well as a new extended Kalman filter algorithm for state estimation in nonlinear fractional-order state-space systems with coloured measurement noise, are proposed. The correctness of the derived equations and relations is established in several theorems. The validity and effectiveness of the proposed algorithms are verified by simulation and compared with previous work. The results show that, for linear and nonlinear fractional-order systems with coloured noise, the proposed methods are more accurate than conventional methods in terms of estimation error and estimation error covariance, and that the proposed algorithms can accurately perform estimation in fractional-order systems with coloured measurement noise.
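The abstract does not spell out the filter equations. As orientation, here is a minimal, hypothetical Python sketch of the prediction step of a fractional Kalman filter under the Grünwald–Letnikov discretization that this line of work builds on; it is not the paper's algorithm, all names are illustrative, and the covariance update keeps only the leading memory term.

```python
import numpy as np
from scipy.special import binom

# Model assumed here (a sketch, not the paper's):
#   Delta^alpha x_{k+1} = A x_k + B u_k + w_k
#   x_{k+1} = Delta^alpha x_{k+1} - sum_{j=1}^{k+1} gamma_j x_{k+1-j}
# with gamma_j = (-1)^j * C(alpha, j).

def gl_coeffs(alpha, k):
    """gamma_j = (-1)^j * binom(alpha, j) for j = 1..k."""
    j = np.arange(1, k + 1)
    return ((-1.0) ** j) * binom(alpha, j)

def predict(A, B, Q, alpha, x_hist, P_hist, u):
    """One prediction step; x_hist[i], P_hist[i] are the estimates at time i."""
    k = len(x_hist) - 1                        # current time index
    g = gl_coeffs(alpha, k + 1)                # gamma_1 .. gamma_{k+1}
    # Memory term of the fractional difference: sum_j gamma_j * x_{k+1-j}
    mem = sum(g[j - 1] * x_hist[k + 1 - j] for j in range(1, k + 2))
    x_pred = A @ x_hist[k] + B @ u - mem
    # Short-memory covariance propagation (gamma_1 = -alpha, so F = A + alpha*I):
    F = A - g[0] * np.eye(A.shape[0])
    P_pred = F @ P_hist[k] @ F.T + Q
    return x_pred, P_pred
```

The measurement update would then follow the usual Kalman correction; handling the coloured measurement noise, which is the paper's contribution, requires the noise-whitening transformation described in the abstract.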
Characterization of polymer nanowires fabricated using the nanoimprint method In this paper, an Ormocomp polymer nanowire with possible use in integrated-optics sensing applications is presented. We discuss the structure design and the fabrication process, and present results of the simulation and characterization of the optical field profile. Since the nanowires are designed to be used as integrated-optics devices, they are attached to tapered and feed waveguides at their ends. The fabrication process in this work is based mainly on the nanoimprint technique: the method uses a silicon nanowire as the original pattern and polydimethylsiloxane (PDMS) as the soft mold. The PDMS mold is directly imprinted on the Ormocomp layer and then cured by UV light to form the polymer-based nanowire. The Ormocomp nanowires are fabricated with various widths and lengths at a fixed 500 nm thickness: the length of the nanowires is varied from 250 µm to 2 mm, whereas the width of the structures is varied between 500 nm and 1 µm. The possible optical mode field profile in the proposed polymer nanowire design is studied using the H-field finite element method (FEM). In the characterization part, the optical field profile and the intensity at the device output are the main focus of this paper. Nanowires of different lengths show different characteristics in terms of output intensity. Image processing is used to obtain the intensity of the output signal. A comparison of the optical field and output intensity for each polymer nanowire is also discussed.
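The abstract does not describe the image-processing step in detail; the following Python sketch shows one plausible reading of it, integrating pixel values over a region of interest around the output facet. The file name, ROI coordinates, and background handling are all hypothetical.

```python
import numpy as np
from imageio.v3 import imread

def output_intensity(path, roi):
    """Integrate pixel values over a region of interest (x0, y0, x1, y1)."""
    img = imread(path).astype(float)
    if img.ndim == 3:                 # RGB -> grayscale by channel mean
        img = img.mean(axis=2)
    img -= np.median(img)             # crude background subtraction
    x0, y0, x1, y1 = roi
    return img[y0:y1, x0:x1].clip(min=0).sum()

# Hypothetical usage: one captured image per nanowire length.
# intensities = {L: output_intensity(f"wire_{L}um.png", (120, 80, 160, 120))
#                for L in (250, 500, 1000, 2000)}
```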
Influence of Filler Orientation on the Performance of Bipolar Plates Bipolar plates contribute significantly to the development of polymer electrolyte membrane (PEM) fuel cell technology owing to their ability to provide high electrical conductivity depending on the materials used. Mismatched materials and manufacturing methods may lead to inferior performance of PEM fuel cells; hence, material development is crucial to balance the overall performance of PEM fuel cells, including the mechanical properties and electrical conductivity of the materials. Studies on conductive polymer composites (CPCs) have proposed filler orientation, in terms of filler aspect ratio and shape, as an alternative route to enhance the overall performance of the bipolar plate. Filler orientation permits the formation of an excellent conductive network while controlling the filler alignment according to the required application. This paper reviews various studies of filler orientation, including the materials used and methods of manufacture of CPC materials, for the effective development of bipolar plates. Techniques to orient the filler are highlighted in terms of materials processing and their effects on material performance.
PDGFB, a new candidate plasma biomarker for venous thromboembolism: results from the VEREMA affinity proteomics study. There is a clear clinical need for high-specificity plasma biomarkers for predicting the risk of venous thromboembolism (VTE), but thus far such markers have remained elusive. Utilizing affinity reagents from the Human Protein Atlas project and multiplexed immunoassays, we extensively analyzed plasma samples from 2 individual studies to identify candidate protein markers associated with VTE risk. We screened plasma samples from 88 VTE cases and 85 matched controls, collected as part of the Swedish "Venous Thromboembolism Biomarker Study," using suspension bead arrays composed of 755 antibodies targeting 408 candidate proteins. We identified significant associations between VTE occurrence and plasma levels of human immunodeficiency virus type I enhancer binding protein 1 (HIVEP1), von Willebrand factor (VWF), glutathione peroxidase 3 (GPX3), and platelet-derived growth factor B (PDGFB). For replication, we profiled plasma samples of 580 cases and 589 controls from the French FARIVE study. These results confirmed the association of VWF and PDGFB with VTE after correction for multiple testing, whereas only weak trends were observed for HIVEP1 and GPX3. Although plasma levels of VWF and PDGFB correlated modestly (ρ ∼ 0.30) with each other, they were independently associated with VTE risk in a joint model in FARIVE (VWF P < .001; PDGFB P = .002). PDGFB was verified as the target of the capture antibody by immunocapture mass spectrometry and sandwich enzyme-linked immunosorbent assay. In conclusion, we demonstrate that high-throughput affinity plasma proteomic profiling is a valuable research strategy for identifying potential candidate biomarkers of thrombosis-related disorders, and our study suggests a novel association of PDGFB plasma levels with VTE.
Mediastinal lymph node resection in stage IA non-small cell lung cancer with a small nodule: is it mandatory? Lung cancer, one of the most common cancers in the world, is still the leading cause of cancer-related death. Until recently, the only proven screening tool for early detection of lung cancer was low-dose computed tomography (LDCT). These days, the detection of small-sized nodules or ground-glass opacities (GGO) has increased owing to the widespread adoption of CT screening and improved imaging technology.
Alarm calls of house wrens (Troglodytes aedon bonariae) elicit responses from conspecific and heterospecific individuals Nesting house wrens (Troglodytes aedon bonariae) use two basic alarm calls (Type I and Type II) when they detect a threat near the nest. We experimentally analysed whether the calls distract predators or serve to recruit other birds into a mobbing flock that deters predators. The results show that individuals preferentially position themselves in front of the threat, disclosing the location of the nest. Also, using playbacks of house wren alarm calls, we found that these calls recruited both conspecific and heterospecific individuals to create a mobbing response. The alarm calls of house wrens thus seem to fulfil multiple functions: they not only convey information about the threat to mates and nestlings, as revealed in previous studies, but also act as a signal that attracts the attention of other conspecific and heterospecific individuals and can trigger a mobbing response to deter the predator.
TMD pain is partly heritable: a systematic review of family studies and genetic association studies. The aim of this study was to describe the current knowledge on the role of heritability in TMD pain through a systematic review of the literature, including familial aggregation studies and genetic association studies. For the systematic search of the literature, the PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) guidelines were followed. In total, 21 studies were included in the review: five familial aggregation studies and 16 genetic association studies. From both the familial aggregation studies and the genetic association studies, modest evidence for the role of heritability in TMD pain was found. The literature mainly suggests genetic contributions from candidate genes that encode proteins involved in the processing of painful stimuli in the serotonergic and catecholaminergic systems. This systematic review shows that evidence for the role of heritability in the development of TMD pain is accumulating.
A Track-Wise Wind Retrieval Algorithm for the CYGNSS Mission The Cyclone Global Navigation Satellite System (CYGNSS), launched on December 15, 2016, represents the first dedicated GNSS-R satellite mission specifically designed to retrieve ocean surface wind speeds in the tropical cyclone (TC) environment. The baseline wind retrieval algorithm for the CYGNSS mission makes use of two observables (the normalized bistatic radar cross section and the leading-edge slope) to retrieve the average wind speed within a 25 km resolution cell. The premise of the algorithm is that these two observables are a function of wind speed and incidence angle only. Analysis of actual CYGNSS measurements during the calibration and validation process indicates that the collected GNSS-R signals depend on both winds and waves. This paper presents an alternative method for retrieving wind speed from CYGNSS data, which includes the use of a geophysical model function dependent on both wind and wave data.
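The abstract does not give the model function itself. The Python sketch below illustrates only the inversion idea, with a purely hypothetical geophysical model function (GMF) in which significant wave height enters alongside wind speed; every functional form and coefficient here is invented for illustration, not taken from the CYGNSS algorithm.

```python
import numpy as np

def gmf(wind, swh, inc_deg):
    """Hypothetical, monotonically decreasing NBRCS model (not the real GMF)."""
    return 20.0 * np.cos(np.radians(inc_deg)) / (1.0 + 0.5 * wind + 0.2 * swh)

def retrieve_wind(sigma0_obs, swh, inc_deg, wind_grid=np.linspace(0, 70, 701)):
    """Pick the wind speed whose modeled sigma0 best matches the observation."""
    residual = (gmf(wind_grid, swh, inc_deg) - sigma0_obs) ** 2
    return wind_grid[np.argmin(residual)]

# Example: one 25 km cell, with wave information entering the model.
print(retrieve_wind(sigma0_obs=1.2, swh=2.5, inc_deg=30.0))
```

A real retrieval would fit the GMF to matchup data and treat both observables jointly; the grid search stands in for that estimation step.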
Cheetah: Detecting false sharing efficiently and effectively False sharing is a notorious performance problem that may occur in multithreaded programs when they are running on ubiquitous multicore hardware. It can dramatically degrade the performance by up to an order of magnitude, significantly hurting the scalability. Identifying false sharing in complex programs is challenging. Existing tools either incur significant performance overhead or do not provide adequate information to guide code optimization. To address these problems, we develop Cheetah, a profiler that detects false sharing both efficiently and effectively. Cheetah leverages the lightweight hardware performance monitoring units (PMUs) that are available in most modern CPU architectures to sample memory accesses. Cheetah develops the first approach to quantify the optimization potential of false sharing instances without actual fixes, based on the latency information collected by PMUs. Cheetah precisely reports false sharing and provides insightful optimization guidance for programmers, while adding less than 7% runtime overhead on average. Cheetah is ready for real deployment. |
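As a rough illustration of the detection idea (not Cheetah's actual implementation, which runs natively against real PMU samples inside the profiled process), the Python sketch below groups hypothetical sampled memory accesses by 64-byte cache line and flags lines written by multiple threads at distinct offsets, weighting by the sampled access latency.

```python
from collections import defaultdict

LINE = 64  # cache-line size in bytes

def find_false_sharing(samples, min_latency=100):
    """samples: iterable of (thread_id, address, is_write, latency_cycles)."""
    lines = defaultdict(lambda: {"threads": set(), "offsets": set(), "lat": 0})
    for tid, addr, is_write, lat in samples:
        rec = lines[addr // LINE]
        rec["threads"].add(tid)
        rec["offsets"].add(addr % LINE)
        if is_write:
            rec["lat"] += lat
    # False sharing signature: several threads, distinct offsets, costly writes.
    return {ln: rec for ln, rec in lines.items()
            if len(rec["threads"]) > 1 and len(rec["offsets"]) > 1
            and rec["lat"] >= min_latency}

# Synthetic example: two threads writing adjacent counters in one line.
demo = [(0, 0x1000, True, 90), (1, 0x1008, True, 120), (0, 0x1000, True, 80)]
print(find_false_sharing(demo))
```

Cheetah additionally uses the latency information to estimate the speedup a fix would yield; the latency sum above is only a crude stand-in for that quantification.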
Flushing pumps applied to clean operative wounds have so far had no temperature-control function, so doctors have had to prepare pre-warmed flushing fluid. The medical constant-temperature flushing pump system designed in our laboratory can draw in flushing fluid at room temperature and then eject it at the temperature required by the operation, at a controlled constant flow rate; the system thus combines flow-rate control with temperature control. The flushing pump system consists of a flushing part, a temperature-control part, a key-input part, a liquid-crystal display part, and an exceptional-situation monitoring part. This paper first introduces the design method and principle of each part of the system, then gives the procedure for tuning all the system parameters, and finally discusses the performance of the system based on experimental results.
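The paper's control law is not given in this summary; as one minimal way such a combined temperature/flow controller could be structured, here is a hedged Python sketch using a PI loop per channel. Every name, gain, and setpoint below is hypothetical.

```python
def make_pi(kp, ki, dt):
    """Return a simple PI controller step function."""
    integral = 0.0
    def step(setpoint, measured):
        nonlocal integral
        error = setpoint - measured
        integral += error * dt
        return kp * error + ki * integral   # actuator command
    return step

heater_pi = make_pi(kp=8.0, ki=0.5, dt=0.1)   # drives heater power (W)
pump_pi = make_pi(kp=2.0, ki=0.8, dt=0.1)     # drives pump voltage (V)

# One control tick: sensor readings in, actuator commands out (illustrative).
heater_power = heater_pi(setpoint=37.0, measured=24.5)   # warm toward body temp
pump_voltage = pump_pi(setpoint=150.0, measured=142.0)   # mL/min flow target
print(heater_power, pump_voltage)
```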
Does the second ischemic stroke herald a higher proportional risk of cognitive and physical impairment than the first-ever one? Post-stroke cognitive and physical disabilities are common sequelae; however, the second ischemic stroke appears to carry a higher proportional risk than expected. In this study, we aimed to compare the sequelae of a second stroke with those of a first-ever stroke with regard to cognition and physical competence. This study was conducted on two groups: the first composed of 40 patients with an acute first lifetime ischemic stroke, and the second composed of 40 patients with an acute second lifetime ischemic stroke. The study was done at Menoufiya University Hospitals from August 2017 to August 2018. The Modified Rankin Scale (MRS), the National Institutes of Health Stroke Scale (NIHSS), and the Mini-Cog score were administered at onset, at 2 weeks, and 3 months later; routine laboratory tests and neuroimaging were also performed. The size of infarction was larger in the 2nd group (p < 0.001), and MRS and NIHSS scores were significantly higher in the 2nd group. There were also significant differences between baseline, 2-week, and 3-month follow-up in MRS and NIHSS. The Mini-Cog scale showed a significant difference between the two groups in favor of better cognition in the 1st group. Atrial fibrillation (AF; p = 0.012) was a significant risk factor in the 1st group, while smoking (p = 0.017) was the significant risk factor in the 2nd group. Large stroke size was found to be an independent risk factor in the 2nd group (p < 0.001). There are significant cognitive and physical disabilities after a second recurrent ischemic stroke as compared with a first-ever one; the second stroke tends to be more dangerous and to carry more disability. Background Despite efforts and better control of risk factors, recurrent stroke is still common. Studies show varying recurrence rates, ranging from 7-20% at 1 year to 16-35% at 5 years. About one-half of patients who survive acute ischemic stroke (AIS) or transient ischemic attack (TIA) are at increased risk of recurrent stroke within a few days or weeks of the initial event, with the greatest risk during the 1st week. Patients who have a TIA have a 10-year stroke risk of 19% and a combined 10-year risk of stroke, myocardial infarction, and vascular death of 43%. Recurrent events lead to prolonged hospitalization, worsened functional outcome, and increased mortality. Recurrent AIS has been associated with functional dependence and increased mortality, but this remains insufficiently explored. It seems that the second recurrent ischemic stroke is not just another stroke: it carries many more disabilities and adds a cumulative burden on cerebral plasticity, leading to magnified cognitive and physical incapacitation. The aim of this study is to define the pattern of disabilities associated with the second stroke in comparison to the first one. Methods This prospective hospital-based comparative study was performed on two groups, each composed of 40 patients admitted to the Neurology department, Menoufiya University Hospitals, from August 2017 to August 2018; informed written consent was obtained from participants or their caregivers. Group I comprises first-AIS patients and includes 20 males and 20 females, while group II comprises second-AIS patients and includes 19 males and 21 females. We included patients above 50 years of age.
Excluded patients were those presenting with TIA, severe hepatic or renal impairment, post-stroke aphasia, or major psychiatric disorders. All consecutive cases who fulfilled the inclusion and exclusion criteria and presented with a clinically manifest stroke were enrolled. Both groups were subjected to full history taking, complete physical and neurological examination, and laboratory investigations (blood sugar, lipid profile, complete blood picture, liver and renal functions, and PT and INR). Physical disability was assessed by the Modified Rankin Scale (MRS) and the National Institutes of Health Stroke Scale (NIHSS), and cognitive function was assessed by the Mini-Cog scale. The above-mentioned scales were administered by the same examiner at admission, at 2 weeks, and 3 months later. All data and materials supporting the results are available. Statistical analysis was done using the IBM SPSS software package version 20 (Armonk, NY: IBM Corp., USA). Qualitative data were described using number and percent. The Kolmogorov-Smirnov test was used to verify normality of distribution. Quantitative data were described using range (minimum and maximum), mean, standard deviation, and median. Significance of the obtained results was judged at the 5% level. The chi-square test was used to compare the groups on categorical variables; Fisher's exact test or the Monte Carlo correction was applied when more than 20% of the cells had an expected count less than five. Student's t-test was used for normally distributed quantitative variables to compare the two studied groups. Repeated-measures ANOVA was used for normally distributed quantitative variables to compare more than two periods or stages, with post hoc tests (Bonferroni adjusted) for pairwise comparisons. The Mann-Whitney test was used for non-normally distributed quantitative variables to compare the two studied groups, and the Friedman test for non-normally distributed quantitative variables across more than two periods or stages, with post hoc tests (Dunn's) for pairwise comparisons. Results There were no significant differences between the two groups regarding demographics and risk factors except for AF and cigarette smoking (Table 1). The type of stroke was matched in both groups, whether atherothrombotic, cardioembolic, lacunar, stroke of other cause, or stroke of unknown cause (p = 0.223; Table 2). The infarct location was likewise matched between the groups (p = 0.208; Table 3). With regard to infarct size, there were significant differences between the two groups, with large infarcts more frequent in group II (p < 0.001; Table 4). We also found significant differences between the two groups in MRS at baseline, after 2 weeks, and after 3 months (p < 0.001), with higher scores in the second group (Tables 5, 6). Regarding the NIHSS, we found significant differences between the two groups at baseline, after 2 weeks, and after 3 months (p < 0.001), again with higher scores in the second group (Tables 7, 8). Regarding the Mini-Cog, we found significant differences between the two groups at baseline, after 2 weeks, and after 3 months (Tables 9, 10). Discussion In the current study, we aimed to evaluate cognitive impairment and physical disability after a second cerebral AIS in comparison with those following a first stroke, and to study the risk factors for recurrent AIS.
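For readers who want to reproduce this style of analysis, here is a minimal Python sketch of two of the tests named above (Mann-Whitney U for ordinal scale scores, chi-square for a categorical risk factor), run on synthetic stand-in data rather than the study's actual measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mrs_g1 = rng.integers(0, 6, 40)        # Modified Rankin Scale, group I (synthetic)
mrs_g2 = rng.integers(1, 6, 40)        # group II, second stroke (synthetic)

# Ordinal, possibly non-normal scores -> Mann-Whitney U test:
u, p_mrs = stats.mannwhitneyu(mrs_g1, mrs_g2, alternative="two-sided")

# Categorical risk factor (e.g., smoking) -> chi-square on a 2x2 table:
table = np.array([[10, 30],            # smokers / non-smokers, group I (synthetic)
                  [21, 19]])           # group II (synthetic)
chi2, p_smoke, dof, _ = stats.chi2_contingency(table)

print(f"MRS: U={u:.0f}, p={p_mrs:.3f}; smoking: chi2={chi2:.2f}, p={p_smoke:.3f}")
```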
Forty patients were included in group I with a first stroke and another 40 in group II with a second stroke. We found that the type of stroke was matched in both groups, whether atherothrombotic, cardioembolic, lacunar, stroke of other cause, or stroke of unknown cause (p = 0.223). Two population-based studies found that recurrences were of the same subtype in almost 90% of cases. De la Cámara and colleagues showed that the type of ischemic stroke was atherothrombotic in 62% of patients included in the study and in 34.6% of those with recurrent stroke, cardioembolic in 21.5% and 33.8%, respectively, lacunar in 11% and 21.8%, respectively, due to a hypercoagulable state in 1% of patients at first diagnosis, and due to non-atherosclerotic vasculopathy in 1% and 66.7%, respectively. A retrospective hospital-based study with a more detailed categorization of stroke subtypes suggested that recurrences after lacunar and hemorrhagic index strokes are often of a different type, hence the hypothesis of a multifactorial origin of stroke recurrence. In the current study, we found significant differences between the two groups regarding the location of the second stroke relative to the first in group II: different (non-stereotyped) locations were more frequent than the same (stereotyped) location (p < 0.001). The stereotyped lesions were 2 temporoparietal, 2 frontal, 1 occipital, and 1 capsular infarctions. In the study of Schaapsmeerders and colleagues, the lesion was supratentorial in 79.0%, infratentorial in 18.5%, or bilateral in 2.5%. In this study, we found significant differences between the two groups regarding infarct size, with large infarcts more frequent in group II (p < 0.001). Consistent with our results, Khedr and colleagues showed that infarct size was larger in patients with dementia and cognitive impairment, occurring mainly in recurrent strokes, with a significant difference (p = 0.001). In our study, we found significant differences between the two groups in MRS at baseline, after 2 weeks, and after 3 months (p < 0.001), with higher scores in the second group, and significant within-group changes in MRS from baseline to 2 weeks and 3 months after treatment (p < 0.001). Ntaios and colleagues showed that, for embolic stroke of undetermined etiology, the cumulative probability of recurrence was similar to that of cardioembolic strokes but higher than that of all other types of non-cardioembolic stroke; these patients more often had a favorable functional outcome, defined as MRS ≤ 2 (62.5%), compared with patients with cardioembolic strokes (32.2%). This helps explain why the MRS score was higher in recurrent cases in our study. As further evidence of the role of the MRS in predicting unfavorable outcomes such as recurrence, Long and colleagues found that the MRS was significantly higher in elderly stroke patients with a bad outcome than in younger patients (p < 0.001). In the current study, we found significant differences between the two groups in NIHSS at baseline, after 2 weeks, and after 3 months (p < 0.001), with higher scores in the second group, and significant within-group changes in NIHSS in groups I and II from baseline to 2 weeks and 3 months after treatment (p < 0.001 and 0.002, respectively).
Alemam and colleagues showed a highly statistically significant correlation between the NIHSS score and the outcome of AIS (p ≤ 0.0001). We found significant differences between the two groups in the Mini-Cog at baseline, after 2 weeks, and after 3 months (p = 0.033 and < 0.001, respectively), and significant within-group changes in the Mini-Cog in groups I and II from baseline to 2 weeks and 3 months after treatment (p < 0.001 and 0.002, respectively). This agrees with Borson and colleagues, who found that the Mini-Cog was sensitive to recurrent stroke and to any dementia that occurred, which explains why its impairment was much greater in the recurrent group in our study. Cao and colleagues investigated 40 young patients with ischemic stroke, assessed other domains, and found language comprehension, reasoning, and verbal memory to be most affected; processing speed was not assessed in these patients. This study has limitations: the duration of follow-up was short; the number of patients should be increased in further studies; the classification of patients as first or second stroke should take imaging into account, since some clinically diagnosed first strokes could have had previous silent infarcts, which might affect the results; a depression scale should be added; and some risk factors, such as smoking, can be confounders related to the poorer outcome in the second group. Conclusions There are significant cognitive and physical disabilities after the second recurrent ischemic stroke as compared with the first one. Funding: None. Availability of data and materials: All related data are available. Declarations: Ethics approval and consent to participate: approved by the Menoufiya ethics committee in July 2017 (no specific approval number was issued), and informed written consent was obtained from the patients as one of the inclusion criteria. Consent for publication: We approve the publication. Regarding data about individual cases: not applicable.
Zircon U-Pb Dating and Metamorphism of Granitoid Gneisses and Supracrustal Rocks in Eastern Hebei, North China Craton Granitoid gneisses dominated by tonalitic-trondhjemitic-granodioritic (TTG) compositions, together with metamorphic supracrustal rocks consisting of sedimentary and volcanic rocks, are widely exposed in the Eastern Hebei terrane, North China Craton (NCC). This study presents systematic zircon U-Pb geochronological and whole-rock geochemical data for the Neoarchean granitoid gneisses and supracrustal rocks in Eastern Hebei. Zircon U-Pb isotopic dating of the representative samples reveals that the magmatic precursors of the granitoid gneisses were emplaced between 2524 ± 7 and 2503 ± 12 Ma, and the protoliths of the pelitic granulites were deposited in the Late Neoarchean. Both were subjected to granulite-facies metamorphism during 2508 ± 10 to 2468 ± 33 Ma, coeval with the intrusion of syenogranitic pegmatite (2488 ± 5 Ma). Zircon ages of 2.45–2.01 Ga obtained from the analyzed samples are considered mixed data between the 2.53–2.48 Ga and 1.9–1.8 Ga populations and are chronologically meaningless. Paleoproterozoic metamorphic zircon ages of 1.9–1.8 Ga have usually been neglected because they are rarely obtained from TTG gneisses and supracrustal rocks. The tectonic regime during the Neoarchean is considered to have been dominated by vertical tectonism in the Eastern Hebei terrane.
A new image binary segmentation algorithm based on the frequency domain In this paper, a new image binary segmentation algorithm operating in the frequency domain is proposed. Starting from the Fourier transform, the Gaussian kernel function is modified by adding a new parameter named the compression factor; a binary result for the image can then be obtained easily and quickly by controlling the range of this parameter. Experimental results show that the proposed algorithm yields satisfactory binary segmentations with low computational complexity, and the results are better than those of some traditional binary segmentation algorithms.
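The abstract leaves the exact kernel modification unspecified. The Python sketch below is one plausible reading, in which a compression factor c scales the width of a Gaussian transfer function used to estimate a smooth background in the frequency domain before thresholding; it is an illustration under that assumption, not the paper's algorithm.

```python
import numpy as np

def binarize(img, c=2.0, sigma=0.05):
    """Frequency-domain binarization: threshold against a Gaussian-smoothed
    background whose kernel width is scaled by the compression factor c."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    # Gaussian transfer function; c controls how aggressively it low-passes.
    H = np.exp(-(fx**2 + fy**2) / (2.0 * (c * sigma) ** 2))
    background = np.real(np.fft.ifft2(np.fft.fft2(img) * H))
    return (img > background).astype(np.uint8)

# Example on a synthetic image: a bright square on a dark gradient.
y, x = np.mgrid[0:128, 0:128]
img = 0.2 * x / 128 + (np.abs(x - 64) < 20) * (np.abs(y - 64) < 20) * 1.0
print(binarize(img).sum())  # number of foreground pixels
```

Sweeping c trades detail against noise: a small c keeps only coarse background, while a large c makes the threshold track local structure.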
Teaching derivative concept using 6 questions cognitive model Abstract. The derivative, one of the basic concepts of calculus, has an extremely rich practical background and wide application. However, shallow exploration of the idea of limits in teaching and learning activities causes difficulties in later advanced mathematics learning. The purpose of this study is to create a learning model that helps students understand calculus. The method in this research is the research-and-development method: designing a learning model that uses the 6 questions cognitive model on derivative material. The new learning model developed in this study uses the six questions cognitive model to explore mathematics from the six dimensions of "from where", "what", "why", "how", "what if it changed", and "think about it"; teaching derivatives under this cultural background reflects the continuity, naturalness, and order of teaching. The results of this study indicate that the 6 questions cognitive model can help students learn the basic concepts of calculus, and the model can be implemented by teachers in schools to improve the quality of high school students' learning. Article History: Received October 1, 2020; Revised October 22, 2020; Accepted October 26, 2020. INTRODUCTION In China, "student-oriented" is proposed as a basic concept in the new round of basic education curriculum teaching and learning (Yi, Ying, & Wijaya, 2019). The student-oriented concept reflects the humanization of science and culture education (Lee & Kim, 2017); its important connotation is to integrate science and the humanities into the cultural connotation of the curriculum (Aixia, Ying, & Wijaya, 2020). Under the student-oriented concept, improving students' mathematical ability and cultural literacy has become the purpose of mathematics education, emphasizing the dialectical unity of mathematical literacy and humanistic literacy (Cunhua, Ying, Qunzhuang, & Wijaya, 2019). Mathematics education is not only to impart mathematical knowledge and skills to students but, more importantly, to enable students to master mathematical thinking methods, comprehend the basic concepts of mathematics (Dewi, Wijaya, Budianti, & Rohaeti, 2018; Wijaya, Purnama, & Tanuwijaya, 2020), understand the value of mathematics (Wijaya, Dewi, Fauziah, & Afrilianto, 2018), and highlight the creative-thinking ability of mathematics courses (Tan, Zou, Wijaya, Suci, & Dewi, 2020). Therefore, integrating mathematical culture into mathematics education is not only a paradigm pursued by mathematics education research institutes at home and abroad but also an objective requirement and feasible way for mathematics curriculum reform in China (Qin, Zhou, & Tanu, 2019). Teaching is teaching students to think. Behind knowledge and truth lie the essence, ideology, continuity, and integrity of thinking; the philosophical significance lies in students' understanding of why mathematicians could form such ways of thinking through mathematical knowledge and truth, a deep cultural existence of thought and creation. From the perspective of re-creation, students' mathematical ability and thinking process are the same as mathematicians' thinking, and the generation and development of mathematical knowledge and mathematical ideas are natural and reasonable.
The basic unit of the modern curriculum is problem-based learning (Andini, Mulyani, Wijaya, & Supriyati, 2018; Bernard, Sumarna, Rolina, & Akbar, 2019; Hidayat & Sariningsih, 2018). Therefore, based on an awareness of the problems that exist in everyday life, and from the perspective of teaching and learning, Professor Zhou Ying built a 6 questions cognitive model that reflects natural sequence, overall coherence, and dynamic openness (Figure 1). The 6 questions cognitive model consists of 6 stages: 1) from where, 2) what, 3) why, 4) how, 5) what if it changes, and 6) think about it (Lin, Zhou, Wang, & Wijaya, 2020; Wijaya, Ying, Cunhua, & Zulfah, 2020). Questions are asked step by step about basic concepts, focusing on the cultivation of mathematical thinking ability. The internal elements of the 6 questions cognitive model are interrelated and synergistic and are externally affected by students' conditions, teacher quality, and the teaching environment. The application of this model provides a methodology for teachers to improve teaching, enhances the quality of teachers' thinking, and helps students achieve deep learning. This study takes derivatives as an example to explore teaching with the 6 questions cognitive model. Calculus is one of the great achievements of human thinking. It opened a period of transition to modern mathematics and provided an important method for studying variables and functions. In terms of knowledge, the core concept of calculus is the derivative, and its theoretical cornerstone is limit theory. In 1629, Fermat began to conceive the idea of the derivative while studying tangents of curves and extreme values of functions. In 1637, in order to solve the tangent construction problem, Fermat constructed the difference quotient, which implies the concept of the derivative (Verlag & Leibnitiana, 2013). It was not until the 17th century that the derivative formally entered humanity's field of vision: Newton's instantaneous velocity in physics created the infinitesimal, and Leibniz, conversely, arrived at the concept of the derivative from the definite integral problem. In 1750 (Verlag & Leibnitiana, 2013), D'Alembert expressed the derivative as dy/dx = lim_{Δx→0} Δy/Δx; in 1823, Cauchy gave a strict definition of the derivative in the "Introduction to Infinitesimal Analysis"; and in the 1860s, Weierstrass created the ε-δ language to reformulate limit theory and perfect the theoretical basis of derivatives. Closely related to the geometric meaning of the derivative is the tangent. Euclid believed that the tangent of a circle is a straight line that falls outside the circle and has only one common point with it. Apollonius and Archimedes also defined the tangents of conics and spirals as lines that "have only one common point with the curve and lie on one side of the curve." In the 17th century, mathematicians successively discovered and studied different methods of constructing the tangent of a general curve; it is generally believed that the tangent is the limit of the secant. From the perspective of the historical development of the basic concepts of derivatives and tangents, these histories not only provide us with valuable materials but also display a process of knowledge regeneration and development with cultural connotation and resonance, which opens up new horizons for teaching design. Derivative teaching is an important part of high school mathematics teaching (Fuentealba & Badillo, 2019).
Limit thinking is the foundation of the concept of the derivative. "Infinitesimal" and "limit" are concepts that the ancient Greeks dared not touch, and they are also easily overlooked in teaching practice. Owing to neglect of the idea of limits, students find it difficult to learn advanced mathematics later on; to understand the idea of limits, they need to complete the leap from the finite to the infinite. The geometric meaning of the derivative is the lower-level knowledge class of the concept of the derivative: students master the upper-level knowledge (the average rate of change, the instantaneous rate of change, and the concept of the derivative), further understand the meaning and value of derivatives from the perspective of geometric meaning, and experience approximation, the mathematical idea of replacing the curve with straight lines, the combination of number and shape, and limits. At the same time, this lays a solid foundation for the lower-level knowledge: the calculation of derivatives and the application of derivatives in studying functions. Analyzing the students' academic situation from the perspective of knowledge reserve: students have learned conic sections and have stayed at the level of "the number of common points" and "whether the straight line is on the same side of the curve." So when "the geometric meaning of derivatives" is taught, students should come to realize that "whether the straight line is on the same side of the curve" is not the criterion for tangency, stimulating a new growth point for learning. From the perspective of learning psychology, students have understood the derivative from the "number" perspective of practical and numerical meaning, and they are also eager to understand it from the geometric, "shape" perspective; however, students hold the individual preconception that "the straight line with only one point in common with the curve, lying on one side of the curve, is the tangent of the curve." Teachers need to create problem situations and use analogy to guide students to the conceptual level at which the tangent of a general curve is defined by the approximation of the secant, in order to break through the teaching difficulty: the idea of "approach." Based on the student difficulties described above, the purpose of this study is to explain how to teach calculus using the 6 questions cognitive model; it will be seen where the model can improve students' mathematical abilities and lead students to deep learning. METHOD This research is development research using instructional design to help students achieve deep learning. The derivatives material is calculus content for senior high school students in China: Chapter 2, Derivatives and their applications, in the Chinese textbook of the optional course. This class is for grade 10 senior high school students and is taught in the second semester. A learning plan was developed using the six questions cognitive model; the steps of the model for teaching calculus in this study can be seen in Figure 2. The researcher simultaneously explains the important points in each phase of the 6 questions cognitive model so that students can better understand the basic concepts of calculus and achieve deep learning. "From where" stage: introduce new knowledge and give a powerful answer to "why should you learn?"
In the teaching process, where "learning determines teaching" and "learning by teaching" complement each other, teachers and students are the cognitive subjects of teaching. Focusing on mathematical knowledge in the form of questions can not only break the solidification or rigidity of teaching methods but is also beneficial to the reconstruction and innovation of mathematics teaching. In specific teaching practice, the set of mathematical problems introduced in the classroom should conform to students' cognitive laws, combine with the connotative characteristics of the teaching content, and activate the growth point of new knowledge. Question 1: We can draw the light reflected from a flat surface, so how is light reflected from a curved one? "From where," as the beginning of the "six questions," is the first link of classroom teaching. Drawing on the three major tangent problems studied by mathematicians in the 17th century (the reflection of light on curved surfaces, the direction of velocity in curvilinear motion, and the angle between curves), we designed three real-life problems, "reflection of light on a curved surface," "direction of velocity in curvilinear motion," and "slope of an arch bridge," to activate students' learning motivation. These play the role of Ausubel's "advance organizer" and promote learning; they are the "source" of new knowledge and new methods. "What" stage. Mathematical concepts are the core of mathematical thinking methods, and the structure of mathematical knowledge is developed around core concepts. Ausubel believes that one of the prerequisites for meaningful learning is to teach, as far as possible, the concepts and principles of the subject that are inclusive, general, and persuasive. Bruner's educational principles on the basic structure of a subject emphasize that learning and mastering the concepts, definitions, principles, and rules that are widely used in the subject is the best way to enter new fields of knowledge. "What is it" means what the new knowledge is, focusing on answering "what are the nature and attributes of this mathematical object"; it is not limited to the model of the concept itself but forms a hierarchical network with other mathematical objects. In teaching practice, teachers often directly tell students the concept of a certain mathematical object; what they teach is only a "noun," and the memory of this "noun" may be lost over time. Therefore, we design a string of essential questions based on the nature and laws of the knowledge, constructing core concepts that command the entire lesson. Question 1: In junior high school, how do we define the tangent and secant of a circle? Based on the principle of enlightenment and induction, questions are asked in the "zone of proximal development" of students' thinking. Question 1 first activates students' memories and then leads them, with questions, to the scene where new knowledge is generated. The setting of questions 2 to 4 not only negates the attempt to "define tangents by the number of intersections and by whether the straight line is on the same side of the curve" but also triggers students' cognitive conflicts, which greatly stimulates their interest in learning and desire to explore.
On the basis of intuitive perception, students experience the occurrence and development of the tangent to a general curve through the visual impact of the circle-cutting technique, realize the calculus idea of "replacing the curve with straight lines," increase rational thinking, form the definition of the tangent, and deepen their understanding of it, eliminating the previous cognitive conflicts. "Why" stage. The rise of the subject of mathematics is not due to a single cause but is the result of multiple causes. Materialist dialectics holds that the whole governs the parts and has functions that the parts do not have; the holistic principle holds that there is non-additivity between "part" and "whole." There are many potential connections between mathematical objects and between mathematics and other disciplines, among which the problems of "part" and "whole" are involved. Excavating the associations between various objects in teaching guides students to make broad associations and to shift from mastery of a certain "partial" content to the construction of the "whole" content of the culture. The "why" stage solves the problem of "what is the connection," pointing to how new knowledge connects with old knowledge, how the elements within the knowledge connect with each other, and how the elements connect with the overall structure; designing a series of related problems not only enables students to learn "partial" knowledge but also gives them an "overall" understanding of the knowledge and culture system. Question 1: We have just intuitively perceived the change process of "the secant approaching the tangent"; how can this change be expressed as a quantitative relationship? Question 2: As shown in Figure 5, given the curve and the points P and Pn on it, how do we write the equation of the secant PPn? Question 3: When the secant PPn through the fixed point P approaches the tangent, which part of the secant equation changes? What is the result of the change? Question 4: How do we write the tangent equation? Question 5: What is the geometric meaning of the derivative? (Figure 6: circle cutting.) The "why" stage aims to let students understand the connection between old and new knowledge and promote the integration of knowledge. Questions 1 to 5 inspire and guide students to think, let them experience the process of going from the known to the unknown step by step, and explore the relationship between tangent and derivative. As the secant PPn through the fixed point P approaches the tangent, the slope of the secant tends to a limiting value k; by the point-slope form, since the tangent passes through the point P, k is the slope of the tangent. It is then concluded that the geometric meaning of the derivative is the slope of the tangent to the graph of the function at the corresponding point (a concrete worked instance is given below). In this process, the derivative serves as the support and connection point, understood from the two aspects of numerical meaning and geometric meaning, with focus on the transformation between the two, further perfecting students' "holistic" cognition. "How" stage. The "how" stage mainly detects how well students have learned and is the key to developing students' logical thinking. Teachers can see students' knowledge gaps, blind spots, and misunderstandings from the perspectives, methods, and processes students use to solve problems. Its basic task is to guide students to form a solid organic whole of knowledge, skills, ideas, and methods.
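As an illustration of the "why"-stage derivation, here is a standard worked instance (ours, not taken from the source), with f(x) = x² at the point P(1, 1):

```latex
k \;=\; \lim_{\Delta x \to 0} \frac{f(1+\Delta x)-f(1)}{\Delta x}
  \;=\; \lim_{\Delta x \to 0} \frac{(1+\Delta x)^{2}-1}{\Delta x}
  \;=\; \lim_{\Delta x \to 0} \left( 2+\Delta x \right) \;=\; 2 \;=\; f'(1),
\qquad \text{so the tangent at } P(1,1) \text{ is } y-1 \;=\; 2\,(x-1).
```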
Through the design and application of the problem string, starting from certain important mathematical knowledge, skills, and methods, a deeper analysis of their internal connections is carried out, focusing on testing whether students have formed and mastered important mathematical thinking methods and whether they can apply the knowledge to solve problems, and then feeding the results back into teaching. Question 1: There is an elliptical surface; can you draw the light reflected from the surface? Question 2: The trajectory of an object thrown horizontally at a speed of 10 m/s is given; what is the direction of the object's velocity at the indicated moment? Question 3: As shown in Figure 6, an arch bridge is to be built; it is required that the highest point of the arch bridge be 20 m above the water surface and that the slope of the arch bridge not exceed 30°. What is the span of the arch bridge? (Figure 7: circle cutting.) After the previous study of the "from where," "what," and "why" stages, students have mastered the source of the knowledge, its essential characteristics, and the related knowledge, but they still need to apply what they have learned to achieve the unity of knowing and doing. In this session, we take the three questions raised at the "from where" stage as examples to test how well students have learned and whether they can apply the knowledge to solve problems, providing students with practice opportunities to answer the questions raised at that stage. "What if it changes" stage. Mathematical thinking and mathematical methods cannot be accomplished only through "from where," "what," "why," and "how"; they spiral upwards through repetition and must go through development from shallow to deep, from simple to complex, from low-level to high-level. In addition, the continuity and systematicity of mathematical knowledge reveal the relevance and characteristics of knowledge; teachers need to guide students to make horizontal and vertical comparisons and variant transfers of mathematical knowledge. Practice has proved that reasonable transformation of the non-essential characteristics of mathematical objects can promote students' in-depth understanding of them, and variant teaching has become a good carrier for this. Variant teaching is not a product of modern education; it can be traced back to the time when the "Nine Chapters on the Mathematical Art" was written, and variation thinking has expanded the connotation of education-oriented mathematical culture research. "What if it changes" emphasizes variable expansion and abstract improvement, with the difficulty of the problems increasing gradually; it guides students to re-examine the knowledge they have learned from different angles, promotes divergent thinking, and improves their ability to draw inferences from one case to others. Question 1: If a beam of light passes through one focal point of an ellipse, will the reflected light pass through the other focal point? Activity exploration: As shown in Figure 7, point A is a moving point on the ellipse, line l1 is the tangent of the ellipse at point A, l2 is the perpendicular to l1, and F1 and F2 are the two focal points of the ellipse; using Geometer's Sketchpad, measure whether ∠F1AB equals ∠F2AB, to verify the optical property of the ellipse.
The lecturer modified the problem of "reflection of light on curved surfaces," using the situation of light passing through a focal point of the ellipse to deepen students' understanding of the idea of "replacing the curve with straight lines," linking it with physical knowledge and helping students understand the knowledge from multiple directions. The setting of Problem 2 is not only about solving a mathematical problem but about guiding students to explore the essence behind the phenomenon and broadening students' mathematical horizons. "Think about it" stage. After studying, students can conduct self-reflection, understand the educational value orientation of mathematical culture, understand the cultural importance of mathematics beyond scientism, and examine the scientific and humanistic attributes of mathematics, which helps form a spirit of rational innovation. "Think about it" solves the problem of "what is there to reflect on or newly understand," paying attention to where students have grown: what knowledge, ideas, methods, and positive emotions they have acquired, and where their mathematical literacy has improved. A series of reflection questions is set up: on the one hand, teachers can obtain teaching feedback in multiple dimensions, creating favorable conditions for subsequent teaching; on the other hand, students can reflect from multiple perspectives and cultivate the habit of reflection, organize knowledge in a structured way, promote the development of metacognitive ability, and improve their level of thinking. Question 1: What problems can the geometric meaning of derivatives solve? Question 2: In the process of studying the geometric meaning of derivatives, what mathematical ideas and methods are used? Question 3: Can you summarize the learning of this section on derivatives in the form of a mind map? Question 4: Is there any confusion or further reflection? Problem-oriented guidance leads students to summarize and organize from the three aspects of knowledge, thinking, and methods, to obtain some strategic knowledge, and to put forward their own puzzles based on what they have learned. Students find knowledge blind spots and areas to be improved, re-read their understanding of the new knowledge, optimize their own knowledge structure, connect points into networks, and strengthen their awareness of self-evaluation, reflection, and management. Learning with the 6 questions cognitive model is in accordance with previous research showing that, by using the model, students can better understand the basic concepts of mathematics, the quality of students' mathematical thinking improves, students are directed toward deep learning, and learning outcomes improve. We suggest that teachers in schools use the 6 questions cognitive model in every mathematics lesson, and that teachers combine it with ICT, since ICT can increase students' interest in learning. Further research can examine the effect of the 6 questions cognitive model on students' mathematical abilities using quantitative methods. CONCLUSION As an abstract subject with rich connotations, mathematics widely influences our daily life and public thought. Curriculum reform is entering the era of "basic concept literacy." Students are individuals with independent thoughts; their emotions, attitudes, values, disciplinary literacy, and key abilities should adapt to the current era.
In the process of cultivating students' literacy, applying the 6 question cognitive model to infuse mathematical culture into education and teaching in a problem-oriented way can effectively cultivate students' higher-order thinking and reflection ability. Applied to calculus, the 6 question cognitive model always pays attention to students' learning psychology and guides them to construct new knowledge in an orderly and gradual manner; it pays attention to the connections between pieces of knowledge and creates situations that give students sufficient opportunities to make associations and complete the construction of the knowledge system. At the same time, knowledge from the history of mathematics is re-created. For example, in the introduction, three types of questions about tangents researched by mathematicians in the 17th century are used to raise students' interest in learning. In the concept construction part, the circle-cutting technique is connected with the concept of the tangent, helping students understand the basic concept of the tangent and realizing the inheritance and innovation of mathematical culture. |
Right main bronchial disruption due to blunt trauma. A young soldier was crushed between two vehicles, sustaining severe injury to the right side of the chest that led to multiple rib fractures, tension pneumothorax, bronchopleural fistula and, later, gross surgical emphysema. Rigid bronchoscopy confirmed injury to the right upper bronchus. Surgical repair and postoperative care of such a major, although rare, injury were successfully achieved in this small hospital by a team augmented by a specialist from a thoracic surgery centre. The risks of transporting a major thoracic injury should be weighed against the possibility of definitive treatment locally. Fibreoptic or rigid bronchoscopy should be employed as early as possible in all suspected cases of major airway injury. An outreach service by a thoracic surgery centre can be life-saving. |
The Improvement of Mercury Removal in Natural Gas by Activated Carbon Impregnated with Zinc Chloride Natural gas produced from gas fields around Indonesia often contains mercury, along with a large number of other harmful substances (CO2, H2S, RSH, COS, etc.). Even in small amounts, mercury and its compounds have an extremely harmful effect on human health. Mercury in natural gas should be removed to protect equipment in the gas processing plant and the pipeline transmission system from mercury amalgamation and embrittlement of aluminium. Mercury can be removed by adsorption processes, for example on activated carbon impregnated with chlorine, iodine or sulfur. This research deals with the removal of mercury from gas based on the principles of adsorption and chemisorption of mercury on activated carbon impregnated with ZnCl2. Impregnation time is a significant variable that can affect adsorption capacity. The experimental results showed that a ZnCl2 impregnation time of 12 hours significantly enhanced the adsorptive capacity for mercury vapour. |
Bellcore's user-centred-design support centre Abstract Bellcore recently replaced its small laboratory that was designed primarily for formal testing of software usability. The new facility is a suite of rooms that handles multiple, independent activities. More importantly, the new space is a manifestation of our philosophy that the best approach to interface design is the cultivation of eclectic design practices early in and throughout the software development process. To that end, the new lab supports other kinds of user-centred design (UCD) activities in addition to formal testing of computerized prototypes of software interfaces. To encourage participatory design, nearly all the rooms are large enough for design meetings, contain entire walls of movable whiteboards, and have small tables so design teams can huddle over paper prototypes and task layouts. In this article we describe the new lab, the rationales behind its features, and the process by which it was designed. |
Examining Ca2+ Extrusion of Na+/Ca2+-K+ Exchangers Abstract: Na+/Ca2+-K+ exchangers (NCKX) are plasma membrane transporters that are thought to mainly mediate Ca2+ extrusion (along with K+) at the expense of the Na+ electrochemical gradient. However, because they are bidirectional, most assays have relied on measuring their activity in the reverse (Ca2+ import) mode. Herein we describe a method to control intracellular ionic conditions and examine the forward (Ca2+ extrusion) mode of exchange of NCKX2. |
Merging Directed C−H Activations with High-Throughput Experimentation: Development of Predictable Iridium-Catalyzed C−H Aminations Applicable to Late-Stage Functionalization Herein, we report an iridium-catalyzed directed C−H amination methodology developed using a high-throughput experimentation (HTE)-based strategy, applicable to the needs of automated modern drug discovery. The informer library approach for investigating the accessible directing group chemical space of the reaction, in combination with functional group tolerance screening and substrate scope investigations, allowed for the generation of an empirical predictive model to guide future users. Applicability to late-stage functionalization of complex drugs and natural products, in combination with multiple deprotection protocols leading to the desirable aniline matched pairs, serves to demonstrate the utility of the method for drug discovery. Finally, reaction miniaturization to the nanomolar range highlights the opportunities for more sustainable screening with decreased material consumption. Introduction Innovation in synthetic organic chemistry is of fundamental importance to the improvement of the drug discovery process. While the field has seen tremendous developments over the past century, recent advances in synthetic methods, chemoinformatics, and the increasing applicability of automation and miniaturization in synthesis have the potential to further transform and improve modern drug discovery. 1-4 Two particular technological and synthetic approaches stand at the forefront of our interest and focus in this work: high-throughput experimentation (HTE) and C−H functionalization. HTE techniques have attracted significant interest from the pharmaceutical industry and are now increasingly utilized in the drug discovery process. 5 From the methodology development perspective the advantages are clear: access to more high-quality and well-rounded results at decreased material and time cost. Added to this is the importance of large, high-quality datasets for the generation of predictive reactivity models. 6,7 At the same time, reaction miniaturization allows for more sustainable chemistry by means of decreased material consumption, including reagents, solvents, and especially high-value advanced intermediates and catalysts. Finally, technologies such as automated liquid and solid dispensing allow chemists to avoid repetitive non-intellectual tasks, while providing high reproducibility and avoiding the risk of human error in setting up large arrays. Given the abundance of C−H bonds in drugs and their building blocks, C−H functionalizations are among the most desirable transformations in drug discovery. Of particular interest are late-stage functionalizations (LSF), 8,9 where the controlled chemoselective transformation of desired C−H bonds in complex drug-like molecules has the potential to greatly aid the hit-to-lead and lead-optimization processes. 10 Bypassing the need for time-, material- and labor-intensive de novo synthesis of analogues would greatly aid structure–activity relationship (SAR) studies, or even the generation of new candidate drugs. In terms of desirable transformations, the introduction of small functional groups like -CH3, -CF3, -NH2, -OH and -F is of highest priority and would be widely used in the industry.
1 Further motivating the development of new amination methodologies, a recent analysis of X-ray structural data identified N−H hydrogen bond donors on aromatic and aliphatic amines as the most common (Figure 1, a). 2 Directed C−H activations offer a means of introducing amine moieties in the vicinity of Lewis-basic groups commonly present in drug-like molecules with high regioselectivity. Over the past decade a number of methodologies for C(sp2)−H to C−N bond transformation have been developed, 10-14 utilizing among others Co, 15,16 Rh, Ir 20-24 and Ru 25-28 catalysts. However, applicability to LSF in a drug discovery context remains challenging. We identified several factors which limit the utility of reported C−H aminations in this respect. The first is the inaccessibility of free amines. The majority of reported directed C−H to C−N bond forming reactions, while introducing protected amines in the form of amides and sulfonamides, do not include deprotection protocols. Although the introduction of larger substituents can be of utility for fragment-based drug discovery, applications to LSF are of limited use if the free amines cannot be obtained under conditions mild enough to tolerate the large array of reactive and/or sensitive functional groups present in drug-like molecules. The second is limited functional group tolerance. A common shortcoming of the reported procedures is a lack of compatibility with polar functional groups commonly present in drug-like molecules, such as heterocycles, alcohols, amines, carboxylic acids or amides (practical examples in Figure 1, b). The third limitation, closely related to this, is the lack of reporting of unsuccessful transformations and the limited number of reports on applicability to complex substrates. This situation would be largely mitigated by full disclosure of the investigated substrate scope. Aside from this, two distinct approaches have recently been developed to improve the predictability of chemical methods in a more systematic fashion: the intermolecular robustness screening approach developed by the Glorius group, and the chemistry informer library approach by Krska and coworkers. 7,32 The intermolecular robustness screening approach evaluates the compatibility of additives bearing a wide variety of functional groups with the transformation of a single substrate. In the informer library approach, the compatibility of a methodology with a large number of complex substrates bearing structural features relevant to pharmaceuticals is evaluated. While the former has previously been used for reported C−H activation methodologies, 33,34 the latter has so far only been applied to more well-established cross-coupling reactions. 7,32 Herein we report the development of an iridium-catalyzed directed ortho-C−H amination applicable to a large number of directing groups (DGs) with outstanding functional group tolerance and regioselectivity. The use of the SO4 catalyst allows for regioselective functionalization governed by DGs inherently present in building blocks, drugs and natural products, without the need for additional ligands. An empirical predictive model based on a DG informer library, functional group tolerance studies and an LSF informer library serves to guide potential users in predicting reaction applicability to complex substrates. The obtained Moz-protected amines can be deprotected under three distinct conditions, further increasing the utility of the amination protocol for complex molecules.
Results and discussion At the beginning of the study we designed a workflow (Figure 2) which, if successful, would deliver reaction conditions for LSF applications. In the initial stage an optimization study to find suitable screening conditions was undertaken (Figure 2, Step 1). In terms of reaction conditions, we identified several desirable features of "ideal" C−H activation methodologies applicable to HTE. 34 The following set of conditions was identified based on these criteria after the Step 1 initial optimizations (see SI) and used for the directing group informer library. The chosen catalyst 34,37 allows the reaction to be performed in the absence of silver salts and insoluble additives, thus facilitating the use of liquid-handling systems. Commercially available MozN3 (Moz = p-methoxybenzyloxycarbonyl) was selected as the nitrogen source, allowing for deprotection of the obtained carbamate under a number of conditions. 38,39 Although the transformation proceeded with a satisfactory outcome in a wide range of solvents (see SI), four were chosen for the informer library: 1,2-dichloroethane (DCE), which performed best in the initial study; cyclopentyl methyl ether (CPME) and EtOAc as greener solvent alternatives; and N-methyl-2-pyrrolidone (NMP) for its generally good dissolution of drug-like compounds and high boiling point. In Step 2 (Figure 2), the DG chemical space was probed. Out of the 48 substrates tested under the screening conditions, 16 DGs were shown to be productive for the C−N bond formation, with observed conversions ranging from 10 to >99% (Figure 3; for more details see SI). Given the variety of DGs tested this was an encouraging result. Although conversions were relatively low at the bottom end, we anticipated that they could be improved by further optimization at a later stage. The following observations were made: DCE showed the best performance throughout the scope; EtOAc and CPME had similar applicability, albeit in some cases with lower conversions; NMP performed well with heterocycles and carboxylic acids but was unproductive with the amide series. While the screening conditions allowed for the functionalization of a variety of substrates under a unified set of conditions, the HTE approach also facilitated rapid substrate-specific reaction optimization. Representative examples are discussed herein (for more studies see SI). In the catalyst loading study (Table 1) we observed excellent conversions in all solvents at 10 mol% catalyst loading, following the informer library conditions. Performance at lower catalyst loadings was shown to be highly solvent dependent. The best results for both model substrates were obtained with DCE, with excellent conversions even at 4 mol% catalyst loading. The solvent effect varied between the model substrates: the second-best result for 2c was obtained in CPME, while for 2h it was in EtOAc, both at 6 mol% catalyst loading. The combined effect of catalyst and MozN3 loading on conversion was investigated next (Table 2). A clear trend emerged from this study: while a slight excess of MozN3 led to improved conversion, a larger excess had a detrimental effect, most pronounced at lower catalyst loadings. A catalyst loading of 4 mol% and MozN3 (1.3 equiv) was chosen for scale-up. As the formation of mono- and difunctionalization mixtures was observed for a number of compounds in the directing group informer library, tunable selectivity was investigated with 1-phenyl-1H-pyrazole (1e) as the model system (Table 3).
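To sketch how such solvent-by-loading screening results can be summarized programmatically (an illustrative example with invented conversions, not the actual data behind Tables 1 and 2), one might pivot the plate data as follows:

```python
# Hypothetical HTE plate summary: conversion (%) by solvent and catalyst
# loading. All values below are invented for illustration only.
import pandas as pd

runs = pd.DataFrame({
    "solvent":         ["DCE", "DCE", "CPME", "CPME", "EtOAc", "EtOAc"],
    "cat_loading_mol": [4, 10, 4, 10, 4, 10],
    "conversion_pct":  [95, 99, 60, 88, 55, 90],
})

# Pivot to a Table-1-style grid: rows = solvent, columns = loading
grid = runs.pivot(index="solvent", columns="cat_loading_mol",
                  values="conversion_pct")
print(grid)

# Flag which conditions clear a screening threshold (e.g., >50% conversion)
print(grid > 50)
```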
Good monoselectivity was achieved at low catalyst and MozN3 loadings. Conversely, enrichment of the difunctionalization product was achieved with increased catalyst and azide loadings. To confirm the performance of the catalytic system on a larger scale, a series of building blocks were functionalized and the products isolated (Scheme 1). The building block selection was based on positive hits from the directing group informer library (vide supra, Figure 3). The effect of various substitution patterns on the functionalized system was investigated with the 2-phenylpyridine series. While the ortho-methyl substituent was well tolerated in 1a, the increased substituent size in 1b led to a significant decrease in yield (2b, 37%). The meta substituent in 1c was tolerated and yielded the anticipated product (2c) with complete regioselectivity. Importantly, the reaction could also be performed using CPME as solvent with maintained yield, albeit with increased catalyst loading (6 mol%). Functionalization of 2-phenylpyridine with high monoselectivity was achieved with decreased MozN3 loading (for optimization see SI), yielding compound 2d in 86% yield. The monoselective functionalization observed with pyrazole 1e in the optimization study (Table 3) translated well to the 0.5 mmol scale. Benzoxazole 2f was successfully obtained under modified conditions based on single-substrate optimization results (see SI). The utility of the presented catalytic system beyond the formation of 5-membered iridacycles was demonstrated with the 2g-2i series, yielding the desired products via 6-membered iridacycle formation. The reaction of N-phenylpyrimidin-2-amine yielded a mixture of monoaminated 2g and the diaminated product, favoring monofunctionalization. In the indole series, both 2h and 2i were obtained with complete selectivity for the 7-position over the 2-position, favoring 6-membered iridacycle formation. Importantly, with 2h the use of the more environmentally benign EtOAc as solvent had no negative effect on yield. To our delight, oxygen-centered directing groups were also successfully utilized, as demonstrated with the 2j-2m series. Sulfonimidamides are an emerging class of compounds within medicinal chemistry, 40 and to the best of our knowledge the synthesis of 2j presents the first application of this moiety within directed C−H activation. Functionalization of acetanilide 2k extends the accessibility of 6-membered iridacycles to oxygen-centered directing groups. Compound 2l was obtained with improved yield by adding cyclopentane carboxylic acid as an additive. Improvement of conversion with amide directing groups in combination with carboxylic acid additives was observed during the LSF scope investigation with Bezafibrate 3j (Scheme 2). This observation further extended the scope of accessible directing groups to Weinreb amides, as shown with 2m; this substrate class was unproductive under the screening conditions of the directing group informer library (Figure 2). While reaction scale-up was vital for further applications, we were also interested in extending the utility of screening libraries by product isolation from small-scale reactions. At 0.02 mmol reaction scale, the products of a series of 10 building blocks were isolated in quantities sufficient for characterization by NMR spectroscopy. The potential utility of small-scale substance isolation extends beyond compound characterization, as a single milligram of compound is often sufficient for in-depth biological studies.
41 A common setback of this approach is reduced product yield due to sample handling and purification-associated losses, as demonstrated by the decreased yields of 2d and 2e (Scheme 1, top vs. bottom). The applicability of heterocyclic DGs was further demonstrated with dihydrooxazole 1n, thiazole 1o, benzothiazole 1p, pyrimidine 1q and pyridazine 1r. The PMP-capped imine 2s was obtained in low yield as a result of hydrolysis during purification. Products from oxygen-centered directing groups in N-acetyl indoline 2t and N-methyl benzamide 2u were also isolated, the latter in lower yield due to problematic separation from unreacted starting material. (Table 4 legend: reaction solvents NMP (top bars) and DCE (bottom bars); one equivalent of additive used per reaction; color coding based on conversion: green >50%, orange 25-50%, red <25%; analyzed by LCMS, UV trace.) In Step 3 (Figure 2), the effect of a series of 46 additives on reaction performance was evaluated (Table 4). The additives were chosen to represent functional groups commonly present in drug-like molecules. 30,33,34 In terms of solvent effect, only minimal differences in performance between NMP (top bars) and DCE (bottom bars) were observed. To our delight, out of 47 modified conditions, 35 had no effect on the reaction outcome. Notably, excess water was well tolerated under the catalytic conditions used here, a feature important for the use of reagents without prior drying. In terms of reagent and functional group tolerance the following insights were gained: 1) The presence of DMSO has a negative effect on the reaction outcome, while other commonly used polar and/or protic solvents are well tolerated; this observation is in accordance with similar studies. 33,34 2) The majority of polar functional groups commonly present in drug-like molecules were well tolerated. This includes ether, alcohol, phenol, aldehyde, ketone, carboxylic acid, ester, primary and secondary amide, urea, Weinreb amide and sulfonamide groups. The same was observed with functional groups commonly present in cross-coupling reagents, such as aliphatic and aromatic halides and the aryl-Bpin group. 3) The utility of the method is limited by the presence of amines, whether primary, secondary or tertiary; the presence of aniline leads to a significant decrease in conversion. 4) While heterocycles are in general well tolerated, the presence of pyridine is detrimental. Activity is restored by sterically hindering the pyridine nitrogen (pyridine vs. 2,6-lutidine). 5) Alkenes are tolerated, but the presence of alkynes leads to complete inhibition of the reaction. While this approach provides valuable information on the limitations of the methodology, we recognize that such a simplified approach has its own limitations, as the integrity of the additives post-reaction was not determined; changes in the electronic properties of the additives through substituent variation can also affect compatibility. In Step 4 (Figure 2), we directed our attention to late-stage amination of a set of complex molecules consisting of small-molecule drugs and natural products. The value of the LSF informer library builds on the investigations presented so far, as it introduces further complexity resulting from the interplay of multiple functional groups in a single substrate. A 48-membered LSF informer library was used, this time tested against two solvents, NMP and DCE. Out of these, 11 were considered successful (conversion >10%), with structures confirmed by NMR spectroscopy.
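A small helper mirroring the conversion color-coding convention from the Table 4 legend above (thresholds as stated in the text; the additive names and conversion values below are invented for illustration):

```python
# Hypothetical classifier for the stated color-coding convention:
# green >50%, orange 25-50%, red <25%. Screen data below are invented.
def classify(conversion_pct: float) -> str:
    if conversion_pct > 50:
        return "green"   # additive well tolerated
    if conversion_pct >= 25:
        return "orange"  # partial inhibition
    return "red"         # strong inhibition

additive_screen = {"water": 92, "DMSO": 12, "pyridine": 8, "2,6-lutidine": 78}
for additive, conv in additive_screen.items():
    print(f"{additive}: {conv}% -> {classify(conv)}")
```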
A further 4 compounds were not considered successful due to low conversion and/or product decomposition during purification. The value of performing Step 2 (DG informer library) and Step 3 (functional group tolerance study) of the envisioned workflow (Figure 2) was already apparent at this stage. Of the 33 remaining compounds, 14 contained unproductive directing groups and 9 contained amines in their structure. The failure of the remaining 10 substrates is proposed to result from a combination of steric effects, combination effects of functional groups and the presence of unproductive directing groups not included in the directing group informer library. Although the reaction conditions for the substrates were mostly based on findings from the building block reaction optimizations, single-substrate optimization also proved of high utility for LSF examples (see SI). In the case of Atazanavir, a single reaction-conditions screen allowed us to rapidly identify optimal conditions for accessing the monofunctionalized product 3a and the difunctionalized product 3a with a high degree of selectivity in good to very good yields. Further worth noting is the compatibility of the reaction conditions with a number of polar and protic groups, including arrangements of functional groups suitable for bidentate coordination in the peptide backbone. The pyridine moiety also served as a suitable directing group in Pritelivir. Compound 3b was successfully obtained, with the primary sulfonamide, thiazole and tertiary amide groups tolerated. The heavily substituted pyrazole of Apixaban served as a productive directing group, further demonstrating the utility of this moiety for directed C−H amination. Worth noting is the use of NMP as the reaction solvent, as the substrate was insoluble in DCE. While the reaction gave a relatively low isolated yield of 3c, the majority of the unreacted starting material was successfully recovered. In the case of Sulfaphenazole, the pyrazole moiety bearing a sulfonamide group in the 5-position served as a suitable directing group, yielding analogue 3d. Worth noting is the improved tolerance of the aniline compared with the functional group tolerance study (Table 4), presumably due to the different electronic properties of the nitrogen caused by the para sulfonamide group. A case of unexpected selectivity was observed with Telmisartan, where product 3e, resulting from coordination of the N-methylbenzimidazole, was obtained as a single regioisomer, with no product formation observed from carboxylate coordination. This is also the only example where we observed the formation of a product with a 1,2,3-substitution pattern. We rationalize the observed selectivity as a result of the steric arrangement of the substrate, with the ortho substituent of the benzoic acid moiety decreasing its reactivity and the fused ring system in the 3-position decreasing steric hindrance and allowing functionalization in a 1,2,3 layout. The imine moiety of Diazepam facilitated amination of the phenyl core; both the monofunctionalized 3f and the difunctionalized 3f products were successfully isolated. It is important to note that these products were isolated directly from the LSF informer library at 0.02 mmol scale, in quantities sufficient for complete characterization. Lumacaftor was successfully functionalized in NMP with complete selectivity for the carboxylate-directed amination. While conversion was relatively low, the 2-acylanilido pyridine structural motif was tolerated.
A significant amount of unreacted starting material was also recovered. The product of carboxylate-directed C−H amination of Repaglinide, 3h, was successfully isolated at the 0.02 mmol screening scale. The tolerance of the tertiary amine moiety is rationalized by its anilinic nature. A powerful example of the utility of the herein described amination protocol is the functionalization of Paclitaxel. This complex natural product contains a number of polar and protic functional groups, sensitive ester groups, a strained oxetane ring and an unsaturation, all of which pose a potential challenge for LSF methods. While 3i could be obtained in 20% isolated yield under standard conditions, the use of one equivalent of an acid additive increased the isolated yield to 72%. This result, to the best of our knowledge, represents the highest-yielding example of Paclitaxel C−H functionalization reported to date. The positive effect of carboxylic acid additives on conversion with substrates bearing amide directing groups was first observed with the example of Bezafibrate: the conversion to 3j was much higher than the conversions of the corresponding amides in the DG informer library. This unexpected observation further strengthens the case for LSF informer libraries, as the combinations of structural motifs directly aided methodology development. Finally, Levamisole, bearing an unusual sp2-sp3 linkage between the benzene core and the directing group, was selectively difunctionalized to yield product 3k. A point worth noting is that even though the isolated yields for a number of the presented examples were relatively low, in many cases they may still be comparable to, or exceed, the expected overall yields of de novo synthesis of these analogues. The amount of material obtained from these reactions would suffice for the needs of biological studies, allowing rapid access to SAR data in a fraction of the time required for de novo synthesis. Scheme 3. Deprotection studies; isolated yields shown. The Moz group was successfully removed with three distinct deprotection protocols. This is of particular importance for LSF applications, allowing for deprotection conditions tolerant of a wide array of functional groups. Deprotection under acidic conditions in the presence of TFA yielded the corresponding aniline in excellent yield. The desired compound was also obtained under basic conditions, using excess KOH in refluxing EtOH. In the third protocol, hydrogenolysis using standard Pd/C hydrogenation yielded product 4c in very good yield. Finally, LSF application of a one-pot amination/deprotection protocol was demonstrated with Bezafibrate, yielding the free aniline product 4j in 60% isolated yield. Miniaturization studies In the final experimental study we further investigated the possibilities of reaction miniaturization enabled by the use of NMP as a non-volatile solvent. We found that with as little as one microliter of total reaction volume, conversion to the anticipated products was obtained with reasonable reproducibility throughout the selected substrates (Figure 4). To the best of our knowledge, this represents the smallest-scale C−H activation reported to date, and it presents an exciting opportunity for improved sustainability of reaction screening through decreased material consumption. We were able to scale the reaction down even further by using acoustic dispensing to set up the reaction plates.
42 With this technique, we were able to detect product formation in reactions with a total volume as little as 5 nL (1 nmol scale). Taking Sulfaphenazole as an example, this means that 3181 reactions can be performed from one gram of material. Chemistry at this small scale brings its own set of challenges, and at this point we were not able to quantify conversions with reasonable accuracy and throughput. We are currently working on an AMI-MS analytical method to tackle this issue. 43 Application guidelines In the final part of this work, Step 5 (Figure 2), we present guidelines for reaction outcome prediction (Figure 5). 1) Directing group selection. In total, 21 productive directing groups were presented in the DG and LSF informer libraries, as well as 20 non-productive directing groups from the DG informer library (Figure 18; for complete substrate structures see SI). 2) Determination of tolerated functional groups. This is aided by the functional group tolerance study and the LSF scope. The major limitations in this respect are the presence of amines, alkynes, thioureas and residual DMSO. The DMSO sensitivity is important to consider in medicinal chemistry applications, as intermediates are often stored as DMSO solutions and residual solvent can remain after evaporation. 3) Steric effects on the substrate should be examined in order to determine selectivity and/or productivity. Based on the observed results, the reaction proceeds at the less sterically hindered ortho position when two suitable reaction sites are available. Meta substitution with sterically demanding substituents blocks 1,2,3-substitution. Substituents on the directing group and in the ortho position of the system to be functionalized can negatively affect the reaction outcome by twisting the directing group out of plane. 44 4) The reaction solvent is chosen based on the desired application. While DCE showed the best overall performance, substitution by EtOAc or CPME is possible if the use of a greener solvent is desired. NMP is ideal for miniaturization, as well as for polar drug-like molecules displaying low solubility in DCE. 5) The final step is the choice of deprotection conditions. The range of presented protocols should satisfy the needs of potential LSF applications. Although we studied the reaction extensively, the predictive model has its limitations. Given the breadth of the chemical space of Lewis-basic DGs, it is likely that some DGs and their analogues remain uninvestigated. Selectivity between DGs, while observed in most cases, was not extensively investigated. Finally, while the functional group tolerance study provides basic guidelines, combination effects between functional groups, or even with DGs, are not possible to predict. Conclusion A directed iridium-catalyzed C−H amination methodology applicable to substrates with a wide range of directing groups and with outstanding functional group tolerance was developed. HTE applications facilitated not only rapid optimization of reaction conditions but also allowed for reaction miniaturization to the nanomolar scale and the use of automation throughout the campaign. An important aspect of this study was exploring both the opportunities and the limitations of the reaction, disclosing both successful and unsuccessful reactions.
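Read schematically, guidelines 1-3 above amount to a rule-based pre-screen. The sketch below is an illustrative rendering under assumed placeholder sets, not the authors' actual predictive model:

```python
# Hypothetical pre-screen following the application guidelines above.
# The directing-group and interferent sets are illustrative placeholders.
PRODUCTIVE_DGS = {"pyridine", "pyrazole", "benzoxazole", "carboxylic acid"}
INTERFERENTS = {"primary amine", "secondary amine", "alkyne", "thiourea",
                "residual DMSO"}

def prescreen(directing_groups: set, functional_groups: set) -> str:
    if not directing_groups & PRODUCTIVE_DGS:
        return "unlikely: no productive directing group"
    bad = functional_groups & INTERFERENTS
    if bad:
        return f"risky: interfering groups {sorted(bad)}"
    return "candidate: proceed to solvent and deprotection selection"

print(prescreen({"pyrazole"}, {"ester", "alcohol"}))   # candidate
print(prescreen({"pyrazole"}, {"alkyne"}))             # risky
```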
The directing group and LSF informer libraries, in combination with the functional group tolerance studies, allowed for the generation of guidelines for predicting reaction applicability to complex substrates. In terms of demonstrated substrate scope, a broad range of building blocks with diverse directing groups was synthesized, and late-stage functionalization of a number of structurally complex drugs and natural products was demonstrated. The utility of the presented method for applications on complex substrates is further increased by access to a range of Moz deprotection protocols. We are confident that the presented method and associated techniques will find applications in other laboratories. Finally, it is our sincere hope that this work will inspire others to disclose the limitations of their methodologies, allowing users to save time and effort on unproductive reactions, eliminating material consumption for such reactions, and ultimately reducing the environmental impact of synthetic chemistry. |
Seizure-induced damage to the hippocampus is prevented by modulation of the GABAergic system. A variety of cerebral insults induce neuronal damage to the hippocampal formation. The somatostatin-immunoreactive (SOM-ir) neurones in the dentate hilus are particularly vulnerable. In the present study, we demonstrated that augmentation of hippocampal GABAergic inhibition by chronic infusion of gamma-vinyl GABA prevented the delayed seizure-induced damage to hilar SOM-ir neurones. Selective lesions of the cholinergic, serotonergic or noradrenergic pathways to the hippocampus did not attenuate the seizure-induced loss of SOM-ir neurones; rather, the damage was exacerbated by the cholinergic lesion. It is, therefore, the intrahippocampal GABAergic circuitries, rather than the selective subcortical pathways, that are critical for neuroprotection after seizures. Enhanced GABAergic inhibition in the hippocampus prevented damage to hilar SOM-ir neurones, even when started 2 days after status epilepticus. GABAergic agents may thus provide an alternative treatment for delayed neuronal damage caused by cerebral insults. |
Effect of TiOSO4 Concentration on Rutile White Pigment via the Short Sulfate Process A short sulfate process was developed to produce rutile TiO2 white pigment using low-concentration industrial TiOSO4 solution as the raw material via a self-generated-seed thermal hydrolysis route. The concentration of the TiOSO4 solution significantly influenced the structure and pigment properties of the rutile TiO2 white pigment. The samples were characterized by XRD, particle size distribution and pigment property tests. An appropriate concentration of TiOSO4 was beneficial for promoting the hydrolysis process and obtaining a favorable structure and high-quality white pigment. The optimized concentration of the TiOSO4 solution was 191.20 g/L. |
Twins, Telomeres, and Aging-in Space! BACKGROUND The landmark National Aeronautics and Space Administration Twins Study represented an integrated effort to launch human space life science research into the modern age of molecular- and "omics"-based studies. As part of the first One-Year Mission aboard the International Space Station, identical twin astronauts Scott and Mark Kelly were the subjects of this "out of this world" research opportunity. Telomeres, the natural ends of chromosomes that shorten with cell division and with a host of lifestyle factors and stresses, are key molecular determinants of aging and aging trajectories. METHODS We proposed that telomere length dynamics (changes over time) represent a particularly relevant and integrative biomarker for astronauts, as they reflect the combined experiences and environmental exposures encountered during spaceflight. Telomere length (quantitative polymerase chain reaction and telomere fluorescence in situ hybridization) and telomerase activity (quantitative polymerase chain reaction-telomere repeat amplification protocol) were longitudinally assessed in the space- and earth-bound twins. Chromosome aberrations (directional genomic hybridization), signatures of radiation exposure, were also evaluated. RESULTS The twins had relatively similar telomere lengths before spaceflight, and the earth-bound twin's telomeres remained relatively stable over the course of the study. Surprisingly, the space twin's telomeres were longer during spaceflight, and upon return to Earth they shortened rapidly, resulting in many more short telomeres after spaceflight than before. Chromosomal signatures of space radiation exposure were also elevated during spaceflight, and increased inversion frequencies persisted after spaceflight, suggestive of ongoing genome instability. CONCLUSION Although the definitive mechanisms underlying such dramatic spaceflight-associated shifts in telomere length remain unclear, improved maintenance of telomere length has important implications for aging science and for improving healthspan for those on Earth as well. |
Faulting along the southern margin of Reelfoot Lake, Tennessee The Reelfoot Lake basin, Tennessee, is structurally complex and of great seismological interest because it is located at the junction of two seismicity trends of the New Madrid seismic zone. To better understand the structure at this location, a 7.5-km-long seismic reflection profile was acquired on roads along the southern margin of Reelfoot Lake. The seismic line reveals a westerly dipping basin bounded on the west by the Reelfoot reverse fault zone and on the east by the Ridgely right-lateral transpressive fault zone, with the Cottonwood Grove right-lateral strike-slip fault in the middle of the basin. The displacement history of the Reelfoot fault zone appears to be the same as that of the Ridgely fault zone, suggesting that movement on these fault zones has been synchronous, perhaps since the Cretaceous. Since the Reelfoot and Ridgely fault systems are believed to be responsible for two of the mainshocks of 1811-1812, the fault history revealed in the Reelfoot Lake profile suggests that multiple mainshocks may be typical of the New Madrid seismic zone. The Ridgely fault zone consists of two northeast-striking faults that lie at the base of and within the Mississippi Valley bluff line. This fault zone has 15 m of post-Eocene, up-to-the-east displacement and appears to locally control the eastern limit of Mississippi River migration. The Cottonwood Grove fault zone passes through the center of the seismic line and has approximately 5 m of up-to-the-east displacement. Correlation of the Cottonwood Grove fault with a possible fault scarp on the floor of Reelfoot Lake and with the New Markham fault north of the lake suggests that the Cottonwood Grove fault may change to a northerly strike at Reelfoot Lake, thereby linking the northeast-trending zones of seismicity in the New Madrid seismic zone. |
T-cell-rich lymphoproliferative disorders: morphologic and immunologic differential diagnoses To differentiate peripheral T-cell lymphomas (PTCL), the authors evaluated the results of T11 monoclonal antibody studies on consecutive cell suspensions prepared from 509 lymph nodes from various lymphoproliferative disorders (LPD). They used T11 (CD2) positivity to identify those LPD in which the content of T cells was high. There were 266 (52%) cell suspensions which contained more than 50% T11-positive cells. More than 75% of the following non-Hodgkin's lymphomas had over 50% T11-positive cells: diffuse mixed cell (DM), diffuse atypical poorly differentiated lymphocytic and lymphoblastic lymphomas; mycosis fungoides; and true histiocytic lymphoma. Eleven cell suspensions had more than 90% T11-positive cells; four were involved by B-cell lymphomas. The cell suspensions prepared from nine of 14 diffuse large cell lymphomas of the T-cell type had more than 50% T11-positive cells. Of these, three of five cases of the polymorphous subtype had fewer than 50% T11 cells, but six of seven lymph nodes of the clear-cell type had more than 50% T11-positive cells. Each of seven DM samples of the T-cell type contained over 50% T11 cells; none had a polymorphous appearance. In the 112 cases of reactive LPD studied, more than 75% of cases of necrotizing lymphadenitis, dermatopathic lymphadenitis, angioimmunoblastic lymphadenopathy, and those with lymph nodes with no specific reactive pattern had more than 50% T11-positive cells. The authors' findings indicate that T11 positivity is a reliable T-cell marker in reactive and neoplastic LPD except for those cases of PTCL with a polymorphous appearance; these tend to lose T11 expression. A multiparameter diagnostic approach is required in the following LPD: PTCL which are T11-negative; PTCL of small lymphocytic type having an unremarkable T-cell phenotype; SIg-negative B-cell lymphomas which are rich in non-neoplastic T cells; non-Hodgkin's lymphomas with minimal disease which are rich in reactive T cells; and polymorphous large cell proliferations. |
Paralinguistic Discussion in an Online Educational Setting: A Preliminary Study One of the perceived drawbacks of e-learning is the absence of non-verbal communication. This leads one to conclude that e-learning in general, and fully online education in particular, is inferior to its on-campus counterpart in terms of its communicative capability. This paper challenges this viewpoint, arguing that not only is non-verbal communication 'alive and well' in an online educational setting, it is becoming more robust as the various information and communication technologies (ICTs) in common usage act to redefine non-verbal forms of communication. Reporting on a preliminary study conducted within a Master of Business Administration (MBA) programme at a completely online business school, the authors outline the importance of incorporating the opportunity for non-verbal communication in the learning environment, particularly in an international or cross-cultural setting. Then, focusing on the use of 'emoticons' in a longitudinal study of Organisational Behaviour classes, they analyse the frequency of use of specific categories of emoticons, and their significance for effective cross-cultural communication. The paper concludes that emoticons facilitate a depth and range of non-verbal communication which, in this preliminary study at least, appear comparable to that in the non-virtual world, enhancing the quality of interaction and minimising the potential for friction and misunderstanding between learners. |
Clinical and dermoscopic image of an intermediate stage of regressing seborrheic keratosis in a lichenoid keratosis. LICHENOID KERATOSIS (LK) is a rather frequent skin lesion that has some histologic features similar to lichen planus, and, because of this, LK is also called lichen planus-like keratosis. The main histologic findings include a segment of hyperplastic epidermis accompanied by lymphoid infiltrate in the papillary dermis. The precise nature of LK is uncertain, but it has been proposed that this lesion represents an immunologic or regressive response to a preexisting epidermal lesion. The frequent association of solar lentigines or seborrheic keratosis in the adjacent epithelium has been cited as evidence in favor of this hypothesis. Dermoscopy is a noninvasive technique that has greatly improved the diagnostic accuracy of pigmented skin lesions. We considered it worthwhile to communicate the dermoscopic characteristics of a lesion that presents features of regressing seborrheic keratosis in LK. |
How to Establish a Digital Thread Using a 3D Factory In a manufacturing assembly line scenario, the factory layout is one of the most crucial pieces of information used by manufacturing, facility and factory-automation engineers for planning purposes. It is important for the manufacturing, facility and operations teams to work with the most up-to-date layout when product, process and operational information on the shop floor is constantly changing. Four elements govern the availability of a real-time layout: product design, manufacturing process planning, layout planning and the shop floor. The layout must accommodate changes coming from product design, process updates and shop-floor modifications in real time so that there is no confusion among the stakeholders referring to layout data for planning purposes. The impact of product design and process design on the layout is hardly ever managed in real time because of the isolated systems used to manage these data. The integration of product, process and plant (PPP) is becoming crucial to facilitate collaboration and shrink new-product-introduction lead time, while real-time updates from shop-floor changes are expected in the era of digital transformation. One of the reasons the integration of product, process and plant (PPP) does not happen is the multiple isolated systems used to maintain these data; there are also challenges in feeding data back from the shop floor because no thread exists between these objects. This paper describes how a factory layout can be developed by integrating product, process and plant (PPP) in a single dynamic environment, establishing a digital thread between product design, manufacturing process planning and factory layout to trigger real-time changes and enable a digital twin of the factory. The methodology adopted here is to develop a bill of materials for manufacturing resources and align it with product data management. This approach not only provides the ability to maintain change control over resource objects but also helps in configuration management of the resource bill of materials. The resources are grouped together as a layout structure for the plant, with each object required to manufacture the product. The detailed layout developed for the plant, integrated with product and process, is used to establish connections with objects on the shop floor through sensors and IoT (Internet of Things) devices to form a digital twin. So far, little effort has been made to digitalize every piece of information on the factory floor and to generate a digital twin of the factory by connecting physical objects with digital objects. The paper elaborates the approach to establishing a digital thread between PPP and how this can become the foundation for a digital twin of the factory. |
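As a minimal sketch of the resource bill-of-material idea described above (the classes, fields and identifiers are hypothetical, not any specific PLM system's data model), the product-process-plant thread can be represented as linked objects that can be traversed end to end:

```python
# Minimal illustrative data model for a product-process-plant (PPP) thread.
# All classes, fields and IDs are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class ProductItem:
    part_no: str

@dataclass
class ProcessStep:
    name: str
    consumes: list[ProductItem] = field(default_factory=list)

@dataclass
class Resource:              # one entry in the manufacturing-resource BOM
    resource_id: str
    performs: list[ProcessStep] = field(default_factory=list)
    sensor_id: str = ""      # link to the physical shop-floor object (IoT)

def trace(resource: Resource) -> None:
    # Walk the thread: plant resource -> process steps -> product parts
    for step in resource.performs:
        for item in step.consumes:
            print(f"{resource.resource_id} -> {step.name} -> {item.part_no}")

weld_cell = Resource("WC-01",
                     [ProcessStep("weld", [ProductItem("BRKT-7")])],
                     sensor_id="iot-0042")
trace(weld_cell)  # WC-01 -> weld -> BRKT-7
```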
Development of a Novel Multi-Isoform ALDH Inhibitor Effective as an Antimelanoma Agent The aldehyde dehydrogenases (ALDH) are a major family of detoxifying enzymes that contribute to cancer progression and therapy resistance. ALDH overexpression is associated with a poor prognosis in many cancer types. The use of multi-ALDH isoform or isoform-specific ALDH inhibitors as anticancer agents is currently hindered by the lack of viable candidates. Most multi-ALDH isoform inhibitors lack bioavailability and are nonspecific or toxic, whereas most isoform-specific inhibitors are not effective as monotherapy due to the overlapping functions of ALDH family members. The present study details the development of a novel, potent, multi-isoform ALDH inhibitor, called KS100. The rationale for drug development was that inhibition of multiple ALDH isoforms might be more efficacious for cancer compared with isoform-specific inhibition. Enzymatic IC50s of KS100 were 207, 1,410, and 240 nmol/L toward ALDH1A1, 2, and 3A1, respectively. Toxicity of KS100 was mitigated by development of a nanoliposomal formulation, called NanoKS100. NanoKS100 had a loading efficiency of approximately 69% and was stable long-term. NanoKS100 was 5-fold more selective for killing melanoma cells compared with normal human fibroblasts. NanoKS100 administered intravenously at a submaximal dose (3-fold lower) was effective at inhibiting xenografted melanoma tumor growth by approximately 65% without organ-related toxicity. Mechanistically, inhibition by KS100 significantly reduced total cellular ALDH activity to increase reactive oxygen species generation, lipid peroxidation, and accumulation of toxic aldehydes leading to apoptosis and autophagy. Collectively, these data suggest the successful preclinical development of a nontoxic, bioavailable, nanoliposomal formulation containing a novel multi-ALDH isoform inhibitor effective in the treatment of cancer. |
P2271 Contemporary trends and outcomes of percutaneous vs. surgical aortic valve replacement in cancer patients Cancer patients with severe aortic stenosis (AS) are often ineligible for surgical aortic valve replacement (SAVR). Transcatheter aortic valve replacement (TAVR) is an emerging, less invasive treatment option for severe AS. Cancer patients likely stand to benefit from TAVR given its less invasive nature; however, there is a paucity of data regarding the comparative effectiveness of TAVR vs. SAVR in cancer. We sought to assess the relative utilization, outcomes, and dispositions associated with TAVR vs. SAVR in cancer and non-cancer patients. The US-based National Inpatient Sample was queried between 2012 and 2015 using ICD-9 codes for adults >18 years with comorbid AS and cancer without metastatic disease. Multiple in-hospital and disposition outcomes were evaluated. Comparison of TAVR vs. SAVR required propensity score (PS) estimation using demographic, socio-economic, comorbidity, and hospital-specific variables. A standardized morbidity ratio (SMR) weight was calculated by assigning TAVR a weight of 1 and SAVR a weight of PS/(1-PS). SMR-weighted generalized logistic regression was conducted to estimate the average effect of TAVR compared with SAVR. Finally, the Cochran-Mantel-Haenszel (CMH) test for propensity-matched data was used to compare the effect modification of cancer on these outcomes. A total of 979,912 out of 5,611,173 patients with AS were found to have non-metastatic cancer (17.5%). The average Elixhauser mortality score of patients undergoing TAVR and SAVR was 8.9 vs. 8.1 and 8.5 vs. 7.1 for cancer vs. non-cancer, respectively (p<0.0001). Over time, the number of patients undergoing AVR increased in both groups, driven primarily by significantly increased rates of TAVR utilization in the cancer group. Over the study period, an increase in the proportion of patients undergoing TAVR among all patients undergoing AVR was noted (figure), with 21.8% and 19.6% of patients with prostate and breast cancer, respectively, in 2015. TAVR in cancer patients was associated with lower odds of acute kidney injury, cardiogenic shock and major bleeding, with no difference in in-hospital mortality or stroke compared with SAVR. Additionally, TAVR was associated with higher odds of home discharge and a lower need for nursing facility transfer [OR: 0.7 (0.6-0.8)] compared with SAVR among cancer patients. Similar outcomes were noted in the non-cancer cohort when comparing TAVR with SAVR. However, favorable effect modification by cancer was noted with regard to AKI (p=0.003), home discharge (p<0.0001), and less nursing facility transfer (p=0.0003), suggesting safety. Compared with patients without cancer, the utilization of AVR in cancer patients has steadily increased. The benefits of TAVR over SAVR appear to extend to patients regardless of cancer status. TAVR might be a more suitable procedure for cancer patients with AS. |
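A brief sketch of the standardized morbidity ratio weighting described in the abstract, with TAVR patients weighted 1 and SAVR patients weighted PS/(1-PS); the propensity scores and outcomes below are simulated placeholders, not study data:

```python
# Illustrative SMR weighting: TAVR (treated) weight = 1,
# SAVR (comparator) weight = PS / (1 - PS). All data are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
treated = rng.integers(0, 2, n)                  # 1 = TAVR, 0 = SAVR
ps = np.clip(rng.beta(2, 3, n), 0.01, 0.99)      # stand-in propensity scores

weights = np.where(treated == 1, 1.0, ps / (1.0 - ps))

# Weighted comparison of a binary outcome (e.g., an in-hospital event)
outcome = rng.integers(0, 2, n)
for grp, label in ((1, "TAVR"), (0, "SAVR (SMR-weighted)")):
    m = treated == grp
    rate = np.average(outcome[m], weights=weights[m])
    print(f"{label} event rate: {rate:.3f}")
```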
Wind Farm Location Spatial Optimization Based on Grid GIS and the Choquet Fuzzy Integral Method in Dalian City, China: Selecting an appropriate wind farm location must be specific to a particular administrative region, which involves balancing restrictions and trade-offs. Multi-criteria decision making (MCDM) and GIS are widely used in wind energy planning, but they have failed to achieve the selection of an optimal location, and it is difficult to establish a set of independent factors. The fuzzy measure is an effective method for synthesizing intermediate evaluations and calculating factor weights through fuzzy integrals. In this paper, the optimal wind farm location is analyzed by coupling a Grid GIS technique with the fuzzy measure. Dalian City is selected as the study area to prove the feasibility of the proposed method. Topography, meteorology, transmission facilities, biological passages, and infrastructure are taken into the index system. All the indexes are spatialized into vector grid cells, which are taken as the basic wind farm location alternative units. The results indicate that the Grid-GIS-based fuzzy measure and Choquet fuzzy integral method can effectively deal with the spatial optimization problem and identify optimal wind farm locations. Introduction Currently, world primary energy consumption has increased from 3701 Mtoe in 1965 to 13,511 Mtoe in 2017. At the global consumption level of 2017, coal, oil, and natural gas reserves are barely sufficient for a further 52.6, 50.2, or 134 years, respectively. The international community is increasingly concerned about the influence of primary energy consumption on climate change and air pollution. As is well known, wind energy is a kind of renewable energy that can be utilized sustainably without causing any air pollution, and it is expected to achieve extensive commercial success. Selecting an appropriate wind farm location must be specific to a particular administrative region, which involves balancing restrictions and trade-offs. The location of a wind farm is very important to the feasibility of wind turbine investment, and it is also related to environmental impacts, such as effects on wildlife, and to socio-economic factors. Consequently, local planners face a dual challenge: they have to make plans for economic growth while reducing environmental risk. It is therefore essential to identify the optimal locations for wind farm development. Previously, many researchers have analyzed offshore and onshore wind farm layout optimization. For example, Mytilinou developed an optimization methodology based on life cycle cost analysis and applied it to offshore wind farms. Nezhad assessed offshore wind farm sites in the Samothraki Islands and showed the offshore wind energy potential per location. Multi-criteria decision making (MCDM) has been widely used in energy planning. For example, a hybrid MCDM model was proposed by Mehmet and Metin based on BOCR (Benefits, Opportunities, Costs and Risks) and ANP (Analytic Network Process) to research renewable energy alternative priorities. A novel hybrid MCDM method was developed by Fetanat and Khorasaninejad based on the fuzzy analytic network method to find an offshore wind farm's best position in the southwest of Iran, and the results proved robust when the experts' opinions changed. In general, this research was effective for evaluating wind farm locations. In real-world problems, however, spatial data is very important for identifying optimal wind farm locations.
The Geographic Information System (GIS) is an effective tool for dealing with problems of spatial planning and management. As a result, from the 2000s onwards a number of studies have estimated wind farm location priorities and built alternatives by combining MCDM with GIS in several countries. For example, Latinopoulos and Kechagia developed a GIS-based multi-criteria evaluation method for wind farm location in Greece, and the results showed that the method could provide suitable locations for future wind farm construction. Baseer et al. conducted a wind farm site suitability analysis using a GIS-based MCDM approach. Konstantinos et al. presented a combination of AHP (Analytic Hierarchy Process) and GIS to determine the most suitable locations in Eastern Macedonia and the Thrace region, Greece. Spyridonidou and Vagiona used GIS and Statistical Design Institute software to plan offshore wind farms in Greece at the national spatial planning scale. However, GIS-based evaluation methods for wind farm site selection have mainly focused on spatial elements without a differentiated and precise evaluation of location suitability, and have failed to achieve the optimal location. A common feature of attempts to evaluate suitable wind farm locations is that almost all MCDM methods assume that the evaluation factors are independent of each other. For a complex system, however, it is difficult to establish a set of independent factors. For MCDM with interactive factors, the fuzzy measure is an effective method for synthesizing intermediate evaluations and calculating factor weights through the fuzzy integral. The fuzzy measure is obtained by replacing additivity with a weaker monotonicity condition, and it is a form of non-additive measure. Previously, a number of studies focused mainly on the theoretical development of fuzzy integrals and fuzzy measures. The results showed that the fuzzy measure can effectively deal with the interactions between various factors. However, the application of fuzzy measure theory in the integration of MCDM and GIS is very infrequent, especially in wind farm location optimization. The reason is that determining the fuzzy measure effectively is very complex. As a result, the fuzzy measure is proposed here to deal with this complex problem without relying on expert opinion. In particular, the fuzzy measure is simple to interpret and easy to calculate, and it has therefore gained great popularity. The research question of this paper is thus to develop a Grid-GIS-based Choquet fuzzy integral method and find the optimal wind farm location in Dalian City, China. All the data of the study area are spatialized by Grid GIS, which combines raster forms with vector attributes; the vector grid cell is taken as the basic wind farm alternative unit. The fuzzy measure is used to weight interactive factors and their coalitions; Marichal entropy and Shapley values are used to determine the fuzzy measure; and the Choquet fuzzy integral method is used to find the optimal wind farm location in the study area. Compared with traditional MCDM methods, this approach can balance the trade-offs among interdependent factors when finding an optimal wind farm location. Moreover, vector grids have an advantage over the common raster data used in former research. The results of this paper will be useful for planners establishing effective wind farm construction plans and improving local energy sustainability.
Methodology In the context of finding an optimal wind farm location, there will be conflicts between different indexes. To deal with the complicated conflicts among the indexes, the fuzzy measure is used to weight each index and the Choquet fuzzy integral method is used to combine the index values and weights. To solve the wind farm location problem, a hybrid of GIS grids, Marichal entropy and Shapley values is put forward to calculate the fuzzy measures. Then, the optimal wind farm location is determined. The Concept of the Fuzzy Measure Sugeno first proposed the concept of the fuzzy measure. It is a model over an index set which can express the importance of one or more indexes and describe the relationships between multiple indexes. Let A = {a_1, a_2, ..., a_m} be the index space and X = {x_1, x_2, ..., x_n} be the space of evaluation objects. Dalian's wind farm alternative areas are denoted x_1, x_2, ..., x_n ∈ X, and the indexes a_1, a_2, ..., a_m ∈ A define the m factors specific to the wind farm. For a given wind farm area x_j ∈ X, the assessment value of each index a_i ∈ A is expressed as a_i(x_j). Non-negative numbers are used to connect each index a_i (i = 1, 2, ..., m) and its combinations in order to calculate the weight of each index in the wind farm location optimization process. The fuzzy measure is used to quantify the significance of index a_i and its combinations with respect to the potential interrelationships among constraints and indexes. P(A) denotes the power set of A, and g: P(A) → [0, 1] is the fuzzy-measure set function on A, satisfying the boundary conditions g(∅) = 0 and g(A) = 1, and monotonicity: g(S) ≤ g(T) whenever S ⊆ T ⊆ A. Here g(S) denotes the importance or weight of the index set S ∈ P(A), i.e., the capability of S to determine the optimal wind farm location without considering the remaining indexes. If the set A = {a_1, a_2, ..., a_m} is finite, the mapping a_i → g_i = g({a_i}), i = 1, 2, ..., m, is the fuzzy density function. Choquet Fuzzy Integral Method The Choquet fuzzy integral is the most commonly used aggregation operator when there is interaction between indexes. Consider a given wind farm alternative x_j, with the value of each index given numerically as a_1(x_j), a_2(x_j), ..., a_m(x_j). For the wind farm alternative problem, A is the index set with the corresponding fuzzy measure g. In order to compare different wind farm location schemes, an overall score aggregating a_1(x_j), a_2(x_j), ..., a_m(x_j) is needed. Assume (relabelling indexes if necessary) that a_1(x_j) ≤ a_2(x_j) ≤ ... ≤ a_m(x_j); then the Choquet fuzzy integral of x_j with respect to g is defined as C_g(x_j) = Σ_{i=1}^{m} [a_i(x_j) − a_{i−1}(x_j)] g(A_i), where A_i = {a_i, ..., a_m} and a_0(x_j) = 0. The Determination of the Fuzzy Measure Before applying the Choquet integral method to obtain the optimal wind farm alternative, it is necessary to calculate the fuzzy measure of each index. The Shapley value is used to ensure that the weight of each index is calculated objectively. The role played by an index a_i (a_i ∈ A) in the wind farm alternatives cannot be described by g(a_i) alone; the weights of all index sets S ({S | a_i ∈ S, S ∈ P(A)}) must also be examined. Grabisch defined the Shapley value based on the fuzzy measure of a general finite discrete set.
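To make the Choquet aggregation just defined concrete before turning to the Shapley value, the following is a minimal sketch assuming the fuzzy measure is supplied as a table over index subsets; the index names and measure values are purely illustrative and not taken from the paper.

```python
def choquet_integral(scores, g):
    """Discrete Choquet integral of index scores a_i(x_j) with respect to a
    fuzzy measure g, supplied as a dict mapping frozensets of index names
    to weights (g[frozenset()] = 0 and g[all indexes] = 1)."""
    # Sort indexes by ascending score: a_(1) <= a_(2) <= ... <= a_(m).
    items = sorted(scores.items(), key=lambda kv: kv[1])
    names = [name for name, _ in items]
    total, prev = 0.0, 0.0
    for i, (_, value) in enumerate(items):
        # A_i = coalition of indexes whose score is at least the i-th smallest.
        coalition = frozenset(names[i:])
        total += (value - prev) * g[coalition]
        prev = value
    return total

# Illustrative three-index fuzzy measure (values are made up, not the paper's).
g = {
    frozenset(): 0.0,
    frozenset({"wind"}): 0.5, frozenset({"slope"}): 0.2, frozenset({"grid"}): 0.3,
    frozenset({"wind", "slope"}): 0.6, frozenset({"wind", "grid"}): 0.9,
    frozenset({"slope", "grid"}): 0.4,
    frozenset({"wind", "slope", "grid"}): 1.0,
}
cell = {"wind": 0.8, "slope": 0.6, "grid": 0.9}  # standardized index scores
print(choquet_integral(cell, g))  # 0.6*1.0 + 0.2*0.9 + 0.1*0.3 = 0.81
```

For three indexes the measure table has 2^3 = 8 entries; in the paper's setting the measure values come from the Marichal entropy model described next, not from hand-picked numbers.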
If g is a fuzzy measure on P(A), then for any a_i ∈ A the Shapley value can be defined as I(a_i) = Σ_{S ⊆ A∖{a_i}} [(m − |S| − 1)! |S|! / m!] [g(S ∪ {a_i}) − g(S)], where I(a_i) is the contribution of index a_i to the wind farm alternatives; if the indexes in set A are independent of each other, then g(a_i) = I(a_i). The contribution of a_i (a_i ∈ A) can then be expressed as a weight w_j satisfying 0 ≤ w_j ≤ 1 and Σ_{j=1}^{m} w_j = 1. Marichal entropy is used to calculate the fuzzy measure of the indexes in this research. Marichal entropy was defined by Marichal and proved to be analogous to the Shannon entropy. A fuzzy measure derived from Marichal entropy satisfies the boundary conditions, maximality, decisiveness, expandability, symmetry and strict monotonic increase, which are the typical properties of an effective entropy measure. When the Shapley values of the wind farm indexes are given, the fuzzy measure can be computed from the corresponding equation, where |S| is the cardinality of the index set S. By solving this model, the fuzzy measure values are acquired; substituting g(a_i) back into the equation, the fuzzy measures of all the wind farm indexes can be obtained. Natural Breaks The ranking of the results is based on the natural breaks method. The natural breaks method finds the "optimal" way to partition the value ranges, so that similar areas are grouped together: within-class variation is minimized, and the areas within each class (map colour) are therefore as close as possible in value to each other. Study Area The study is carried out in Dalian City, which is located in Liaoning province, northeast China. The total area is 13,237 km², with 5,952,000 residents (2018 census). The topography is dominated by hills, with altitudes ranging from 0 m to 476 m. The study area is situated on a peninsula and protrudes into the sea (as shown in Figure 1). Dalian City lies in the transition zone from the Bohai Sea to the Mongolian Plateau and is affected by humid air from the ocean and cold air from the north; as a result it is characterized by rich wind resources. According to the 13th five-year plan of Dalian energy development, the installed wind energy capacity of the city will reach 1.9 million kilowatts in 2020. The importance of the study area lies in the fact that the wind energy potential is very high and there are potentially suitable alternatives for the development of a wind farm. As a famous harbor and industrial city in China, Dalian hosts many industries, such as shipbuilding, petrochemicals, equipment manufacturing and high-tech industry; the demand for electricity is therefore very large in this area. Dalian is also an important route for migratory birds in northeast Asia. Bird migration routes through Dalian can be divided into three: the east, the middle, and the west.
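The Shapley formula above can be evaluated directly once a fuzzy measure is available. Below is a minimal sketch under the same dictionary convention as the earlier Choquet example; the two-index measure is illustrative only, not the paper's fitted values.

```python
from itertools import combinations
from math import factorial

def shapley_values(g, indexes):
    """Shapley value I(a_i) of each index for a fuzzy measure g, supplied
    as a dict mapping frozensets of index names to weights."""
    m = len(indexes)
    phi = {}
    for a in indexes:
        others = [b for b in indexes if b != a]
        total = 0.0
        for r in range(m):  # size of the coalition S not containing a
            for S in combinations(others, r):
                S = frozenset(S)
                coeff = factorial(m - r - 1) * factorial(r) / factorial(m)
                total += coeff * (g[S | {a}] - g[S])
        phi[a] = total
    return phi

# Illustrative two-index measure; for a normalized measure the Shapley
# values sum to g(A) - g(empty set) = 1.
g = {frozenset(): 0.0, frozenset({"wind"}): 0.6,
     frozenset({"slope"}): 0.3, frozenset({"wind", "slope"}): 1.0}
print(shapley_values(g, ["wind", "slope"]))  # approx {'wind': 0.65, 'slope': 0.35}
```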
Approximately 10 million migratory birds forage, rest, and replenish their physical strength in this area each year, according to the statistics. All kinds of natural and socio-economic conditions will therefore cause conflicts between wind energy exploitation and environmental protection. To make full use of the considerable wind resources, it is a considerable challenge for the local government to manage wind energy more efficiently. The Framework of the Wind Farm Location Spatial Optimization Model In order to solve the wind farm location optimization problem, it is necessary to establish an index system to determine the wind farm alternatives. The Chinese government has specified a number of indexes that should be considered when searching for a suitable wind farm location. These indexes are categorized as natural factors, such as topography, landform and geological conditions. Socio-economic factors should also be considered in the wind farm selection process, as presented in the introduction. Therefore, the index system taken into consideration can be summarized as topography, meteorology, transmission facilities, biological passages, and infrastructure (as presented in Table 1). Daily average wind speed data for Dalian City from 1 January 2010 to 31 December 2018 were collected. Altitude and slope are calculated from a DEM downloaded from http://www.gscloud.cn/ (accessed on 23 November 2019). Transmission lines, major roads, infrastructure, bird migration routes and bird sanctuaries were collected from the Bureau of Natural Resources, Dalian, China. Natural factors are spatial data, and the research scale in previous studies was mostly based on administrative regions. In this study, Dalian City is disaggregated into 1844 small vector grids based on Grid GIS, which integrates the grid with GIS and combines raster form with vector attributes. Natural and socio-economic factors are spatialized into the grid cells, and the vector grid cell is taken as the basic wind farm alternative unit. The resolution of the grid is 2000 m. For the topography data layers, a digital elevation model (DEM) is used to create the slope layer of the area in percent; the original resolution of the DEM is 30 m. Since available wind energy decreases with elevation as air density decreases, the DEM is also used to define suitable wind farm sites based on elevation. For the meteorological data, the kriging interpolation method is used to generate the layer map at a resolution of 2000 m. For the transmission facilities, biological passages and infrastructure, 500 m buffer zones were created individually. Finally, all the indexes are spatialized into each grid cell, which is the basic wind farm alternative unit; a minimal geometric sketch of this gridding-and-buffering step is given below.
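The gridding-and-buffering step described above can be sketched with standard computational-geometry primitives. The snippet below assumes the shapely package and uses a hypothetical transmission line and bounding box; it illustrates the 2000 m vector grid and 500 m buffer logic, not the paper's actual GIS workflow.

```python
from shapely.geometry import LineString, box

def make_grid(minx, miny, maxx, maxy, cell_size=2000.0):
    """Split a bounding box into square vector grid cells (metres),
    matching the paper's 2000 m grid resolution."""
    cells = []
    y = miny
    while y < maxy:
        x = minx
        while x < maxx:
            cells.append(box(x, y, x + cell_size, y + cell_size))
            x += cell_size
        y += cell_size
    return cells

# Hypothetical transmission line; a 500 m buffer flags the nearby cells.
line = LineString([(0.0, 0.0), (10000.0, 8000.0)])
buffer_zone = line.buffer(500.0)
grid = make_grid(0.0, 0.0, 10000.0, 10000.0)
near_line = [cell for cell in grid if cell.intersects(buffer_zone)]
print(len(grid), len(near_line))  # 25 cells in total; a handful near the line
```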
As the index values of each wind farm alternative are expressed in different measurement units, a standardization process is used to render the indexes commensurate (a numerical sketch of this standardization is given at the end of this subsection). The detailed process for the wind farm alternatives optimization can be generalized in the following steps: Step 1: Build the grid cells in the study area. Step 2: Build the index system for wind farm location optimization in the study area. Step 3: Calculate each index for each grid cell, and take the grid cell as the wind farm alternative. Step 4: Standardize the index values. Step 5: Calculate the Shapley value of each index. Step 6: Determine the fuzzy measure of the indexes based on the model. Step 7: Evaluate the potential wind farm locations with the Choquet integral and determine the optimal grid cells for the wind farm location. Determine the Shapley Value and Fuzzy Measures All the index values are normalized to the range 0 to 1 according to the minimum and maximum index values. The index system for finding the optimal wind farm location can be classified into two categories, positive and negative, according to the characteristics of each index: positive means the bigger the better, and negative means the smaller the better. Meteorology and transmission facilities are classified as positive indexes, while topography, biological passages and infrastructure are classified as negative indexes. Based on the entropy equation, the entropy weight of each index is obtained; then, based on the entropy weight of each index, the Shapley value of each index is obtained. The optimization system involves three levels, and Table 2 presents the Shapley values of the second-level indexes. Each index and its combined fuzzy measure g can be calculated through the corresponding equation, as listed in Table 3. All the resulting values are positive, which implies a complementary relationship among the indexes, except between the distances to roads and to transmission lines. With the fuzzy measures and values of the third-level indexes, the Choquet fuzzy integral method can aggregate them for each grid; the fuzzy measures and values of the first- and second-level indexes can also be calculated. Wind Farm Location Based on Shapley Value According to the Shapley values shown in Table 2, aggregate values of all the indexes can be computed for each grid cell, which serves as the basic alternative wind farm unit. The ranking of the results is based on the natural breaks method. Figure 2 presents the different land suitability levels of the wind farm alternatives in the study area. The results are divided into five categories: very suitable, suitable, common, unsuitable, and very unsuitable. The very suitable regions are mainly located around the downtown of Dalian City, whereas most very unsuitable and unsuitable areas are found in the urban area itself. The common and suitable areas are mainly in the middle of Dalian City. The Shapley value of the meteorological index is 0.875 and that of the natural factors is 0.8; the Shapley value of the infrastructure index is 0.681 and that of the socio-economic factors is 0.8. Therefore, the meteorological index is the most important index in the entire Shapley-value-based optimization process. The optimal grid cells for the wind farm location are regions where the accumulated wind speed is high, while socio-economic factors are largely neglected by this calculation. The average accumulated wind speed of Dalian City from 2010-2018 is shown in Figure 3.
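Returning to the standardization step described at the start of this subsection, the min-max normalization with the positive/negative index distinction can be written in a few lines; the sample values below are hypothetical, not the study's data.

```python
def standardize(values, positive=True):
    """Min-max standardization to [0, 1]; negative indexes ("the smaller
    the better") are inverted so that 1 is always most favourable."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    scaled = [(v - lo) / (hi - lo) for v in values]
    return scaled if positive else [1.0 - s for s in scaled]

wind_speed = [3.2, 5.8, 7.1, 4.4]   # positive index: higher is better
slope = [2.0, 15.0, 8.0, 30.0]      # negative index: lower is better
print(standardize(wind_speed, positive=True))
print(standardize(slope, positive=False))
```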
The places with high accumulated wind speed are located around the downtown area, which is consistent with the very suitable areas. In contrast, the places with low accumulated wind speed are located in the north of the study area, which is consistent with the unsuitable areas. The Optimal Wind Farm Location Based on Choquet Fuzzy Integral Method Based on the values and fuzzy measures shown in Table 3, the aggregate values of all the indexes for each grid cell can be calculated through the Choquet fuzzy integral method. This procedure provides a general measurement standard for each grid cell, so that the specific alternative wind farms in Dalian City can be optimized accordingly. The ranking of the result is based on the natural breaks method. Similar to the result from the Shapley value, the result is divided into five categories: very suitable, suitable, common, unsuitable, and very unsuitable. Figure 4 shows the wind farm suitability areas based on the Choquet fuzzy integral method. The very suitable regions are mainly located in Changxing Island and Jinzhou near the Yellow Sea. The very unsuitable and unsuitable areas are mainly located in the north of Wafangdian. As presented in Figure 5, a large portion of the study area corresponds to the existing wind farms. From the result, 23.9% of the existing wind farm land is located in suitable or very suitable areas, 57.7% is located in the common area, and only 1.2% is located in very unsuitable areas. This supports the accuracy of the results from the Choquet fuzzy integral method. The wind farm projects with construction permits in Dalian City are far below the wind energy capacity of the region.
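The natural breaks (Jenks) classification used to produce the five suitability categories can be implemented as a small dynamic program that minimizes within-class squared deviation. The sketch below is a generic implementation, not the GIS software's exact routine, and the score values are hypothetical.

```python
def jenks_breaks(values, n_classes):
    """Fisher-Jenks natural breaks: pick class boundaries minimizing the
    total within-class sum of squared deviations (O(n_classes * n^2))."""
    data = sorted(values)
    n = len(data)
    psum = [0.0] * (n + 1)  # prefix sums of the values ...
    psq = [0.0] * (n + 1)   # ... and of their squares
    for i, v in enumerate(data):
        psum[i + 1] = psum[i] + v
        psq[i + 1] = psq[i] + v * v

    def sse(i, j):
        # Sum of squared deviations of data[i:j] about its mean, in O(1).
        s, q, m = psum[j] - psum[i], psq[j] - psq[i], j - i
        return q - s * s / m

    INF = float("inf")
    # cost[k][j]: best total SSE splitting the first j values into k classes.
    cost = [[INF] * (n + 1) for _ in range(n_classes + 1)]
    cut = [[0] * (n + 1) for _ in range(n_classes + 1)]
    cost[0][0] = 0.0
    for k in range(1, n_classes + 1):
        for j in range(k, n + 1):
            for i in range(k - 1, j):
                c = cost[k - 1][i] + sse(i, j)
                if c < cost[k][j]:
                    cost[k][j], cut[k][j] = c, i
    breaks, j = [], n
    for k in range(n_classes, 0, -1):  # recover the upper bound of each class
        breaks.append(data[j - 1])
        j = cut[k][j]
    return sorted(breaks)

scores = [0.12, 0.15, 0.33, 0.36, 0.58, 0.61, 0.80, 0.83, 0.95]
print(jenks_breaks(scores, 5))  # [0.15, 0.36, 0.61, 0.83, 0.95]
```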
The actual production capacity of the study area is only 23.2% on average, which corresponds to about 179.2 km². The total land area suitable for wind farm development, taking the "very suitable" category from the Choquet fuzzy integral method, is 771.7 km². This means that about 592.6 km² remain available for wind farm development. Therefore, the optimal selection of the wind farm location is very important, especially around the downtown area, where a significant part of the territory appears to be appropriate for the construction of a wind farm. However, the existing wind farms in the study area do not all meet the crucial index requirements. The existing wind farm in Wafangdian is located in a bird passage, which is not suitable. From the Choquet fuzzy integral method, there are quite a number of potential grids that can be used to build wind farms. Most very suitable grids are located at the southeast boundary of the study area, where the indexes exhibit high satisfaction degrees. The reason is that the transmission facilities and infrastructure are very good around the downtown region, which indicates that the selected socio-economic factors are more difficult to satisfy elsewhere, especially in the mountain region in the north. This could explain why the optimized wind farm spatial pattern presented in Figure 4 strongly reflects the potential wind energy distribution. In addition, the influence of the other factors, such as topography and biological passages, is also apparent, but not as important as accumulated wind speed.
According to the results presented in Figures 2-4, accumulated wind speed is the primary factor for any wind farm built in Dalian City. However, future wind farm development policies should also take all of the above wind farm optimization factors into account, as they play a significant role in determining potential grid priorities for future wind farm planning. As presented in Figure 6, significant differences can be found between the Shapley value and the Choquet fuzzy integral method concerning the very suitable grids in the study area. Additionally, the areas of each category under the Choquet fuzzy integral method and the Shapley value are shown in Table 4. The results indicate that the very suitable area under the Choquet fuzzy integral method is larger than that from the Shapley value, which implies that the Choquet fuzzy integral method obtained the better-optimized wind farm locations. The result also shows that accumulated wind speed is not the only critical factor; other factors play an important part in the final determination of the most suitable location. Furthermore, when the Shapley values differ substantially, the final results can be expected to change greatly, which reveals that the determination of factor weights ought to be treated as a detailed and in-depth part of the optimization process. In summary, the Choquet fuzzy integral method can maximize the area of potential wind farm locations and thereby obtain an optimized wind farm location. Conclusions Energy production in China is putting more and more pressure on the environment, society, and the economy. Wind farm location selection can be considered a decision-making problem under complexity, involving many kinds of indexes and various interactions between them. This research has reported the design and application of a framework for the complex problem of wind farm location optimization through multiple mutually dependent indexes.
A Grid GIS-based fuzzy measure and the Choquet fuzzy integral method were developed to optimize the location of a wind farm in Dalian City, China. The Shapley value was used to calculate the significance of the indexes. The system proved good at handling multiple, and usually conflicting, planning objectives when selecting an optimized location for a new wind farm. In particular, traditional expert opinions are often inevitably affected by human preferences and are difficult to obtain, as discussed in this research. The optimization process was performed on vector grids built from the available map layers of all the indexes. The final optimization map was obtained from the Choquet fuzzy integral method based on Grid GIS. The results defined the optimal location of a wind farm along with the suitability of existing wind farm projects. The very suitable grids are mostly located along the southeast border of the study area; in contrast, the very unsuitable and unsuitable areas are mainly located in the north of Wafangdian. Based on the above results, the locations of the existing wind farms are shown to be broadly acceptable. However, only 23.2% of the actual capacity is reached in the study area, which corresponds to about 179.2 km²; the existing wind farms in Dalian City are below the capacity of the area, which means that about 592.6 km² are still available for wind farm development. The results can be used by planners to establish effective wind farm construction plans and improve local energy sustainability. The research found that the fuzzy measure is an effective method to evaluate intermediate syntheses and to calculate factor weights through fuzzy integrals. Compared with traditional MCDM methods, the Grid GIS-based fuzzy measure and Choquet fuzzy integral method can balance the trade-offs among interdependent factors when finding a wind farm location and realize an optimal location. Furthermore, vector grids have an advantage over the common raster data used in earlier research. When wind farm developers choose areas for new wind farm developments, it is better to use the results from the Choquet fuzzy integral method, which is more efficient at dealing with the optimization problem. The very suitable and suitable areas calculated by the Choquet fuzzy integral method are well suited to the construction of a wind farm; on the contrary, wind farm construction should be forbidden in the unsuitable and very unsuitable areas. The results indicate that the Grid GIS-based fuzzy measure and Choquet fuzzy integral method can effectively deal with the spatial optimization problem and identify optimal wind farm sites. However, there is still extensive work to be done in future research. A number of important factors, such as land use, geological conditions and archaeological sites, should be considered, and the long-term energy plan of the government should also be taken into account. The study also involves a variety of indexes and their interactions, which is a potential future research topic. |
Magnetic forces in idealised saturable-pole configurations Conditions for the validity of scaling of nonlinear magnetic fields are presented and applied to an idealised form of the fundamentally important saturable overlapping rectangular-pole configuration. This allows the dependence of force on other problem variables to be expressed in a surprisingly simple and generalised way and economises very significantly on tedious numerical work. Results confirm and more extensively quantify the curious force-augmenting effects of saturation and the tendency to linear rather than square-law dependence of force on m.m.f. |
Neighborhood Socioeconomic Status and Homicides Among Children in Urban Canada OBJECTIVE. We sought to determine the influence of neighborhood income on homicides among children living in urban Canada. METHODS. Homicides among children <15 years of age living in any of Canada's census metropolitan areas in 1996, 1997, or 1998 were identified on the basis of vital statistics death registration data, by using International Classification of Diseases, Ninth Revision codes. Deaths were assigned to census tracts through postal codes, and the tracts were then assigned to neighborhood income quintiles on the basis of the proportions of the population below the Statistics Canada low-income cutoff values. Census population counts and intercensal population interpolations were used to estimate person-years at risk for rate calculations. Interquintile rate ratios and 95% confidence intervals were calculated. Poisson regression was used to model the effects of neighborhood income quintiles on homicide rates, after adjustment for age. RESULTS. During the 3-year study period, there were 87 homicides among children <15 years of age in Canada's census metropolitan areas (0.82 cases per 100 000; not statistically different according to gender). The age-adjusted relative risks for the lowest versus highest neighborhood income quintiles were 2.95 for all children <15 years of age and 3.39 for children <5 years of age. CONCLUSION. Effective child homicide-prevention strategies should be focused on children <5 years of age living in low-income areas. |
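As a reader aid, the interquintile rate ratio and its confidence interval described above follow the standard large-sample Poisson form; the block below is a sketch of that standard construction, where the death counts d and person-years T for the lowest (Q1) and highest (Q5) income quintiles are hypothetical symbols, not the study's actual data.

```latex
% Standard large-sample form for a rate ratio and its 95% CI;
% d = death counts, T = person-years (hypothetical symbols).
\[
  RR = \frac{d_{Q1}/T_{Q1}}{d_{Q5}/T_{Q5}}, \qquad
  95\%\ \mathrm{CI} = \exp\!\Big(\ln RR \pm 1.96
      \sqrt{\tfrac{1}{d_{Q1}} + \tfrac{1}{d_{Q5}}}\Big).
\]
```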
Development and application of polymeric ZnO surge arresters for a 500 kV compact transmission line 500 kV polymeric ZnO surge arresters to protect a compact transmission line against lightning were developed and have been put into operation. The design of the arrester unit and the series gap are discussed. The calculated results show that the line surge arrester can greatly improve the lightning withstand level of the compact transmission line, and that the line arrester can withstand the effects of lightning.
Heads or Tails? Dung Beetle (Coleoptera: Scarabaeidae: Scarabaeinae and Aphodiinae) Attraction to Carrion Abstract Necrophilous insects occupy an ecologically interesting niche because carrion is a highly desirable but ephemeral food source. Dung beetles (Coleoptera: Scarabaeidae: Scarabaeinae and Aphodiinae) within temperate regions are frequently found at carrion, but little is known about their attraction to this resource. Are dung beetles attracted to the carrion itself or are they indirectly attracted due to the exposed gastrointestinal contents? We investigated the association between dung beetles and carrion by examining the distribution of dung beetles on the cranial and caudal end of rat carcasses, delimiting a resource more attractive to necrophagous insects (cranial end) from a resource more attractive to coprophagous insects (caudal end). Dung beetle distribution on rat carcasses was compared with the distribution of carrion beetles (Coleoptera: Silphidae), which serve as a null model of distribution patterns for a taxon known to directly target carrion. Results demonstrated that dung beetles show higher attraction to the cranial end of rat carrion. A similar distribution pattern was found in carrion beetles, suggesting that similar resources were targeted. When dung beetles were grouped by behavioral guilds, rollers and tunnelers also shared this pattern of greater abundance at the cranial end, but dwellers showed no discernible difference. |
EP.WE.791 Assessing the use and effectiveness of catheter-directed thrombolysis in a single vascular centre In the UK, venous thromboembolism remains a leading cause of mortality and morbidity. This audit aimed to assess the use, success and cost of the average patient journey for patients undergoing catheter-directed thrombolysis (CDT) following a diagnosis of a deep vein thrombosis (DVT) within a single vascular centre. In a retrospective audit, all procedures (n=249) between 2010 and 2019 coded as 'angioplasty' were identified; only adults who had CDT for a confirmed iliofemoral DVT were included. Patient anthropometric, biochemical and radiological data were collected. Costs of items were confirmed with the appropriate hospital departments. In ten years, a total of 36 patients (21 female, 15 male, mean age 47 years) started CDT for iliofemoral DVT; one procedure was abandoned for safety reasons. Almost half of the DVTs were provoked. Ultrasound confirmed the diagnosis in 92% of patients. CDT was successful in approximately 70% of patients. A quarter of patients developed another DVT following discharge. Average length of stay was 8 days, at a cost of £11,235 per patient journey. All patients commenced an anticoagulant regime on discharge. This audit supports the use and effectiveness of CDT to treat patients with iliofemoral DVT; however, there is room for improvement regarding long-term success, given the significant patient and financial costs involved.
Transition-Based Techniques for Non-Projective Dependency Parsing We present an empirical evaluation of three methods for the treatment of non-projective structures in transition-based dependency parsing: pseudo-projective parsing, non-adjacent arc transitions, and online reordering. We compare both the theoretical coverage and the empirical performance of these methods using data from Czech, English and German. The results show that although online reordering is the only method with complete theoretical coverage, all three techniques exhibit high precision but somewhat lower recall on non-projective dependencies and can all improve overall parsing accuracy provided that non-projective dependencies are frequent enough. We also find that the use of non-adjacent arc transitions may lead to a drop in accuracy on projective dependencies in the presence of long-distance non-projective dependencies, an effect that is not found for the two other techniques. |
XML Based Implementation of a Bibliographic Database and Recursive Queries The Structured Query Language (SQL) of relational database models does not have the expressive power to implement recursive queries. Consequently, recursive queries are implemented as application programs in the host language. The newly developed XML schema provides a different setting for database design and query implementation. In this paper, we design and implement an XML schema and a set of associated queries for a bibliographic database. We investigate and demonstrate the capabilities of XPath, XQuery, and XSLT as standard query languages for XML-based databases. We also show efficient implementations of recursive queries in XSLT.
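The paper's recursive queries are written in XSLT; as a language-neutral illustration of the same idea (following citation links transitively through a bibliographic XML document), here is a sketch in Python against a hypothetical schema, not the paper's actual one.

```python
import xml.etree.ElementTree as ET

# Hypothetical bibliographic schema: each <paper> may cite other papers
# by id; the recursive query collects all direct and transitive citations.
XML = """
<bibliography>
  <paper id="p1"><title>A</title><cites ref="p2"/><cites ref="p3"/></paper>
  <paper id="p2"><title>B</title><cites ref="p3"/></paper>
  <paper id="p3"><title>C</title></paper>
</bibliography>
"""

root = ET.fromstring(XML)
papers = {p.get("id"): p for p in root.findall("paper")}

def transitive_citations(pid, seen=None):
    """Recursively follow <cites> edges, guarding against citation cycles."""
    seen = set() if seen is None else seen
    for c in papers[pid].findall("cites"):
        ref = c.get("ref")
        if ref not in seen:
            seen.add(ref)
            transitive_citations(ref, seen)
    return seen

print(sorted(transitive_citations("p1")))  # ['p2', 'p3']
```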
One hundred twenty recurrences of herpes simplex virus in an immunocompetent patient. A 60-year-old woman presented to our clinic with a complaint of a dermatitis that had recurred 2 or 3 times a month for the past 5 years. No trigger episode or obvious pattern of recurrence was noted. She reported some itching or burning as a prodromal symptom. Recurrences have been at multiple sites, in a dermatomal pattern, including areas around the arm, face, back, abdomen, knees, and ears. Each occurrence is at a different anatomic site, and she has lesions at only that 1 site during each occurrence. For example, after formation and resolution at 1 site (ie, the right medial forearm), the patient will have a later recurrence at another site (ie, the upper left part of the back). The patient has no complaints of fevers, chills, or adenopathy. Medical history includes type 2 diabetes and hypertension. Results of a basic laboratory screening at the time were all within normal limits. No other contributory history was noted. On examination, the medial arm just proximal to the elbow had clusters of vesicles (Figure). Viral culture performed on the lesion confirmed our suspicion of the presence of herpes simplex virus (HSV). The patient was started on valacyclovir and asked to return if no improvement was noted. To date the patient has not returned to the clinic.
Phenomenological study of exclusive binary light particle production from antiproton-proton annihilation at FAIR/PANDA Exclusive binary annihilation reactions induced by antiprotons with momenta from 1.5 to 15 GeV/c can be extensively investigated at FAIR/PANDA. We are especially interested in the channel of charged pion pairs. Whereas this very probable channel constitutes the major background for other processes of interest in the PANDA experiment, it carries unique physical information on the quark content of the proton, allowing one to test different models (quark counting rules, statistical models, ...). To study the binary reactions of light meson formation, we are developing an effective Lagrangian model based on Feynman diagrams which takes into account the virtuality of the exchanged particles. Regge factors and form factors are introduced, with parameters which may be adjusted to the existing data. We present preliminary results of our formalism for different reactions of light meson production, leading to reliable predictions of cross sections and of energy and angular dependencies in the PANDA kinematical range. Introduction Large experimental and theoretical efforts have been under way for decades in order to understand and classify high energy processes driven by the strong interaction. We revisit here hadronic reactions at incident energies above 1 GeV, and focus in particular on two-body processes. Antiprotons are a very peculiar probe, due to the fact that scattering and annihilation reactions may occur in the same process, with definite kinematical characteristics. We discuss the annihilation reaction of antiproton-proton into two charged pions and the crossed channels of pion-proton elastic scattering. These reactions have been studied in the past at FermiLab and, at lower energies, at LEAR. Charged pion production data are scarce and do not continuously cover a large angular or energy range. According to the foreseen performance of the PANDA experiment at FAIR, a large amount of data related to light meson pair production from p̄p annihilation is expected in the near future. The best possible knowledge of light meson production is also required prior to the experiment, as pions constitute an important background for many other channels, making the development of a reliable model timely. We develop here an effective Lagrangian model (ELM), with meson and baryon exchanges in the s, t, and u channels, applicable in the energy range 2 ≤ √s ≤ 15 GeV, that is, the accessible domain for the PANDA experiment at FAIR. It is known that first-order Born diagrams give cross sections much larger than measured, as Feynman diagrams assume on-shell point-like particles. Form factors are added in order to take into account the composite nature of the interacting particles at the vertices. Their form is, however, somewhat arbitrary, and parameters such as the masses of the exchanged particles, coupling constants or cutoffs are adjusted to reproduce the data. Therefore these models should be considered as an effective way to take into account microscopic degrees of freedom and quark exchange diagrams. A "Reggeization" of the trajectories is added to reproduce the very forward and very backward scattering angles. To get maximum profit from the available data, we also consider the existing elastic π±p scattering data, and apply crossing symmetry in order to compare with the predictions based on the annihilation channel, at least in a limited kinematical range.
Formalism and Comparison with Data The annihilation reactions are best described in the centre of mass (CMS) frame, whereas the kinematics of elastic scattering is simpler in the laboratory (Lab) frame. We consider the reaction p̄(p_1) + p(p_2) → π⁻(k_1) + π⁺(k_2), where the four-momenta are indicated in parentheses. The following notations are used: q_t = (−p_1 + k_1), q_t² = t; q_u = (−p_1 + k_2), q_u² = u; and q_s = (p_1 + p_2), q_s² = s, where s, t and u are the Mandelstam variables, with s + t + u = 2M_p² + 2m_π², M_p being the proton mass and m_π the pion mass. The general expression for the differential cross section in the CMS of the reaction is dσ/dΩ = (β_π/β_p) |M|²/(64π²s), where β_p (β_π) is the velocity of the proton (pion) and E is the energy in the CMS (s = 4E²); dΩ = 2π d cos θ due to the azimuthal symmetry of binary reactions. Crossing symmetry relates annihilation and scattering cross sections: it states that the amplitudes of the crossed processes are the same, i.e., the matrix element M(s, t) for the scattering (s) process π(−k_1) + p(p_2) → π(k_2) + p(−p_1) and for the annihilation (a) process is the same, at corresponding s and t values. In order to find the correspondence, kinematical replacements must be made, such as s ↔ t. The cross sections are then related through a kinematical factor, where p_a is the CM momentum for p̄p annihilation and |k_s| is the CM momentum for π⁻p scattering, evaluated at the same s value. If the scattering cross section is measured at a value s_s = s_1 different from s_a = s, at small t values one can rescale the cross section using the empirical dependence dσ/dt ≃ const · s⁻². An example of cross sections for annihilation and scattering processes at similar incident momenta is reported in Fig. 1. In order to calculate M, one needs to specify a model for the reaction. In this work we consider the formalism of effective meson Lagrangians. The following contributions to the cross section are calculated: baryon exchange (t-channel nucleon (neutron) exchange, t-channel Δ⁰ exchange, u-channel Δ⁺⁺ exchange) and s-channel ρ-meson exchange. The total amplitude is written as a coherent sum of all the amplitudes. Figure 1. Data for p̄ + p → π⁻ + π⁺ from Ref. (blue circles), and for π⁺ + p → π⁺ + p from Ref. (green solid circles) and from Ref. (black empty circles), for π⁺ emission at small t-values (or large u-values). The π⁺p data have been scaled by the crossing constant factor 0.589 according to the formula. In the case of charged pions, the dominant contribution in the forward direction is N exchange, whereas Δ⁺⁺ mostly contributes to backward scattering. We neglect the mass differences between different charge states of particles such as nucleons, pions and Δ. Central scattering is driven by s-channel exchange of vector mesons with the same quantum numbers as the photon; we limit our considerations to ρ-meson exchange. The expressions for the amplitudes and their interferences follow the Feynman rules. The coupling constants are fixed from the known decays of the particles, or we use values from effective potentials as in the references; masses and widths are taken from the literature. The effects of strong interaction in the initial state, coming from the exchange of vector and (pseudo)scalar mesons between proton and antiproton, are essential and effectively lead to the Regge form of the amplitude. The t and u diagrams are modified by adding a general Regge factor R_x (where x = t, u), in which s_0 ≃ 1 GeV² can be considered a fitting parameter, the slope parameter α′ ≃ 0.7 GeV⁻² is fixed by the Regge trajectory, and M is the mass of the exchanged particle.
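The displayed equations for the cross section and the crossing relation were lost in extraction; the block below is a hedged reconstruction from standard two-body kinematics, using the paper's notation where possible. The spin-averaging factor in the crossing relation is omitted (hence the proportionality), and the conventions may differ from the paper's exact formulas.

```latex
% Hedged reconstruction from standard 2 -> 2 kinematics; spin-averaging
% factors are omitted and conventions may differ from the paper's.
\[
  \frac{d\sigma}{d\Omega} \;=\; \frac{1}{64\pi^{2}s}\,
      \frac{\beta_{\pi}}{\beta_{p}}\,\left|\mathcal{M}\right|^{2},
  \qquad
  \left.\frac{d\sigma}{dt}\right|_{\bar p p \to \pi^{-}\pi^{+}}
  \;\propto\;
  \frac{|\mathbf{k}_{s}|^{2}}{p_{a}^{2}}
  \left.\frac{d\sigma}{dt}\right|_{\pi^{-} p \to \pi^{-} p},
\]
% where p_a and |k_s| are the CM momenta of the annihilation and
% scattering channels, evaluated at the same value of s.
```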
In the present model the values have been set at s_0 = 1.4 GeV² and α′ = 0.7 GeV⁻² for the nucleon. A form factor of the form F(x) = 1/(x − Λ²_{N,Δ})² was introduced at the πNN and πNΔ vertices, with Λ_N = 0.8 GeV and Λ_Δ = 5 GeV. The angular dependence for the reaction p̄ + p → π⁻ + π⁺ is shown in Fig. 2 (a-d), with satisfactory agreement. The results for the crossed channels of π±p elastic scattering are also reported in Fig. 2 (e-f), where the data for the differential cross section span a small, very forward or very backward angular region, bringing an additional test of the model. The angular distribution for √s = 3.680 GeV is shown in Fig. 3. The total result (black, solid line) gives a good description of the data (red open circles) from Ref. for charged pion production. All components and their interferences are illustrated. The main contribution at central angles is given by s-channel ρ exchange, whereas n exchange dominates at forward angles (t channel), followed by Δ⁰ exchange; Δ⁺⁺ represents the largest contribution at backward angles (u channel). The interferences are also shown. Their contribution affects the shape of the angular distribution, some of them being negative in part of the angular region. Conclusions A model based on an effective meson Lagrangian has been built in order to reproduce the existing data for two-pion production in proton-antiproton annihilation at moderate and large energies. Form factors and Regge factors are implemented, and parameters are adjusted to the existing data for charged pion pair production. Coupling constants are fixed from the known properties of the corresponding decay channels. The agreement with a large set of data is satisfactory for the angular as well as the energy dependence of the cross section. A comparison with data from elastic π±p → π±p scattering, using crossing symmetry prescriptions, shows good agreement within the uncertainties, which verifies that crossing symmetry works well at backward angles, where one diagram is dominant. This model can be extended to other binary channels, with appropriate changes of constants. The implementation in Monte Carlo simulations, for predictions and for the optimization of future experiments, is also foreseen. Acknowledgments Thanks are due to E. Tomasi-Gustafsson, D. Marchand, Y. Bystritskiy and A. Dorokhov for supervision and useful discussions. The author is supported by the China Scholarship Council.
A Practical Security Risk Analysis Process and Tool for Information Systems While conventional business administration-based information technology management methods are applied to the risk analysis of information systems, no dedicated security risk analysis techniques have been used for information protection. In particular, given the rapid diffusion of information systems and the demand for information protection, it is vital to develop security risk analysis techniques. Therefore, this paper suggests an ideal risk analysis process for information systems. To prove the usefulness of this security risk analysis process, the paper presents the results of managerial, physical and technical security risk analyses derived from investigating and analyzing the conventional information protection items of an information system.
The Role of Sense of Place in the Revitalisation of Heritage Street: the case of George Town, Penang, Malaysia. In revitalising heritage sites, understanding the sense of place is important, as it represents a layering of histories, tangible heritage, and intangible heritage. This study examines the relationship between local communities and the cultural heritage in the George Town World Heritage Site, Malaysia. Semi-structured interviews with local communities, observations and digital photo analysis were conducted. It is in the intricacies of intangible heritage practices and their authentic expression that local communities feel attached to and claim ownership of the place. Understanding this, and how it translates into the site's stewardship, is critical in protecting its value, management, and ongoing revitalisation. Keywords: Sense of Place; Urban Revitalisation; Heritage Street; World Heritage Site. DOI: https://doi.org/10.21834/ebpj.v6i18.3080
Nanoparticle Loading Induced Morphological Transitions and Size Fractionation of Coassemblies from PS-b-PAA with Quantum Dots. Inorganic nanoparticles play a very important role in the fabrication and regulation of desirable hybrid structures with block copolymers. In this study, polystyrene-b-poly(acrylic acid) (PS48-b-PAA67) and oleic acid-capped CdSe/CdS core/shell quantum dots (QDs) are coassembled in tetrahydrofuran (THF) through gradual water addition. QDs are incorporated into the hydrophilic PAA blocks because of the strong coordination between the PAA blocks and the surface of the QDs. Increasing the weight fraction of QDs (from 0 to 0.44) leads to morphological transitions from hybrid spherical micelles to large compound micelles (LCMs) and then to bowl-shaped structures. The coassembly process is monitored using transmission electron microscopy (TEM). A formation mechanism for the different morphologies is further proposed, in which the PAA blocks bridging the QDs modulate the polymer chain mobility and the resulting morphology. Furthermore, the size and size distribution of assemblies serving as drug carriers will influence their circulation time, organ distribution and cell entry pathway. Therefore, it is important to prepare or isolate assemblies with monodisperse or narrow size distributions for biomedical applications. Here, centrifugation and membrane filtration techniques are applied to fractionate the polydisperse coassemblies, and the results indicate that both techniques provide effective size fractionation.
How Internet technologies can help hospitals to curb COVID-19: PUMCH experience from China Dear Editor, From the end of 2019 to early 2020, an outbreak of COVID-19 spread throughout China and soon became a global concern (The Lancet, 2020). Currently, it has spread to over 9 million people in over 210 countries and territories around the world. Peking Union Medical College Hospital (PUMCH) has been ranked first for 10 consecutive years in the Best Hospital ranking in China. PUMCH initiated prevention and control of COVID-19 immediately after the outbreak and dispatched a total of 186 medics in three batches to support the fight against the outbreak of COVID-19 in Wuhan. After months of fighting COVID-19, we found that Internet technologies are among the most pivotal measures. Here we would like to summarise the current work and share our experience. First, for the healthy public who are nervous about the epidemic, we made home quarantine tips, self-isolation guidance, personal prevention guidelines and medical education videos for COVID-19, sending them regularly through our epidemic information dissemination platforms, including an official mobile application, WeChat official accounts and other social media (https://weibo.com/pumch doctor, etc.). We also provided online mental counselling to comfort nervous people and relieve their anxiety and stress. These measures played a significant auxiliary role in assisting the Chinese government and National Health Commission channels in reducing social panic and promoting social distancing during the pandemic, and reduced misinformation to some extent. Second, for those patients with various chronic diseases requiring constant medical services, we adopted a free online clinic, which provided regular follow-up, medication prescriptions and contactless drug delivery. In China, both public hospitals and community health services take joint responsibility for the continuing management of chronic conditions. However, quarantines and restrictions on the movement of people and on social gatherings inevitably created barriers to treatment for these patients, most of whom were from susceptible populations. Over 50,000 patients received free online remote consultations (see Figure 1), which was not only convenient for patients but also reduced the risk of cross-infection compared with seeing doctors offline. Recently, through cooperation with express services, we have been working on delivering drugs directly to patients' homes. Third, for patients who actually developed fevers or coughs, we conducted online counselling to acquire the necessary information, including the epidemiological history, present history and symptom characteristics, to identify and stratify the possible risks of COVID-19 infection. For low-risk patients, we gave professional advice on self-management of care and treatment and conducted follow-ups. For high-risk patients, we strongly recommended that they attend the offline fever clinic immediately, where they would first be screened for COVID-19. Fourth, for patients whose conditions genuinely required offline clinic visits, we designed the Intelligent Pre-sorting Electronic Pass System (Figure 2). This new system was mainly used for information registration and epidemiological screening of patients, patients' families, accompanying persons and other visitors entering the hospital. The collected information included health status, travel history and whether they had had contact with people from high-risk areas like Wuhan.
These pieces of information, with present |
Effects of the dual sodium-glucose linked transporter inhibitor, licogliflozin, vs placebo or empagliflozin in patients with type 2 diabetes and heart failure Aims: Explore the efficacy, safety and tolerability of the dual sodium-glucose cotransporter (SGLT) 1 and 2 inhibitor, licogliflozin, in patients with type 2 diabetes mellitus (T2DM) and heart failure. Methods: This multicentre, parallel-group phase IIA study randomized 125 patients with T2DM and heart failure (New York Heart Association II-IV; plasma N-terminal pro b-type natriuretic peptide (NT-proBNP) >300 pg/mL) to licogliflozin (2.5 mg, 10 mg, 50 mg) taken at bedtime, empagliflozin (25 mg) or placebo (44 patients completed the study). The primary endpoint was the change from baseline in NT-proBNP after 12 weeks. Secondary endpoints included changes from baseline in glycated haemoglobin, fasting plasma glucose, weight, blood pressure, fasting lipid profile and high-sensitivity C-reactive protein, as well as safety and tolerability. Results: Licogliflozin 10 mg for 12 weeks significantly reduced NT-proBNP vs placebo (geometric mean ratio 0.56, P = .033). A trend was observed with 50 mg licogliflozin (0.64, P = .064), with no difference between licogliflozin and empagliflozin. The largest numerical decreases in glycated haemoglobin were with licogliflozin 50 mg (−0.58 ± 0.34%) and empagliflozin (−0.44 ± 1.18%) vs placebo (−0.04 ± 0.91%). The reduction in body weight was similar with licogliflozin 50 mg (−2.15 ± 2.40 kg) and empagliflozin (−2.25 ± 1.89 kg). A numerical reduction in systolic blood pressure was seen with licogliflozin 50 mg (−9.54 ± 16.88 mmHg) and empagliflozin (−6.98 ± 15.03 mmHg) vs placebo (−2.85 ± 11.97 mmHg). Adverse events (AEs) were mild, including hypotension (6.5%), hypoglycaemia (8.1%) and inadequate diabetes control (1.6%). The incidence of diarrhoea (4.9%) was lower than previously reported. Conclusion: The reduction in NT-proBNP with licogliflozin suggests a potential benefit of SGLT1 and 2 inhibition in patients with T2DM and heart failure. INTRODUCTION Type 2 diabetes mellitus (T2DM) is associated with a high risk of cardiovascular (CV) disease and related complications, such as heart failure (HF). T2DM is associated with an increased incidence of HF, and the risk of HF hospitalizations/mortality is higher in patients with the condition compared with those without. HF is among the most common CV complications of T2DM, with an incidence greater than that of myocardial infarction (MI) or stroke. 4 Selective sodium-glucose cotransporter 2 (SGLT2) inhibitors have been developed as antidiabetes drugs and lead to a reduction in glycated haemoglobin (HbA1c) of up to 1%. 5,6 A striking CV benefit of SGLT2 inhibitors has recently been demonstrated in patients with T2DM at high risk of CV events, where a significant reduction in the major adverse cardiac events endpoint (MACE, a composite of CV death, nonfatal MI and nonfatal stroke) and a reduction in HF hospitalizations were seen with empagliflozin and canagliflozin. 7,8 Further evidence was provided in a more recent study, which demonstrated a reduced risk of the composite of CV death or HF hospitalizations with dapagliflozin treatment, 9 a benefit driven by a reduction in HF hospitalizations. These findings are supported by the results of a recent, real-world evidence study.
10 The specific mechanisms underlying the benefit associated with SGLT2 inhibitors are unclear, but may be attributed to specific effects of SGLT2 inhibition on renal sodium and glucose handling, 11 which include the switch of cardiac metabolism from free fatty acid oxidation to β-hydroxybutyrate oxidation, enhanced oxygen supply due to haemoconcentration, 12 and inhibition of sodium-hydrogen exchange. 13 Since HF is the most frequent CV complication of T2DM, several large-scale trials have been designed to determine a potential benefit of SGLT2 inhibitors in patients with HF. Licogliflozin is a combined inhibitor of SGLT1 and SGLT2 and is hypothesized to further enhance the effects on renal sodium and glucose handling via inhibition of both cotransporter subtypes in the proximal renal tubule. 18 SGLT1 is also expressed in the small intestine, where it is required for glucose and galactose absorption. Enteric inhibition of SGLT1 has the potential of achieving weight loss through glucose and galactose malabsorption, 19 calorie wasting and other potential endocrine-based mechanisms. 18 Dual SGLT1 and 2 inhibitors have been shown to improve HbA1c in patients with T2DM 20 and to have beneficial effects on body weight in both patients with T2DM and patients with obesity. 18,20 SGLT1 receptors are also specifically expressed in the human heart, although the role of their expression in this tissue is not fully understood. 21,22 The aim of this study was to assess the efficacy (including N-terminal pro b-type natriuretic peptide (NT-proBNP) measurement as a surrogate parameter for HF severity), safety and tolerability of licogliflozin in patients with T2DM, cardiac disease and HF. Study design and oversight This multicentre, double-blind, double-dummy, parallel-group phase II study randomized patients to 1 of 3 doses of licogliflozin, placebo or empagliflozin (Figure 1). The trial was conducted in 55 centres across 21 countries. Patients meeting all the eligibility criteria at screening entered the placebo run-in period, where they received single-blind placebo medication for 2 weeks (to familiarize them with the study-drug intake schedule and to allow correction of any hypovolaemia). Eligible patients were then randomized to either licogliflozin (2.5, 10 or 50 mg once daily, taken at bedtime), empagliflozin (up-titrated from 10 to 25 mg qd after 2 weeks to minimize potential adverse effects; taken in the morning) or their corresponding placebo (morning or night). What is already known about this subject: Sodium-glucose cotransporter 2 (SGLT2) inhibitors have been associated with reduced cardiovascular risk, including a reduction in heart failure hospitalizations. However, the mechanism underlying these effects remains unclear. There are also limited data on the effect on N-terminal pro b-type natriuretic peptide (NT-proBNP), a biomarker of cardiac wall stress that is commonly elevated in patients with heart failure.
SGLT1 and 2 inhibition with licogliflozin has shown beneficial effects on glucose handling in patients with type 2 diabetes mellitus (T2DM) and on body weight in patients with obesity. However, the effects of SGLT1 and 2 inhibition in patients with T2DM and heart failure are unknown. What this study adds: This is the first study to evaluate the effects of SGLT1 and 2 inhibition on NT-proBNP in patients with T2DM and heart failure, with results showing significant reductions in NT-proBNP with licogliflozin vs placebo. Secondary analyses suggest reductions in glycated haemoglobin, body weight and systolic blood pressure following treatment with licogliflozin, in line with previously published data. Licogliflozin treatment was safe and well tolerated, with no new safety findings reported. Licogliflozin 50 mg was chosen as the highest dose in this study, based on the previous proof-of-concept study, in which a urinary glucose excretion (UGE) over 24 hours of ~100 g was observed following once-daily dosing with licogliflozin 15 mg in patients with T2DM. 18 Gastrointestinal tolerability was also better with lower doses of licogliflozin (30 mg qd vs 150 mg qd). 18 Empagliflozin was included as a comparator due to its known CV benefit in patients with T2DM. 8 Following randomization, patients attended the study site again at 12 weeks for the evaluation of efficacy (change in NT-proBNP), safety and tolerability. Following the last study visit at week 12, patients continued with the same assigned treatment for a further 24 weeks. Long-term efficacy, tolerability and safety were planned for evaluation. This study was prematurely discontinued due to slow enrolment. Only a limited number of patients had completed the core 12-week period of the study when the study was stopped (n = 44), with just 1 patient completing the originally planned 24-week follow-up period. Therefore, the interpretation of the data presented is mainly descriptive and limited to the main study period, i.e. the first 12 weeks. This study was designed and implemented in accordance with ICH Harmonized Tripartite Guidelines for Good Clinical Practice and according to the ethical principles of the Declaration of Helsinki. Ethical approval was obtained from the Institutional Review Board/Independent Ethics Committee of each centre where patients were recruited. All patients provided written informed consent for participation prior to randomization. Site monitoring was carried out by Novartis. The study investigator (or a designated staff member) was responsible for data collection and reporting. The study sponsor had access to the trial database and performed statistical analyses. All authors had full access to the study data and had the final responsibility for the decision to submit this manuscript for publication. | Participants The goal was to randomize approximately 496 patients, with 125 randomized before early study termination. Patients (≥18 years) with T2DM, with HbA1c ≥6.5% and ≤10%, and a body mass index ≥22 kg/m² at screening were included in this study. Eligible patients were also required to have an estimated glomerular filtration rate ≥45 mL/min/1.73 m², plasma NT-proBNP >300 pg/mL and documented symptomatic chronic HF (New York Heart Association class II-IV) at screening.
Those receiving angiotensin-converting enzyme inhibitors, angiotensin II receptor blockers, mineralocorticoid receptor antagonists, angiotensin receptor-neprilysin inhibitors and/or β-blockers were required to be on stable doses. Patients with type 1 diabetes, monogenic diabetes, diabetes resulting from pancreatic injury or secondary forms of diabetes were excluded from this study. Other key exclusion criteria included a history of ketoacidosis, recent MI or CV intervention, or low blood pressure (BP; systolic BP ≤100 mmHg). The full list of inclusion and exclusion criteria can be found in the Appendix. | Study procedures At the end of the run-in period, participants were randomized to either licogliflozin (2.5 mg, 10 mg or 50 mg qd, in a 1:1:2 ratio), empagliflozin (ratio 2) or placebo (ratio 2). Randomization was performed with the help of a centralized computer system (Interactive Response Technology), with patients stratified according to geographical region and left ventricular ejection fraction (LVEF: <45% vs ≥45%). [Figure 1: Study design. qd, once a day.] All doses of licogliflozin (tablets), empagliflozin (over-encapsulated tablets) or corresponding placebo were administered orally, with each patient taking study medication twice daily under the double-dummy design. In the licogliflozin treatment arm, 1 licogliflozin tablet was taken at bedtime and the corresponding empagliflozin placebo (capsule) was taken in the morning (with or without food). In the empagliflozin arm, 1 empagliflozin capsule was taken in the morning and the corresponding licogliflozin placebo was taken at bedtime. Patients in the placebo arm took 1 capsule in the morning and 1 tablet at bedtime. For assessment of efficacy, NT-proBNP was evaluated at baseline and following 12 weeks of treatment. Other efficacy parameters included HbA1c, fasting plasma glucose (FPG), lipids, high-sensitivity C-reactive protein (hsCRP), body weight, body mass index, systolic and diastolic blood pressure (SBP, DBP) and NYHA class. Left atrial size and volume were assessed by echocardiography at week −2 (run-in) and week 12. All assessments were completed and analysed at a central laboratory. Safety assessments included collection of all adverse events (AEs) and serious AEs, along with their severity and relationship to study drug, and pregnancies. Haematology, blood chemistry and urine as well as vital signs, physical condition and body weight were regularly monitored. Suspected cases of ketoacidosis were reviewed by a Ketoacidosis Adjudication Committee. | Study endpoints The primary endpoint was the change from baseline in NT-proBNP relative to placebo following 12 weeks of treatment. Secondary endpoints included the effects of licogliflozin vs placebo at 12 weeks on HbA1c, FPG, weight, BP, lipids, hsCRP, urinary glucose and sodium excretion, echocardiography and NYHA class, and the effects of licogliflozin vs empagliflozin on the same. Safety and tolerability over 12 weeks were also assessed. Key exploratory endpoints included comparison of licogliflozin vs empagliflozin at 12 weeks on change from baseline in NT-proBNP, echocardiographic parameters and NYHA class. | Statistical analysis The study was designed to randomize 496 patients in total, aiming to provide sufficient power to detect a dose-response signal in NT-proBNP (based on the log-transformed ratio of NT-proBNP at week 12 compared to baseline), using Multiple Comparison Procedure-Modelling (MCP-MOD). 23,24
Due to early study termination and a smaller sample size than originally planned, a mixed-effects model of repeated measures was performed in place of the MCP-MOD, as an exploratory analysis for NT-proBNP. The change from baseline in log-transformed NT-proBNP was used as the outcome variable. The model included LVEF at baseline (<45% vs ≥45%), treatment group (licogliflozin 2.5, 10 or 50 mg qd, empagliflozin, or placebo), visit and treatment group-by-visit interaction as fixed-effect factors, baseline log-transformed NT-proBNP as a covariate, and an unstructured within-subject covariance. NT-proBNP data up to week 12 were included in the model. The adjusted mean differences (back-transformed as ratios) for each treatment group at week 4 and week 12 were estimated from this model. A P-value <.05 (2-sided) was considered statistically significant. Statistical comparisons between the secondary endpoint data were not tested due to the limited sample sizes. Patient disposition, demographics, and primary and secondary efficacy analyses are described using summary statistics. | Role of the funding source Novartis sponsored the study, designed the study and analysed the data. | Data sharing statement Novartis is committed to sharing, with qualified external researchers, access to patient-level data and supporting clinical documents from eligible studies. These requests are reviewed and approved by an independent review panel on the basis of scientific merit. All data provided are anonymized to respect the privacy of patients who have participated in the trial, in line with applicable laws and regulations. The availability of data from this trial is according to the criteria and process described on www.clinicalstudydatarequest.com. | Nomenclature of targets and ligands Key protein targets and ligands in this article are hyperlinked to corresponding entries in http://www.guidetopharmacology.org, the common portal for data from the IUPHAR/BPS Guide to PHARMACOLOGY, 25 and are permanently archived in the Concise Guide to PHARMACOLOGY 2019/20. 26 3 | RESULTS Of the 125 patients randomized in the study, 75 were discontinued due to early study termination, with 44 patients completing the 12-week study. Three patients permanently discontinued from study treatment due to AEs. Two patients died (one death in each of the licogliflozin 10 mg and placebo groups; neither was considered to be study-drug related), while a third patient discontinued from the empagliflozin group due to increased blood creatinine levels. The median age of the patients was 70.0 years (interquartile range: 62.0-74.0) and most were male (71.8%), Caucasian (91.1%) and enrolled in Europe (70.2%; Table 1). An LVEF of <45% was ... 3.2 | Effect of licogliflozin, placebo or empagliflozin on NT-proBNP following 12 weeks of treatment A numerical reduction in NT-proBNP from baseline was seen over time in both the licogliflozin and empagliflozin groups vs placebo, which was apparent at week 4 and continued up to week 12 (Table S3). Due to early study termination, limited data were available. The greatest overall effect on NT-proBNP was observed at week 12 for all licogliflozin groups vs placebo or empagliflozin. | Fasting lipid profile and hsCRP No consistent pattern was observed for change from baseline to week 12 for any of the lipid parameters or hsCRP across the active treatment groups. Triglyceride levels were numerically increased from baseline at week 12 across all groups with the exception of licogliflozin 2.5 mg and placebo (Table S4).
With the exception of the licogliflozin 10 mg group, total cholesterol increased across all groups at week 12. High-density lipoprotein-cholesterol increased in all treatment groups at week 12, except for the licogliflozin 10 mg group and placebo. Low-density lipoprotein-cholesterol also increased in all groups with the exception of the placebo group, which showed a small decrease at week 12 (Table S4). | Echocardiography and change in NYHA class Changes in LVEF from baseline at week 12 were small and inconsistent, while no significant changes in left atrial size and volume were observed at week 12 (Table S4). Changes in NYHA class are summarized in Table S4, although the sample size was smaller compared to baseline or week 4. | Safety The safety profile of licogliflozin in this study is in line with previous reports, with the exception that the rate of diarrhoea (4.9% in the pooled licogliflozin groups) was lower than previously observed. 18 The overall incidence of AEs was comparable between the licogliflozin and placebo groups, with a numerically higher incidence of AEs reported (Table 3). No obvious changes in biochemistry or urinalysis markers were seen between the licogliflozin, empagliflozin and placebo treatment arms. The change from baseline in key laboratory evaluations at week 12 is shown in Table S5. ... confirm these findings. The beneficial effects of SGLT2 inhibitors on NT-proBNP have previously been reported in patients with T2DM following treatment with canagliflozin 30 and dapagliflozin, 31 although the treatment durations were longer than in the current study (104 and 24 weeks, respectively) and the patient population was predominantly free from cardiac disease. Furthermore, canagliflozin did not lead to a reduction in NT-proBNP but prevented an increase in NT-proBNP that was seen in the placebo group at 2 years. 30 The effects on NT-proBNP levels observed with licogliflozin-associated SGLT1 and 2 inhibition are in line with these assertions. The results of other ongoing studies in HF (EMPEROR-Reduced, EMPEROR-Preserved and DELIVER) will provide further evidence on the potential efficacy of SGLT2 inhibitors in patients with this condition. 14,15,17 CV outcomes associated with SGLT1 and 2 inhibitor-associated reductions in NT-proBNP have not yet been assessed. However, ongoing trials are evaluating the effects of the SGLT1/2 inhibitor, sotagliflozin, on CV outcomes in high-risk patients with T2DM and renal impairment (SCORED) 33 and in patients with T2DM and worsening HF (SOLOIST-WHF). 34 Observations from phase II studies of NT-proBNP in HF suggest that a 12-week treatment duration is sufficient to reveal a significant change in this biomarker. 35 While the low patient numbers in our study precluded any assessment of dose response, the significant reduction from baseline in NT-proBNP at 12 weeks following treatment with licogliflozin 10 mg suggests that SGLT1/2 inhibitors could lead to potential CV benefits. The numerical reduction in SBP with licogliflozin 50 mg also has potential benefit in this patient population and is consistent with the findings of a recent meta-analysis of SGLT2 inhibitors, showing a 4 mmHg reduction in SBP and a 1.7 mmHg reduction in DBP. 41 The SBP reduction observed with licogliflozin 50 mg was numerically greater than that with empagliflozin, which is noteworthy. SBP was also reduced (~5 mmHg) in the EMPA-REG OUTCOME study, which could at least partly explain the beneficial CV outcome in that study. 8
The observation of SGLT1 expression in the heart suggests that detailed studies are needed to rule out any cardiac adverse effects of dual inhibition. 21,24 Human SGLT1 has more recently been associated with several extra-renal effects (including entero-endocrine and cardiac effects), which may provide CV benefit. However, the role of SGLT1 in these tissues remains to be determined. 20 The most common AEs associated with SGLT2 inhibition or dual SGLT1 and 2 inhibition are mycotic infections (reported in only 1 patient in this study). 18,20 Gastrointestinal AEs are commonly reported following treatment with both sotagliflozin and licogliflozin, 18,20 while clinical trials with sotagliflozin have also raised concerns around the risk of hypoglycaemia and diabetic ketoacidosis. 43 No new safety signals were reported in this study, with most AEs limited and mild in nature. The licogliflozin dose was not taken around mealtime, to minimize the risk of gastrointestinal adverse effects of SGLT1 inhibition in the gut, such as diarrhoea, as previously reported. 18 Clinically significant hypoglycaemic events were reported in only 4 patients, while no ketoacidosis events were reported. SGLT2 inhibitors are also associated with an increased risk of urinary tract infections (UTIs), volume depletion, fractures and amputations. 37 The incidence of hypotension, bone fractures and UTIs in the current study was low and numerically similar between treatment groups. The 2 deaths reported in the study were evaluated as not related to the study drug. Longer-term studies with larger groups are required to confirm these preliminary observations. One of the major limitations of this study is the small sample size, which was caused by early study termination due to slow enrolment. A second limitation for a study of this size is patient randomization into 5 groups, with early study termination resulting in a mostly descriptive presentation of the results and preventing direct comparison with the SGLT2 inhibitor, empagliflozin. For many outcome measures, the sample sizes at weeks 4 and 12 are significantly (up to 50%) smaller than those at baseline. Our findings should therefore be interpreted with caution. The early termination of this study also means that there are extremely limited data available at the longer-duration 36-week time point, which is therefore not reported. In conclusion, treatment with licogliflozin, an SGLT1 and 2 inhibitor, may have a positive impact on NT-proBNP in patients with T2DM and HF. Clearly, larger and longer trials with dual SGLT1 and 2 inhibitors would be required to validate whether such drugs may have benefits in patients with T2DM and HF. |
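The exploratory NT-proBNP analysis described in the Statistical analysis section (a repeated-measures model on change in log NT-proBNP with treatment, visit, their interaction and baseline covariates) can be sketched as below. This is a simplified illustration, not the trial's validated code: statsmodels' MixedLM uses a per-patient random intercept rather than the unstructured within-subject covariance specified in the paper, and all data and column names are synthetic.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic long-format data (one row per patient-visit); all values invented.
n_patients = 60
patients = np.arange(n_patients)
groups = rng.choice(["placebo", "lico10", "empa25"], size=n_patients)
lvef = rng.choice(["low", "high"], size=n_patients)
baseline = rng.normal(6.8, 0.5, size=n_patients)  # log NT-proBNP at baseline

rows = []
for visit, week_effect in [("week4", -0.15), ("week12", -0.30)]:
    drug = np.where(groups == "placebo", 0.0, week_effect)  # crude arm effect
    change = drug + rng.normal(0, 0.3, size=n_patients)
    rows.append(pd.DataFrame({
        "patient": patients, "group": groups, "lvef": lvef,
        "visit": visit, "baseline_log_nt": baseline, "log_nt_change": change,
    }))
df = pd.concat(rows, ignore_index=True)

# Fixed effects mirror the paper's model: LVEF stratum, treatment group, visit,
# group-by-visit interaction, and baseline log NT-proBNP as a covariate. A
# random intercept per patient stands in for the unstructured covariance.
fit = smf.mixedlm(
    "log_nt_change ~ lvef + group * visit + baseline_log_nt",
    data=df, groups="patient",
).fit()
print(fit.summary())
# Exponentiating a treatment contrast from this model gives a geometric
# mean ratio vs the reference arm, the scale on which results are reported.
```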
Classification of L2 Vocabulary Learning Strategies: Evidence from Exploratory and Confirmatory Factor Analyses This research presents a classification theory of L2 vocabulary learning strategies. Based on exploratory and confirmatory factor analyses of the strategies used by adult Chinese learners of English, the theory identifies six categories: four relate to the cognitive process of lexical acquisition, while the other two are metacognitive and affective factors. Compared to other theories of language learning strategies, the uniqueness of this theory lies in the fact that the cognitive factors correspond to the essential steps learners take in acquiring new words. This research does not support the existence of an independent memory factor or social factor in vocabulary learning strategies. |
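The exploratory step summarized in this abstract can be illustrated with a generic factor-analysis sketch. This is not the authors' procedure or data: it assumes a hypothetical questionnaire matrix X (respondents by strategy items) and uses scikit-learn's FactorAnalysis with varimax rotation as a stand-in for whatever EFA software was actually used.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)

# Hypothetical data: 300 learners rating 24 strategy items on a Likert scale.
# Random noise here; a real study would use actual questionnaire responses,
# which would show the clustered loading structure the method is meant to find.
X = rng.normal(size=(300, 24))

# Extract six factors, matching the six-category solution reported above.
fa = FactorAnalysis(n_components=6, rotation="varimax")
fa.fit(X)

# Items loading strongly (>|0.4| is a common rule of thumb) on the same factor
# are grouped into one strategy category.
loadings = fa.components_.T  # shape: (n_items, n_factors)
for j in range(loadings.shape[1]):
    items = np.where(np.abs(loadings[:, j]) > 0.4)[0]
    print(f"Factor {j + 1}: items {items.tolist()}")
```

A confirmatory step would then fit the six-factor structure to a fresh sample and check fit indices; that part is typically done with dedicated SEM software.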
Length of the follicular phase, time of insemination, coital rate and the sex of offspring. The penetrability of cervical mucus improves over the follicular phase. When the length of the follicular phase varies due to variation in the timing of the luteinizing hormone surge, mucus penetrability will also improve as the phase lengthens. As selection for Y spermatozoa decreases with improvements in mucus penetrability, sex ratios at conception should decline in longer follicular phases. Sex ratios should also decline as the time of insemination approaches ovulation unless hormonally-induced improvements in penetrability are reduced by the debris left by earlier inseminations. |
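The causal chain proposed here (follicular-phase length raises mucus penetrability, which weakens selection for Y spermatozoa, which lowers the sex ratio) can be made concrete with a toy numerical sketch. All functional forms and parameters below are invented purely for illustration; the abstract proposes only the direction of the effect, not these equations.

```python
import numpy as np

# Toy model: penetrability rises with follicular-phase length (days), and the
# probability of a Y-bearing (male) conception falls as penetrability improves.
def penetrability(phase_len_days):
    return 1.0 / (1.0 + np.exp(-(phase_len_days - 14.0)))  # logistic in length

def prob_male(phase_len_days, base=0.53, slope=0.06):
    # Start slightly male-biased; shave off up to `slope` as penetrability -> 1.
    return base - slope * penetrability(phase_len_days)

for days in (11, 14, 17):
    print(f"phase {days} d: P(male) ~ {prob_male(days):.3f}")
# Output illustrates the predicted decline in sex ratio at conception
# for longer follicular phases.
```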
Do immigrants cause crime? We examine the empirical relationship between immigration and crime across Italian provinces during the period 1990-2003. Drawing on police administrative records, we first document that the size of the immigrant population is positively correlated with the incidence of property crimes and with the overall crime rate. Then, we use instrumental variables based on immigration toward destination countries other than Italy to identify the causal impact of exogenous changes in Italy's immigrant population. According to these estimates, immigration increases only the incidence of robberies, while leaving all other types of crime unaffected. Since robberies represent a very minor fraction of all criminal offenses, the effect on the overall crime rate is not significantly different from zero. Introduction Immigration is a contentious issue in all destination countries for at least two reasons. First, worker flows from countries characterized by a different composition of the labor force may have significant redistributive consequences for the native population. Second, there are widespread concerns that immigrants increase crime rates. While the economic literature has devoted much attention to the first issue (Borjas, 1994; Friedberg and Hunt, 1995; Bauer and Zimmermann, 2002; Card, 2005), the second one has remained largely unexplored (some notable exceptions are considered at the end of this Section). At the same time, citizens and policymakers in several host countries seem more concerned about the impact of immigrants on crime. Figure 1 shows the results of the National Identity survey carried out in 1995 and 2003 by the International Social Survey Programme. It emerges clearly that the majority of the population in OECD countries is worried that immigrants increase crime rates. In most cases this fraction is greater than that of people afraid of being displaced from the labor market. These perceptions may have far-reaching consequences for immigration policies. Moreover, standard economic theories of crime provide several reasons why immigration could be related to crime. For example, immigrants and natives may have different propensities to commit crime because they face different legitimate earnings opportunities, different probabilities of being convicted and different costs of conviction. However, from a theoretical viewpoint, the direction of such effects is unclear. For example, immigrants may experience worse labor market conditions (LaLonde and Topel, 1991; Borjas, 1998) but higher costs of conviction (Butcher and Piehl, 2005). Hence, identifying such a relation is ultimately an empirical issue. * Contact information: bianchi@pse.ens.fr, paolo.buonanno@unibg.it, paolo.pinotti@bancaditalia.it (corresponding author). We want to thank Giuseppe Casamassima of the Italian Ministry of Interior for the data on residence permits. We also thank Massimiliano Bratti, Matteo Cervellati, Antonio Ciccone, Federico Cingano, Francesco Drago, Giovanni Mastrobuoni, Ugo Melchionda, Franco Peracchi, Alfonso Rosolia, Andrea Tiseno and seminar participants at the Bank of Italy, CEIS Tor Vergata, Paris School of Economics, ESPE (London), NASM (Pittsburgh), FEMES (Singapore), EEA (Milan) and AIEL (Brescia) for many useful comments. All errors are our responsibility. Financial support from CEPREMAP and from Region Ile-de-France (Milo Bianchi) is gratefully acknowledged. The opinions expressed herein are those of the authors and do not necessarily represent those of the Bank of Italy.
In this paper we estimate the causal effect of immigration on crime across Italian provinces during the period 1990-2003. For this purpose, we draw on police administrative records to document the patterns of criminal offenses, disaggregated along various typologies, and of immigration, in both its regular and irregular components. As we discuss in the next Section, Italy displays several interesting features for our analysis. First, during the last few years Italy has experienced a considerable increase in migration pressures, mostly as a consequence of political turmoil in neighboring countries. Similarly to many other receiving countries, this phenomenon resulted in substantial concerns at the social and political level, mainly because of the alleged relationship between immigration and crime. Second, during our sample period the Italian authorities implemented several massive regularizations of previously unofficial immigrants, which allow for an estimate of the irregular component of migration. In Section 3 we start our econometric analysis with an OLS estimation in which we control extensively for other determinants of criminal activity, as well as for province- and year-specific unobserved heterogeneity. According to these estimates, a 1% increase in the total number of immigrants is associated with a 0.1% increase in the total number of criminal offenses. Once we distinguish among categories of crime, the effect seems particularly strong for property crimes, and in particular for robberies and thefts. We go on in Section 4 by asking whether this evidence can be attributed to a causal effect of immigration on crime. Any interpretation in this sense must take into account that the location choice of immigrants within the destination country may respond to unobserved demand-pull factors that are also correlated with crime. As a result, OLS estimates may be biased. In order to solve this problem, we exploit differences in the intensity of migration by origin country as a source of exogenous variation in the distribution of immigrants across Italian provinces. In particular, we use changes of immigrant population by nationality in the rest of Europe as an instrument for changes of immigrant population in Italy. Our identification strategy relies on the fact that the supply-push component of migration by nationality is common to flows toward all destination countries. At the same time, flows toward the rest of Europe are exogenous to demand-pull factors in Italian provinces. Variation across provinces in supply-driven shifts of immigrant population results from differences in the beginning-of-period distribution of immigrants by origin country. Indeed, first-stage estimates confirm that our instrument provides a strongly statistically significant prediction of migration to Italy. Once we take into account the endogeneity of immigrants' distribution across provinces, the estimated effect of immigration on neither total nor property crimes is significantly different from zero. Distinguishing among different types of property crime, the estimated coefficient is still statistically significant for robberies. However, the latter represent only a very minor fraction of all crimes in our sample, which explains why the effect on the total crime rate is not statistically significant.
As discussed in Section 5, these results seem robust with respect to measurement error in immigrant population, spatial correlation of provincial crime data and heterogeneous effects across different nationalities. This paper contributes to the empirical literature on immigration and crime. As pointed out, very few studies have explored this issue. Butcher and Piehl (1998b, 2005) find that current U.S. immigrants have lower incarceration rates than natives, while the pattern seems reversed for immigrants in the early 1900s (Moehling and Piehl, 2007). At the aggregate level, Butcher and Piehl (1998a) look at a sample of U.S. metropolitan areas over the 1980s and conclude that new immigrants' inflows had no significant impact on crime rates. 2 Immigration and crime in Italy: measurement and characteristics Immigration to Italy displays several interesting features for the purpose of our analysis. First, it is a very recent phenomenon, which basically started in the early 1980s and took off during the 1990s. The first law regulating the inflows of foreigners was approved in 1990, later amended in 1998 and 2002. Throughout this period, Italian migration policy has remained grounded on the residence permit, which allows the holder to stay legally in the country for a given period of time. We have drawn directly on police administrative records for recovering the number of valid residence permits by province and nationality during the period 1990-2003. These data serve as our measure of legal immigration. Second, immigration has increased dramatically over this period. The number of residence permits rose by a factor of 5, from 436,000 in 1990 (less than 1% of total population) to over 2.2 million in 2003 (4% of population). Such growth was significantly driven by push factors in neighboring countries, like the collapse of the Soviet Union and the Balkan Wars (see Del Boca and Venturini, 2003). Overall, immigration from Eastern Europe grew at a rate of 537% during the period 1990-2003, as compared to 134% from Northern Africa and 170% from Asia. Accordingly, our estimating strategy will exploit such push factors to identify the causal effect of immigration on crime. Third, during this period Italy implemented several regularizations, which offered irregular immigrants the possibility to obtain a residence permit. In particular, the regularizations in 1995, 1998 and 2002 involved roughly 260, 250 and 700 thousand individuals, respectively. For our purposes, regularizations are important as they provide snapshots of irregular migration. During these episodes, in fact, immigrants had clear incentives to report their irregular status. Hence, underreporting may be less serious and less correlated with other variables than in survey data and in apprehension statistics. 2 Therefore, we obtained from police administrative records also the demands for regularization presented in 1995, 1998 and 2002. As it turns out, the distributions of regular and irregular immigrants are tightly related. In particular, the ratio of the two is very stable within provinces and (regularization) years. In order to see this, let MIGR_it and IRR_it be the number of regular and irregular immigrants in province i and year t, respectively. Then, we predict the latter based on the OLS regression ln(IRR_it / MIGR_it) = α_i + γ_t + ε_it, where α_i and γ_t are province- and year-specific estimated coefficients, respectively, and ε_it is the estimated residual.
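A sketch of this ratio-stability check with synthetic data (all magnitudes invented; the paper's actual inputs are the permit and regularization counts described above): regress the log ratio of irregular to regular immigrants on province and year fixed effects and inspect the residual variance share.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Synthetic panel: 95 provinces x 3 regularization years (values invented).
provinces = np.repeat(np.arange(95), 3)
years = np.tile([1995, 1998, 2002], 95)
year_idx = np.searchsorted([1995, 1998, 2002], years)
log_ratio = (rng.normal(size=95)[provinces]        # province effect
             + np.array([0.0, -0.1, 0.4])[year_idx]  # year effect
             + rng.normal(0, 0.05, size=95 * 3))    # small residual
df = pd.DataFrame({"prov": provinces, "year": years,
                   "log_irr_over_migr": log_ratio})

# Two-way fixed-effects regression: log(IRR/MIGR) on province and year dummies.
fit = smf.ols("log_irr_over_migr ~ C(prov) + C(year)", data=df).fit()

# Share of variance left in the residual; the paper reports <2% for real data.
resid_share = fit.resid.var() / df["log_irr_over_migr"].var()
print(f"residual variance share: {resid_share:.3f}")
```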
Figure 2 shows that the difference between IRR_it and its predicted value is almost negligible at the province-year level. Indeed, the variance of ε_it is less than 2% of total variance. It follows that MIGR*_it = MIGR_it + IRR_it ≈ [1 + exp(α_i + γ_t)] · MIGR_it, so that MIGR*_it / POP_it ≈ [1 + exp(α_i + γ_t)] · MIGR_it / POP_it, where MIGR*_it and POP_it are total immigrants and population in each province-year, respectively. Taking logarithms on both sides delivers migr*_it = λ_i + μ_t + β · migr_it + u_it, where migr*_it and migr_it are, respectively, the logarithms of total and regular immigrants over population, and u_it is an error term. The OLS estimate of β is 0.92 (R² = 99%), which confirms that (after controlling for province and year fixed effects) regular immigrants are approximately proportional to total immigrant population in each province-year. Since total immigrants would be unobserved outside regularization years, we will use the (log of) regular immigrants instead. Turning to measures of criminal activity, we look at crimes reported by the police to the judiciary authority, which are published yearly by the Italian Statistics Institute (ISTAT). These data allow us to distinguish among several types of criminal offenses: violent crimes, property crimes (robbery, common theft, car theft) and drug-related crimes. Availability of these data determined our sample period, 1990-2003. In 2004, in fact, a new national crime recording standard was adopted, which implies a lack of comparability of data before and after that year (ISTAT, 2004, p. 27). In general, a major drawback of crime data is measurement error, caused for instance by under-reporting, heterogeneous law enforcement, and so on. Following a standard approach, we assume that, first, the number of reported crimes, CRIME_it, is proportional to the true (unobserved) number of committed crimes, CRIME*_it; second, the constant of proportionality does not vary within provinces and years. It follows that crime_it = π_i + τ_t + crime*_it, where crime*_it and crime_it are, respectively, the logarithms of actual and reported crimes over total population, and π_i and τ_t are province and year fixed effects. Therefore, we will use crime_it as a proxy for the true (unobserved) crime rate. Accordingly, total, violent, property and drug will denote the logarithms of reported crimes over total population for each category of criminal offenses. At first glance, criminal activity and immigration are not systematically correlated over time (see Figure 3). On the other hand, immigration appears to be positively associated with crime across provinces; in particular, both tend to be higher in the North (Figure 4). However, both variables could respond to other (omitted) factors. For instance, higher wealth in Northern Italy could encourage both immigration and property crimes, which represent 83% of all criminal offenses in our sample. Therefore, in the next section we move beyond simple correlations and into multivariate econometric analysis. Panel Analysis Identifying the effect of migration on crime is complicated by the fact that both variables are simultaneously determined in equilibrium. To address this issue, we start by controlling for other variables that may affect both immigration and crime, along with province- and year-specific unobserved heterogeneity. We thus assembled annual observations for all 95 Italian provinces during the period 1990-2003.
(Italian provinces correspond to level 3 in the Eurostat classification, Nomenclature of Territorial Units for Statistics; they are comparable in size to U.S. counties. In 1995, 8 new provinces were created by secession. In order to keep our series consistent, we attribute their post-1995 data to the corresponding pre-1995 province.) Our main estimating equation is crime_it = β · migr_it + X_it′ δ + φ_i + ψ_t + e_it, where crime_it is the log of the crime rate reported by the police in province i during year t; migr_it is the log of immigrants over population; X_it is a set of control variables; finally, φ_i and ψ_t are province- and year-specific unobserved fixed effects, while e_it is an error term. We are mainly interested in identifying the coefficient β. The set of observables X_it comprises demographic, socioeconomic and politico-institutional determinants of crime (Freeman, Eide et al. and Dills et al. review the empirical literature on the determinants of crime). Demographic variables include the log of resident population in the province, pop. Since the equation includes province fixed effects, pop implicitly controls for population density, which is considered a key determinant of the level of criminal activity (Glaeser and Sacerdote, 1999). For the same reason, we control for the share of population living in cities with more than 100,000 inhabitants, urban. Finally, since young men are said to be more prone to engage in criminal activities than the rest of the population, we add the percentage of men aged 15-39, male1539. Turning to the socioeconomic variables, we include the (log of) real GDP per capita, gdp, and the unemployment rate, unemp. These factors proxy for the legitimate and illegitimate earning opportunities (Ehrlich, 1973; Raphael and Winter-Ebmer, 2001). The probability of apprehension captures instead the expected costs of crime. As a proxy for such a probability, we use the clear-up rate, defined as the ratio of crimes cleared up by the police over the total number of reported crimes, for each category of crime. The political orientation of the local government may affect the amount of resources devoted to crime deterrence and, at the same time, immigration restrictions at the local level (the distribution of residence permits across provinces is decided on a yearly basis by the government in accordance with provincial authorities). We measure the ideology of the local government with the variable partisan, which takes higher values the more the local government leans toward the right of the political spectrum. Finally, fixed effects control for other unobserved factors that do not vary within provinces or years, including the constants of proportionality in the measurement equations for immigrants and crime above. All variables' detailed definitions and sources are presented in the Appendix. Table 1 shows some descriptive statistics and Table 2 reports the correlation matrix among all dependent and explanatory variables. The univariate correlation between the log of immigrants and crimes over population is positive for all types of crime. OLS estimates of the equation are presented in Table 3 and suggest that the total crime rate is significantly correlated with the incidence of immigrants in the population. Such a relationship is robust to controlling for other determinants of crime. According to these findings, a 1% increase in immigrant population is associated with a 0.1% increase in total crime. Distinguishing among types of crime, the effect is driven by property crimes, while violent and drug-related crimes are unaffected by immigration.
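A sketch of this two-way fixed-effects specification follows (synthetic data throughout; the variable names follow the paper's mnemonics, everything else is invented, and clustering by province is one common choice, whereas the paper reports robust standard errors):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)

# Synthetic province-year panel mimicking the paper's structure.
n_prov, years = 95, range(1990, 2004)
rows = [(p, y) for p in range(n_prov) for y in years]
df = pd.DataFrame(rows, columns=["prov", "year"])
df["migr"] = rng.normal(-4, 1, len(df))    # log immigrants / population
df["gdp"] = rng.normal(10, 0.3, len(df))   # log real GDP per capita
df["unemp"] = rng.uniform(2, 20, len(df))  # unemployment rate
df["crime"] = 0.1 * df["migr"] + rng.normal(-3, 0.5, len(df))  # log crime rate

# Two-way FE via province and year dummies, standard errors clustered by province.
fit = smf.ols(
    "crime ~ migr + gdp + unemp + C(prov) + C(year)", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["prov"]})

print(fit.params["migr"])  # elasticity of crime w.r.t. immigrant share
```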
In order to better uncover this relationship, in Table 4 we disaggregate property crimes further. It turns out that immigration increases the incidence of robberies and thefts. Since the latter represent about 60% of total crimes in our sample, the relationship between immigration and property crimes may be the main channel through which immigrants increase the crime rate. However, there could be several reasons why immigrant population is systematically correlated with property crimes, some of which may not be adequately captured by control variables. Therefore, identifying causality requires a source of exogenous variation in immigrant population, an issue that we tackle in the next Section. Causality Even after controlling for other determinants of crime and for fixed effects, the distribution of immigrant population across provinces could be correlated with the error term for at least two reasons. First, our set of controls could neglect some time-varying, possibly unobserved demand-pull factors that are also correlated with crime. For instance, improvements in labor market conditions that are not adequately captured by changes in official unemployment and income could increase immigration and decrease crime, which would bias OLS estimates downward. On the other hand, economic decline could attract immigrants to some areas (e.g. because of declining housing prices) where crime is on the rise, which would bias OLS estimates upward. Second, changes in crime rates across provinces could have a direct effect on immigrants' location. In order to take these concerns into account, we adopt a Two-Stage Least Squares (2SLS) approach that uses the (exogenous) supply-push component of migration by nationality as an instrument for shifts in immigrant population across Italian provinces. Supply-push factors are all events in origin countries that increase the propensity of the population to emigrate; examples include economic crises, political turmoil, wars and natural disasters (see, for instance, Card, 1990; Friedberg, 2001; Angrist and Kugler, 2003; Munshi, 2003; Saiz, 2007). Since these are both important in determining migration outflows and independent of regional differences within the host country, they have often been used as a source of exogenous variation in the distribution of immigrant population. In particular, several papers have constructed outcome-based measures of supply-push factors using total migration flows by nationality toward the destination country of interest. 6 In principle, however, since new immigrants of a given nationality tend to settle in the same areas as previous immigrants from the same country (see e.g. Munshi, 2003; Jaeger, 2006; McKenzie and Rapoport, 2007), total flows by nationality could still be correlated with local demand-pull factors. 7 For this reason, our instrument will be based on bilateral migration flows toward European countries other than Italy. Specifically, we first take within-province differences of the estimating equation and decompose ∆migr_it = migr_it − migr_it−1 as ∆migr_it ≈ Σ_n s^n_it−1 · ∆ln MIGR^n_it − ∆ln POP_it, where the superscript n denotes nationalities and s^n_it−1 = MIGR^n_it−1 / MIGR_it−1. The first term on the right-hand side is the sum of log-changes of immigrants from country n into destination province i, weighted by beginning-of-period nationality shares within each province. These depend on both supply-push factors in each origin country (which affect that nationality in all provinces) and demand-pull factors in each province (which affect all nationalities in that province).
In order to exclude the latter, we substitute ∆ln MIGR^n_it with the log-change of immigrants of nationality n in the rest of Europe, ∆ln MIGR^n_t. Hence, we define the predicted log-change of immigrants over population in each province as ∆migr-hat_it = Σ_n s^n_it−1 · ∆ln MIGR^n_t − ∆ln POP_it. Since demand-pull factors in other European countries can reasonably be thought of as exogenous to variation between Italian provinces, the correlation between ∆migr_it and ∆migr-hat_it must be due solely to supply-push factors in origin countries. To construct our instrument we use the log-changes of immigrant population from 13 origin countries in 11 European countries, using decennial census data in the host countries. 8 Figure 5 shows that the patterns of immigration toward the rest of Europe resemble those observed in Italy, which points to the importance of supply-push factors. Indeed, the univariate regression of actual on predicted log-changes confirms that our instrument fits well the actual changes of immigrant population across provinces over the 1990s. The F-statistic of the regression is equal to 14.24, which is above the lower bounds indicated by the literature on weak instruments (see Stock and Yogo, 2002). Once equipped with this instrument for immigrant population, we turn to examine its effect on crime rates in the second stage. The results are reported in Tables 5 and 6. For the sake of comparability between OLS and 2SLS, in each table we also present OLS estimates on the cross-section of log-changes between 1991 and 2001. While the OLS estimates on 10-year changes are broadly consistent with panel estimates using all years, the 2SLS estimates present significant differences. First, the effect of immigration on the total number of criminal offenses is smaller and no longer statistically significant, and the same is true for property crimes. Once we distinguish among different typologies of property crimes, only the coefficient for robberies remains statistically significant. Overall, these results suggest that the causal effect of immigration on either violent, property or drug-related crimes is not significantly different from zero. Robberies are the only type of criminal activity that we found to be positively and significantly affected by immigration. According to our estimates, the incidence of robberies varies approximately one-to-one (in percentage) with the ratio of immigrants over population. Yet, within our sample robberies represent only 1.8% and 1.5% of property and total crimes, respectively, which explains why the incidence of neither property nor total crimes is significantly related to immigration. (An alternative explanation could be that OLS estimates suffer from attenuation bias due to measurement error in immigrant population; however, if this were the reason, we should observe an analogous bias for all types of crime, which does not seem to be the case.) Robustness Our findings may be subject to several caveats, the most significant of which concern the measurement of immigrant population. A first issue relates to its composition by nationality. In order to avoid arbitrary classifications, our measure includes all residence permits, regardless of immigrants' origin countries. On the other hand, most crime concerns are directed toward immigrants from developing countries. While it is beyond the scope of this paper to investigate the relationship between nationality and propensity to crime, one may wonder whether adopting this broader definition introduces error in the measurement of those immigrants that could be more at risk of committing crime. (This measurement issue is particularly relevant for Italy: in our sample, about 85% of all immigrants from outside developing countries came from the U.S. and Switzerland. These are very peculiar groups: the first includes mostly U.S. military personnel, the second Swiss citizens who commute daily between Switzerland and Italy.) Therefore, we checked the robustness of our estimates to using only residence permits awarded to immigrants from developing countries, migr^dc_it.
The results are presented in Tables 7 and 8, and are remarkably similar to those obtained using all residence permits. Also, differences among nationalities could explain the differences between OLS and 2SLS estimates. The latter are based on a subset of nationalities (those for which we found Census data for other European countries). Therefore, if the excluded nationalities had a higher propensity to crime than those included in the instrument, that would cause the observed drop in magnitude and significance from OLS to 2SLS (Imbens and Angrist, 1994). In order to check whether that is the case, we run again OLS regressions including in the measure of immigration only those nationalities included in the instrument. Results are reported in Table 9 and are not significantly different from those in Tables 5 and 6. Hence, basing our instrument on a subset of nationalities does not drive the difference between 2SLS and OLS estimates. Another issue relates to the dimension of irregular immigration in Italy. As discussed in Section 3, we used demands for regularization to infer the distribution of irregular immigrants, arguing that this approach minimizes under-reporting. In principle, however, one cannot exclude that immigrants self-select into regularization, which would introduce measurement error into the equation. In particular, if immigrants who are more at risk of committing crime are also less likely to apply for a regular permit, we would be understating immigrants exactly where they contribute the most to crime, which in turn would bias the coefficient of migr downward. For this reason, we looked also at apprehensions of irregular immigrants (as recorded by the Ministero dell'Interno), which do not depend on self-selection. Indeed, after controlling for province- and year-specific constants (which are always included in our specifications), the log of apprehensions is positively and significantly related to the log of demands for regularization. In particular, the OLS estimated coefficient of the univariate regression is 0.35, the t-ratio is 3.87 and the R² is 85%. Therefore, apprehension- and regularization-based measures of irregular immigration seem consistent with each other. At the same time, regularizations provide a more representative picture of the phenomenon (in 1995 there were fewer than 64,000 apprehensions against 260,000 demands for regularization; the corresponding ratios were 61,000 over 250,000 in 1998 and 106,000 over 700,000 in 2002). In addition, the 2SLS approach adopted in Section 5 would attenuate any bias due to under-reporting of irregular immigrants. In fact, if both regular and irregular immigrants of the same nationality cluster in the same areas, then our instrument provides a measure of the predicted log-change of total immigrants that depends solely on geographic distribution and supply-push factors by nationality. Finally, mobility across the borders of different provinces may give rise to spatial correlation in provincial crime data. In line with the literature on spatial econometrics and crime, we thus control for spatially lagged crime rates.
These consist of weighted averages of crime rates in neighboring provinces. In particular, crime in province i is assumed to depend also on crime observed in any other province j, weighted by the inverse of the distance between their capital cities. The results, presented in Table 10, are consistent with those in our baseline specification. Hence, spatial correlation does not affect our results. This is probably due to the fact that provinces are rather large geographical areas, so that crime trips occur within rather than across provinces. Conclusions In this paper, we investigated the causal impact of immigration on crime across Italian provinces during the 1990s. According to our estimates, total criminal offenses as well as most types of crime are not related to the size of immigrant population once endogeneity is taken into account. We view our contribution as a first step toward a better understanding of this relationship. There are several ways in which our analysis can be extended in search of more detailed mechanisms, and we sketch only a few here. First, one could explore natives' response to an increase in immigration. Our result, in fact, could be due to the fact that immigrants and natives have similar propensities to commit crime and/or to substitution between immigrants' and natives' crime. Moving in this direction would require more detailed criminal statistics, which would allow one to distinguish the nationality of the offender. Second, we estimate the average effect of immigration conditional on its current composition. However, this effect is probably different for regular and irregular immigrants. Indeed, it would be extremely interesting to estimate separately the effect of the two. But while the strong correlation between the two is useful for recovering the variation in total immigrants using only the regular ones, it does not allow us to disentangle their separate effects. A better understanding of such mechanisms seems crucial also for policy prescriptions. In fact, any change in migration restrictions is likely to affect both the size and composition of immigrant population (Bianchi, 2007; Giordani and Ruta, 2008). Therefore, its impact may differ from the one estimated by keeping immigrants' composition constant. This effect has to be considered before arguing in favor of or against tighter immigration restrictions. Variables: definitions and sources. partisan: ideology of the provincial government. This variable is constructed as follows. First, a score between 0 (extreme left) and 20 (extreme right) is attached to each political party according to the expert surveys presented in Benoit and Laver (these data are available at http://www.tcd.ie/Political Science/ppmd/). Then, the score of the local government is computed as the average score of all parties entering the executive cabinet, weighted by the number of seats held by each party in the local council (the composition of Italian local councils is available at http://amministratori.interno.it/). [Figure 1 note: data from the 1995 and 2003 National Identity surveys of the International Social Survey Programme. The vertical axis is the percentage of interviewed in each country who declared to "Strongly Agree" or "Agree" that "Immigrants increase crime rates". The horizontal axis is the percentage of interviewed in each country who declared to "Strongly Agree" or "Agree" that "Immigrants take jobs away from natives".]
[Table notes: Province and year fixed effects are included in all specifications. The log-changes of all control variables in Tables 3 and 4 are always included, both in the first and second stage. The sources of data for residence permits and reported crimes are ISTAT and the Italian Ministry of Interior, respectively. Immigrant population in other European countries is measured using the 1991 and 2001 rounds of national censuses. The F-statistic for excluded instruments refers to the null hypothesis that the coefficient of the excluded instrument is equal to zero in the first stage. In Table 9, the immigration variable is the log-change of immigrants of those nationalities included in the instrument (listed in Section 4) over province population. Robust standard errors are presented in parentheses. *, ** and *** denote rejection of the null hypothesis of the coefficient being equal to 0 at the 10%, 5% and 1% significance level, respectively.]
Notes: This table presents the results of IV (second-stage) estimates on the cross-section of ten-year differences between 1991 and 2001 across all 95 Italian provinces. The dependent variable is the log-change of the number of crimes reported by the police over total population, for each category of criminal offenses. The variable ∆migr is the log-change of immigrants over province population and is instrumented in the first stage by its predicted value ∆migr-hat_it (see equation 5 in the main text). The spatial lag is the weighted sum of the log of crimes over population in all other provinces, with the weighting matrix based on the inverse of road travelling distance between provinces. The log-changes of all control variables in Tables 3 and 4 are always included, both in the first and second stage. The sources of data for residence permits and reported crimes are ISTAT and the Italian Ministry of Interior, respectively. Robust standard errors are presented in parentheses. *, ** and *** denote rejection of the null hypothesis of the coefficient being equal to 0 at the 10%, 5% and 1% significance level, respectively. |
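The identification strategy of Section 4, beginning-of-period nationality shares interacted with rest-of-Europe growth as an instrument for the change in immigrant population, can be sketched as below. Synthetic data throughout; `linearmodels` is used for the 2SLS step as one reasonable stand-in for whatever software the authors used, and controls are omitted for brevity.

```python
import numpy as np
import pandas as pd
from linearmodels.iv import IV2SLS

rng = np.random.default_rng(4)
n_prov, n_nat = 95, 13

# Synthetic inputs (invented): beginning-of-period nationality shares per
# province, nationality-level log-growth of immigrants in the rest of Europe,
# and province population growth.
shares = rng.dirichlet(np.ones(n_nat), size=n_prov)   # s^n_{i,t-1}
eu_growth = rng.normal(0.5, 0.3, size=n_nat)          # d ln MIGR^n (rest of EU)
dpop = rng.normal(0.0, 0.02, size=n_prov)             # d ln POP_i

# Shift-share instrument: shares x rest-of-Europe growth, minus population growth.
z = shares @ eu_growth - dpop

# Synthetic endogenous regressor and outcome (10-year log-changes);
# the true effect of dmigr on dcrime is set to zero here.
dmigr = 0.8 * z + rng.normal(0, 0.2, size=n_prov)
dcrime = rng.normal(0, 0.1, size=n_prov)

df = pd.DataFrame({"dcrime": dcrime, "dmigr": dmigr, "z": z})

# 2SLS: instrument the change in immigrants with the shift-share predictor.
fit = IV2SLS.from_formula("dcrime ~ 1 + [dmigr ~ z]", data=df).fit()
print(fit.summary)
```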
Activities of the Catholic Church in Poland Against Pedophilia in 2018 The aim of the article is to determine the type of activities undertaken by the Catholic Church towards clergymen committing sexual offenses, more specifically pedophilia. The research problem is the question: what actions does the Catholic Church take against pedophilia? In order to carry out the research project, it was first determined how the offense is defined in the doctrine of church criminal law. Then an analysis was made of the activities undertaken by the hierarchs of the Catholic Church. On its basis, a typology of the forms of the Church's influence at various levels was reconstructed, covering both preventive and sanctioning actions against the clergy. The article adopts a time restriction covering only 2018. That year can be described as a breakthrough, first of all due to the verdict issued in Poznań, the accusations that appeared at the end of the year against the deceased chaplain of Solidarity, Fr. Henryk Jankowski, and initiatives taken by both citizens and politicians, such as the first anti-clerical happening of Baby Shoes Remember in Poland or the creation of a pedophile map. A movie entitled Kler (Clergy) also appeared in cinemas, bringing the topic of pedophilia in the Church to public attention. Results: the Catholic Church in Poland, apart from symbolic activities, i.e. oral and written declarations, assurances, and prayers, also undertakes substantial actions, such as personnel changes, cooperation with the state or meetings of hierarchs centered around pedophilia. |
Bullets in a Core-Collapse Supernova Remnant: The Vela Remnant We use two-dimensional hydrodynamical simulations to investigate the properties of dense ejecta clumps (bullets) in a core-collapse supernova remnant, motivated by the observation of protrusions probably caused by clumps in the Vela supernova remnant. The ejecta, with an inner flat and an outer steep power-law density distribution, were assumed to freely expand into an ambient medium with a constant density, ~0.1 H atoms cm^-3 for the case of Vela. At an age of 10^4 yr, the reverse shock front is expected to have moved back to the center of the remnant. Ejecta clumps with an initial density contrast ~100 relative to their surroundings are found to be rapidly fragmented and decelerated. In order to cause a pronounced protrusion on the blast wave, as observed in the Vela remnant, a density contrast of ~1000 may be required. In this case, the clump should be near the inflection point in the ejecta density profile, at an ejecta velocity ~3000 km s^-1. These results apply to moderately large clumps; smaller clumps would require an even larger density contrast. Clumps can create ring structure in the shell of the Vela remnant, and we investigate the possibility that RX J0852-4622, an apparent supernova remnant superposed on Vela, is actually part of the Vela shell. Radio observations support this picture, but the possible presence of a compact object argues against it. The Ni bubble effect or compression in a pulsar wind nebula are possible mechanisms to produce the clumping. |
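A standard back-of-the-envelope check on clump survival is the cloud-crushing time of Klein, McKee and Colella (1994), t_cc ≈ χ^(1/2) · a / v_b, where χ is the clump-ambient density contrast, a the clump radius and v_b the shock speed; clumps are destroyed after roughly a few t_cc. The sketch below evaluates this textbook estimate for the two contrasts discussed above, with an assumed clump radius and shock speed that are illustrative, not taken from the paper's simulations.

```python
import numpy as np

PC_IN_CM = 3.086e18
YR_IN_S = 3.156e7

def cloud_crushing_time(chi, radius_pc, v_shock_kms):
    """t_cc ~ sqrt(chi) * a / v_b (Klein, McKee & Colella 1994), in years."""
    a_cm = radius_pc * PC_IN_CM
    v_cms = v_shock_kms * 1e5
    return np.sqrt(chi) * a_cm / v_cms / YR_IN_S

# Illustrative inputs: 0.1 pc clump radius and a 1000 km/s shock
# (assumed values, not the paper's). Compare the two contrasts in the text.
for chi in (100, 1000):
    t_cc = cloud_crushing_time(chi, radius_pc=0.1, v_shock_kms=1000)
    print(f"chi = {chi:5d}: t_cc ~ {t_cc:,.0f} yr")
# Denser clumps (larger chi) survive longer before fragmenting, which is
# consistent with a contrast of ~1000 being needed for a clump to still
# protrude at an age of ~10^4 yr.
```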
Flexible pension take-up in social security This paper studies the redistribution and welfare effects of increasing the flexibility of individual pension take-up. We use an overlapping-generations model with Beveridgean pay-as-you-go pensions and heterogeneous individuals who differ in ability and lifespan. We find that introducing flexible pension take-up can induce a Pareto improvement when the initial pension scheme contains within-cohort redistribution and induces early retirement. Such a Pareto-improving reform entails the application of a uniform actuarial adjustment of pension entitlements based on average lifespan. Introducing actuarial non-neutrality that stimulates later retirement further improves such a flexibility reform. Introduction Since the 1970s, the effective retirement age has declined in almost all Western countries, while at the same time life expectancy has increased substantially. These developments led to an increase in the average retirement period relative to the working period, thereby eroding the fiscal sustainability of pension schemes. To reverse this trend, in recent years more attention has been given to pension reforms that improve labour supply incentives and encourage people to work longer. Countries like the UK and Australia, for example, introduced a flexible retirement age and increased the reward for continued working. The advantage of this type of reform is that it not only reduces the labour market distortions caused by incentives to retire early but can also increase the sustainability of pension systems. A potential disadvantage, however, is that these flexibility reforms are typically implemented in a uniform way, i.e. applied to all participants in the same way, while individuals have heterogeneous characteristics (e.g., in terms of life expectancy or income level). Uniformly implemented reforms therefore probably have different welfare effects at the individual level and may affect certain types of individuals negatively. 1 Indeed, it is well known that pension schemes based on uniform policy rules contain large redistribution effects within and across generations, some intentional, and others unintentional (see, e.g., Börsch-Supan and Reil-Held 2001 and Bonenkamp 2009). For example, unfunded pension schemes, especially those of the Beveridgean type, often contain redistribution from high to low incomes. Apart from this, these pension schemes typically also redistribute from short-lived to long-lived agents because they are based on collective annuities which do not depend on individual life expectancy. This makes collective annuities subject to the objection that they lead to more regressive pension schemes, because it is well known that average longevity tends to increase with income. Pension reforms that introduce more flexibility in pension take-up will affect these redistribution effects. It is therefore important to take into account the redistribution in existing pension schemes and the fact that individuals are heterogeneous when analysing the welfare effects of pension flexibility reforms. This paper explores the redistribution and welfare effects of the introduction of a flexible starting date for pension benefits in the context of an unfunded pension scheme with an explicit redistribution motive. That is, we consider a change from a payout scheme in which benefits start at the fixed statutory retirement age to a scheme where benefits start at the flexible effective retirement age.
This flexible pension take-up is combined with actuarial adjustments of pension benefits for early or late retirement. To analyse the economic implications of this reform, we use a two-period overlapping-generations model populated with agents who differ in ability and lifespan. It is assumed that the lifespan of an individual is positively linked to his productivity. The pay-as-you-go (PAYG) social security system is of the Beveridgean type and is characterized by lifetime annuities and proportional contributions. In this way, the pension scheme includes two types of intragenerational redistribution: from high-income earners to low-income earners and from short-lived to long-lived agents. Note that, in contrast to the former, the latter type of redistribution is regressive due to the positive link between productivity and longevity. The fact that individuals are heterogeneous implies that introducing pension flexibility will affect individuals differently. In this paper, we take a positive perspective and observe that in many countries, PAYG pension schemes do redistribute in practice. That is, we follow the literature that takes the existence of a PAYG social security scheme as a redistribution device as given (see, e.g., Galasso and Profeta 2002; Cremer and Pestieau 2003). Of course, there are many good reasons for redistributing income via pensions, for example, protection against myopia or the fact that individual differences in income may manifest themselves only later in the career. However, the normative question of why pension schemes should be redistributive within cohorts remains outside the scope of this paper. Implementing pension contracts with a variable starting date for benefits, as analysed in this paper, is important for various reasons. It helps individuals to adjust the timing of pension income according to their own preferences and circumstances. This is particularly relevant for people who would prefer to retire early but who are prevented from doing so by liquidity or borrowing constraints. Flexible pensions can also function as a hedge against various types of risk, like disability risks (Diamond and Mirrlees 1978), stock market risks or productivity risks (Pestieau and Possen 2010). This paper adds some other arguments. We will illustrate that flexible pensions can stimulate people to postpone retirement voluntarily. In that case, flexible pension take-up may help to bear the increasing fiscal burden of ageing. We also show that flexible pension take-up can be used to reduce the element of regressive redistribution in social security schemes. The main results are as follows. First, introducing flexible pension take-up cannot be Pareto improving if the government conditions the adjustment factor of benefits on individual characteristics like lifespan. Individual actuarial adjustment eliminates the unintended redistribution from short-lived to long-lived agents. The low-skilled therefore benefit from this reform at the expense of the high-skilled. Second, introducing flexible pension take-up can be Pareto improving if the actuarial adjustment of benefits occurs in a uniform way (i.e. based on the average lifespan). Uniform benefit adjustment leads to selection effects in the retirement decision which may reduce initial tax distortions. For high-skilled individuals, the uniform reward rate for later retirement is too high from an actuarial point of view because they live longer, which reduces their implicit tax and stimulates them to continue working.
If the contribution rate is sufficiently high, the low-skilled also gain because they receive higher pensions, enabled by the additional tax payments of the high-skilled. Third, combining uniform adjustment with actuarial non-neutrality to induce people to postpone retirement can further improve the reform: a Pareto improvement can be achieved at a lower contribution rate, or, for a given contribution rate, the welfare effects are more positive for all individuals. It is important to note that our benchmark PAYG scheme is of the Beveridgean type and characterized by inflexible pension take-up and lifetime annuities. Countries like the UK, the Netherlands and Denmark indeed follow this tradition. Other countries, like Germany, Italy and France, have Bismarckian pension schemes where pension benefits are linked to former contributions. In general, Bismarckian pension systems still contain intragenerational redistribution from short- to long-lived agents but have considerably less redistribution from the rich to the poor. As a consequence, with this type of pension scheme, the labour market distortions caused by incentives to retire early, and therefore also the potential welfare gains of introducing flexible retirement with an increased reward for continued working as studied in this paper, will be much smaller than with a Beveridgean pension system. This paper is related to studies that analyse the interaction between pension schemes and retirement decisions (see, e.g., Hougaard) and to a growing literature that focuses on the role of alternative pension systems when income and lifespan are correlated (see, e.g., Borck 2007; Hachon 2008). In addition, our paper is related to Fisher and Keuschnigg and to Jaag et al., who investigate the labour market impact of pension reforms towards more actuarial neutrality. Most of these studies focus on pension reforms that strengthen the link between contributions and benefits. Our study, in contrast, deals with the implementation of flexible pension take-up. This paper is most closely related to Cremer and Pestieau, who analyse the implementation of age-dependent tax rates in an economy with a redistributive PAYG pension scheme. This policy generates the same 'double dividend' as the flexibility reform of the pension scheme considered in this study: it not only generates additional revenues but also fosters redistribution from high to low incomes. An important difference with Cremer and Pestieau is that in our model, people have heterogeneous lifespans. Heterogeneous lifespans play a crucial role in our analysis, as they endogenously generate the 'right' retirement incentives (i.e. the high-skilled work longer and the low-skilled shorter) when a flexible retirement scheme with uniform actuarial adjustment is introduced. Cremer and Pestieau need age-dependent taxes to achieve a similar result. The flexible retirement reform we study differs in two important aspects from the introduction of age-dependent taxation. Firstly, in contrast to age-dependent taxation, it is directly targeted at the retirement distortion caused by the PAYG pension scheme. That is, it only affects the retirement decision (i.e. the extensive margin of labour supply), not the decision of how many hours to work (the intensive margin). 2 Secondly, the introduction of flexible pension take-up as considered in this paper is more often observed in practice than age-dependent taxation.
3 Our main contribution in this respect is that we provide a rationale for the empirically observed introduction of pension flexibility with actuarial adjustment of benefits in a Beveridgean pension scheme. In particular, we elucidate why these pension flexibility reforms are typically implemented in a uniform way instead of making the adjustment of benefits dependent on individual characteristics; we show that the only way to induce a Pareto improvement is by adjusting the benefits in a uniform way, even though individuals are heterogeneous. This paper is organized as follows. In Sect. 2, we introduce the benchmark model. This model contains a PAYG social security scheme with inflexible pension take-up and lifetime annuities. Section 3 analyses the redistribution and welfare effects of reforms aimed at increasing the flexibility of individual pension take-up. In Sect. 4, we elaborate on these flexibility reforms by introducing non-neutral actuarial adjustment of benefits. Section 5 concludes the paper. The benchmark model We consider a two-period overlapping-generations model of a small open economy populated with heterogeneous agents who differ in terms of ability and lifespan. Agents decide upon the amount of savings in the first period and upon the length of the working period in the second period of life. The individual ability level determines whether an agent supplies labour as a low-skilled worker or as a high-skilled worker. High-skilled workers earn a higher wage rate than low-skilled workers. The model includes a Beveridgean social security scheme which offers a lifetime annuity that starts paying out from the statutory retirement age until the end of life. Agents are allowed to continue their working life after the statutory retirement age or to advance retirement and stop working before the statutory retirement age. The statutory retirement age is thus related to the date at which agents start receiving their pension benefit, which is not necessarily equal to their effective retirement date. Preferences Preferences over first-period and second-period consumption are represented by the utility function U(c, x) = u(c) + λ·u(x), with u′ > 0 and u″ < 0; c is first-period consumption; x is second-period consumption; and λ ≤ 1 is the length of the second period. To keep the analysis as simple as possible, we assume that the interest rate and the discount rate are zero. 4 Second-period consumption is defined net of the disutility of labour: x = d/λ − (γ/2)·(z/λ)², where d is total consumption of goods when old, yielding a consumption stream of d/λ; z denotes the working period; and γ is the preference parameter for leisure. Following Casamatta et al. and Cremer and Pestieau, we assume a quadratic specification for the disutility of work. This specification makes the problem more tractable, but comes at the cost that there are no income effects in labour supply. Income effects in the retirement decision are, however, found to be small compared to substitution effects (see, e.g., Krueger and Pischke or French). Observe that the disutility of working is related to the fraction of the second period spent working (i.e. z/λ). This implies that for a given retirement age, an agent with a short lifespan experiences a higher disutility of work than an agent with a long lifespan, because the short-lived agent works a relatively larger share of his remaining lifetime. Innate ability and skill level There are two levels of work skill, denoted by 'low' (L) and 'high' (H).
Born low-skilled, an agent can acquire extra skills and become a high-skilled worker by investing 1 − a units of time in schooling in the first period. The rest of the time, a, is devoted to working as a high-skilled worker. The individual-specific parameter a reflects the ability of individuals to acquire high working skills. The higher a is, the more able the individual, and the less time the worker needs to invest in schooling to become high-skilled. The parameter a ranges between 0 and 1, and its cumulative distribution function is denoted by G(·), i.e. G(a) is the number of individuals with an innate ability parameter below or equal to a. We henceforth refer to an individual with an innate ability parameter of a as an a-individual. For the sake of simplicity, we normalize the total number of individuals born in each period to one, i.e. G(1) = 1. A high-skilled worker provides an effective labour supply of one unit per unit of working time, while a low-skilled worker provides only q < 1 units of effective labour for each unit of working time. This difference in effective labour supply also applies to the second period of life. Let w denote the wage rate per unit of effective labour; then the maximum amount of income agents can earn in the first period, denoted by W_y(a), is given by W_y(a) = q·w for a < a* and W_y(a) = a·w for a ≥ a*, where a* is the cut-off ability level to become high-skilled. It is assumed that a* is exogenous. 5 For the second period of life, the maximum labour income per unit of working time, W_o(a), equals W_o(a) = q·w for a < a* and W_o(a) = w for a ≥ a*. Individual lifespan Each individual lives through the entire first period of life (with a length normalized to unity), but only a fraction λ(a) ≤ 1 of the second period. We assume that λ′(a) ≥ 0: the higher the innate ability of an agent, the longer the length of life. As a consequence, our model contains a positive association between longevity and skill level. Since high-skilled agents earn a higher wage rate than low-skilled workers, the model is in line with the empirical evidence that income positively co-moves with life expectancy. 6 Whenever necessary to parameterize the function λ(a), we will use the specification λ(a) = λ̄·[1 + θ·(a − ā)], where ā ≡ ∫₀¹ a dG(a) denotes the average ability level. This simple function has the following appealing properties. First, λ̄ represents the average duration of the second phase of life. Second, there is a positive link between ability and the length of life as long as θ > 0. Indeed, Cov(λ, a) = θλ̄·Var(a) ≥ 0. Third, consistent with empirical findings, the relative differences in individual lifespans remain constant if the average lifespan increases. In absolute terms, this means that the socioeconomic gap in longevity gets larger if the average lifespan increases, i.e. λ(a = 1) − λ(a = 0) = θλ̄; the lifespan of more able individuals increases more when average longevity rises. Consumption and retirement An individual faces the intertemporal budget constraint c + d = (1 − τ)·(W_y(a) + W_o(a)·z) + P, where τ is the social security contribution (tax) rate and P denotes total pension entitlements received during old age. 7 Maximizing lifetime utility over c, d and z, subject to the lifetime budget constraint, yields the first-order conditions u′(c) = u′(x) and γ·z/λ(a) = (1 − τ)·W_o(a). The first is the standard consumption Euler equation. The second is the optimality condition regarding retirement and states that the marginal benefit of working (the net wage rate) should be equal to the marginal cost of working (the disutility of labour).
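Because the displayed formulas in this part of the text were lost in extraction, the following minimal Python sketch makes the reconstructed lifespan rule and retirement first-order condition concrete. The functional forms (multiplicative lifespan specification, quadratic disutility) and the value γ = 2 are assumptions consistent with the surrounding text, not the paper's verbatim equations; the function names are invented for illustration.

```python
# Sketch of the reconstructed lifespan rule and benchmark retirement decision.
lam_bar, theta, a_bar = 0.7, 1/6, 0.5   # average lifespan, heterogeneity, mean ability
tau, w, q, gamma, a_star = 0.3, 1.0, 0.6, 2.0, 2/3

def lifespan(a):
    """lam(a) = lam_bar * (1 + theta*(a - a_bar)): the more able live longer."""
    return lam_bar * (1 + theta * (a - a_bar))

def wage_old(a):
    """Second-period wage per unit of working time: q*w if low-skilled, w otherwise."""
    return q * w if a < a_star else w

def z_benchmark(a):
    """Retirement FOC (1 - tau)*W_o = gamma*z/lam  =>  z = (1 - tau)*W_o*lam/gamma."""
    return (1 - tau) * wage_old(a) * lifespan(a) / gamma

# Periods map to years as 60 + 30*fraction (cf. the calibration later in the text).
for a in (0.2, 0.8):
    print(f"a={a}: dies at {60 + 30*lifespan(a):.1f}, retires at {60 + 30*z_benchmark(a):.1f}")
# Within a skill group z/lam is constant, so a longer life means both later
# retirement and a longer retirement period, as the text notes.
```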
From these first-order conditions, we obtain the expressions for c, x and z in the benchmark model: z_ben = (1 − τ)·W_o(a)·λ(a)/γ and c_ben = x_ben = [(1 − τ)·(W_y(a) + W_o(a)·z_ben) + P − γ·z_ben²/(2λ(a))]/(1 + λ(a)), where P denotes total pension entitlements in the benchmark model. Note that the social security tax distorts the retirement decision: the larger the contribution rate, the earlier agents leave the labour market, i.e. the lower z, because the tax reduces the net wage (and thus the price of leisure). Notice further that our disutility specification ensures that the retirement period is proportional to longevity, i.e. z_ben/λ(a) = (1 − τ)·W_o(a)/γ is independent of lifespan. Hence, a longer lifespan is split between later retirement and a longer retirement period. Low-skilled workers retire earlier than high-skilled workers for two reasons. First, since it is assumed that q < 1, low-skilled people have a lower wage rate (substitution effect). Second, low-skilled workers will generally have a shorter lifespan, which induces them to leave the labour force earlier (disutility-of-labour effect). Social security The PAYG social security scheme is of the Beveridgean type. In the benchmark model, agents receive a flat pension benefit b per retirement period, which starts at the statutory retirement age h and lasts until the end of the individual old-age period. Total pension entitlements P are then P_ben = (λ(a) − h)·b. 8 The fact that the pension benefit is flat but social security contributions are proportional to the wage rate implies that the pension scheme redistributes income from high-income to low-income individuals. The pension scheme also redistributes from short-lived to long-lived individuals, however, as individuals receive the flat pension benefit until their death. The positive link between ability, wages and lifespan in our model then implies that there is also some redistribution from low incomes to high incomes, as the latter group typically has a longer lifespan. A feasible social security pension scheme must satisfy a resource constraint equating total benefit payments to total contributions. 9 Using the expressions above, this constraint can be written as b·∫₀¹ (λ(a) − h) dG(a) = τ·q·w·∫₀^a* (1 + z(a)) dG(a) + τ·w·∫_a*¹ (a + z(a)) dG(a). This condition states that the total amount of pension benefits paid out (left-hand side) equals the total amount of tax contributions received (right-hand side). The first term on the right-hand side is the tax payments of the low-skilled workers, and the second term is the payments of the high-skilled workers. As a measure of redistribution, we calculate the net benefit of participating in the pension scheme, the difference between total pension benefits received and tax contributions paid: NB(a) = P(a) − τ·(W_y(a) + W_o(a)·z(a)). An agent is a net beneficiary if total pension benefits received exceed contributions paid (i.e. NB > 0). Otherwise, the agent is a net contributor (i.e. NB < 0). A priori it is not immediately clear whether the low-skilled agents are the net beneficiaries of this Beveridgean pension system. On the one hand, low-skilled agents benefit from this pension scheme as they have a lower wage rate and generally retire earlier than high-skilled agents. On the other hand, low-skilled agents also die earlier than high-skilled agents, which implies that low-skilled agents are negatively affected by the pension scheme. Using the definition of net benefits, the budget constraint of the pension scheme implies ∫₀¹ NB(a) dG(a) = 0: the net benefits aggregated over all (young) individuals are equal to zero, reflecting the zero-sum-game nature of the pension scheme. 10 Pension flexibility reforms In recent years, many countries have taken measures to increase work incentives and to stimulate people voluntarily to continue working.
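A short numerical sketch of the benchmark scheme follows: it solves the reconstructed resource constraint for the flat benefit b and checks the zero-sum property of net benefits. All functional forms are the assumptions sketched above, and the grid-mean approximation of the integrals relies on the uniform ability distribution used later in the text.

```python
import numpy as np

# Benchmark Beveridgean scheme: solve for the flat benefit b and check
# that net benefits aggregate to (approximately) zero.
lam_bar, theta, a_bar = 0.7, 1/6, 0.5
tau, w, q, gamma, a_star, h = 0.3, 1.0, 0.6, 2.0, 2/3, 1/6

a = np.linspace(0.0, 1.0, 20001)                 # ability grid, G(a) = a
lam = lam_bar * (1 + theta * (a - a_bar))        # individual lifespans
W_y = np.where(a < a_star, q * w, a * w)         # first-period labour income
W_o = np.where(a < a_star, q * w, w)             # second-period wage rate
z = (1 - tau) * W_o * lam / gamma                # benchmark retirement ages

contrib = tau * (W_y + W_o * z)                  # lifetime contributions
b = contrib.mean() / (lam - h).mean()            # budget: b*E[lam - h] = E[contrib]
NB = (lam - h) * b - contrib                     # net benefit of participation

print(f"flat benefit b per retirement period: {b:.4f}")
print(f"mean net benefit (zero-sum check): {NB.mean():+.2e}")
print(f"NB for least able (a=0): {NB[0]:+.4f}; most able (a=1): {NB[-1]:+.4f}")
```

Under these illustrative parameters the least able come out as net beneficiaries and the most able as net contributors, matching the direction of redistribution the text describes, though the a priori ambiguity it notes is genuine: the lifespan channel works against the wage channel.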
In this section, we consider the welfare and redistribution effects of a pension reform that allows for a flexible starting date of social security benefits, as recently implemented in e.g. the UK, Finland and Denmark. Introducing a variable starting date for benefits may help individuals to adjust the timing of pension income according to their own preferences. We will show that flexible pensions can also help to bear the costs of ageing and to reduce unintended transfers from short-lived to long-lived individuals. In the benchmark model, we have assumed that social security benefits start at the statutory retirement date, irrespective of the individual's effective retirement date. In this section, we impose that the benefits start at the time the individual actually leaves the labour market. If a person retires later than the statutory retirement age, he receives an increment to his benefits for later retirement, and when this person retires earlier, he receives a decrement. The imposed coincidence of pension take-up and retirement is a realistic assumption because in practice flexible pension schemes often contain legal restrictions on continuing to work after a person has opted for benefits. 11 We will first discuss the actuarial adjustment of benefits in general. The specific cases of individual actuarial adjustment and uniform actuarial adjustment of benefits will be discussed in Sects. 3.1 and 3.2, respectively. Actuarial adjustment of benefits Suppose the government pays benefits p to an individual over his whole effective retirement period. Total pension entitlements are then equal to P = (λ(a) − z)·p. Pension earnings per retirement period are given by p = m(z)·b, where b is the reference flat pension benefit, independent of contributions and labour history. The factor m(z) is the actuarial adjustment factor, which determines to what extent the reference benefit b is adjusted when agents retire later or earlier than the statutory retirement age, and is given by m(z) = (λ̄ − h)/(λ̂ − z), where we impose λ̂ − z > 0 to make sure that m(z) > 0, ruling out negative pension benefits. The adjustment factor is thus the ratio between the average retirement period and the individual retirement period measured by the reference lifespan parameter λ̂, which will be specified below. At the individual level, actuarial non-neutrality arises when λ̂ differs from λ(a). The function m(z) is increasing in the individual retirement decision z; when an agent decides to continue to work after the statutory retirement age, the pension benefit in the remaining retirement periods is adjusted upwards. We consider two scenarios for the lifespan to be used in the adjustment factor, which differ with respect to the information set available to the government. In the first scenario, the government can observe individual longevity and uses adjustment factors based on individual lifespans (λ̂ = λ(a)). The government can then get rid of the adverse redistribution from short- to long-lived individuals. The implication, however, is that the high-skilled are harmed by this reform while the low-skilled gain, and a Pareto improvement is not possible. In the second scenario, we assume that the government applies a uniform actuarial adjustment factor based on the average lifespan of the population (λ̂ = λ̄). This uniform actuarial adjustment introduces selection effects in the retirement decision: long-lived agents have an incentive to postpone retirement, while short-lived agents have an incentive to advance retirement.
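To fix ideas, here is a minimal sketch of the adjustment factor as reconstructed above, m(z) = (λ̄ − h)/(λ̂ − z); the function name and sample numbers are purely illustrative.

```python
# Sketch of the actuarial adjustment factor: the ratio of the average
# retirement period (lam_bar - h) to the individual one (lam_hat - z).
lam_bar, h = 0.7, 1/6

def m(z, lam_hat):
    assert lam_hat - z > 0, "requires lam_hat - z > 0 (no negative benefits)"
    return (lam_bar - h) / (lam_hat - z)

print(m(h, lam_bar))          # -> 1.0: statutory-age retirement, average lifespan
print(m(h + 0.05, lam_bar))   # > 1: later retirement scales p = m(z)*b upward
# The two scenarios in the text differ only in the reference lifespan:
# individual adjustment uses lam_hat = lam(a), uniform adjustment lam_hat = lam_bar.
```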
We show that in this reform scenario, a Pareto improvement is possible. Individual actuarial adjustment of benefits To set the scene, we assume that the government can observe individual lifespans (or individual abilities) 12 and uses this information to assess the adjustment of benefits. This complete actuarial adjustment is a rather extreme position, as it seems quite unrealistic that the government can observe individual lifespans. Moreover, it removes the raison d'être of our pension scheme as a redistribution device. Still, we think it is useful to present this case as a benchmark to demonstrate that it is impossible to generate a Pareto improvement with complete actuarial adjustment of benefits. It illustrates that a degree of incompleteness in the actuarial adjustment is necessary to generate a Pareto improvement, as in the case with uniform actuarial adjustment presented in Sect. 3.2. Moreover, we show in the online Appendix B that the same result applies in the more realistic case where the government cannot observe the individual lifespan, but only the skill level, i.e. the education level (which in general is correlated with individual lifespans), and uses that information to adjust the benefits; in that case too, actuarial adjustment of benefits cannot result in a Pareto improvement. Actuarial adjustment factor With individual adjustment, λ̂ = λ(a), the individual-specific adjustment factor and the pension entitlements become m(z) = (λ̄ − h)/(λ(a) − z) and P = (λ(a) − z)·m(z)·b = (λ̄ − h)·b. Note that m = 1 for an agent with an average ability level (a = ā) who retires at the statutory retirement age h. For this so-called average individual, the pension benefit per retirement period is equal to the reference benefit, i.e. p = b. In case this person retires later than the statutory retirement age, m > 1, implying that the per-period benefit is adjusted upwards, i.e. p > b. On the other hand, when the person retires earlier than the statutory retirement age, we have m < 1 and p < b. The retirement decision is actuarially neutral because the effective retirement age has no effect on total pension entitlements P, i.e. ∂P/∂z = 0. Agents cannot increase their total pension entitlements by postponing or advancing retirement. Any individual, irrespective of lifespan, income or skill level, receives exactly the same amount of lifetime pension benefits. Consumption and welfare effects The retirement decisions are the same as in the benchmark social security model (z_ben = z_ind). 13 The aggregate budget constraint of the pension contract also does not change, implying that the pension benefit per retirement period stays the same as well (b_ben = b_ind). Only consumption changes. With individual actuarial adjustment, the redistribution in the PAYG scheme related to differences in lifespan (i.e. from short-lived to long-lived agents) that is present in the benchmark model is removed, but the retirement decision is not changed. As a result, lifetime income, and therefore consumption, is higher for short-lived individuals (i.e. with a lifespan λ(a) below the average lifespan λ̄) and lower for long-lived individuals (λ(a) > λ̄). From this, we can immediately infer the following result: Proposition 1 Introducing retirement flexibility using individual actuarial adjustment of pension benefits implies that the welfare of the short-lived agents (λ(a) < λ̄) increases while the welfare of the long-lived agents (λ(a) > λ̄) decreases. This reform therefore cannot be a Pareto improvement.
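The actuarial neutrality claimed for individual adjustment can be checked in two lines: total entitlements P(z) = (λ(a) − z)·m(z)·b collapse to the constant (λ̄ − h)·b. A tiny sketch, with an arbitrary illustrative benefit level:

```python
# Under individual adjustment (lam_hat = lam(a)), P is independent of z,
# so the retirement margin is undistorted (the efficiency side of Prop. 1).
lam_bar, h, b = 0.7, 1/6, 0.25   # b is an arbitrary illustrative benefit
lam_a = 0.75                      # one agent's (observed) lifespan

def P_individual(z):
    m = (lam_bar - h) / (lam_a - z)   # individual actuarial adjustment
    return (lam_a - z) * m * b        # = (lam_bar - h) * b for every z

for z in (0.10, 0.17, 0.25):
    print(f"z={z:.2f}: P={P_individual(z):.6f}")   # identical values: dP/dz = 0
```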
As the retirement decisions are the same as in the benchmark model, introducing flexible retirement with individual actuarial adjustment does not generate an efficiency gain, only pure redistribution. As a consequence, an improvement according to the Kaldor-Hicks criterion is not possible either. Uniform actuarial adjustment of benefits Individual lifespans are difficult to observe in practice. Therefore, real-world pension schemes with a flexible starting date for benefits always rely on uniform actuarial adjustment factors based on some average life-expectancy index. In this section, we show that this uniform adjustment of benefits can increase the welfare of all individuals, i.e. induce a Pareto improvement, although individuals are heterogeneous. Actuarial adjustment factor With uniform adjustment, the reference lifespan index is the same for each agent, λ̂ = λ̄, so the adjustment factor and pension entitlements are m(z) = (λ̄ − h)/(λ̄ − z) and P = (λ(a) − z)·m(z)·b = (λ̄ − h)·b·(λ(a) − z)/(λ̄ − z). The actuarial adjustment factor m equals one for each individual who retires at the statutory retirement age, i.e. if z = h, so that p = b. Agents who retire later than h receive a higher benefit, p > b, and agents who retire earlier receive less, p < b. We observe that, ceteris paribus, total pension entitlements of agents with long lifespans are higher than the entitlements of agents with short lifespans. This redistribution implies that the pension scheme is not actuarially neutral at the individual level. As the amount of pension entitlements depends on the individual retirement age, uniform actuarial adjustment introduces selection effects in the retirement decision. To show this, we derive ∂P/∂z = (λ̄ − h)·b·(λ(a) − λ̄)/(λ̄ − z)². For agents with above-average lifespans (λ(a) > λ̄), ∂P/∂z > 0, implying that these agents have an incentive to postpone retirement, as this will increase their lifetime pension income. From an actuarial point of view, the conversion factor of these agents is too high. For short-lived people (with λ(a) < λ̄) it is just the opposite; for these agents, the conversion factor of continued activity is too low, which stimulates early retirement. For these people, postponing retirement would simply mean that total pension entitlements decrease (∂P/∂z < 0). Consumption and retirement With flexible pension take-up and uniform actuarial adjustment, the lifetime budget constraint of the a-individual is unchanged, but P is now defined as above. Only the first-order condition regarding retirement changes: γ·z/λ(a) = (1 − τ)·W_o(a) + ∂P/∂z. From this condition, consumption and retirement follow. 14 The retirement condition shows that there is an extra distortion in retirement behaviour. As before, the contribution rate induces early retirement (through its impact on z_ben). The redistribution effects, represented by the term ∂P/∂z, imply an additional distortion in the retirement decision. This redistribution distortion can either stimulate or depress retirement, depending on the individual lifespan. For individuals with below-average lifespans (λ(a) < λ̄), ∂P/∂z < 0, which implies that these people retire earlier as a result of uniform actuarial adjustment. If individuals have above-average lifespans (λ(a) > λ̄), then ∂P/∂z > 0, and these people will postpone retirement. Consumption can be either higher or lower than in the benchmark case. The utility loss resulting from the redistribution distortion in the retirement decision is reflected in a negative term in the consumption expression. Of course, flexibility can also induce a utility gain because an agent can choose the retirement age which gives him the highest entitlements.
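The selection effect follows directly from the sign of the reconstructed derivative, which a few lines of Python make explicit (benefit level and lifespans are illustrative):

```python
# Under uniform adjustment, dP/dz = (lam_bar - h)*b*(lam(a) - lam_bar)/(lam_bar - z)**2,
# so the incentive to postpone or advance retirement follows the individual's
# lifespan relative to the population average.
lam_bar, h, b = 0.7, 1/6, 0.25

def dP_dz(lam_a, z):
    return (lam_bar - h) * b * (lam_a - lam_bar) / (lam_bar - z) ** 2

z = 0.2
print(f"short-lived (lam=0.65): dP/dz = {dP_dz(0.65, z):+.4f}  -> retire earlier")
print(f"average     (lam=0.70): dP/dz = {dP_dz(0.70, z):+.4f}  -> unaffected")
print(f"long-lived  (lam=0.76): dP/dz = {dP_dz(0.76, z):+.4f}  -> retire later")
```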
This potential gain is captured by the term P_uni − P_ben. Note that total pension benefits are generally not the same in the benchmark scheme and in the flexibility reform with uniform adjustment. 15 Welfare effects The welfare effects are not trivial because, compared to the benchmark model, uniform adjustment introduces another distortion in the retirement decision which can work in the opposite direction of the existing distortion related to the contribution tax. We will show that under certain conditions, this reform can lead to a Pareto improvement. Suppose that the reform takes place unexpectedly. First we analyse how this reform affects the utility of the current old generation. In the benchmark, second-period consumption is financed out of savings s = (1 − τ)·W_y − c together with net second-period labour and pension income. After the reform, the first-order condition for the retirement decision of the old generation is the one derived above, from which old-age consumption after the reform follows. The old generation is not worse off after the reform when u(x_uni) − u(x_ben) ≥ 0. The current young generation and future generations are better off if U(c_uni, x_uni) ≥ U(c_ben, x_ben) for each ability level, which implies c_uni ≥ c_ben. The condition for young and future generations is exactly the same as that for the current old generation. This is due to the fact that there are no income effects in the retirement decision. Consequently, for a given ability level, the transition generation and all future young generations retire at the same age and thus have the same amount of lifetime income. Hence, when this condition is satisfied and is strictly positive for at least one a-individual, the reform is Pareto improving. To analyse the possibility of a Pareto improvement, we make the following assumption: Assumption 1 The statutory retirement age is set equal to the retirement age of the individual with the average ability level, i.e. h = z(ā). This assumption implies that individuals with a below-average lifespan have an incentive to advance retirement, as from an actuarial point of view the adjustment factor for retirement postponement is too low for them. Therefore, for these people, retiring after the statutory retirement age is not in their interest, ceteris paribus, as it reduces pension entitlements compared to the benchmark. For individuals with an above-average lifespan, exactly the opposite holds. These individuals have an incentive to postpone retirement because the actuarial adjustment factor is too high for them. Hence, retiring before the statutory retirement age is not in their interest. Suppose Assumption 1 is satisfied; we can then derive the following result: Proposition 2 A pension reform from inflexible Beveridgean pensions towards flexible Beveridgean pensions with the same tax rate and uniform actuarial adjustment of pension benefits is a Pareto improvement if and only if τ ≥ τ*, with τ* the critical contribution rate. The intuition for this result is as follows. High-skilled workers certainly gain from this reform because the adjustment factor is too high for them from an actuarial perspective, because they live longer. This leads to a lower implicit tax on continued activity and thus later retirement. The welfare of low-skilled workers in principle declines because they are confronted with higher implicit taxation, as their actuarial adjustment factor is too low. The only way to compensate for this loss is to give the low-skilled more social security benefits.
If the contribution tax rate is sufficiently high, it is indeed possible that the continued activity of the more able generates enough resources to compensate the less able, so that ultimately the welfare of all agents is higher. Instead of keeping the tax rate constant as assumed in Proposition 2 and making everyone better off, the government may also use the additional resources generated by reducing the distortion of the retirement decision to lower the tax rate without making anyone worse off. The crucial factor allowing for this is that the reform generates a double dividend: it not only generates additional revenues but also fosters redistribution from high to low incomes. Similar to Cremer and Pestieau, this 'double dividend' hinges on two conditions. First, the retirement decision in the benchmark pension scheme needs to have a downward distortion, i.e. retirement is too early, so that the removal of this distortion brings additional resources. Second, the pension contract needs to be redistributive from rich to poor individuals, so that most of the cost of the reform is borne by the high-income people. 16 In Fig. 1, we show a numerical illustration of the welfare (left graph) and redistribution effects (right graph) of a switch to a flexible scheme based on uniform actuarial adjustment. The underlying parameterization is as follows. The tax rate is τ = 0.3, 17 16 Instead of assuming a fixed tax rate as in Proposition 2 or an ad hoc decrease in the tax rate, one could also assume a government that optimally sets the tax rate so as to maximize a social welfare function weighing the welfare of the various groups in society. This would not change our result: a Pareto improvement results if the initial optimal tax rate is sufficiently high. The intuition for this is that, as stated in the proposition, the introduction of flexible retirement with uniform actuarial adjustment at a given tax rate leads to a welfare gain that allows for a Pareto improvement. If an optimizing government adjusts the tax rate jointly with the introduction of flexible retirement, this will affect the allocation of this welfare gain, but a welfare-maximizing government will always allocate the welfare gain in such a way that no group is worse off compared to the initial situation. 17 This might seem a rather high number for a tax rate primarily used for old-age pensions. However, in reality redistribution from high to low incomes also occurs in other parts of the economy, like the tax and public health care system. What is crucial for our results is not so much the exact level of the tax for the pension scheme as such, but the distortion determined by the marginal tax rate that results from all redistributive taxes together. The contribution rate for the Dutch Beveridgean pension scheme (the AOW) is currently 17.9 %. And if we take the contribution rates for the other national insurance schemes (mainly insurance against special health care expenditures) into account, the contribution rate is 31.15 %. Furthermore, w = 1 and γ = 2. We further assume h = 1/6 and λ̄ = 0.7, which implies an official retirement age of 65 and an average lifespan of 81 years. 18 The heterogeneity parameter θ is calibrated such that the difference between the lifespans of high-skilled and low-skilled agents is at most 3.5 years, which is consistent with recent Dutch estimates; this gives θ = 1/6. We interpret the high skill level as the highest attainable education levels in the Netherlands (i.e.
higher vocational training and university) and the low skill level as the collective term for all remaining education levels. According to recent figures of Statistics Netherlands, about two-thirds of the Dutch population is low-skilled (a* = 2/3) and these people earn about 40 % less than high-skilled agents (q = 0.6). Finally, we assume that ability a follows a uniform distribution, 19 i.e. G(a) = a, and that the utility function is logarithmic, i.e. u(·) = ln(·). Figure 1a shows that the welfare effects of introducing flexible retirement with uniform adjustment are positive for all high-ability agents. These agents benefit from a lower implicit tax on continued activity due to the attractive actuarial adjustment factor and therefore choose to work longer. With these parameter settings, however, the additional tax contributions are not sufficient to compensate all low-skilled agents for the higher implicit tax they are confronted with, although most of them experience an increase in the net benefit from the scheme (see Fig. 1b). To achieve a Pareto improvement, the contribution rate needs to be at least 40 %, i.e. τ* = 0.4. There are good reasons to argue that in practice the critical tax rate is lower than presumed in our analysis. First, as explained in Footnote 17, income redistribution from rich to poor runs through more channels than the pension scheme, like the tax system or public health care. Hence, when high-skilled agents are stimulated to work longer with flexible pension take-up, the low-skilled may also be compensated through these other types of redistribution. Second, in reality the contribution tax is added to other sources of distortionary taxation. As the deadweight loss is roughly quadratic in the total tax rate, the marginal welfare improvement of introducing flexible retirement (and lowering implicit taxation) might be larger than our analysis suggests. In the next section, we show that a reduction in the critical tax rate can also be obtained by reformulating the pension reform to some extent, i.e. by setting the reward rate for retirement postponement above the actuarially neutral level. Footnote 17 continued: Moreover, the marginal tax rate is about 50 % for most Dutch citizens (see CPB 2012). Therefore we think that assuming a tax rate of 30 % is not so unrealistic. 18 We assume that lifetime consists of 30 years of childhood that are not accounted for, 30 years of full potential working time (which can partly be used for tertiary education) and a last period of 30 years. The official retirement age is therefore 60 + 30h and the average lifespan is 60 + 30λ̄. The average lifespan at birth of 81 years is taken from the online population projection 2012-2060 of Statistics Netherlands (statline.cbs.nl). 19 The assumption of a uniform distribution used in the example is not crucial for our results. Other distributions will lead to the same results, provided that the mass of high-skilled individuals in the distribution is sufficiently large. This is important, as the extra tax revenues generated by this group should be sufficiently large to compensate the low-skilled.
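The flavour of the Figure 1 experiment can be reproduced with a short simulation. Only the calibration values (τ = 0.3, w = 1, γ = 2, h = 1/6, λ̄ = 0.7, θ = 1/6, a* = 2/3, q = 0.6, uniform G, log utility) come from the text; the functional forms and the damped fixed-point solver are the illustrative reconstructions used earlier, so exact magnitudes should not be read as the paper's.

```python
import numpy as np

# Benchmark vs. uniform-adjustment flexibility reform, sketched under the
# reconstructed model (log utility, quadratic disutility, c = x from Euler).
tau, w, q, gamma, a_star, h = 0.3, 1.0, 0.6, 2.0, 2/3, 1/6
lam_bar, theta, a_bar = 0.7, 1/6, 0.5

a = np.linspace(0.0, 1.0, 2001)
lam = lam_bar * (1 + theta * (a - a_bar))
W_y = np.where(a < a_star, q * w, a * w)
W_o = np.where(a < a_star, q * w, w)

def utility(z, P):
    # c = x; lifetime resources net of the (quadratic) labour disutility
    c = ((1 - tau) * (W_y + W_o * z) + P - gamma * z**2 / (2 * lam)) / (1 + lam)
    return (1 + lam) * np.log(c)

# benchmark: inflexible take-up, flat benefit from age h onwards
z_ben = (1 - tau) * W_o * lam / gamma
b_ben = (tau * (W_y + W_o * z_ben)).mean() / (lam - h).mean()
U_ben = utility(z_ben, (lam - h) * b_ben)

# uniform adjustment: solve (z, b) jointly by damped fixed-point iteration
b, z = b_ben, z_ben.copy()
for _ in range(500):
    dPdz = (lam_bar - h) * b * (lam - lam_bar) / (lam_bar - z) ** 2
    z = 0.9 * z + 0.1 * lam * ((1 - tau) * W_o + dPdz) / gamma
    P = (lam - z) * (lam_bar - h) * b / (lam_bar - z)
    b *= (tau * (W_y + W_o * z)).mean() / P.mean()   # rebalance the budget
U_uni = utility(z, (lam - z) * (lam_bar - h) * b / (lam_bar - z))

dU = U_uni - U_ben
print(f"welfare change, low-skilled : {dU[a < a_star].min():+.4f} .. {dU[a < a_star].max():+.4f}")
print(f"welfare change, high-skilled: {dU[a >= a_star].min():+.4f} .. {dU[a >= a_star].max():+.4f}")
```

Raising tau in this sketch and rerunning shows the compensation channel at work: higher contributions by the longer-working high-skilled translate into a higher b for everyone, which is the mechanism behind the 40 % threshold reported above.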
Introducing actuarial non-neutrality In recent years, an increasing number of countries have introduced penalties and rewards for earlier and later retirement. To stimulate work continuation, the penalty rate is typically not as high as the reward rate, i.e. the adjustment is asymmetric. In the USA, for example, for each year of retirement before the statutory retirement age, the annual benefit is reduced by 6.75 %, while the actuarial increment for those retiring after the statutory retirement age amounts to 8 %. In Japan, the difference is even larger: the penalty rate for early retirement is 6 % per year, while the reward rate for later retirement is 8.4 % (OECD 2011). In this final section, we therefore consider a pension flexibility reform where pension benefits are adjusted in an actuarially non-neutral way to induce people to postpone retirement. We show that under such a reform, a Pareto improvement can be achieved at a lower contribution rate, or, for a given contribution rate, the reform leads to more positive welfare effects for all individuals. Actuarial adjustment factor To make our point as clear as possible, we abstract from lifespan heterogeneity in the analytical analysis. Hence, each agent, irrespective of his ability level, lives a fraction λ ≤ 1 of the second period. In the simulation graphs, however, we have heterogeneous lifespans. The actuarial adjustment factor is now augmented with a parameter α that governs the degree of actuarial non-neutrality of the adjustment (this is also shown in Fig. 2). In case α = 1, the adjustment is completely actuarially neutral with respect to the retirement decision (see Sect. 3.1). For α > 1, the adjustment factor is higher than the actuarially neutral level if agents retire later than the statutory retirement age (z > h). Conversely, the adjustment factor is lower than the actuarially neutral level if agents retire earlier than the statutory retirement age (z < h). In other words, this specification rewards delaying retirement and discourages early retirement as long as α > 1. Given this adjustment factor, the pension entitlements P follow, and taking the derivative of P with respect to z shows that if α > 1 then ∂P/∂z > 0, i.e. introducing actuarial non-neutrality gives all agents an incentive to continue working, as this will increase pension entitlements. Consumption and retirement The consumption and retirement decisions follow from the same first-order conditions as before, 20 with P and m as defined in this section. Taking the derivative of the retirement choice with respect to α (evaluated at α = 1) shows that an increase in α, starting from actuarial neutrality, leads to later retirement. The introduction of this kind of non-neutrality in the retirement decision can undo (at least to some extent) the distortionary effect of the social security tax. This result is comparable to the situation in the flexibility reform with uniform actuarial adjustment and heterogeneous lifespans. With uniform actuarial adjustment, however, the pension scheme is still actuarially neutral on average: high-skilled workers (with a long lifespan) receive a subsidy on continued work, whereas low-skilled workers (with a short lifespan) face a tax on delaying retirement. The current reform is different because now the pension scheme subsidizes work continuation for all agents, irrespective of skill level.
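The paper's exact non-neutral adjustment formula is not recoverable from the source, but one specification with the stated properties is to raise the neutral factor to a power α, so that α = 1 is neutral while α > 1 rewards late and penalizes early retirement. The following sketch uses that assumed form purely to illustrate the sign of ∂P/∂z:

```python
import numpy as np

# One possible non-neutral adjustment consistent with the text's description:
# m = ((lam - h)/(lam - z))**alpha. With alpha = 1 total entitlements P are
# flat in z; with alpha > 1 they rise in z for every agent.
lam, h, b = 0.7, 1/6, 0.25   # homogeneous lifespan, as in this section

def P(z, alpha):
    m = ((lam - h) / (lam - z)) ** alpha
    return (lam - z) * m * b

zs = np.array([0.12, h, 0.22])
for alpha in (1.0, 1.2):
    print(f"alpha={alpha}: P(z) = {np.round(P(zs, alpha), 4)}")
# alpha = 1.0 reproduces actuarial neutrality (constant P); alpha = 1.2 makes
# postponing retirement raise entitlements for all agents, regardless of skill.
```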
Welfare effects Introducing actuarial non-neutrality not only stimulates labour supply, it also leads to a Pareto improvement if the tax rate is sufficiently high. Proposition 3 With a given tax rate, introducing actuarial non-neutrality aimed at stimulating work effort makes high-skilled workers strictly better off. In addition, the reform is Pareto improving if and only if α > ᾱ, where ᾱ is defined by an implicit equation that has a unique solution. Proof: see Appendix A.2. The intuition for this result is similar to that for the reform with uniform actuarial adjustment (see Sect. 3.2). The government can apply non-neutral actuarial conversion of benefits for late retirement as an instrument to increase the total efficiency of the economy. This subsidy reduces the existing labour supply distortion on the retirement decision related to the contribution tax rate. With actuarial non-neutrality, however, the reward rate for retirement postponement is relatively more attractive for agents who retire later (i.e. the high-skilled), as can also be seen from Fig. 2. Therefore, to ensure that the welfare of the low-skilled also improves, the contribution rate needs to be sufficiently high so that the additional tax payments of the high-skilled lead to higher pension benefits. Figure 3 compares the welfare effects of a uniform adjustment under actuarial neutrality (dashed line) and actuarial non-neutrality (solid line). Contrary to the analytical exposition discussed above, this graph is based on heterogeneous lifespans (see the lifespan specification above). All parameter values are the same as those used in the previous graphs. As we have shown before, a contribution rate of 30 % is not sufficient to ensure that an actuarially neutral, uniform adjustment of benefits is Pareto improving. Figure 3 shows, however, that when a uniform adjustment is combined with actuarial non-neutrality, the reform has strictly positive welfare effects for all individuals at a contribution rate of 30 %, i.e. the reform is Pareto improving. This implies that by introducing actuarial non-neutrality in the pension scheme, it is possible to achieve a Pareto improvement at a lower contribution tax rate. The reason for this result is that an actuarially non-neutral uniform adjustment gives the high-skilled stronger incentives to retire later, so that their labour supply in the second period is higher than under the actuarially neutral reform; this generates more resources to compensate the low-skilled. Conclusion In this paper, we have studied the intragenerational redistribution and welfare effects of a pension reform that introduces a flexible take-up of pension benefits. To analyse the economic implications of such a pension reform, we have developed a stylized two-period overlapping-generations model populated with heterogeneous agents who differ in ability and lifespan. The model includes a Beveridgean social security scheme with lifetime annuities. In this way, we take into account the empirically most important channels of intragenerational redistribution: income redistribution from rich to poor people and lifespan redistribution from short-lived to long-lived agents. Our results suggest that introducing flexible pension take-up with uniform adjustment can induce a Pareto improvement. This reform can collect additional resources without diminishing the welfare of low-skilled agents while increasing that of high-skilled agents. In that way, it can also help to bear the costs of ageing in a Beveridgean pension scheme. The selection effects of uniform actuarial adjustment increase the implicit tax of the low-skilled but decrease the implicit tax of the high-skilled, who in turn decide to work longer and therefore pay more pension contributions. A necessary condition for such a Pareto improvement is that the contribution tax is sufficiently high, so that the continued activity of the high-skilled generates enough tax revenues to compensate the low-skilled with higher benefits.
Increasing the reward and penalty rates for later and earlier retirement in an actuarially non-neutral way can help to reduce this critical tax rate. This policy reduces the implicit tax not only of the high-skilled agents but also of the low-skilled, implying that the less-skilled agents need less compensation through the redistributive pension scheme. In real-world pension schemes that have actuarial adjustment of pension entitlements, this adjustment is indeed independent of individual characteristics like life expectancy or skill level. The results of this paper give a rationale for this kind of uniform flexibility reform. In recent years, penalties and rewards for earlier or later retirement have increased in a number of countries (OECD 2011). However, in most countries, the implemented reductions in early pension benefits still do not fully correspond to the lower amount of contributions paid by the worker and to the increase in the period over which the worker will receive pension payments (Queisser and Whitehouse 2006). This implies that there is still room to improve pension systems by moving in the direction of complete actuarial neutrality, or even beyond that level, as our analysis of actuarially non-neutral adjustment suggests. Other important elements to which we have not paid attention, but that might be important when analysing pension flexibility, are the role of income effects in the retirement decision and social norms. Especially in the short run, flexibility in the pension age could lead to only small changes in retirement behaviour if agents are used to retiring at some socially accepted retirement age. In the long run, however, norms may change and the effects described in this paper may still apply. To what extent these kinds of issues would affect our main results is left for future research. Our paper, however, provides a rationale for why countries with Beveridgean pension schemes should use uniform rules for the adjustment of pension benefits when they introduce flexible pension take-up, even though people have different skill levels and life expectancies. It is sometimes argued that it would be preferable to base the actuarial adjustment factor on individual life expectancy or skill level. This paper shows that even in a very simple setting, the latter type of pension flexibility reform cannot be Pareto improving, as some of the redistribution in the initial pension scheme (from the short- to the long-lived) is removed. It is therefore important to take all types of redistribution in the initial pension scheme into account when discussing the implementation of flexible pension take-up. Applying uniform actuarial adjustment, possibly combined with non-neutral elements to increase the incentives to postpone retirement, could increase the economic efficiency of the pension system. In that way, this reform generates extra resources to cope with the costs of ageing and makes some people better off while not hurting others. Proof of Proposition 3 Proof With actuarial non-neutrality, the Pareto-improving condition is that the welfare change ∆(a) is non-negative for every a-individual, 21 where for at least one a-individual this inequality should hold strictly. Suppose we start from a situation of actuarial neutrality, α = 1, which means ∆ = 0. Then we evaluate the derivative of ∆ with respect to α in the initial position α = 1. To prove that the reform is Pareto improving, we have to show that ∂∆/∂α ≥ 0, where for at least one individual this inequality holds strictly.
Write the budget constraint of the pension scheme in the usual way, where X is as defined above and ᾱ solves the resulting implicit equation. To prove that ᾱ is a unique solution, we have to show that the derivative ∂∆/∂α is monotonically increasing in α at α = 1. Rewrite the derivative as ∂∆/∂α = X·A, with A the remaining factor. Since X > 0, the necessary and sufficient condition for ∂∆/∂α ≥ 0 is A ≥ 0. This implies that ᾱ is a unique solution if and only if A is monotonically increasing in α. Taking the derivative of A with respect to α confirms, after some algebraic manipulations, that this is the case. 22 This completes the proof. |