The Background of Social Reality: Selected Contributions from the Inaugural Meeting of ENSO

While (i) is a case of conceptual mistake, (ii) is just a case of absence of application. The crucial difference lies in the fact that while in the former case the concept in question is relevant to the evaluation of the action, in the latter it is not. Following our previous constraints, to account for the normative constraint specified in (1) and (2) above, it is necessary to be able to account for the abilities that underlie the attribution to a subject that she is committing a mistake in the use of a concept (a conceptual mistake), and to distinguish that case from a case in which the subject is simply not applying the concept (absence of application), i.e., a case in which her behavior is not evaluable according to that concept at all.

How should we then understand self-correction in the application of concepts? Self-correction in the relevant sense seems to involve three dimensions of performance: (a) the application of concepts, (b) the evaluation of those applications, and (c) the modification of (a) according to the results of (b). As will be shown in the following sections, both causalist and interpretationist accounts of conceptual abilities fail to account for the distinction between cases of misapplication (conceptual mistakes) and cases of absence of application, and the consequence of this failure is their inability to meet NC.

The way in which competence regarding a specific concept X can be defined in causal terms is the following: John is competent with respect to concept X iff, given certain conditions C, John is disposed to apply X to y iff X(y) is true [6]. In this framework, conceptual mistakes are modeled in terms of the failure of a mechanism: conditions C are not given. The reason for this failure might be internal to the mechanism, that is, the mechanism is malfunctioning, or it might be the absence of one of the enabling conditions required for the mechanism to work. I claim that when conceptual competence is understood in this way, there is no non-question-begging way of distinguishing between conceptual mistakes and absence of application.
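To make the shape of this dispositional definition explicit, it can be rendered schematically as follows. This is a minimal sketch; the symbols, including the predicate "Disp" for "is disposed to", are my own shorthand rather than the author's notation.

```latex
% Causal-dispositional definition of competence with a concept X (schematic sketch)
% C : the enabling conditions
% Disp(S, apply(X, y)) : S is disposed to apply X to y
\mathrm{Competent}(S, X) \;\leftrightarrow\;
  \forall y \, \Big[ \, C \;\rightarrow\; \big( \mathrm{Disp}\big(S, \mathrm{apply}(X, y)\big) \leftrightarrow X(y) \big) \, \Big]
```

On this rendering, a "conceptual mistake" can only be modeled as a failure of the antecedent C, which is exactly the point pressed in what follows.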

If an account of conceptual capacities could not distinguish between the two cases, it would fail to explain what it is for John to have any conceptual ability and to distinguish this from the case where this ability is merely absent. The causalist model fails to provide a plausible distinction between conceptual mistakes and absence of application for at least two reasons. First, in this model it is not possible to distinguish between cases of conceptual mistakes and cases of lack of application. As Boghossian [7] famously pointed out, the same reactions can be described using different concepts.

This further requires the model to distinguish different responses as appropriate or not in specific contexts. In order to identify the proper set of responses, we need to distinguish the good cases from the bad ones, conceiving the latter as cases in which conditions C fail; in the example at issue, conditions C would include John's cognitive mechanisms working fine, including the normal functioning of attention, memory, etc. But this means that we have to presuppose the content of the concept without accounting for it in terms of reactions, opening an explanatory gap. Importantly, there is no distinction between absence of application and misapplication that does not depend on stipulating the concept at issue and thus presupposing the pertinence of that very distinction.

It is important to bear in mind that this problem arises independently of whether the account takes these processes to occur at the subpersonal level or at the personal one. In either case, there is no non-question-begging way of establishing that the behavior accords with one concept, and thus is a case of conceptual mistake rather than mere absence of application of that concept [8].

Thus, the proposal fails to meet NC [9]. The second reason why this view fails to make the distinction between misapplication and absence of application is that it does not give a proper account of self-correction. According to this kind of theory, the source of error is a failure in conditions C, but this kind of error is independent of the subject's being able to identify it in practice.

The mistakes are of such a nature that the subject may be unable to identify them (direct access to them could even be impossible for the subject) and to modify his use of concepts according to the identification of the error and its sources. In fact, conditions C are not conceptually linked to the concepts the subject is applying or trying to learn.

But self-correction seems to be a key ability in accounting for the process of learning new conceptual contents through training. Can this kind of theory account for the connection between the identification of mistakes and conceptual abilities that seems constitutive of the process of learning conceptual contents and the linguistic terms associated with them? As shown before, it cannot. However, someone may hold that there are second-order dispositions to evaluate reactions, corresponding to component (b) of self-correction described above.

The idea would then be that by positing them it is possible to account for self-correction and still defend a purely dispositional account of conceptual competence. But a similar problem arises: if those second-order dispositions were fallible and learnt, they would require dispositions of a higher order in order to be learnt.

This involves a vicious regress. If, on the contrary, those dispositions are not fallible and learnt, they are some kind of sui generis dispositions. This leaves their nature unexplained: are they to be conceived in causal terms? It seems that they must not be, in order to avoid the previous difficulties, but then another notion of conceptual ability must do the work here.

This leads to an explanatory gap. Thus, the theory fails to account for NC (2), since it cannot explain the learning and acquisition of conceptual contents in a naturalistic way (it fails by opening an explanatory gap when introducing the sui generis dispositions involved in self-correction). And it also fails to account for NC (1), since its inability to account for self-correction shows a corresponding failure to draw crucial distinctions between the capabilities of artifacts and other sorts of entities, some of them capable of self-correcting in ways that others are not.

There is, according to this model, only one basic kind of mechanism that explains all of these prima facie different phenomena. But then the proposal fails to explain the nature and complexity of different abilities in terms of more basic or previous ones, and so fails to draw the relevant distinctions between abilities and capabilities of different complexity on a natural and gradual scale. I have presented three dimensions that are involved in self-correction: (a) the application of concepts, (b) the evaluation of those applications, and (c) the modification of (a) according to the results of (b). If causalism thinks of level (b) by analogy with (a) and fails to account for (c), interpretationism stresses level (b).

Briefly sketched, according to this model, to be a conceptual creature is to be a language user. To interpret someone is to attribute meaning to their conduct, conceiving it as oriented by wishes and beliefs in the context of a commonly perceived world. In sum, to interpret someone is to implicitly construct a theory about the content of their beliefs, wishes and the like, in the context of a world in which both the interpreter and the interpretee are commonly situated. The emphasis in this view thus lies on component (b), the evaluation of the actions of a subject according to concepts.

Accordingly, the model defines conceptual competence in interpretational terms. The attribution of error, in the sense of conceptual mistakes, is captured as a difference between the perspective of the interpreter and the perspective of the interpretee regarding a particular case of application. This may happen in a number of ways. It might be the case that the subject makes a perceptual judgment about something that is openly accessible to both the interpreter and the speaker, or it might be that the claim involves a judgment that is not immediately connected to the perceptual evidence commonly available to both speaker and interpreter.

While the former constitutes the beginning of the interpretational process, the latter depends on previous judgments concerning what the speaker is taken to believe, intend and desire. On the basis of the general theory about what the speaker is trying to convey at that particular moment, the interpreter can then attribute local mistakes to what is asserted. The difference between the two cases is that, in order to make sense of what is being asserted, the interpreter starts by attributing to the speaker that he is related to the same environment that she is and, by that token, that he perceives that environment and holds true beliefs about it that are the same as those she herself holds.

It is only with specific evidence to the contrary that the interpreter will withdraw this particular attribution and then attribute to the speaker an error of judgment regarding what both are commonly perceiving. Error will then be explained as a matter of difference between what the interpreter takes to be the case and what she can make sense of the speaker trying to convey, taking into account all the other evidence she has about his beliefs, desires, and the like.

The cost of attributing error to commonly held judgments is so vast that rationality constraints on the interpretation dictate attributing a difference between her perspective and that of the speaker regarding some other judgment. This is all left in the hands of the interpreter, who can then make sense of the behavior in different ways, all compatible with the evidence. The rule is always to attribute the least possible mistake, which is just the content of the principle of charity that governs interpretation.

This model turns out to be problematic when trying to distinguish between conceptual mistakes and absence of application, and hence when accounting for conceptual abilities. There are at least three difficulties worth mentioning. To begin with, this theoretical reconstruction does not distinguish between conceptual mistakes and absence of application. To be an interpreter is to have the concept of belief: to be able to interact with somebody else is to be able to attribute beliefs to him.

The concept of belief in turn presupposes having the concept of error, of falsehood. But the theory does not explain how this concept is acquired; rather, it presupposes the need for such a tool, and thus produces an explanatory gap in its account of the mastery of conceptual abilities. Moreover, the acquisition of thought itself is left unaccounted for. The model then fails to meet both NC (1) and NC (2).

In sum, the model fails to meet NC (2), since it cannot explain the learning of conceptual abilities as a gradual process.


This implies an explanatory gap regarding the acquisition of language, and in particular regarding the acquisition of the concept of error as applied to oneself and to others. For these reasons, the model cannot account either for continuity in nature, i.e., for the continuity between the capacities of non-linguistic creatures and our own. This leaves unexplained the nature of their capacities and the connection between their ways of being in the world and ours. The above considerations have shown that both causalist and interpretationist accounts fail to account for component (b) of self-correction, i.e., the evaluation of applications of concepts. Thus, in order to overcome their difficulties, we need to offer an explanation of level (b) of the self-correction dimensions that (i) is not reduced to mere causal reactions, as in the case of causalist models.

The strategy is to include an evaluative component that is not conceived in terms of level (a). Second, the account of (b) must (ii) not presuppose articulated contentful thought, as is the case in interpretationist accounts. As in the previous cases, the account of (b) needs to (iii) have the relevant consequences for (c). Before presenting my strategy, there are some distinctions and clarifications worth making. The aim of giving an account of conceptual competence is a highly ambitious one, and there are of course a number of different proposals, all of which would deserve to be taken seriously into account when analyzing what the correct answer to NC might be.

One issue that is of particular relevance in this domain is the distinction between conceptual and non-conceptual content. As is well known, many current theories of conceptual competence attempt to address what I am calling NC precisely by drawing that distinction. Nevertheless, I neither address this specific topic in this paper nor explore alternative attempts to bridge the gap between the conceptual and non-conceptual domains. I can dispense with doing so, since what I will be arguing for is neutral with respect to those further worries.

It should be noted that my claim is not that all cognition should be conceptual but rather that to account for conceptual abilities while meeting NC, the account needs to meet the normativity constraint. So my point is the following: no matter where you draw the line between the conceptual and the non-conceptual, meeting NC requires giving an account of some sort of basic cognition that cannot be reduced to mere dispositions but that, at the same time, can be accounted for in terms that do not presuppose the grasping of propositional fine-grained thoughts.

My proposal is to think of this more basic competence as a normative one, and to model the minimal conceptual ability at issue as an ability to respond to standards of correct behavior in a way that suffices to distinguish between cases of absence of application and cases of misapplication of the standard. The proposal is then to describe that behavior as one of responding to specific standards of correction, and hence as assessable as right or wrong according to those standards.

Such an account must be one that conceives of conceptual abilities in terms of more than mere causal mechanisms, without thereby committing to an explanatory gap concerning the emergence of propositional, fine-grained, articulated thought. We can now define more precisely our question concerning the possibility of accommodating the normative constraint on conceptual abilities while meeting NC, in the following terms: what features must a behavior have in order to count as conduct that is sensitive to correctness patterns (unlike behavior describable in merely dispositional terms), without thereby being explained as depending on propositionally articulated thought and thus leading to an evolutionary and explanatory gap?

Surprising as it might appear at first glance, I suggest that the crucial move to answer this question is to focus our attention on the kinds of interactions that basic intelligent creatures are able to deploy. This move is not completely novel in the literature. The crucial point to get clear about, though, is what kind of interaction we are referring to.

In particular, we need to specify what features of the behavior at stake, if any, (1) display sensitivity to standards of correction and (2) are both basic and at the same time sophisticated enough to meet NC. A further constraint on a proposal of this sort is that it accommodate the available empirical evidence concerning language and concept acquisition. A first step could then be to take a look at the available evidence concerning language acquisition.

The empirical study of the way in which such abilities are learned and deployed may help us identify the nature of the capacities involved. This claim still needs to gain support from empirical as well as conceptual grounds and I do try to provide such support in the remaining sections of this paper. Available evidence from developmental psychology will also provide some interesting cases of how this second-personal interaction can be conceived.

Hence, while taking a look at the empirical evidence, I expect to back up both my claim that a middle path between dispositionalism and interpretationism is in order and my claim that such a middle path is to be thought of in terms of a second-personal kind of interaction. As I said, one natural place to look for an answer to this question, framed with NC in mind, is the way children learn concepts. Csibra and Gergely have argued that adult-child interaction is essential to the learning of conceptual content. They have conducted a number of experiments suggesting that there is a crucial difference in the subsequent behavior of infants depending on whether they have learnt merely by observation, when the children are just observing the behavior of adults, or through being explicitly taught, i.e., when the adults address them and instruct them directly.

What they noted is that only in the latter case do children generalize the result to all similar cases, while in the former they conceive of the case as contextually and situationally bound. This provides us with a first indication that interaction plays a crucial role in learning and displaying conceptual abilities, as opposed to other kinds of learning where no language is involved. A second indication that the sort of interaction that humans are capable of might be key to the development of their conceptual abilities comes from primatology. Tomasello and Tennie et al. have studied the way non-human primates imitate the use of tools.

This means that while they are capable of imitating the use of tools in performing a specific task governed by their own interests and goals, they do not grasp the general meaning of the object, nor the end displayed in the behavior, in a way that can be detached from the context and from the objects they are observing and using on that specific occasion.

This means that children are ready to understand normative standards of behavior, and to teach them to others, at a very early stage of the development of their conceptual capacities, and that they generalize the appropriateness of what they tend to do to all others with whom they are interacting, expecting them to act as they do and complaining if they refuse to do so. How can this help us address NC, considering that such behavior is exhibited by young children but not by other primates? As I said before, there are a number of philosophical theories that have focused on the nature of human intersubjective exchanges to account for our capacity to grasp linguistic meanings.

Haugeland and Brandom, for example, have suggested that it is our attitude of treating a performance as right or wrong in particular contexts that makes that conduct right or wrong, and that this is a socially structured practice in which we treat each other as committed and entitled (or not) to further actions, as if we were playing a social game the rules of which get specified by our treating the different moves as appropriate or not. Wittgenstein has also been read as defending a view according to which language should be thought of as a cluster of games that we play together, and according to which it is internal to those games that certain moves are allowed or forbidden.

The moves would then be correct or incorrect according to the game in the context of which they are assessed. As I have argued before, such positions, if taken to be the whole story, turn out to be unable to meet NC. So I suggest that the right place to look is not the domain of interpretational theory but rather a different kind of interactionism, in particular phenomenologically based interactionist theories. Such theories start from one basic insight about the nature of social cognition: the fact that we are able to understand, directly and correctly, emotions on the faces of others, and their behavior as intentional and goal-oriented, from our very first experiences of encountering others.

That notwithstanding, it involves more than just mere reactions to stimuli. Phenomenology then provides us with a different route to understanding the empirical findings of developmental psychology on the nature of normative behavior. It allows us to understand in what sense we are able to grasp the rightness or wrongness of what we are doing, without committing us to thinking of this in a propositionally loaded way.

Having taken a brief look at some recent works in phenomenology and developmental psychology, we have found concurring support for the need to abandon the third-person perspective characteristic of interpretationism, but also to abandon the confinement within the first-person perspective characteristic of causalism. It is in this domain, I argue, that we find the kind of behavior that allows us to distinguish between conceptual mistakes and absence of application in a way that does not yet imply the reflective and explicit grasping of the standard to which we are nevertheless responding.

In particular, I argue that it is our emotional response to the attitudes of approval and disapproval expressed in the interlocutor's emotional behavior that allows us to learn from others a language and criteria for the correct use of words in contexts of use. How this allows us to accommodate the normative constraint while at the same time answering NC will be the topic of the next and final section.

As I have claimed, if the problems of interpretationism and causalism are taken seriously, what we need to find is a form of behavior that is not reduced to causal reactions but that does not presuppose the ability to entertain articulated thoughts. Furthermore, I have shown that, taking into consideration the evidence from developmental psychology regarding the learning of language and norms, the right kind of behavior seems to be essentially interactive.

Primary intersubjectivity is a capability that is primary: not acquired, but innate. The conduct of others is recognized as intentional, as directed toward an end. It involves temporal, auditory, and visual coordination with someone else with whom the baby is interacting. It is not substituted by other types of interaction but coexists with them, as a precondition for other abilities and as a complement to them. Later on [20], children engage in secondary intersubjectivity, a kind of interaction that is characterized by the ability to identify objects and events in pragmatically meaningful contexts through shared-attention mechanisms, based on the abilities gained through engaging in the previous kind of intersubjectivity.

In this stage, children refer to the adult's gaze when the meaning of an object is ambiguous or unclear. It is in the context of this kind of engagement with others that children learn a natural language, by being taught and exposed to it in all sorts of interactions. My suggestion is that the right place to look for the ability of self-correction is in the context of the capability of engaging in primary intersubjectivity. It is in that domain that children display a disposition to respond to others, characterized by an attunement to their expectations and an ability to shape their behavior as a way of responding to and satisfying the demands of others, paying special attention to the kind of response that their behavior elicits in the adult.

This kind of exchange is possible through common engagements in face-to-face encounters in which the emotions of both are directly perceptible to each other. The common contexts in which those interactions take place include objects and their properties, which, as the interaction evolves and the responses become more stable, begin to be understood as independently standing qualities and objects.

Throughout this process, joint-attention mechanisms, among other capacities, come into play and help to develop an early-stage conceptual understanding and a primitive form of using concepts that will later become much more sophisticated, gaining independence from particular assessments and responses. Nevertheless, these uses will never lose their connection with the actual uses and assessments of others.

How can we then distinguish between conceptual mistakes and absence of application at this early stage of development? In the previous section, I examined some relevant work in developmental psychology on the nature of normative behavior and learning. Those studies suggest that interactions are key in that they elicit and display normatively informed behavior, exhibited in the way children respond to adults in learning through two basic attitudes: generalizing what they take to be correct, and enforcing the norm on others by actively correcting them. This shows that they are not only passively responding to the environment but spontaneously conceiving of what they are doing as a standard of correction to which they themselves and all others are supposed to conform.

Accordingly, in the context of the kind of interaction just described, I suggest there is a specific ability that constitutes a better candidate than mere reactions or articulated thought for meeting NC. I call this ability sensitivity to correction. Sensitivity to correction, so defined, is precisely the feature of human behavior that allows us to accommodate the normativity constraint without abandoning the naturalistic conditions of adequacy that constitute NC.

When characterizing the different levels involved in self-correction (a pervasive feature of normative behavior), I mentioned: (a) the application of concepts (the actions of applying or misapplying a concept), (b) the ability to evaluate (a), and (c) the modification of (a) according to the results of (b).

My proposal, on the contrary, is to think of level (b) as constituted by sensitivity to correction, that is, the ability to correct and monitor our own action in the light of the reactions of others toward those very actions. In this case, (a) corresponds to a kind of behavior that displays intentionality, being directed toward an object to which the behavior is responding, and (b) corresponds to the dimension in which we self-monitor our reaction to the object by tuning it to the way the other reacts to us and to our directed behavior.

Sensitivity to correction is a social disposition, that is, a disposition to tune our behavior to the assessments and normative feedback we get from others in particular interactions. It is thus an evaluative attitude that involves perceiving and attuning to the approval or disapproval of others. Finally, corresponding to (c), the way in which we apply concepts is of course modified through the assessments involved in (b): indeed, we may say that assessing our conduct amounts, at least in the earliest stages of the acquisition of language and conceptual abilities, to modifying it according to the approval or disapproval of others.

We may now characterize the difference between conceptual mistakes and absence of application given the framework I have just presented. This distinction will take different shapes along the different stages involved in learning and grasping concepts. The concept in question would be poor in content at this point and its boundaries blurry. Thus conceptual competence at this stage is understood as a minimal conceptual understanding: but that minimum is exhibited precisely by the fact that the behavior is sensitive to a distinction between right and wrong ways of acting according to specific standards of correction (concepts), and this in turn is equivalent to there being a right way of acting in the world that the other and I share.



Sensitivity to correction is, we may say, the phenomenological exhibition of the normativity of concepts. We can thus distinguish conceptual mistakes from cases of absence of application in that the subject is responding to the assessment of his behavior by modifying it accordingly, as would not happen in a case of absence of application.

So, what makes the crucial difference is sensitivity to correction, a sensitivity that is displayed in actual interactions. Now, as learning progresses, self-correction gains independence from the presence of actual assessors. The subject then corrects herself according to different actual or imagined scenarios and perspectives that she can reenact.

Sociability is still a pervasive and crucial element of self-correcting behavior, but it is now exhibited as the very idea that I can be wrong according to different standards, which equates to the idea that there are other perspectives. Finally, it is time to consider whether the tools just introduced are capable of properly meeting NC when accounting for the normative dimension involved in concept use.

I cannot provide in this paper a detailed and all-encompassing answer to NC but, as will be shown next, this proposal does yield a proper general strategy for meeting it. This general strategy consists in identifying sensitivity to correction as the middle step between mere causal responses to the environment and contentful propositional attitudes. While the latter imply complete independence, flexibility, detachability, and general inferential articulation, the former amount only to nomological covariances between states and objects that may fail given an open number of contextual variations.

The important point is that between these two ends of the invisible line of development and evolution there are also different intermediate stages. Following this strategy, we can then give a general outline of the evolutionary path from creatures without language or thought to creatures with both abilities. At a first, very elementary level there may only be reactions to stimuli, error being just a failure in causal mechanisms. The true normative dimension emerges precisely when sensitivity to correction comes onto the stage, displaying the ability to interact with others (of the same species or across species) in a primary-intersubjectivity sort of exchange.

The precipitation regime in Vanuatu at 6 ka BP shows small changes which, although in the right direction, are not sufficient to simulate an increase in the SST seasonality in the SW Pacific region at 6 ka BP. The modeled insolation-driven hemispheric change in seasonality is not reflected in the SW Pacific proxy data. This suggests the models have difficulty in reproducing mid-Holocene changes in coupled ocean-atmosphere circulation in this region. On the other hand, corals are shallow water organisms and the SST and SSS they record may not be valid for the open ocean.

Clearly, more data and new model runs are needed to understand the amplitude and geographical pattern of western Pacific mid-Holocene changes.

Interannual variability in the tropical Pacific and associated atmospheric teleconnections during the last glacial period
Ute Merkel, M. Prange and M.

Simulations of the climate of Marine Isotope Stages 2 and 3 suggest pronounced ENSO variability during the Heinrich Stadial 1 period, when the Atlantic overturning circulation was weaker.

Our model results also highlight the nonstationarity of ENSO teleconnections through time. Consensus is still lacking about how ENSO will behave under future climate conditions, even in the latest generation of comprehensive climate models (Guilyardi et al.). Major goals of paleoclimatic research are to provide constraints on the possible range of changes in response to modified boundary conditions and to identify possible feedback and amplification mechanisms in the climate system.

In this context, climate models are valuable tools for investigating different climate scenarios that have occurred in the past (Zheng et al.; for proxy data from the tropical Pacific see, e.g., Stott et al.). We consider the simulated 35 ka BP climate as a stadial climate state; the counterpart of an interstadial climate state is induced in the model by an additional perturbation. Our set of experiments also includes a simulation of a Heinrich Stadial 1 scenario.

This is set up by imposing a freshwater perturbation on the North Atlantic, motivated by earlier studies which mimic past Heinrich events (e.g. Timmermann et al.). One of our major findings concerned interannual variability on ENSO timescales (periods of roughly one to several years).


Our model results show that these relationships also hold for the different simulated glacial climate states. In particular, our Heinrich Stadial 1 simulation exhibits a much weaker north-south contrast in eastern tropical Pacific SST. The first modeling studies that addressed MIS3 were limited to intermediate-complexity models (e.g. Ganopolski and Rahmstorf; van Meerbeeck et al.; Barron and Pollard) or to an atmosphere-only setup (Sima et al.). The present study used a timeslice approach. In particular, the teleconnections to the North American continent and the North Atlantic region seem to be strongly altered in terms of amplitude and spatial structure in the LGM and MIS3 simulations.

This difference is probably caused by the presence of the glacial continental ice sheets and the glacial cooling of the North Atlantic, which both affect the position of the upper-tropospheric jet stream and the atmospheric storm tracks, and thus the tropical-extratropical signal propagation. The MIS3 stadial conditions (Fig., modified from Merkel et al.) point to a complex interplay of atmospheric dynamics with the various forcings in the different climatic states. This is attributed to an atmospheric signal communication from the strongly cooled North Atlantic into the tropical Pacific.

Model-data comparison

Further insights into tropical Pacific variability can be achieved through model-data intercomparison. Felis et al. present a fossil coral record that has been dated to Heinrich Stadial 1. Its fast growth rate allows sampling at monthly resolution and provides a unique opportunity to investigate interannual SST variability in the southwestern tropical Pacific during that period. Understanding how teleconnections operate, both in the atmosphere and the ocean, is particularly relevant for the validity of paleoclimatic reconstructions, as they generally assume that atmospheric teleconnection patterns are stable.

This may be particularly critical in the interpretation of proxy records not stemming from the core ENSO region; a composite analysis of atmospheric patterns can help assess this. In summary, our modeling study confirms that ENSO variability responds to various glacial climatic states. This calls for more detailed analyses, for instance in the form of glacial hosing studies in a multi-model approach (Kageyama et al.). Likewise, we emphasize that the concept of stationary teleconnections should only be applied to past climatic states with caution, as they may be altered by past boundary conditions and forcings internal and external to the climate system.

This uncertainty is rooted in a fundamental lack of evidence. However, several recent studies focusing on past warm climates are beginning to address this issue.

Pre-Pliocene ENSO

Detection of interannual variability requires paleoclimate indicators that monitor changes over short timescales, such as the thickness of varved sediments and isotope ratios in long-lived fossil mollusks or corals.

Once a record spanning sufficiently many years has been recovered, its power spectrum can be analyzed for frequencies representative of ENSO. ENSO-band variability has been detected in this way in several deep-time intervals; it has been seen, for example, in the Eocene (50 Ma; Ivany et al.). All of these analyses have used records gathered in locations far away from the tropical Pacific, such as Antarctica. The plausibility of the teleconnections assumed for that time can be confirmed with climate model simulations of the period (Galeotti et al.).
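As an illustration of the kind of spectral analysis described here, the sketch below computes the power spectrum of a monthly-resolved proxy series and checks for enhanced variance in the interannual (roughly 2-7 year) band typically associated with ENSO. The synthetic series, record length and band limits are placeholders of mine, not values from any of the studies cited above.

```python
import numpy as np
from scipy.signal import periodogram

# Hypothetical monthly-resolved proxy record (e.g. an isotope series from a fossil coral).
rng = np.random.default_rng(0)
n_months = 600  # 50 years of monthly samples (placeholder length)
t = np.arange(n_months)
# Synthetic series: a 4-year oscillation plus noise, for illustration only.
series = np.sin(2 * np.pi * t / 48.0) + rng.normal(0.0, 1.0, n_months)

# Power spectrum; the sampling frequency is 12 samples per year.
freqs, power = periodogram(series, fs=12.0, detrend="linear")

# Fraction of variance in the canonical ENSO band (periods of ~2-7 years).
enso_band = (freqs >= 1.0 / 7.0) & (freqs <= 1.0 / 2.0)
enso_fraction = power[enso_band].sum() / power[freqs > 0].sum()
print(f"Fraction of variance in the 2-7 yr band: {enso_fraction:.2f}")
```

A real analysis would also need a significance test against a red-noise background, which is omitted here for brevity.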

Oxygen isotope records from fossil corals (MacGregor et al.) and analyses of individual foraminifera from the Eastern Equatorial Pacific (Scroxton et al.) have been interpreted as showing an active ENSO cycle. Unfortunately, a foraminifer does not live through an annual cycle, unlike mollusks. Despite the complications associated with each individual study, a picture is emerging in which ENSO is a pervasive feature of past climate.

However, a systematic effort will be needed to provide quantitative information from these pre-Pleistocene intervals that could qualify for data-model comparisons. The period described as lacking ENSO variability falls in the early Pliocene (figure modified after Fedorov et al.; the shaded area represents four standard deviations from a running window).

Subsequent work shows similar results for the equatorial SST gradient in the early Pliocene. Reconstructions of the SST gradient during older periods need further work, but preliminary data suggest that a reduced SST gradient is not solely a feature of the early Pliocene (LaRiviere et al.). Although the term goes back to Wara et al., it was propagated by Fedorov et al., and the simile has been read as an assertion that there was no interannual ENSO variability at that time. This period is one million years later than the minimal SST gradient identified by Wara et al. Haywood et al. performed simulations of the mid-Pliocene; however, the equatorial temperature gradient of the mid-Pliocene simulation was hardly smaller than in the modern simulation.

Subsequent simulations have been performed with updated boundary conditions (Dowsett et al.). Attempts have also been made to force coupled models to replicate a mean state with a weak SST gradient in the equatorial Pacific. One approach has been to increase the background ocean vertical diffusivity (Brierley et al.). These simulations (Fig.) could easily be model dependent, but they offer a scenario for a weak ENSO around 4 Ma. However, the relationship between a Pacific mean state with a minimal equatorial SST gradient and the related ENSO properties merits further investigation.

We have made progress towards uncovering ENSO behavior on geologic timescales, but there is still a long way to go.

Overview of data assimilation methods
Gregory J. Hakim, J. Annan, S. Crucifix, T. Edwards, H. Goosse, A. Paul, G.

We present the data assimilation approach, which provides a framework for combining observations and model simulations of the climate system, and which has led to a new field of applications for paleoclimatology.

The three subsequent articles explore specific applications in more detail. Data assimilation has played a central role in the improvement of weather forecasts and, through reanalysis, provides gridded datasets for use in climate research. There is growing interest in applying data assimilation to problems in paleoclimate research. Our goal here is to provide an overview of the methods and of the potential implications of their application.

Understanding past climate variability provides a crucial benchmark reference for current and predicted climate change. Primary resources for deriving such understanding include paleo-proxy data and climate model simulations. Data assimilation provides a mathematical framework that combines these resources to improve the insight derivable from either resource independently. Here we provide an overview of these methods and of how they relate to existing practices in the paleoclimate community.

In weather prediction, data assimilation uses observations to initialize a forecast (Lorenc; Kalnay; Wunsch; Wikle and Berliner). Since the short-term forecast typically starts from an accurate analysis at an earlier time, called the prior estimate, the model provides relatively accurate estimates of the weather observations. Data assimilation involves optimizing the use of these independent estimates to arrive at an analysis, i.e., a best estimate of the state given both sources of information.

For Gaussian distributed errors, the result for a single scalar variable (a singly dimensioned variable of size one), x, given a prior estimate of the analysis value, xp, and an observation y, is

xa = xp + K (y - H(xp))    (1)

where xa is the analysis value. The innovation, y - H(xp), represents the information from the observation that differs from the prior estimate. For example, in a paleoclimate application, H(xp) may estimate tree-ring width from temperature data derived from a climate model (Fig. 1).

The weight applied to the innovation is determined by the Kalman gain, K.

Figure 1: Schematic illustration of how the innovation is determined in data assimilation for a tree-ring example. Proxy measurements are illustrated on the left, and model estimates of the proxy on the right. The observation operator provides the map from gridded model data, such as temperature, to tree-ring width, which is used to compute the innovation.

Image credits: Wikipedia. Equation (1) represents a linear regression of the prior on the innovation.

Figure 2: Data assimilation for a scalar variable x assuming Gaussian error statistics. The prior estimate is given by the dashed blue line, the observation y by the dashed green line, and the analysis by the thick red line. The parabolic gray curve denotes a cost function, J, which measures the misfit to both the observation and the prior; it takes its minimum at the mean value of xa.

(Figure from Holton and Hakim.) Equivalently, the Kalman gain weights the innovation against the prior, resulting in an analysis probability density function with less variance, and higher density, than either the observation or the prior (Fig. 2). In the multivariate case, the scalar error variances are replaced by error covariance matrices for the prior and the observations. These covariance matrices provide the information that spreads the innovation in space, and to all variables, through a Kalman gain matrix. Application of data assimilation to the paleoclimate reconstruction problem involves determining the state of the climate system on the basis of sparse and noisy proxy data and a prior estimate from a numerical model (Widmann et al.).
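The scalar update in Equation (1) can be written out directly. The sketch below assumes the standard textbook form of the Kalman gain, K = pH/(H²p + r), with p the prior error variance, r the observation error variance and H a linear observation operator; the numbers are illustrative only and are not the values shown in Figure 2.

```python
def kalman_update_scalar(x_p, p_var, y, r_var, H=1.0):
    """One scalar Kalman update: combine a prior estimate with one observation.

    x_p   : prior (model) estimate of the state, e.g. temperature
    p_var : error variance of the prior
    y     : observation, e.g. a tree-ring-derived value
    r_var : error variance of the observation
    H     : linear observation operator mapping state to observation space
    """
    innovation = y - H * x_p                   # information not already in the prior
    K = p_var * H / (H**2 * p_var + r_var)     # Kalman gain (assumed standard form)
    x_a = x_p + K * innovation                 # analysis, as in Eq. (1)
    p_a = (1.0 - K * H) * p_var                # analysis error variance (reduced)
    return x_a, p_a

# Illustrative numbers only:
x_a, p_a = kalman_update_scalar(x_p=0.0, p_var=1.0, y=1.0, r_var=1.0)
print(x_a, p_a)  # the analysis lies between prior and observation, with smaller variance
```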

These proxy data are weighted according to their error statistics and may also be used to calibrate parameters in a climate model (Annan et al.).

Relationship to established methods

While there are similarities between the application of data assimilation to weather and to paleoclimate, there are also important differences. In weather prediction, observations are assimilated every 6 hours, which is a short time period compared to the predictability limit of the model.

However, transient paleoclimate simulations extend over periods far longer than this predictability horizon. Consequently, relative errors in the model estimate of the proxy are usually much larger in paleoclimate applications. However, data assimilation reconstruction may still be performed, at great cost savings, since the model no longer requires integration and each assimilation time may be considered independently (Bhend et al.). Paleoclimate data assimilation attempts to improve upon climate field reconstructions that use purely statistical methods.

One well-known statistical approach to climate field reconstruction (Mann et al.) represents the climate field in terms of a small set of spatial patterns. Data assimilation, on the other hand, retains the spatial correlations for locations near proxies, which may be lost in a small set of spatial patterns, and also spreads information from observations in time through the dynamics of the climate model. Another distinction between data assimilation and field reconstruction approaches concerns the observation operator, H, which often involves biological quantities of proxy data that have uncertain relationships to climate.
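As a concrete, deliberately toy illustration of such an observation operator, the sketch below maps monthly temperature and soil-moisture values at a grid point to an annual ring-width index using threshold growth responses, loosely in the spirit of tree-growth forward models such as VS-Lite. The functional form and all thresholds are placeholders of my own, not the published model.

```python
import numpy as np

def toy_ring_width(temp, moist, t_lo=4.0, t_hi=18.0, m_lo=0.02, m_hi=0.3):
    """Toy proxy forward model H: monthly climate -> annual ring-width index.

    temp, moist : sequences of 12 monthly means (temperature in deg C, soil moisture)
    Growth each month is limited by whichever of the two scaled responses is
    smaller (a crude 'principle of limiting factors'); the annual index is the
    mean monthly growth. All thresholds are illustrative assumptions.
    """
    g_t = np.clip((np.asarray(temp) - t_lo) / (t_hi - t_lo), 0.0, 1.0)
    g_m = np.clip((np.asarray(moist) - m_lo) / (m_hi - m_lo), 0.0, 1.0)
    return float(np.mean(np.minimum(g_t, g_m)))

# Example: one year of hypothetical model output at a single grid point.
monthly_temp = [0, 2, 6, 10, 14, 17, 19, 18, 14, 9, 4, 1]
monthly_moist = [0.25, 0.24, 0.22, 0.20, 0.15, 0.10, 0.08, 0.08, 0.12, 0.18, 0.22, 0.24]
print(toy_ring_width(monthly_temp, monthly_moist))
```

The output of such a function is what gets compared with the measured ring widths to form the innovation.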

Research on paleoclimate data assimilation is rapidly developing in many areas. Ensemble approaches involve many realizations of climate model simulations, each of which is weighted according to its match to the proxy data, for example in the selection of members (Goosse et al.). Among the important obstacles to progress in paleoclimate data assimilation, some challenges are generic, such as improving the chronological dating quality of proxy records and reducing the uncertainties of the paleoclimate data. Other problems are more specific to data assimilation, such as the development of proxy forward models.

Moreover, proxy data typically represent a time average, in contrast to instantaneous weather observations, although solutions that involve assimilating time averages have been proposed to tackle this problem (Dirren and Hakim; Huntley and Hakim). Model bias is also problematic for paleoclimate data assimilation, especially for regions with spatially sparse proxy data. While the field of paleoclimate data assimilation is still in its infancy, these challenges are all under active research.

Merging climate models and proxy data has a bright future in paleoclimate research (e.g., National Science Foundation), and it is likely that paleoclimate data assimilation will play a central role in this endeavor.

Franke, P. Breitenmoser, G. Hakim, H. Goosse, M. Widmann, M. Crucifix, G. Gebbie, J. Annan and G.

Data assimilation methods used for transient atmospheric state estimation in paleoclimatology, such as covariance-based approaches, analogue techniques and nudging, are briefly introduced here. With applications differing widely, a plurality of approaches appears to be the logical way forward. Traditionally, statistical reconstruction techniques have been used, but recent developments bring data assimilation techniques to the doorstep of paleoclimatology. Here we give a short overview of transient atmospheric state estimation in paleoclimatology using data assimilation. An introduction to data assimilation, as well as applications to equilibrium state estimation and parameter estimation, are given in the companion papers to this special section (see also Wunsch and Heimbach). In the atmospheric sciences, data assimilation has been hugely successful in generating three-dimensional atmospheric datasets of the past few decades.

Paleoclimate proxies do not capture atmospheric states but time-integrated functions of states, such as, in the simplest case, averages. Therefore, for assimilating proxies, other methods are required than those applied in the atmospheric sciences. A schematic view of these methods is given in Figure 1. Note that other methods may be used for the ocean (see Gebbie). Variational approaches can be used to approximate the solution; normally x is a state vector, but Dirren and Hakim have successfully extended the concept to time averages.

Data assimilation entails that x serves as an initial condition for the next forecast step. Focusing on the seasonal scale, Bhend et al. propose an approach in which the analysis is not cycled back into the model. This conveniently allows one to use pre-computed simulations. Because x does not serve as a new initial condition, it can be small and can be a vector of averaged model states. H can be a simple proxy forward model, i.e., a function mapping the (averaged) model state to the proxy quantity. Covariance-based approaches are powerful but computationally intensive, and they can be sensitive to assumptions.

(Circles indicate the locations and anomalies of the assimilated instrumental measurements; red squares the locations of tree-ring proxies.)

Analogue approaches

Reverting to cost function (1), we can also look for an existing x, e.g. from a pool of pre-computed simulations, that minimizes the cost. New ensemble members are then generated for the next time step by adding small perturbations to x, and the final analysis is a continuous simulation. In contrast to EnKF, H may be non-differentiable; H can be a complex forward model driven by the full simulation output.
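A minimal sketch of the analogue idea: given a small pre-computed ensemble of simulated states, evaluate a quadratic model-data cost for each member and keep the best-fitting one as the analysis. The quadratic form with a diagonal R used below is a standard choice assumed by me, standing in for the cost function referred to in the text; all numbers are hypothetical.

```python
import numpy as np

def cost(proxy_obs, proxy_sim, obs_var):
    """Quadratic model-data misfit, assuming independent Gaussian proxy errors."""
    d = proxy_obs - proxy_sim
    return float(np.sum(d**2 / obs_var))

def best_analogue(ensemble_states, ensemble_proxy_estimates, proxy_obs, obs_var):
    """Pick the pre-computed ensemble member whose H(x) best matches the proxies."""
    costs = [cost(proxy_obs, h_x, obs_var) for h_x in ensemble_proxy_estimates]
    i_best = int(np.argmin(costs))
    return ensemble_states[i_best], i_best, costs

# Hypothetical example: 3 pre-computed members, 4 proxy sites.
states = [np.array([0.1, -0.2]), np.array([0.5, 0.0]), np.array([-0.3, 0.4])]
h_of_x = [np.array([0.2, 0.1, -0.1, 0.0]),
          np.array([0.6, 0.4, 0.1, 0.3]),
          np.array([-0.2, -0.4, 0.3, -0.1])]
obs = np.array([0.5, 0.3, 0.0, 0.2])
x_analysis, i_best, costs = best_analogue(states, h_of_x, obs, obs_var=0.1)
print(i_best, costs)
```

Weighting all members by exp(-cost/2) instead of keeping only the best one would give a particle-filter-like variant of the same idea.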

R may be non-diagonal, and x may be very large. However, to reconstruct the state of systems with a large number of degrees of freedom, these approaches require very large ensembles.

Nudging approaches

Nudging approaches (Widmann et al.) relax the model state toward the observations. The distance between model state and observations is reduced by adding tendencies to a subspace of the model state at each time step, similar to an additional source term in the tendency equations; G is a relaxation parameter. The Forcing Singular Vectors method (van der Schrier and Barkmeijer) manipulates the tendency equations as well, but adds a perturbation which modifies the model atmosphere in the direction of the target pattern only.
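The nudging idea can be illustrated with a one-variable toy model: an extra relaxation term G*(y_obs - x) is added to the model tendency at every time step, pulling the trajectory toward the observed value. The toy dynamics and the value of G below are assumptions for illustration only.

```python
import numpy as np

def toy_tendency(x):
    """Placeholder model dynamics (simple damping toward zero)."""
    return -0.1 * x

def integrate_with_nudging(x0, y_obs, G=0.05, dt=1.0, n_steps=100):
    """Forward-Euler integration with a nudging (relaxation) source term.

    G is the relaxation parameter: G = 0 recovers the free-running model,
    while larger G pulls the state more strongly toward the observations.
    """
    x = x0
    trajectory = [x]
    for k in range(n_steps):
        dxdt = toy_tendency(x) + G * (y_obs[k] - x)  # model tendency + nudging term
        x = x + dt * dxdt
        trajectory.append(x)
    return np.array(trajectory)

# Hypothetical "observations": a constant target value of 1.0.
obs = np.ones(100)
traj = integrate_with_nudging(x0=0.0, y_obs=obs, G=0.05)
print(traj[-1])  # settles between the free-model equilibrium (0) and the observations (1)
```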

Examples

Figure 2 shows April-to-September averages of surface air temperature obtained from two assimilation approaches (EKF and BEM) for a particular year relative to the mean. Both approaches are based on the same ensemble of simulations described in Bhend et al. The unconstrained ensemble mean shows anomalies that are small and smooth, which is typical for an ensemble mean.

The EKF analysis was constrained by historical instrumental observations. The EKF ensemble mean suggests a more pronounced cooling over northern Europe, but over most regions, due to the lack of observations, it is close to the unconstrained ensemble mean. BEM was constrained with tree rings from 35 locations, with the VS-Lite tree growth model (Tolwinski-Ward et al.) serving as the proxy forward model.

BEM identifies member 01 as the best-fitting one. This member exhibits large anomalies in Alaska and Eurasia, but due to the small ensemble size little regional skill is expected (Annan and Hargreaves); for instance, it does not fit well with the instrumental observations over Europe. The same member also appears in the EKF analysis (Fig.).

Limitations and future directions

Paleoclimatological applications are much more disparate than those of the atmospheric sciences in terms of time, time scales, systems analyzed, and proxies used. Therefore, a plurality of data assimilation approaches is a logical way forward.

However, all approaches still suffer from problems and uncertainties. Ensemble approaches (PF, EnKF, EKF) provide some information on the methodological spread, which, however, represents only one, difficult-to-characterize, part of the whole uncertainty. Further uncertainties are related to model biases, limited ensemble size, and errors in the forcings and proxy data. Validation of the approaches using pseudo-proxies in toy models and climate models, and validation of the results using independent proxies, is therefore particularly important.

Any approach, however, fundamentally relies on a good understanding of the proxies.

Best-of-both-worlds estimates for time slices in the past
Tamsin L. Edwards, J. Annan, M. Gebbie and A.

We introduce data assimilation methods for estimating past equilibrium states of the climate and environment. The approach combines paleodata with physically based models to exploit their respective strengths, giving physically consistent reconstructions with robust, and in many cases reduced, uncertainty estimates. Proxy-based reconstructions are based on observations of the real world, but most consider data points independently rather than accounting for correlations in space, in time and between climate variables.

Therefore they risk being physically inconsistent. Models incorporate aspects of physical consistency, but they are imperfect and are tested during development only against present-day observations. We discuss data assimilation for estimating past equilibrium states of the earth system, such as climate and vegetation. For a given computational resource, time-slice estimation permits more extensive sampling of uncertainties than transient estimation.

Another advantage of a focus on time slices is that, for eras studied by the Paleoclimate Model Intercomparison Project (PMIP), relatively large quantities of paleodata and simulations are available. Most data assimilation estimates of equilibrium paleo-states are therefore of the Last Glacial Maximum (LGM: 21 ka cal BP), the most recent era for which the annual mean climate is substantially different from the present and which also has a long history of study by PMIP. We use model simulations in paleo-state estimation because models provide links across different locations, times (relevant to transient or multi-state estimation) and state variables.

This has two advantages: it helps ensure the resulting state is physically consistent, and it also means we are not limited to assimilating the same variables we wish to estimate. We could assimilate data in one place to estimate another, or assimilate temperature data to estimate precipitation, or assimilate variables corresponding to the outputs of a model to estimate variables corresponding to the inputs (Guiot et al.; LeGrand and Wunsch; Roche et al.). Distance is usually measured with the standard metric for normally distributed model-data differences, i.e., the squared differences weighted by the error covariance.

For non-continuous variables, for example those with a threshold, the variables must be transformed or a non-Gaussian metric chosen (e.g. Stone et al.). Optimisation methods search for the simulation with the minimum distance from the paleodata. One approach uses numerical differentiation of the model with respect to the parameters, essentially least-squares fitting of a line or curve to one-dimensional data (e.g. Gregoire et al.).
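A compact sketch of these two ingredients: the covariance-weighted distance for normally distributed model-data differences, and a simple optimisation that searches a one-dimensional parameter range for the simulation closest to the paleodata. The toy forward model, the data values and the error covariance are all placeholders.

```python
import numpy as np

def gaussian_distance(y_obs, y_model, R):
    """Covariance-weighted misfit (y_obs - y_model)^T R^-1 (y_obs - y_model)."""
    d = y_obs - y_model
    return float(d @ np.linalg.solve(R, d))

def toy_model(theta):
    """Placeholder forward model: maps one parameter to three 'proxy' values."""
    return np.array([theta, 0.5 * theta, 2.0 * theta - 1.0])

# Hypothetical paleodata and error covariance (diagonal here, but it need not be).
y_obs = np.array([1.1, 0.4, 0.9])
R = np.diag([0.1, 0.1, 0.2])

# Optimisation by brute-force search over the parameter range (adequate in 1D).
thetas = np.linspace(-2.0, 2.0, 401)
distances = [gaussian_distance(y_obs, toy_model(th), R) for th in thetas]
theta_best = thetas[int(np.argmin(distances))]
print(theta_best)
```

With an expensive model, the brute-force search would be replaced by gradient-based least-squares fitting or by the emulator-assisted sampling discussed in the parameter estimation article below.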



Updating methods combine model and paleodata estimates. Typically the model estimates are generated with a perturbed-parameter ensemble, which permits well-defined sampling of parameter uncertainties; the model estimates are then reweighted with the model-data distance using Bayesian updating (a minimal sketch of this reweighting step follows below).
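A minimal sketch of the reweighting step just described: each member of a perturbed-parameter ensemble receives a weight proportional to its Gaussian likelihood given the paleodata, and the weights are then used to form posterior estimates of the parameter. The ensemble, the forward model and the error variances are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Perturbed-parameter ensemble: prior samples of a single model parameter.
theta_prior = rng.normal(loc=0.0, scale=1.0, size=500)

# Placeholder forward model and hypothetical paleodata with observation error variance.
def forward(theta):
    return 0.8 * theta + 0.2

y_obs, obs_var = 1.0, 0.3**2

# Bayesian updating: weight each member by its likelihood given the data.
misfit = y_obs - forward(theta_prior)
log_w = -0.5 * misfit**2 / obs_var
weights = np.exp(log_w - log_w.max())
weights /= weights.sum()

theta_mean = np.sum(weights * theta_prior)
theta_var = np.sum(weights * (theta_prior - theta_mean) ** 2)
print(theta_mean, np.sqrt(theta_var))  # posterior mean and (reduced) spread
```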

Interpretation

Figure 1 illustrates some strengths of data assimilation. Uncertainties are reduced relative to the model estimate. Data assimilation is a formal method that not only highlights model-data discrepancies but also corrects them. It can be challenging, because it requires a process-based model and reliable estimation of uncertainties for both the paleodata and the simulations. For paleodata, difficulties may arise from dating and time averaging. But improvements in estimating reconstruction uncertainties can be made by using forward modeling approaches (e.g. Tingley et al.).

Figure 1: LGM annual mean temperature anomalies from: (A) surface air temperature (SAT) reconstructions based on pollen and plant macrofossils (Bartlein et al.).

How should we interpret assimilated paleo-states? Optimization methods select a single best simulation, so the state estimate is physically self-consistent according to the model.

But the state estimate from updating methods is a combination of multiple model simulations and paleodata, and therefore its interpretation requires more care. An ensemble mean anomaly of zero might correspond to a wide spread of positive and negative results; this would be reflected in large model uncertainties. Such considerations are common to all multi-model ensemble summaries and reanalyses. For statistically meaningful results it is essential to use a distance metric grounded in probability theory, i.e., one that corresponds to a likelihood. This might preclude the use of non-standard variables such as biomes.

Data assimilation is a statistical modeling technique and should be evaluated as such. Testing the method with pseudo-paleodata can help avoid the literal pitfalls of finding local rather than global minima in high-dimensional spaces. Using physically based forward models for reconstruction, i.e., modeling the proxies themselves, is a related development. The long-term goal may be forward physical modeling of the whole causal chain from radiative forcings to proxy archives (e.g. Roche et al.). For paleo-simulations, we do not need models to be complex or state-of-the-art, but we do need to estimate their uncertainties.

If they are complex, it is difficult to generate their derivatives with respect to the parameters. If they are expensive, it is difficult to sample, and therefore to assess, their uncertainties (Schmittner et al.). A research priority is to estimate the discrepancy between a model and reality at its best parameter values, and how this varies across different eras. New updating methods are emerging that use the PMIP multi-model ensemble to explore structural uncertainties.

For example, Annan and Hargreaves use the linear combination of ensemble members that best matches the paleodata. These challenges are worth tackling for the substantial benefits. Information from paleodata can be extrapolated to other locations, times and state variables, and uncertainties are smaller than (or at worst the same as) those of the individual model or proxy-based estimates.

Parameter estimation using paleodata assimilation
James D. Annan, M. Crucifix, T. Edwards and A.

In addition to improving the simulations of climate states, data assimilation concepts can also be used to estimate the internal parameters of climate models.

Here we introduce some of the ideas behind this approach, and discuss some applications in the paleoclimate domain. There is, for example, no single value to describe the speed at which ice crystals fall through the atmosphere, or the background rate of mixing in the ocean, to mention two parameters which are commonly varied in General Circulation Models (GCMs).

Inadequacies will always be present no matter how carefully parameter values are chosen: this should serve as a caution against over-tuning. However, from a sufficiently abstract perspective, the problem of parameter estimation can be considered as equivalent to state estimation, via a standard approach in which the state space of a dynamical model is augmented by the inclusion of the model parameters (Jazwinski; Evensen et al.). While this approach is conceptually straightforward, there are many practical difficulties in its application.
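A minimal sketch of the state-augmentation idea, assuming only a generic model step x_new = f(x, theta); the names are illustrative:

```python
# A minimal sketch of state augmentation: the parameters are appended to the
# physical state so that a filter updating the augmented vector also updates
# the parameters. Names are illustrative, not from the article.
import numpy as np

def step_augmented(z, f, n_state):
    """Advance an augmented state z = [x, theta].

    The physical state x evolves through the model f, while the parameters
    theta are carried along unchanged (a persistence model for theta).
    """
    x, theta = z[:n_state], z[n_state:]
    x_new = f(x, theta)
    return np.concatenate([x_new, theta])
```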

The most widespread methods for data assimilation, including both Kalman filtering and 4D-VAR, rely on quasi-linear and Gaussian approaches. Further challenges exist in applying this approach due to the wide disparity in relevant time scales. The initial state often has a rapid effect on the model trajectory within the predictability time scale of the model, which is typically days to weeks for atmospheric GCMs. On the other hand, the full effect of the parameters only becomes apparent on the climatological time scale, which may be decades or centuries.

Applications
Methods for joint parameter and state estimation in the full spatiotemporal domain continue to be investigated for numerical weather prediction, where data are relatively plentiful. But identifiability, that is, the ability to uniquely determine the state and parameters given the observations, is a much larger problem for modeling past climates, where proxy data are relatively sparse in both space and time. Therefore, data assimilation in paleoclimate research generally finds a way to reduce the dimension of the problem. One such approach is to reduce the spatial dimension, even to the limit of a global average.

Figure 1 presents the results of one parameter estimation experiment by Hargreaves and Annan. In the case of more complex and higher-resolution models, the problems of identifiability and computational cost are most commonly addressed by the use of equilibrium states. Here, the full initial condition of the model is irrelevant, at least within reasonable bounds, and the dimension of the problem collapses down to the number of free parameters; typically ten at most, assuming many boundary conditions are not also to be estimated.

With this approach, much of the detailed methodology of data assimilation as developed and practiced in numerical weather prediction, where the huge state dimension is a dominant factor, ceases to be so relevant. While some attempts at using standard data assimilation methods have been made (e.g. Annan et al.), with reasonably cheap models and a sufficiently small set of parameters direct sampling of the parameter space with a large ensemble may be feasible. A statistical emulator, which provides a very fast approximation to running the full model, may help in more computationally demanding cases (e.g. Holden et al.).
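A minimal sketch of emulator-assisted sampling of a parameter space, using a Gaussian process from scikit-learn as the emulator; the toy model, parameter ranges and observation values are purely illustrative:

```python
# A minimal sketch of emulator-assisted parameter sampling, assuming a
# scalar model output (e.g. global mean LGM cooling); all numbers illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)

def slow_climate_model(params):
    # Stand-in for an expensive simulation returning one diagnostic.
    return 2.0 * params[0] - 0.5 * params[1] ** 2

# 1. Run the full model for a small design of parameter settings.
design = rng.uniform(0, 1, size=(40, 2))
outputs = np.array([slow_climate_model(p) for p in design])

# 2. Fit the emulator (a Gaussian process) to the design runs.
emulator = GaussianProcessRegressor().fit(design, outputs)

# 3. Use the cheap emulator to sample the parameter space densely and keep
#    the candidates whose predicted output is consistent with the paleodata.
candidates = rng.uniform(0, 1, size=(100_000, 2))
predicted = emulator.predict(candidates)
obs, obs_sigma = 1.0, 0.2
accepted = candidates[np.abs(predicted - obs) < 2 * obs_sigma]
```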

One major target of parameter estimation in this field has been the estimation of the equilibrium climate sensitivity. This may either be an explicitly tunable model parameter, in the case of simpler models, or else an emergent property of the underlying physical processes, which are parameterized in a more complex global climate model. The Last Glacial Maximum is a particularly popular interval for study, due to its combination of a large signal-to-noise ratio and good data coverage over a quasi-equilibrium interval (Annan et al.).

Figure 1: Experiment with ka of data assimilated. Data to the left of the vertical magenta line were used to tune parameters, with the right-hand side used as validation of the model forecast, which over a range of experiments shows substantial skill for a duration of around ka. The dark blue lines show the mean of the ensemble and the light blue lines show one standard deviation of the ensemble. Modified from Hargreaves and Annan.

The methods used for studying the LGM in order to estimate the equilibrium climate sensitivity have covered a wide range of techniques, including direct sampling of parameter spaces (with and without the use of an emulator), Markov Chain Monte Carlo methods, the variational approach using an adjoint model, and the Ensemble Kalman Filter.
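As one illustration of the Markov Chain Monte Carlo route, here is a minimal Metropolis sketch for a single sensitivity parameter; the toy forward model and the numbers are illustrative and not those of the cited studies:

```python
# A minimal Metropolis (MCMC) sketch for sampling one model parameter against
# a paleo constraint; the "forward model" and numbers are illustrative only.
import numpy as np

rng = np.random.default_rng(3)

def log_posterior(sensitivity, lgm_cooling_obs=-5.0, obs_sigma=1.0):
    # Toy forward model: predicted LGM cooling scales with climate sensitivity.
    predicted = -1.7 * sensitivity
    log_like = -0.5 * ((predicted - lgm_cooling_obs) / obs_sigma) ** 2
    log_prior = 0.0 if 0.0 < sensitivity < 10.0 else -np.inf  # flat prior
    return log_like + log_prior

samples, current = [], 3.0
for _ in range(20_000):
    proposal = current + rng.normal(scale=0.3)
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(current):
        current = proposal
    samples.append(current)
# The retained samples approximate the posterior for the sensitivity parameter.
```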

In general, more costly models require stronger assumptions and approximations due to computational limitations. Approaches which aim at averaging out the highest frequencies of internal variability are also possible; in that case, the spatial dimension can still be reduced. A similar approach was used by Frank et al. Paleoclimate simulations provide the only opportunity to test and critically evaluate climate models under a wide range of boundary conditions.

Laryea, B. Foli and C.

The objective of the workshop was to identify how humans adapted to past climatic and sea-level changes, and to discuss future adaptation strategies through a multidisciplinary approach. Twenty-seven scientists from five countries attended the workshop.

The inaugural lecture was given by Dr. He spoke about the causes of sea-level rise and coastal change, and their implications for coastal regions. His lecture stressed the fact that important planning decisions for sea-level rise should be based on the best available scientific knowledge and careful consideration of long-term benefits for a sustainable future. He recommended that decisions on adaptation or mitigation measures should also take into consideration economic, social, and environmental costs.

Sixteen scientific papers were presented and discussed on various topics. In particular, the presentations addressed the impact of sea-level changes on coastal tourism development; the linkages between sea-level rise and groundwater quality, hydrodynamics, upwelling and biogeochemistry in the Gulf of Guinea; paleoclimatic evidence from Quaternary coastal deposits in Nigeria; the dynamics of ocean surges and their impacts on the Nigerian coastline; and the effect of climatic extreme events on reservoir water storage in the Volta Basin in Ghana.

The workshop provided a platform for scientists to share knowledge and information on their respective areas of research. At the end of the presentation sessions, the plenum agreed that research on sea-level rise should be particularly encouraged and that further activities to bring together scientists working in this area should be organized. To stimulate interest and expose students to new methods, regular international workshops or summer schools will be held to bring together students and experts from within and outside the subregion.

Figure 1: This photo shows a section of Keta town destroyed by sea erosion. Photo by Beth Knittle.

The meeting participants then went on a guided tour of coastal communities along the eastern coast of Ghana. One of them was Keta, a coastal town in the Volta Region that was partly destroyed by sea erosion at the end of the 20th century. Keta is situated on a sandspit separating the Gulf of Guinea from the Keta Lagoon.

Due to this double waterfront, the town area is particularly vulnerable to erosion. It is flooded from the ocean front during high tides and from the lagoon front during heavy runoff, especially in the rainy seasons. During devastating erosion events between and , more than half of the town area was washed away; the photo (Fig. 1) shows part of the destroyed area.

A major goal of LUC is to achieve Holocene land-cover reconstructions that can be used for climate modeling and testing hypotheses on past and future effects of anthropogenic land cover on climate.

Collaborations were initiated between Linnaeus University (Sweden), M. Xu and Y. The objective of this workshop in China was to initiate the necessary collaborations and activities to make past land-cover reconstructions possible in Eastern Asia. One of the three major outcomes is a review article, planned for submission this fall. It will include (1) a review of Holocene human-induced vegetation and land-use changes in Eastern Asia based on results presented at the workshop; (2) a discussion of the existing anthropogenic land-cover change scenarios (HYDE; Klein Goldewijk et al.).

These significant outcomes are the result of three intense days of presentations and discussion sessions. Part of the conference was also dedicated to reviewing the state of existing pollen databases and, more generally, database building. Plenary and group break-out discussions focused on more technical aspects, such as improving pollen databases, especially for the East Asian region, and planning the review paper; and on scientific topics, such as the use of pollen-based and historical land-cover reconstructions for the evaluation of model scenarios of past anthropogenic land-cover change (Fig. 1).

Figure 1: Note the large differences in land cover between the different methodologies.

Mackay, A. Seddon and A.

But paleoecologists are often challenged when it comes to processing, presenting and applying their data to improve ecological understanding and inform management decisions in a broader context (e.g. Froyd and Willis). Participatory exercises in, for example, conservation, plant science, ecology, and marine policy have developed as an effective and inclusive way to identify key questions and emerging issues in science and policy (Sutherland et al.).

With this in mind, we organized the first priority questions exercise in paleoecology with the goal of identifying 50 priority questions to guide the future research agenda of the paleoecology community. The workshop was held at the Biodiversity Institute of the University of Oxford. Participants included invited experts and selected applicants from an open call.

Several months prior to the workshop, suggestions for priority questions had been invited from the wider community via list-servers, mailing lists, society newsletters, and social media, particularly Twitter (Palaeo). By the end of October, over questions had been submitted from almost individuals and research groups. Questions were then coded and checked for duplication and meaning, and similar questions were merged. The remaining questions were re-distributed to those who had initially engaged in the process. Participants were asked to vote on their top 50 priority questions.

Workshop participants were allocated into one of six parallel working groups tasked with reducing the number of questions from to 30 by the end of day one. Each working group also had a co-chair, responsible for recording votes and editing questions on a spreadsheet, and a scribe. This was an intensive process involving considerable debate and editing. During day two, these 30 questions were winnowed down further, with each group arriving at seven priority questions.

The seven questions from each group were then combined to obtain 42 priority questions. Each working group had a further five reserve questions, which everyone voted on in the final session. The eight reserve questions that obtained the most votes were selected to complete the list of 50 priority questions. Working group discussions were often heated and passionate. Compromises won by the chairs and co-chairs were difficult but necessary.

It is important that the final 50 priority questions are not seen as a definitive list, but as a starting point for future dialogue and research ideas. The final list of 50 priority questions and full details of the methodology are currently under review, and the publication will be announced through the PAGES network.


Beal, A.

The conference centered on the dynamics of the Agulhas Current in the present and the geological past; the influence of the current on weather, ecosystems, and fisheries; and the impact of the Agulhas Current on ocean circulation and climate, with a notable focus on the Atlantic Meridional Overturning Circulation (AMOC). Participants came from the areas of ocean and climate modeling, physical and biological oceanography, marine ecology, paleoceanography, meteorology, and marine and terrestrial paleoclimatology.

The Agulhas Current attracts interest from these communities because of its significance to a wide range of climatic, biological and societal issues. The current sends waters from the Indian Ocean to the South Atlantic. This is thought to modulate convective activity in the North Atlantic. It is possible that it even stabilizes the AMOC at times of.

But these feedbacks are not easy to trace, and direct observations and climate models have been the only way to indicate the possible existence of such far-field teleconnections to date. This is where marine paleo-proxy profiles prove helpful. They reveal the functioning of the Agulhas Current under a far larger array of climatic boundary conditions than those present during the short period of instrumental observations. Marine ecosystems were also shown to be measurably impacted by the Agulhas system.

Notably, the high variability associated with the prominence of mesoscale eddies and dipoles along.


Figure 1: A model perspective of Agulhas leakage and the interbasin water transports between the Indian Ocean and Atlantic.

The meteorological relevance of the Agulhas Current was also demonstrated, for example its role as a prominent source of atmospheric heat and its significance in maintaining and anchoring storm tracks. Among other things, these affect the atmospheric westerly Polar Front Jet and the Mascarene High, with onward consequences for regional weather patterns, including extreme rainfall events over South Africa. The conference developed a number of recommendations, two key ones being that efforts should be made to trace the impacts of Agulhas leakage on the changing global climate system at a range of timescales, and that sustained observations of the Agulhas system should be developed.

Implementing these recommendations would constitute a major logistical challenge, and the Western Indian Ocean Sustainable Ecosystem Alliance (WIOSEA) was identified as a possible integrating platform for the cooperation of international and regional scientists toward these goals. This would involve capacity building and training regional technicians and scientists, which could be coordinated through partnerships with the National Research Foundation in South Africa.

In Summer , it recovered ice from the Eemian dating back to ka BP, helping to describe the warming and ice-sheet shrinking at a time of unusually high Arctic summer insolation (NEEM community members). Established in , it now includes scientists from 22 nations and aims at defining the scientific priorities of the ice core community for the coming decade.

While most of the participants work on ice cores, a significant number were scientists working on marine and continental records as well as on climate modeling. The sponsorship received from several institutions, agencies and projects enabled us to invite ten keynote speakers as well as six scientists from emerging countries. Notably, the program covered questions of climate variability at different time scales (from the last to 1 million years), biogeochemical cycles, dating, and ice dynamics. New challenges, such as studying the bacterial content of ice cores, and new methodologies were also the focal point of specific sessions.

Over the five days of the conference, all attendees gathered for the plenary sessions, combined with long poster sessions. These sessions offered valuable and efficient networking opportunities. The full program can be found at: www. Ice core projects from outside polar regions were also well represented, with results obtained from the Andes, the Alps and the Himalayas. Proving that ice core scientific outputs remain of prime importance to high-impact journals, the Chief Editor of Nature as well as an editor of Nature Geoscience attended the full five days of the event.

An open call for bids to organize it will be launched in . Efficient administrative and financial handling of the conference was provided by the Floralis company.

Such growth has presented paleolimnologists with the challenge of dealing with highly quantitative, complex, and multivariate data to document the timing and magnitude of past changes in aquatic systems, and to understand the internal and external forcing of these changes. To cope with such tasks, paleolimnologists continuously add new and more sophisticated numerical and statistical methods to help in the collection, assessment, summary, analysis, interpretation, and communication of data (Birks et al.).

R is both a programming language and a complete statistical and graphical programming environment. Its use has become popular because it is a free and open-source application, but above all because its capability is continuously enhanced by new and diverse packages developed and generously provided by a large community of scientists. This recent workshop, held in the comfortable facilities of the Millport Marine Biological Station, trained researchers on the theory and practice of analyzing paleolimnological data using R.

The course was led by Steve Juggins (Newcastle University) and Gavin Simpson (University College London, recently moved to the University of Regina), two of the researchers who have contributed to the development and application of different statistical tools and packages for paleoecology within the R community. A total of 31 participants from a range of continents (North and South America, Europe, Asia, and Africa), career stages (PhD students to faculty) and scientific backgrounds (paleolimnology, palynology, diatoms, chironomids, sedimentology) enjoyed four long days of training in statistical tools and working on their own data.

Initially, participants were introduced to the R software and language, tools for summarizing data, exploratory data analysis, and graphics. The following lectures and practical sessions focused on simple, multiple and modern regression methods; cluster analysis and ordination techniques used to summarize patterns in stratigraphic data; and hypothesis testing using permutations for temporal data, age-depth modeling, chronological clustering, and smoothing and interpolation of stratigraphic data.

Figure 1: Participants during the R workshop. Photo by S.

The final lectures dealt with the application of techniques for quantitative environmental reconstructions.

The theory and assumptions underpinning each method were introduced in short lectures, after which the students had the opportunity to apply what they had learned to data sets and real environmental questions during practical sessions.
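As an illustration of the kind of quantitative reconstruction technique covered, here is a minimal weighted-averaging transfer function sketch (written in Python for brevity, although the course itself used R packages); deshrinking and cross-validation are omitted and the data are randomly generated:

```python
# A minimal sketch of a weighted-averaging (WA) transfer function for
# quantitative environmental reconstruction; deshrinking and cross-validation
# are omitted, and all data here are illustrative.
import numpy as np

def wa_optima(train_abundance, train_env):
    """Abundance-weighted optimum of each taxon along the environmental gradient.

    train_abundance : (n_lakes, n_taxa) relative abundances in the training set
    train_env       : (n_lakes,) measured environmental variable (e.g. lake pH)
    """
    return (train_abundance * train_env[:, None]).sum(axis=0) / train_abundance.sum(axis=0)

def wa_reconstruct(fossil_abundance, optima):
    """Weighted average of taxon optima for each fossil (downcore) sample."""
    return (fossil_abundance * optima).sum(axis=1) / fossil_abundance.sum(axis=1)

# Example with random data standing in for counts of, e.g., diatom taxa.
rng = np.random.default_rng(2)
train = rng.dirichlet(np.ones(15), size=40)     # 40 lakes, 15 taxa
env = rng.uniform(5.0, 8.5, size=40)            # e.g. lake-water pH
fossil = rng.dirichlet(np.ones(15), size=100)   # 100 downcore samples
reconstruction = wa_reconstruct(fossil, wa_optima(train, env))
```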


There was also time in the evenings for sessions on important R tips, advanced R graphics, special topics proposed by the participants, and for the students to work on their own data. The course was conveniently organized just prior to the 12th International Paleolimnology Symposium (Glasgow, August), which enabled all of the workshop participants to attend the symposium and encouraged further discussions throughout the following week.

PAGES covered travel and course costs for five young researchers from developing countries (Turkey, South Africa, Macedonia, and Argentina), all of whom were very grateful for the opportunity to attend.

It is important because, to properly assess the anthropogenic effect on climate change, an accurate quantification of the natural forcing factors is required. But it is difficult because: (1) natural forcing records are generally not well quantified; (2) the response of the climate system to forcings is non-linear due to various feedback mechanisms and can only be estimated using complex climate models; (3) in spite of their complexity, models may not comprise all relevant processes and have to be validated, but the instrumental records of climate forcing and climate response are generally too short for this purpose and the data set needs to be complemented by proxy data; (4) proxy data are derived from natural archives and are only indirectly related to the physical parameters of interest, and their calibration is based on assumptions that may not be fully valid on longer time scales; and (5) instrumental and proxy data reflect the combined response to all forcings, and not only the influence of the Sun.

Furthermore, the climate system shows internal unforced variability. All this makes separation and quantification of the individual forcings very difficult. The main aim of the first workshop of the solar forcing working group was to assess the present state of the art and identify knowledge gaps by bringing together experts from the solar, the observational and paleo-data, and modeling communities.

The workshop was organized jointly with FUPSOL (Future and past solar influence on the terrestrial climate), a multidisciplinary project of the Swiss National Science Foundation that addresses how past solar variations have affected climate, and how this information can be used to constrain solar-climate modeling.

FUPSOL also aims to address the key question of how a decrease in solar forcing in the next decades could affect climate at global and regional scales. Here are some examples of open questions and problems that were identified in this workshop and will be addressed in future work. Strategies to separate solar and volcanic forcings could be to select periods of low volcanic activity.

As an opening spectacle to the workshop, a medium-sized flare initiated a long magnetic filament burst out from the Sun (Fig. 1). Viewed in extreme ultraviolet light, the filament strand stretched outwards until it finally broke and headed off to the left. Some of the particles from this eruption hit Earth in September , generating a beautiful aurora.

Lucien von Gunten, D. Anderson, B. Chase, M. Curran, J. Gergis, E. Gille, W. Gross, S. Kaufman, T. Kiefer, N.

McKay, I. Mundo, R. Neukom, M. Sano, A. Shah, J. Tyler, A. Viau, S. Wagner, E. Wahl and D.

It is thus a successful example of a large trans-disciplinary effort leading to added value for the scientific community. In , at its second network meeting in Bern, Switzerland (von Gunten et al.), the group committed to PAGES' general objective to promote open access to scientific data and called for all records used for, or emerging from, the 2k project to be publicly archived upon publication of the related 2k studies.

They set up a dedicated NOAA task force to tailor the 2k data archive to the specific needs of the 2k project and to coordinate archiving with NOAA's data architecture and search capabilities. Over the last two years, the regional 2k data managers have worked closely with NOAA to tailor the database infrastructure and prepare the upload of the 2k data. Since the data managers of the regional 2k groups are spread across the globe, the collaboration was organized around bi-monthly teleconference meetings under the lead of NOAA.

In spite of the occasional unearthly meeting hours for some, the interaction between the 2k data and NOAA database groups has worked fruitfully, as the following achievements show.

The 2k database
The paleoclimatology program at NOAA has set up a dedicated 2k project site with subpages for all regional groups (www.). This page was created early in the project to provide the regional groups with a.

Populating the database
A two-step approach was applied for entering the 2k data into the database, in order to serve demands for both speediness and thoroughness. In a first step, the records were archived in the form in which they were published; this ensured that the records were made publicly available exactly at the time of publication and in a format that will remain identical with the data files supplementing the article. In a second step, all these records are currently being resubmitted to NOAA with more detailed metadata information than before, using a new submission protocol.

Additionally, many new and already-stored records that were not used for the PAGES 2k temperature synthesis are being reformatted to the new submission protocol. This will allow improved search and export capabilities for a wealth of records that can currently only be accessed individually.

Improved data submission protocol
The data submission process is a crucial step for the long-term success of a database.

On the one hand, it should contain as much relevant information as possible in order to maximize the value of the data. On the other hand, it should remain simple enough to keep the threshold for data providers as low as possible. The NOAA task force and 2k data managers therefore created a substantially revised submission template file. This new protocol allows the inclusion of more comprehensive information relating to the proxy records and, crucially, is organized in a structured format that allows machine reading and automated searching for defined metadata information.
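A purely illustrative sketch of why such a structured, machine-readable format matters; the field names below are hypothetical and not those of the actual NOAA/2k submission template:

```python
# A purely illustrative sketch of machine-readable proxy metadata and automated
# searching; the field names are hypothetical, not the real NOAA/2k template.
from dataclasses import dataclass

@dataclass
class ProxyRecord:
    site_name: str
    archive_type: str      # e.g. "tree ring", "ice core", "lake sediment"
    proxy_variable: str    # e.g. "ring width", "d18O"
    region: str            # e.g. "Australasia"
    resolution_years: float

records = [
    ProxyRecord("Site A", "tree ring", "ring width", "Australasia", 1.0),
    ProxyRecord("Site B", "ice core", "d18O", "Antarctica", 5.0),
]

# Because every record exposes the same named fields, queries can be automated
# instead of requiring each data file to be opened and inspected by hand.
annual_tree_rings = [r for r in records
                     if r.archive_type == "tree ring" and r.resolution_years <= 1.0]
```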

This is critical in order to maximize the usefulness of the data to other scientists, as it additionally allows them to reprocess underlying features of the records such as the chronology or proxy calibrations.