**Chapter 1: Basic Functional Analysis**

**Abstract:** Functional and convex analysis are closely intertwined. In this chapter we recall the basic concepts and results from functional analysis and calculus that will be needed throughout this book. A first section is devoted to general normed spaces. We begin by establishing some of their main properties, with an emphasis on the linear functions between spaces. This leads us to bounded linear functionals and the topological dual. Second, we review the Hahn-Banach Separation Theorem, a very powerful tool with important consequences. It also illustrates the fact that the boundaries between functional and convex analysis can be rather fuzzy at times. Next, we discuss some relevant results concerning the weak topology, especially in terms of closedness and compactness. Finally, we include a subsection on differential calculus, which also provides an introduction to standard smooth optimization techniques. The second section deals with Hilbert spaces, and their very rich geometric structure, including the ideas of projection and orthogonality. We also revisit some of the general concepts from the first section (duality, reflexivity, weak convergence) in the light of this geometry.
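As a preview of the geometric structure mentioned above, the projection onto a nonempty closed convex subset $C$ of a Hilbert space $H$ is characterized by a standard variational inequality (a classical fact, stated here only as an illustration):

$$
z = P_C(x) \quad \Longleftrightarrow \quad z \in C \ \text{ and } \ \langle x - z,\, y - z\rangle \le 0 \quad \text{for all } y \in C.
$$

Geometrically, the vector $x - P_C(x)$ makes an obtuse angle with every direction pointing from $P_C(x)$ into $C$.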

**Chapter 2: Existence of Minimizers**

**Abstract:** In this chapter, we present sufficient conditions for an extended real-valued function to have minimizers. After discussing the main concepts, we begin by addressing the existence issue in abstract Hausdorff spaces, under certain (one-sided) continuity and compactness hypotheses. We also present Ekeland’s Variational Principle, providing the existence of approximate minimizers that are strict in some sense. Afterwards, we study the minimization of convex functions in reflexive spaces, where the verification of the hypotheses is more practical. Although it is possible to focus directly on this setting, we preferred to take the long path. In fact, the techniques used in the abstract framework are useful for problems that do not fit the convex reflexive setting, but where convexity and reflexivity still play an important role.
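For orientation, one standard formulation of Ekeland's Variational Principle reads as follows: let $(X,d)$ be a complete metric space and $f : X \to (-\infty, +\infty]$ proper, lower semicontinuous and bounded below. Given $\varepsilon > 0$, let $x_0 \in X$ satisfy $f(x_0) \le \inf_X f + \varepsilon$. Then, for every $\lambda > 0$, there exists $\bar{x} \in X$ such that

$$
f(\bar{x}) \le f(x_0), \qquad d(\bar{x}, x_0) \le \lambda, \qquad \text{and} \qquad f(x) > f(\bar{x}) - \frac{\varepsilon}{\lambda}\, d(x, \bar{x}) \quad \text{for all } x \neq \bar{x}.
$$

The last inequality expresses the strictness alluded to above: $\bar{x}$ is the unique minimizer of the perturbed function $x \mapsto f(x) + (\varepsilon/\lambda)\, d(x, \bar{x})$.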

**Chapter 3: Convex Analysis and Subdifferential Calculus**

**Abstract:** This chapter deals with several properties of convex functions, especially in connection with their regularity, on the one hand, and the characterization of their minimizers, on the other. We shall explore sufficient conditions for a convex function to be continuous, as well as several connections between convexity and differentiability. Next, we present the notion of subgradient, a generalization of the concept of derivative for nondifferentiable convex functions that will allow us to characterize their minimizers. After discussing conditions that guarantee their existence, we present the basic (yet subtle) calculus rules, along with their remarkable consequences. Other important theoretical and practical tools, such as the Fenchel conjugate and the Lagrange multipliers, will also be studied. These are particularly useful for solving constrained problems.
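To fix ideas, the subdifferential of a convex function $f : X \to (-\infty, +\infty]$ at a point $x$ is the (possibly empty) set

$$
\partial f(x) = \bigl\{\, x^* \in X^* : f(y) \ge f(x) + \langle x^*,\, y - x\rangle \ \text{ for all } y \in X \,\bigr\},
$$

and minimizers are characterized by Fermat's rule: $\bar{x}$ minimizes $f$ if and only if $0 \in \partial f(\bar{x})$. The Fenchel conjugate mentioned above is $f^*(x^*) = \sup_{x \in X}\,\{\langle x^*, x\rangle - f(x)\}$.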

**Chapter 4: Examples**

**Abstract:** The tools presented in the previous chapters are useful, on the one hand, to prove that a wide variety of optimization problems have solutions; and, on the other, to provide useful characterizations allowing us to determine them. In this chapter, we present a short selection of problems to illustrate some of those tools. We begin by revisiting some results from functional analysis concerning the maximization of bounded linear functionals and the realization of the dual norm. Next, we discuss some problems in optimal control and calculus of variations. Another standard application of these convex analysis techniques lies in the field of elliptic partial differential equations. We shall review the theorems of Stampacchia and Lax-Milgram, along with some variations of Poisson’s equation, including the obstacle problem and the p-Laplacian. We finish by commenting on a problem of data compression and restoration.

**Chapter 5: Problem-Solving Strategies**

**Abstract:** Only in rare cases can problems in function spaces be solved analytically and exactly. In most cases, it is necessary to apply computational methods to approximate the solutions. In this chapter, we discuss some of the basic general strategies that can be applied. First, we present several connections between optimization and discretization, along with their role in the problem-solving process. Next, we introduce the idea of iterative procedures, and discuss some abstract tools for proving their convergence. Finally, we comment on some ideas that are useful to simplify or reduce the problems, in order to make them tractable or more efficient to solve.
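The prototype of such an iterative procedure, with a convergence proof supplied by an abstract tool (Banach's fixed-point theorem), is the fixed-point iteration for a contraction. A minimal sketch, not taken from the book, using `cos` as the contraction:

```python
import math

def fixed_point(T, x0, tol=1e-12, max_iter=10_000):
    """Iterate x_{k+1} = T(x_k) until successive iterates are within tol.

    If T is a contraction on a complete set containing the iterates,
    Banach's theorem guarantees convergence to the unique fixed point.
    """
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# cos is a contraction on [0, 1] (|sin x| <= sin 1 < 1 there),
# so the iteration converges to the unique solution of cos(x) = x.
x_star = fixed_point(math.cos, 1.0)
```

The same template — iterate, then invoke an abstract convergence result — underlies the more sophisticated schemes of the next chapter.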

**Chapter 6: Keynote Iterative Methods**

**Abstract:** In this chapter we give an introduction to the basic (sub)gradient-based methods for minimizing a convex function on a Hilbert space. We pay special attention to the proximal point algorithm and the gradient method, which are interpreted as time discretizations of the steepest descent differential inclusion. Moreover, these methods, along with some extensions and variants, are the building blocks for other, more sophisticated, methods that exploit particular features of the problems, such as the structure of the feasible (or constraint) set. The choice of proximal- or gradient-type schemes depends strongly on the regularity of the objective function.
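The discretization viewpoint can be illustrated on a one-dimensional toy problem. For the steepest descent dynamics $x'(t) = -\nabla f(x(t))$, an explicit Euler step gives the gradient method and an implicit Euler step gives the proximal point algorithm. A minimal sketch (the function $f(x) = (x-3)^2/2$ and the step size are illustrative choices, not from the book):

```python
# f(x) = (x - 3)^2 / 2, so f'(x) = x - 3 and the minimizer is x* = 3.

def grad_step(x, step):
    """Explicit Euler on x' = -f'(x): the gradient method."""
    return x - step * (x - 3.0)

def prox_step(x, step):
    """Implicit Euler: x_{k+1} = argmin_y f(y) + (y - x)^2 / (2 * step).

    For this quadratic f the proximal step has the closed form below
    (solve (y - 3) + (y - x) / step = 0 for y).
    """
    return (x + step * 3.0) / (1.0 + step)

x_g = x_p = 0.0
for _ in range(100):
    x_g = grad_step(x_g, 0.5)
    x_p = prox_step(x_p, 0.5)
# Both sequences approach the minimizer x* = 3.
```

Note that the proximal step only requires $f$ to be proper, convex and lower semicontinuous, while the gradient step needs $\nabla f$; this is the regularity trade-off mentioned above.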