
Monday, June 01, 2009

Prototypal Inheritance in Javascript - Template Method meets Strategy

I have been reading some of the papers on Self, a programming environment that models computation exclusively in terms of objects. However, unlike the classical object-oriented approach, Self is a classless language, where everything is an object. An object has slots - each slot has a name and a value. The slot name is always a String, while the value can be any other Self object. A slot can also point to a method, consisting of code. A special designated slot points to the parent object in the hierarchy. Hence each object is consistently designed for extensibility through inheritance. But since there are no class structures, everything is dynamic and resolved at runtime. Objects interact through messages - when an object receives a message, it looks up its slots for a match. If a matching slot is not found, the search continues up the chain through successive parent pointers, till the root is reached.

Prototype based languages offer a different way of implementing objects, and hence require a different thinking for structuring your programs. They make you think more in terms of messages that your objects will receive, and how the messages get propagated up the inheritance chain.
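The slot-lookup algorithm described above is simple enough to sketch. Here is a minimal, hypothetical model (written in Scala, the language used in the later posts - the names ProtoObject and lookup are my own, not Self's): slots as a map, the parent as an optional pointer, and message lookup walking the chain.

```scala
// A minimal, hypothetical sketch (not Self itself) of slot lookup:
// each object is a map of named slots plus an optional parent pointer,
// and a message not matched locally is delegated up the parent chain.
class ProtoObject(slots: Map[String, Any],
                  parent: Option[ProtoObject] = None) {
  def lookup(name: String): Option[Any] =
    slots.get(name).orElse(parent.flatMap(_.lookup(name)))
}

val root  = new ProtoObject(Map("greet" -> "hello"))
val child = new ProtoObject(Map("name" -> "self"), Some(root))

child.lookup("name")   // found in the object's own slots
child.lookup("greet")  // found by walking up to the parent
child.lookup("absent") // None - the search exhausts at the root
```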

Javascript follows an almost identical architecture, where the hierarchies of objects are constructed through prototypes. This post is not about Self or, for that matter, about the Javascript language. Some time back I had blogged about how the Template Method design pattern gets subsumed into higher order functions and closures when implemented using functional programming languages.

In a class based language, template method pattern is implemented in terms of inheritance, which makes the structure of the pattern static and makes the derivatives of the hierarchy statically coupled to the base abstraction. Closures liberate the pattern structure from this compile time coupling and make it dynamic. But once we take off the class inheritance part and use higher order functions to plug in the variable parts of the algorithm, what we end up with closely matches the Strategy pattern. Have a look at James Iry's insightful comments in my earlier post.
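To see the subsumption concretely, here is a sketch (in Scala, not from the original post) of the Template Method pattern with the hooks lifted into function parameters - which is exactly what a Strategy looks like once class inheritance is taken away:

```scala
// Template Method without inheritance: the invariant flow is a
// higher-order function, and the variable parts are function-valued
// parameters - indistinguishable from the Strategy pattern.
def process(doInit: () => Unit,
            doProcess: () => Unit,
            doEnd: () => Unit): Boolean = {
  doInit()     // hook: variable part
  doProcess()  // hook: variable part
  doEnd()      // hook: variable part
  true
}

val steps = scala.collection.mutable.ListBuffer[String]()
val ok = process(() => steps += "init",
                 () => steps += "process",
                 () => steps += "end")
```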

James also hinted at another level of subsumption which is more interesting - the case of the two patterns implemented in a prototype based language like Javascript. Here is how it looks ..

// the template function at the base object
// defines the generic flow
// uses hooks to be plugged in by derived objects

var processor = {
  process: function() {
    this.doInit();
    this.doProcess();
    this.doEnd();
    return true;
  }
};


We construct another object that inherits from the base object. The function beget is the one that Douglas Crockford defines as a helper to create a new object using another object as the prototype.

if (typeof Object.beget !== 'function') {
  Object.beget = function(o) {
    var F = function() {};
    F.prototype = o;
    return new F();
  };
}

var my_processor  = Object.beget(processor);


The new object now implements the variable parts of the algorithm.

my_processor.doInit = function() {
  //..
};
my_processor.doProcess = function() {
  //..
};
my_processor.doEnd = function() {
  //..
};


and we invoke the function from the base object ..

my_processor.process();

If we need to define another specialization of the algorithm that only has to override a single variable part, we do it likewise by supplying the object my_processor as the prototype ..

var your_processor= Object.beget(my_processor);
your_processor.doEnd = function() {
  //.. another specialization
};

your_processor.process();


So what we get is a dynamic version of the Template Method pattern with no static coupling - thanks to prototypal inheritance of Javascript. Is this a Template Method pattern or a Strategy pattern ? Both get subsumed into the prototypal nature of the language.

Sunday, January 18, 2009

Generic Repository and DDD - Revisited

Greg Young talks about the generic repository pattern and how to reduce the architectural seam of the contract between the domain layer and the persistence layer. The Repository is the contract of the domain layer with the persistence layer - hence it makes sense to have the contract of the repository as close to the domain as possible. Instead of a contract as opaque as Repository.FindAllMatching(QueryObject o), it is always recommended that the domain layer works with something as self-revealing as CustomerRepository.getCustomerByName(String name), which explicitly calls out the participating entities of the domain. +1 on all his suggestions.

However, he suggests using composition, instead of inheritance, to encourage reuse along with encapsulation of the implementation details within the repository itself .. something like the following (Java-ized)

public class CustomerRepository implements ICustomerRepository {
  private Repository<Customer> internalGenericRepository;

  public List<Customer> getCustomersWithFirstNameOf(String name) {
    return internalGenericRepository.fetchByQueryObject(
      new CustomerFirstNameOfQuery(name)); //could be hql or whatever
  }
}



Quite some time ago, I had a series of blogs on DDD, JPA and how to use generic repositories as an implementation artifact. I had suggested the use of the Bridge pattern to allow independent evolution of the interface and the implementation hierarchies. The interface side of the bridge will model the domain aspect of the repository and will ultimately terminate at the contracts that the domain layer will use. The implementation side of the bridge will allow for multiple implementations of the generic repository, e.g. JPA, native Hibernate or even, with some tweaking, some other storage technologies like CouchDB or the file system. After all, the premise of the Repository is to offer a transparent storage and retrieval engine, so that the domain layer always has the feel that it is operating on an in-memory collection.

// root of the repository interface
public interface IRepository<T> {
  List<T> read(String query, Object[] params);
}

public class Repository<T> implements IRepository<T> {

  private RepositoryImpl repositoryImpl;

  public List<T> read(String query, Object[] params) {
    return repositoryImpl.read(query, params);
  }

  //..
}



Base class of the implementation side of the Bridge ..

public abstract class RepositoryImpl {
  public abstract <T> List<T> read(String query, Object[] params);
}


One concrete implementation using JPA ..

public class JpaRepository extends RepositoryImpl {

  // to be injected through DI in Spring
  private EntityManagerFactory factory;

  @Override
  public <T> List<T> read(String query, Object[] params) {
    //.. JPA based implementation
  }
}


Another implementation using Hibernate. We can have similar implementations for a file system based repository as well ..

public class HibernateRepository extends RepositoryImpl {
  @Override
  public <T> List<T> read(String query, Object[] params) {
    // .. hibernate based implementation
  }
}


Domain contract for the repository of the entity Restaurant. It is not opaque or narrow, uses the Ubiquitous language and is self-revealing to the domain user ..

public interface IRestaurantRepository {
  List<Restaurant> restaurantsByEntreeName(final String entreeName);
  //..
}


A concrete implementation of the above interface. Implemented in terms of the implementation artifacts of the Bridge pattern. At the same time the implementation is not hardwired with any specific concrete repository engine (e.g. JPA or filesystem). This wiring will be done during runtime using dependency injection.

public class RestaurantRepository extends Repository<Restaurant>
  implements IRestaurantRepository {

  public List<Restaurant> restaurantsByEntreeName(String entreeName) {
    Object[] params = new Object[1];
    params[0] = entreeName;
    return read(
      "select r from Restaurant r where r.entrees.name like ?1",
      params);
  }
  // .. other methods implemented
}


One argument could be that the query string passed to the read() method is dependent on the specific engine used. But it can very easily be abstracted using a factory that returns the appropriate metadata required for the query (e.g. named queries for JPA).
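One way such a factory could look (a hypothetical sketch with invented names, written in Scala for brevity although the surrounding code is Java): the repository asks for a query by logical name, and each engine's factory supplies its own representation.

```scala
// A hypothetical query-metadata factory: the domain-facing repository
// asks for a query by logical name, and each engine maps the name to
// its own syntax (a JPQL string here; it could equally be a JPA named
// query, an HQL string, or a key into a file-based index).
trait QueryFactory {
  def queryFor(name: String): String
}

object JpaQueryFactory extends QueryFactory {
  private val queries = Map(
    "Restaurant.byEntreeName" ->
      "select r from Restaurant r where r.entrees.name like ?1"
  )
  def queryFor(name: String): String = queries(name)
}

JpaQueryFactory.queryFor("Restaurant.byEntreeName")
```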

How does this compare with Greg Young's solution ?

Some of the niceties of the above Bridge based solution are ..

  • The architecture seam exposed to the domain layer is NOT opaque or narrow. The domain layer works with IRestaurantRepository, which is intention revealing enough. The actual implementation is injected using Dependency Injection.

  • The specific implementation engine is abstracted away and once again injected using DI. So, in the event of using alternative repository engines, the domain layer is NOT impacted.

  • Greg Young suggests using composition instead of inheritance. The above design also uses composition to encapsulate the implementation within the abstract base class Repository.


However in case you do not want to have the complexity or flexibility of allowing switching of implementations, one leg of the Bridge can be removed and the design simplified.

Sunday, May 25, 2008

Designing Internal DSLs in Scala

In an earlier post I had talked about building external DSLs using parser combinators in Scala. External DSLs involve parsing of syntax foreign to the native language - hence the ease of developing external DSLs depends a lot on the parsing and parse tree manipulation capabilities available in existing libraries and frameworks. Internal DSLs are a completely different beast altogether, and depend singularly on the syntax and meta programming abilities that the language offers. If you have a language that is syntactically rigid and does not offer any meta (or higher level) programming features, then there is no way you can design usable internal DSLs. The best you can do is approximate them with fluent interfaces.

Scala is a statically typed language that offers lots of features for designing concise, user friendly internal DSLs. I have been working on designing a DSL in Scala for interacting with a financial trading and settlement system, implemented primarily in Java. Since Scala interoperates with Java quite well, a DSL designed in Scala can be a potent tool to provide your users with nicely concise, malleable APIs even over existing Java domain model.

Here is a snippet of the DSL in action, for placing client orders for buy/sell of securities to the exchange ..


val orders = List[Order](

  // use premium pricing strategy for order
  new Order to buy(100 sharesOf "IBM")
            maxUnitPrice 300
            using premiumPricing,

  // use the default pricing strategy
  new Order to buy(200 sharesOf "GOOGLE")
            maxUnitPrice 300
            using defaultPricing,

  // use a custom pricing strategy
  new Order to sell(200 bondsOf "Sun")
            maxUnitPrice 300
            using {
              (qty, unit) => qty * unit - 500
            }
)



The DSL looks meaningful enough for the business analysts as well, since it uses the domain language and does not contain much of the accidental complexities that we get in languages like Java. The language provides easy options to plug in default strategies (e.g. for pricing orders, as shown above). Also it offers power users the ability to define custom pricing policies inline when instantiating the Order.

Here are some of the niceties in the syntax of Scala that makes it a DSL friendly language ..

  • Implicits

  • Higher order functions

  • Optional dots, semi-colons and parentheses

  • Operators like methods

  • Currying


Implicits are perhaps the biggest powerhouse for designing user friendly DSLs. They do away with the static cling of Java and yet offer a safe way to extend existing abstractions. Implicits in Scala are perhaps one of the best examples of a meaningful compromise in language design between the uncontrolled open classes of Ruby and the sealed abstractions of Java. In the above example DSL, I have opened up the Int class and added methods that convert a raw number to a quantity of shares. Here is the snippet that allows me to write code like sell(200 bondsOf "Sun") as valid Scala code.


class PimpedInt(qty: Int) {
  def sharesOf(name: String) = {
    (qty, Stock(name))
  }

  def bondsOf(name: String) = {
    (qty, Bond(name))
  }
}

implicit def pimpInt(i: Int) = new PimpedInt(i)



And the best part is that the entire extension of the class Int is lexically scoped and will only be available where the implicit conversion pimpInt is in scope.
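That lexical scoping can be demonstrated with a small self-contained sketch (ShareSyntax is a hypothetical name): the conversion lives in an object, and the pimped syntax only compiles where that object's members are imported.

```scala
import scala.language.implicitConversions

case class Stock(name: String)

// The conversion is confined to ShareSyntax: 200 sharesOf "IBM" only
// compiles in scopes that bring the implicit in via an explicit import.
object ShareSyntax {
  class PimpedInt(qty: Int) {
    def sharesOf(name: String): (Int, Stock) = (qty, Stock(name))
  }
  implicit def pimpInt(i: Int): PimpedInt = new PimpedInt(i)
}

import ShareSyntax._
val holding = 200 sharesOf "IBM"  // (200, Stock("IBM"))
```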

Scala, having strong functional capabilities, offers higher order functions, where first class functions can be passed around like objects in the OO world. This helps define custom control abstractions that look like the natural syntax of the programming language. This is another great feature that helps design DSLs in Scala. Add to that optional parentheses and dots, and you can have syntax like ..


to buy(200 sharesOf "GOOGLE")
maxUnitPrice 300
using defaultPricing



where defaultPricing and premiumPricing are functions that have been passed on as arguments to methods. The method using in class Order takes a function as input. And you can define the function inline as well, instead of passing a predefined one. This is illustrated in the last Order created in the above example.

Another small subtlety that Scala offers is the convenience of sugarizing methods of a class that take one parameter. In the above example, 100 sharesOf "IBM" is actually desugared as 100.sharesOf("IBM"), though the former looks more English like, without the unnecessary dot and parentheses. Nice!
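To make the desugaring concrete, here is a standalone sketch (repeating a pared-down PimpedInt) showing that the operator-style spelling and the dotted spelling produce exactly the same value:

```scala
import scala.language.implicitConversions

case class Stock(name: String)

class PimpedInt(qty: Int) {
  def sharesOf(name: String): (Int, Stock) = (qty, Stock(name))
}
implicit def pimpInt(i: Int): PimpedInt = new PimpedInt(i)

// operator style is pure sugar for the dotted single-argument call
val sugared   = 100 sharesOf "IBM"
val desugared = 100.sharesOf("IBM")
```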

Here is the complete listing of an abbreviated version of the DSL and a sample usage ..


object TradeDSL {

  abstract class Instrument(name: String) { def stype: String }
  case class Stock(name: String) extends Instrument(name) {
    override val stype = "equity"
  }
  case class Bond(name: String) extends Instrument(name) {
    override val stype = "bond"
  }

  abstract class TransactionType { def value: String }
  case class buyT() extends TransactionType {
    override val value = "bought"
  }
  case class sellT() extends TransactionType {
    override val value = "sold"
  }

  class PimpedInt(qty: Int) {
    def sharesOf(name: String) = {
      (qty, Stock(name))
    }

    def bondsOf(name: String) = {
      (qty, Bond(name))
    }
  }

  implicit def pimpInt(i: Int) = new PimpedInt(i)

  class Order {
    var price = 0
    var ins: Instrument = null
    var qty = 0;
    var totalValue = 0
    var trn: TransactionType = null
    var account: String = null

    def to(i: Tuple3[Instrument, Int, TransactionType]) = {
      ins = i._1
      qty = i._2
      trn = i._3
      this
    }
    def maxUnitPrice(p: Int) = { price = p; this }

    def using(pricing: (Int, Int) => Int) = {
      totalValue = pricing(qty, price)
      this
    }

    def forAccount(a: String)(implicit pricing: (Int, Int) => Int) = {
      account = a
      totalValue = pricing(qty, price)
      this
    }
  }

  def buy(qi: Tuple2[Int, Instrument]) = (qi._2, qi._1, buyT())
  def sell(qi: Tuple2[Int, Instrument]) = (qi._2, qi._1, sellT())

  def main(args: Array[String]) = {

    def premiumPricing(qty: Int, price: Int) = qty match {
      case q if q > 100 => q * price - 100
      case _ => qty * price
    }

    def defaultPricing(qty: Int, price: Int): Int = qty * price

    val orders = List[Order](

      new Order to buy(100 sharesOf "IBM")
                maxUnitPrice 300
                using premiumPricing,

      new Order to buy(200 sharesOf "CISCO")
                maxUnitPrice 300
                using premiumPricing,

      new Order to buy(200 sharesOf "GOOGLE")
                maxUnitPrice 300
                using defaultPricing,

      new Order to sell(200 bondsOf "Sun")
                maxUnitPrice 300
                using {
                  (qty, unit) => qty * unit - 500
                }
    )
    println((0 /: orders)(_ + _.totalValue))
  }
}
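A note on the last line of main: `/:` is the symbolic alias for foldLeft, threading an accumulator through the list of orders. A minimal standalone sketch of the same fold (with a pared-down Order, and spelled foldLeft since that is the underlying method):

```scala
// (0 /: orders)(_ + _.totalValue) reads: start the accumulator at 0
// and combine left to right - i.e. a foldLeft over the list.
case class Order(totalValue: Int)

val orders = List(Order(100), Order(250), Order(75))

val total = orders.foldLeft(0)(_ + _.totalValue)
// 0 + 100 + 250 + 75
```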

Monday, February 25, 2008

Implementation Inheritance with Mixins - Some Thoughts

Eugene's post It is safer not to invent safe hash map / Java once again brings forth one of the most controversial topics in the OO space - public inheritance, in general and implementation inheritance, in particular. Inheritance is a hard problem to solve in OO programming. Anton Van Straaten says in this LtU thread ..
".. inheritance definitely seems to be an "unsolved problem" in OO languages. As already mentioned, it involves so many competing forces: typing, interface vs. implementation, single vs. multiple, etc. As a result, languages have typically picked some workable compromise, and lived with the limitations this produces."

And, so has Java. In Java, the recommended practice is to make heavy use of interfaces and design concrete classes using interface inheritance. But there is no elegant way to reuse interface implementations, unless you resort to tools like AOP that work on bytecode weaving. People often try to use public inheritance on concrete classes in Java to add new behaviors or override existing ones, and end up losing extensibility in orthogonal directions.

Consider the abstraction java.util.Map<K,V>. The following are the various collaborations that the abstraction participates in.

  • It has subinterfaces like SortedMap<K,V>, ConcurrentMap<K,V>, which implies linear extension of the contract. That is, these subinterfaces add additional behavioral contracts (no implementation) to the generic map.

  • It has multiple implementations like HashMap<K,V>, TreeMap<K,V> etc.


Hence any added behavior to a generic component like Map<K,V> needs to work across all extension points of the abstraction. This is where the suggested implementations of SafeHashMap<K,V> by the original authors fall flat. As Eugene points out, extending java.util.HashMap<K,V> will result in the additional behavior NOT being implemented in other variants of Map<K,V>, while adopting a wrapper based implementation will not scale with SortedMap<K,V> and ConcurrentMap<K,V>.

And, of course, implementing the contract of Map<K,V> ground up with additional behavior will force a blatant copy-paste based implementation.

Eugene's implementation, based on static methods and camouflaged with static imports, gives it the best possible shape in Java.

Can we do better ?

No, not in Java, unless you use tools like aspects, which may seem too magical to many, and of course, are never a substitute for better language features.

What we need is the ability to compose independent granular abstractions that can seamlessly mix into the main abstraction additively, to introduce new behavior or override existing ones. CLOS refers to this technique as mixin based programming - Gilad Bracha, in his OOPSLA '90 paper, defines a mixin as an abstract subclass that may be used to specialize the behavior of a variety of parent classes, often by defining new methods that perform some actions and then call the corresponding parent methods.

The example in CLOS, which the paper illustrates is ..


(defclass Graduate-mixin () (degree))

(defmethod display ((self Graduate-mixin))
  (call-next-method)
  (display (slot-value self 'degree)))



In the above example, the mixin method display() invokes call-next-method despite the fact that the mixin does not have any explicit parent. The parent comes in implicitly when the mixin class is mixed-in with another abstraction that supports the same method display().


(defclass Graduate (Graduate-mixin Person)())



Here Person is the superclass of Graduate and Graduate-mixin mixes in with the hierarchy to provide the additional behavior. The mixin has no independent existence and cannot be instantiated on its own. It comes to life only when it is mixed in with an existing hierarchy. Hence the term abstract subclass.

Mixins provide a notion of compositional inheritance that allow sharing of implementations along with pure interface inheritance, without the noise of an intermediate abstract class - it is like being able to provide implementation to Java interfaces. There have been a few proposals in the recent past for adding mixins in Java.

Till we have mixins in Java 7 ..

Scala provides mixin implementations as traits. Reusable behaviors can be modeled as traits in Scala and mixed in with concrete classes during class definition as well as object creation. The most commonly demonstrated use of a mixin in Scala is the Ordered trait, which implements total ordering of data for the class with which it is mixed in.


class Person(val lastName: String, val firstName: String)
extends Ordered[Person] {
  def compare(that: Person): Int = {
    //.. implementation
  }
  //..
}



The class has to implement the compare method, but gets convenient comparison operators for free by mixing in the trait. This is implementation inheritance without the abstract class. And the class designer can stack in multiple behaviors simultaneously by mixing in many traits at the same time.
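Stacking can be seen in a tiny sketch (hypothetical traits, not from the post): each mixin decorates whatever super resolves to at mix-in time via abstract override, and both layers compose in a single object creation expression.

```scala
abstract class Greeter { def greet: String }
class BasicGreeter extends Greeter { def greet = "hello" }

// each mixin decorates the behavior beneath it via abstract override
trait Shouting extends Greeter {
  abstract override def greet = super.greet.toUpperCase
}
trait Excited extends Greeter {
  abstract override def greet = super.greet + "!"
}

// both behaviors stacked at once: Excited wraps Shouting wraps the base
val g = new BasicGreeter with Shouting with Excited
g.greet  // "HELLO!"
```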

Scala traits also offer implementation inheritance on a per-object basis, through mixins during object creation.

Consider adding a synchronized get method to Scala Maps as an additional behavior overriding the existing api. We can define a trait for this behavior ..


trait SynchronizedGet[A,B] extends Map[A, B] {
  abstract override def get(key: A): Option[B] = synchronized {
    super.get(key)
  }
}



which overrides the get method of Map with a synchronized variant.

Now we can mix in this behavior with any variant of a Scala Map, irrespective of the underlying implementation ..


// Scala HashMap
val names = new HashMap[String, List[String]]
              with SynchronizedGet[String, List[String]]

//.. use names

// wrapper for Java Map
val stuff = new scala.collection.jcl.LinkedHashMap[String, List[String]]
              with SynchronizedGet[String, List[String]]

//.. use stuff



In fact Scala provides a complete trait SynchronizedMap[A,B] that synchronizes the Map operations of the class into which it is mixed in.

Mixins provide the flexibility to add behaviors to existing abstractions through implementation inheritance without some of the drawbacks of existing mainstream languages as discussed above. Unlike CLOS mixins, Scala mixins do not support chaining of methods without a context. The above mixin SynchronizedGet needs to inherit from Map in order to have invocation of super in the overridden method. This takes away some of the flexibility and makes the mixin reusable only in the context of the abstraction that it inherits from. However, like CLOS, Scala mixins are also based on the linearization technique that allows users the flexibility to manipulate the order in which the behavior chains get invoked.
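The effect of linearization order can be seen in a small sketch (hypothetical traits): the right-most trait in the mix-in expression receives the call first, so reversing the mixin order reverses the chain of super invocations.

```scala
abstract class Stage { def run: String }
class Base extends Stage { def run = "base" }

trait StepA extends Stage {
  abstract override def run = super.run + "->A"
}
trait StepB extends Stage {
  abstract override def run = super.run + "->B"
}

// swapping the mix-in order swaps the order of the super-call chain
val ab = new Base with StepA with StepB
val ba = new Base with StepB with StepA

ab.run  // "base->A->B"
ba.run  // "base->B->A"
```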

Overall, I think mixins are a useful feature to have in your language and mixins in Scala offer a better way to use implementation inheritance for code reuse than Java.

Sunday, February 03, 2008

Scala - To DI or not to DI

People have been discussing dependency injection frameworks in languages that offer powerful abstraction techniques for construction and composition of objects. The Scala mailing list points towards various approaches to implementing DI capabilities using the abstraction mechanisms over types and values and the great composition techniques of traits. Gilad Bracha talks about his language, Newspeak, organized around namespaces, modules and lexically scoped nestable classes. Newspeak offers powerful class instantiation and module composition features that alleviate the necessity of writing explicit static factories or magical instantiation of objects through DI techniques.

I have also been thinking about the relevance of Dependency Injection in Scala and taking cue from the ideas discussed in the Scala community, tried out some techniques in a Scala application.

One of the features that a DI framework offers is the ability to decouple explicit concrete dependencies from the abstraction of the component. This is what Martin Odersky calls the service oriented software component model, where the actual component uses the services of other cooperating components without being statically dependent on their implementations. This composition of services is typically done using a DSL-like module that sets up the wiring between components by injecting all relevant dependencies into the component graph.

Do I need a separate DI framework in Scala ?

With powerful mechanisms of object construction (and built-in factory method apply() with companion objects), abstraction over types and values and flexible composition using traits, Scala offers more power than Java in decoupling of concrete implementations from the abstract services. Every component can have separate configuration units that will inject the concrete type and data members that wire up non-intrusively to deliver the runtime machinery for the particular service.
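The companion-object apply() mentioned above can itself serve as the factory seam (a hypothetical sketch, reusing the Calculator name from the example below): call sites never name a concrete class, so the implementation behind the companion can be swapped without touching them.

```scala
trait Calculator {
  def calculate(basic: Double, noOfDays: Int): Double
}

// the companion's apply() acts as a built-in factory method: callers
// write Calculator() and stay decoupled from the concrete class, which
// can remain private to the companion and be swapped out freely
object Calculator {
  private class DefaultCalculator extends Calculator {
    def calculate(basic: Double, noOfDays: Int) = basic * noOfDays
  }
  def apply(): Calculator = new DefaultCalculator
}

val calc = Calculator()
calc.calculate(100.0, 30)  // 3000.0
```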

Here is an example Scala class for computing the salary sheet of employees ..


abstract class SalaryCalculationEngine {
    trait Context {
        val calculator: Calculator;
        val calendar: Calendar;
    }

    protected val ctx: Context
    type E <: Employee

    def calculate(dailyRate: Double): Double = {
        ctx.calculator.calculate(dailyRate, ctx.calendar.noOfDays)
    }

    def payroll(employees: List[E]) = {
        employees.map(_.getDailyRate).foreach(s => println(calculate(s)))
    }
}



where Calculator and Calendar are defined as traits that mix in ..


trait Calculator {
    def calculate(basic: Double, noOfDays: Int): Double
}

trait Calendar {
    def noOfDays: Int
}



Notice the use of the bounded abstract type and the abstract data members as configurable points of the abstraction. A concrete implementation of the above class will inject these abstract members to complete the runtime machinery. The configurable abstract data members form the context of the abstraction and has been encapsulated into another trait. This will be mixed in with a concrete implementation as part of the configuration module. Before going into the usage of the abstraction SalaryCalculationEngine, we need to configure the abstract types and data members. Let us define a module for Class1 Employees that will supply the exact concrete configuration parameters ..


trait Class1SalaryConfig {
    val calculator = new DefaultCalculator
    val calendar = new DefaultCalendar

    class DefaultCalculator extends Calculator {
        def calculate(basic: Double, noOfDays: Int): Double = basic * noOfDays
    }

    class DefaultCalendar extends Calendar {
        def noOfDays: Int = 30
    }
}



and use this configuration to instantiate the abstraction for generating salary sheet ..


val emps = List[Employee](..

val sal = new SalaryCalculationEngine {
    type E = Employee
    protected val ctx = new Class1SalaryConfig with Context
}
sal.payroll(emps)



Note how the concrete configuration (Class1SalaryConfig) mixes-in with the Context defined in the abstraction to inject the dependencies.

We can easily swap out the current implementation by mixing in with another configuration - the MockConfiguration ..


trait MockSalaryConfig {
    val calculator = new MockCalculator
    val calendar = new MockCalendar

    class MockCalculator extends Calculator {
        def calculate(basic: Double, noOfDays: Int): Double = 0
    }

    class MockCalendar extends Calendar {
        def noOfDays: Int = 10
    }
}



and the application ..


val mock = new SalaryCalculationEngine {
    type E = Employee
    protected val ctx = new MockSalaryConfig with Context
}
mock.payroll(emps)



Extensibility ..

Traits make the above scheme very extensible. If we add another dependency in the Context, then we just need to provide a configuration for it in the implementation of the config trait. All sites of usage do not need to change since the Context mixes in dynamically with the configuration.
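To make that concrete, here is a hypothetical sketch (pared-down versions of Context and Class1SalaryConfig, plus an invented Auditor dependency): only the Context and the config trait change, and every usage site keeps the same mix-in expression.

```scala
trait Auditor { def audit(msg: String): String }

trait Context {
  val auditor: Auditor  // the newly added abstract dependency
}

trait Class1SalaryConfig {
  // the binding for the new dependency lives only in the config trait
  val auditor: Auditor = new Auditor {
    def audit(msg: String): String = "AUDIT: " + msg
  }
}

// usage sites mix the config in exactly as before
val ctx = new Class1SalaryConfig with Context
ctx.auditor.audit("salary run")  // "AUDIT: salary run"
```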

Powerful abstraction techniques of the Scala language help us achieve easy composability of services and enable swapping in and out of alternative implementations in a fairly non-intrusive manner - one of the main features that DI frameworks offer. The configurations can be easily reused at a much coarser level of granularity and can be kept decoupled from the main flow of the application. From this point of view, they can act similar to the Modules of Guice, as they serve as the repository for binding information.

However, standard dependency injection frameworks shine in several other ways which the above abstraction techniques fail to achieve:


  • DI frameworks offer a container of their own that abstracts the object creation and injection services in a manner completely decoupled from the main business logic of the application. To the application code, the developer gets a bunch of unit testable classes with all dependencies injected transparently through declarative configuration based on DSLs. Language based techniques are not as non-intrusive and often tangle with the main logic of the application.

  • DI frameworks like Guice and Spring have a separate lifecycle of their own and perform lots of work upfront during application initialization. This is called the bootstrapping process, when all published dependencies are analysed and recursively injected starting from the root class. Hence runtime performance is greatly enhanced, since we have the injector as well as all bindings completely set up. In the Scala approach above, every class asks for its configurations - hence it is more like a Service Locator pattern.

  • Modules in Guice provide a nice way to decouple compile time independent components and act as the central repository for all published dependencies that need to be injected. The Spring application context does the same thing during startup. You create a dependency one time and use it in many places - hence development scalability improves. The configuration specification in Scala is at a finer level of granularity and is strictly not centrally managed. I would love to have a DSL that allows me to manage all configurations declaratively in a centralized repository.

  • Another great feature that DI frameworks provide is integration with Web components and frameworks that allow objects to be created with specialized scopes, e.g. an object per http request, or an object per user session. These facilities come out of the box and provide great productivity boost to the developers.

  • Interceptors and AOP are another area where the DI frameworks shine. The following snippet from the Guice manual applies a transaction interceptor to all methods annotated with @Transactional ..




binder.bindInterceptor(
    any(),                              // Match classes.
    annotatedWith(Transactional.class), // Match methods.
    new TransactionInterceptor()        // The interceptor.
);



Despite all the powerful abstraction techniques present in the Scala language, I think a separate dependency injection framework has a lot to offer. Most importantly, it addresses object construction, injection and interception of lifecycle services as a completely separate concern from the application logic, leading to more modular software construction.