The responsibility of ZooKeeper is to keep and maintain configuration; in the case of Solr (ZooKeeper can be used for other purposes as well), it maintains the configuration for the Solr indexes. However, with k8s, we want to be able to deploy our pods without having to coordinate the order. ZooKeeper stores its data about the cluster in a data directory, and this directory must be empty before starting ZooKeeper for the first time. Here you'll find basically two different configurations: one (or more) Solr instances with a single ZooKeeper running as a standalone node, or one (or more) Solr instances with a ZooKeeper ensemble (which means a cluster). Note: the IP address 192.168.99.101 allocated by minikube will change from environment to environment. Then create the Kubernetes cluster cluster-solr; note that in this tutorial I've chosen the machine type n1-standard-4, with 4 cores and 15 GB RAM. Step: create the ZooKeeper ConfigMap, PodDisruptionBudget, and Service. So now let's see a StatefulSet in action. By the end, you should have a firm understanding of how to deploy and operate Solr in Kubernetes. We can then regenerate our solr.yaml file with the following command. Applying the new template shows us that a new deployment and service have been created, and we can see that we now have a solr-exporter pod running. Looking in Prometheus, we can now see metrics from Solr, for example the number of documents in each shard. There is a staggering number of metrics the exporter provides: nearly 1,400 for a three-node cluster with a single collection. The classic example of a pod with multiple related containers is running Apache httpd and memcached in the same pod.
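For reference, the ZooKeeper settings described above live in zoo.cfg. Below is a minimal sketch for a three-node ensemble; the paths and hostnames are assumptions for illustration, not values taken from any particular chart.

```properties
# Where ZooKeeper stores data about the cluster; must be empty on first start.
dataDir=/var/lib/zookeeper/data
clientPort=2181
tickTime=2000
initLimit=10
syncLimit=5
# Ensemble members; a standalone node would omit these server.* lines.
server.1=zk-0.zk-hs:2888:3888
server.2=zk-1.zk-hs:2888:3888
server.3=zk-2.zk-hs:2888:3888
```

For a standalone node, only dataDir and clientPort are strictly required.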
Before starting this tutorial, you should be familiar with the following Kubernetes concepts. If you list out the pods (kubectl get pods -l ), you'll see the following output: those are the pods in the StatefulSet named "solr". An ensemble is a cluster, and it always has an odd number of nodes, starting from 3. In the interest of time, we won't cover much about ZooKeeper and instead refer you to the following guide about how ZooKeeper works in Kubernetes: https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/. Also, there are a number of important Kubernetes concepts that we don't cover here. You can use kubectl and jq. If your node is still not reachable, it's probably because of Google Cloud's default network firewall rules. The suggested default master version and n1-standard-4 instance type should be sufficient for now. Use kubectl exec to view the contents of the zoo.cfg file in the zk-0 pod. If you look into the Solr Helm chart, you'll notice that the StatefulSet has the following volumeMount. Let's log into solr-0 and take a look at what that is: this shows we have a 20G disk mounted at /opt/solr/server/home. Let's start by deploying a 3-node Solr cluster with ZooKeeper to GKE. The describe command output includes the events Kubernetes takes to start the pod. A great example of this is the sidecar Envoy proxy deployed by Istio. You can create a Kubernetes StatefulSet to deploy a stateful application. Here we need two different configurations (StatefulSets) for ZooKeeper, depending on whether you want standalone or ensemble mode. So you'll find the SolrCloud cluster at: http://192.168.99.101:8983.
Before proceeding, set up your Google Cloud access and SDK using the instructions here: https://cloud.google.com/sdk/docs/quickstarts. For example, in order to point the Solr instance to the ZooKeeper you've started on port 2181, this is what you'd need to do. For a more in-depth coverage of k8s concepts, see: https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/. `kubectl describe pod solr-0`. Learn the basics of running Apache Solr on Kubernetes for search applications. Prometheus (https://prometheus.io/) is a monitoring solution that has first-class integration with Kubernetes. Running Solr on Kubernetes puts it in the same operational environment where users want to run microservice-based applications. For now, please try out the Solr Helm chart (https://github.com/lucidworks/solr-helm-chart or https://github.com/helm/charts) and let us know how it goes. In the future, Solr should have a better solution to this problem built in; see: https://issues.apache.org/jira/browse/SOLR-13035. In this section, we cover some of the interesting aspects of the Solr deployment. For example, the solr-svc service selects pods with labels app=solr, release=solr, and component=server; thus, it doesn't matter which StatefulSet (or Deployment) the pods come from, so long as a pod's labels match the service's selector. Rolling updates across a StatefulSet require you to upgrade all the pods, but what if you want to experiment with a Solr update before rolling it out across the cluster? Take a moment to look over the events reported for the solr-0 pod.
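To make the -z step above concrete, here is a small shell sketch that builds a ZooKeeper connect string for an ensemble whose pods follow the zk-&lt;ordinal&gt; StatefulSet naming convention. The helper name and the service domain zk-hs.default.svc.cluster.local are assumptions for illustration, not names from the chart.

```shell
# Build a ZooKeeper connect string for an N-node ensemble whose pods
# follow the StatefulSet naming convention zk-0, zk-1, ...
zk_connect_string() {
  local n="$1" domain="$2" out="" i
  for ((i = 0; i < n; i++)); do
    out="${out:+$out,}zk-$i.$domain:2181"
  done
  printf '%s\n' "$out"
}

# Point Solr at the ensemble (sketch; assumes bin/solr is on the PATH):
# bin/solr start -cloud -z "$(zk_connect_string 3 zk-hs.default.svc.cluster.local)"
zk_connect_string 3 zk-hs.default.svc.cluster.local
```

The same string works anywhere Solr accepts a ZK connect string, e.g. the -z flag of bin/solr.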
Let's determine which node the solr-2 pod is running on. Now, let's kill a pod and see what happens. After waiting a short while, notice the lost solr-2 pod has been re-added to the cluster. To recap, the Solr StatefulSet has assigned each node a hostname in the cluster based on the set name and replica ordinal, e.g. solr-0. A stateless application does not preserve its state and saves no data to persistent storage: all user and session data stays with the client. Once your cluster is running, switch over to the command line to set up the default compute/zone, core/account, and project ID for working with your cluster. Next, let's update the kubectl config file to work with your new cluster, verify kubectl is configured correctly, and give your account cluster-admin privileges. Pro tip: keep the kubectl cheat sheet handy: https://kubernetes.io/docs/reference/kubectl/cheatsheet/. If you re-run the node name listing, you'll see that the solr-2 pod came back and was re-created on the same node as before. The point is your canary pod has some change that you want to verify before rolling out across the cluster. We've submitted the Helm chart to https://github.com/helm/charts but it's still pending approval. insights | 4 mins read | May 16, 2019. So think for a minute what would happen if k8s moved a Solr pod to a different node. For this experiment we only want to introduce a single canary pod into the mix. As part of this process, k8s may decide to move a pod to another node. (Optional) The amount of CPU resources allocated to each Solr or ZooKeeper pod, specified as a Kubernetes CPU unit (for example, cpu: "1.2"). Preparation: the usual routine, build a Solr image; this example uses Solr 6.5.0, with a Dockerfile starting FROM java:openjdk-8-jre.
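One standard way to introduce a single canary pod, sketched below, is a StatefulSet rolling-update partition: pods with an ordinal at or above the partition receive the new pod template, while lower ordinals keep the old one. The field names are standard Kubernetes; the specific values (partition 2 to target solr-2, the image tag) are assumptions for illustration.

```yaml
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2        # only pods with ordinal >= 2 (i.e. solr-2) are updated
  template:
    spec:
      containers:
        - name: solr
          image: solr:7.6.0   # candidate version under test
```

Once the canary checks out, lowering the partition to 0 rolls the new image across the remaining pods in reverse-ordinal order.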
So you'll find the SolrCloud cluster at: http://localhost:8983/solr/#/. If Solr and ZooKeeper are not all Running within 3-4 minutes the first time you do this, then you can start troubleshooting. We then get Kubernetes to sign our CSR. Here you'll find basically two different configurations: Solr with a standalone ZooKeeper node, or Solr with a ZooKeeper ensemble. Operations engineers learn how to manage Solr by how it uses Kubernetes-native resources like Services, StatefulSets, and volume claims, instead of worrying about internal implementation details. A stateful application requires that its state be saved, i.e. persistent. Let's pull the ZooKeeper chart into the Solr chart. Before we deploy, take a moment to look over the configuration variables defined in values.yaml. You can create a Kubernetes Deployment to deploy a stateless application on your cluster. This is done by setting the spec.replicas field of the StatefulSet to 3, and passing 3 as the --servers parameter to the start-zookeeper script. It turns out that while many deployments only have a single container in a pod, which is the case with our Solr deployment, it's not uncommon to deploy pods with multiple containers. Pointing Solr at the ZooKeeper instance you've created is a simple matter of using the -z parameter with the bin/solr script. Before we implement this solution, let's cover how Kubernetes Services work with a set of pods. If you have changed these then adjust the annotations accordingly. If Solr doesn't come online, check the status of the initContainers. Remember when we said Kubernetes helps enforce best practices and proven design patterns? Contact us today to learn how Lucidworks can help your team create powerful search and discovery applications for your customers and employees. In the next post, we'll run longer performance and load tests on bigger datasets with Solr replication enabled.
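Pulling the ZooKeeper chart into the Solr chart, as described above, is expressed as a chart dependency. A sketch of the requirements.yaml entry follows; the version number and the `condition` flag are assumptions, while the incubator repository URL is the one referenced elsewhere in this post.

```yaml
dependencies:
  - name: zookeeper
    version: "1.0.0"
    repository: "https://storage.googleapis.com/kubernetes-charts-incubator"
    condition: zookeeper.enabled
```

After adding the entry, `helm dependency update` fetches the chart so it deploys alongside Solr.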
Or, a node can fail for various reasons and k8s needs to replace those failed pods on another healthy node in the cluster. It is important to point out that ZooKeeper has two different configurations: standalone and ensemble. Helm is a popular tool in the k8s ecosystem for deploying applications. Performing a rolling upgrade without downtime is one of those best practices that are built into StatefulSets. In this post we cover: getting started with Google Kubernetes Engine (GKE); StatefulSets, initContainers, ConfigMaps, and persistent volumes; load-balancing services and pod selectors; upgrading Solr with zero-downtime canary-style deployments; monitoring Solr metrics with Prometheus and Grafana; and encrypting traffic between Solr instances using TLS. References: https://cloud.google.com/sdk/docs/quickstarts, https://kubernetes.io/docs/reference/kubectl/cheatsheet/, https://github.com/lucidworks/solr-helm-chart, https://storage.googleapis.com/kubernetes-charts-incubator, http://localhost:28983/solr/#/~cloud?view=nodes, https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/, https://cloud.google.com/kubernetes-engine/docs/concepts/statefulset, https://github.com/helm/charts/search?l=YAML&q=StatefulSet, https://issues.apache.org/jira/browse/SOLR-13035, https://issues.apache.org/jira/browse/SOLR-11126, https://github.com/helm/charts/tree/master/stable/prometheus, https://github.com/helm/charts/tree/master/stable/grafana, https://github.com/apache/lucene-solr/blob/master/solr/contrib/prometheus-exporter/conf/grafana-solr-dashboard.json, https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/.
Specifically, Solr requires ZooKeeper to be available before it can fully initialize and serve requests. A new bin/solr script has been added, allowing various operations on the Solr server, like starting, stopping, or uploading data to ZooKeeper. More detailed information on this is given here: https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/. We can see the CSR in a pending state in Kubernetes, approve it, and then create a Kubernetes secret containing the certificate and the private key. This is particularly important for Solr, where operating a large-scale Solr cluster has traditionally required very specialized skills. In this section, we cover how to get set up with Kubernetes and launch your first cluster in GKE. This project aims to help developers and newcomers who want to try the latest version of SolrCloud (and ZooKeeper) in a Kubernetes environment. In the second part of this series, we'll dig deeper into auto-scaling tasks like adding replicas in response to increased query traffic, as well as running load tests on bigger datasets. This is turned off by default, but can be enabled via the `exporter.enabled` value. Could etcd perhaps make Solr run leaner and faster? Kubernetes actively monitors the status of your pods using liveness and readiness probes. The ensemble operates mostly correctly and successfully services the Solr cluster it is backing. Pods created by StatefulSets have unique identifiers and can be updated in an ordered, safe way. And how would it know which cores it needed to replicate, since this information is also stored on disk? In fact, you should not care if a pod moves around the cluster, as that process is central to the design of Kubernetes.
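As a sketch of the liveness and readiness probes mentioned above, a Solr container might be probed against its admin API. The endpoint, port, and timings below are illustrative assumptions, not values taken from the chart.

```yaml
livenessProbe:
  httpGet:
    path: /solr/admin/info/system
    port: 8983
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /solr/admin/info/system
    port: 8983
  initialDelaySeconds: 15
  periodSeconds: 5
```

A failing liveness probe gets the container restarted; a failing readiness probe merely removes the pod from the Service's endpoints until it recovers.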
This occurs using a Kubernetes ConfigMap and a volume mount in the StatefulSet definition; the ConfigMap holds a solr.xml file under the solr.xml key. In order to set up TLS, the Solr nodes have an init container configured that creates a keystore containing the private key and certificate from the configured secret. Now that we have a way to contact Solr from our local workstation, let's create a collection. At this point, you have a 3-node Solr cluster running in Kubernetes. To compare, we ran the same test against Solr running on GCE (VMs, not containers) and achieved ~15,000 docs/sec. In our experience, the cluster workload UI is a little too aggressive with its warnings while pods are being provisioned and can give a false sense of trouble. For Solr, the default is 2. Now that you have a sense for why it's a good idea to run Solr on Kubernetes, let's roll up our sleeves and launch a Solr cluster in the cloud. StatefulSets are a first-class feature in k8s, and there are many examples of successful stateful applications; a quick search for StatefulSet on the Helm charts GitHub site shows 110 hits: https://github.com/helm/charts/search?l=YAML&q=StatefulSet. Running Solr this way helps enforce best practices and proven design patterns for distributed systems, and reduces the cost of ownership of a complex system like Solr. Pods created by Deployments are not unique and do not preserve their state, which makes scaling and updating stateless applications easier. You can increase this timeout using `terminationGracePeriodSeconds` on the pod spec if needed. Since k8s is a container orchestration framework, you might wonder why they invented a new term instead of just using "container". However, it would be unproductive to deploy your microservices in seconds to Kubernetes but then have to go through a complex deployment process for Solr outside of k8s.
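The ConfigMap-plus-volume-mount arrangement described above can be sketched as follows. The resource name solr-config-map and the mount path are assumptions, and the solr.xml body is abbreviated.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: solr-config-map
data:
  solr.xml: |
    <solr>
      <solrcloud>
        <str name="host">${host:}</str>
        <int name="hostPort">${jetty.port:8983}</int>
      </solrcloud>
    </solr>
```

In the StatefulSet pod template, a `volumes` entry of type `configMap` referencing solr-config-map is paired with a `volumeMounts` entry on the Solr container, so the rendered solr.xml appears as a file inside the pod.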
To accomplish this, we rely on an initContainer that tests ZK health before the primary Solr container gets invoked by k8s. To prove that a replica in a StatefulSet comes back with the same hostname and storage attached, we need to kill a pod. We do want to clear up one misconception that we've heard mumbled in the hallways when discussing running Solr on Kubernetes, namely that k8s is not well-suited for stateful applications. In this case Kubernetes will load-balance across all ZooKeeper servers. If you're new to Kubernetes, the first thing you need to learn is that pods move around the cluster and you don't have much control over that! k8s will wait up to 30 seconds for Solr to perform a graceful shutdown, which should be sufficient for most use cases. With Kubernetes, ops teams can focus on cluster sizing, monitoring, and measuring performance using their standard toolset for metrics, logging, alerting, and so on. It should have been installed with minikube or the gcloud SDK. Kube has our back in this case too, as it sends a SIGTERM to the Solr process, which triggers Solr to start unloading cores and shutting down gracefully. We will be using the Kubernetes CA cert to sign our CSR. In this step, we will deploy ZooKeeper using a StatefulSet. Before we delve into the details, let's explore the question of why you might want to run Solr on Kubernetes. This problem is not unique to Solr. The project is currently in beta (v1beta1), and while we do not anticipate changing the API in backwards-incompatible ways, there is no such guarantee yet. Thankfully there is a Grafana dashboard that Solr provides that allows us to see the most important metrics over time. ZooKeeper and etcd are both excellent distributed coordination systems: ZooKeeper grew out of the Hadoop ecosystem, while etcd became popular as the backing store for Kubernetes. Comparing their strengths and weaknesses helps you choose the coordination system that best fits your needs. Here you'll find basically two different configurations; the ZooKeeper configuration (and its interaction with Solr) is the hardest part of the project.
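A minimal sketch of such an initContainer, using ZooKeeper's four-letter `ruok` command, is shown below. The image tag and the service name solr-zookeeper are assumptions for illustration.

```yaml
initContainers:
  - name: check-zk
    image: busybox:1.31
    command:
      - sh
      - -c
      - |
        # Block until ZooKeeper answers "ruok" with "imok".
        until [ "$(echo ruok | nc solr-zookeeper 2181)" = "imok" ]; do
          echo "waiting for zookeeper..."
          sleep 2
        done
```

Because initContainers must exit successfully before the main containers start, the Solr container only launches once ZooKeeper reports healthy.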
We could spend a whole blog digging into the details of a StatefulSet, but there's already a great number of resources that do that, starting with https://cloud.google.com/kubernetes-engine/docs/concepts/statefulset. This tutorial demonstrates Apache ZooKeeper on Kubernetes using StatefulSets, PodDisruptionBudgets, and PodAntiAffinity. So in this case, running on Kube was faster, but it's a fairly small data set and the performance of cloud VMs can vary slightly. To add the exporter to our running Solr, first create a file called solr_exporter.yaml. These annotations assume that your Prometheus instance is configured to look for the default annotations on Kubernetes services. Note: minikube normally does not handle LoadBalancer services. k8s performs a rolling upgrade from the Solr 7.5.0 container to the 7.6.0 container, starting with solr-2. Have a play and see what it tells you about your Solr instance. The following Scala script, run from the Spark shell, indexes ~7.5M docs stored in Google Cloud Storage (GCS); it allows us to scale out to as many concurrent indexing cores using Spark as we need, so we can test massive Solr clusters and datasets of any size stored in GCS.

One of the questions we wanted to answer was whether Solr is slower in Kubernetes: indexing performance on a 3-node cluster in Kubernetes was on par with VM-based performance in GCE using the same n1-standard-4 instance types. Losing a pod's local state would be even worse for collections that use a replication factor of one. The Solr Operator manages Apache Solr clouds within Kubernetes; it is built on top of the Kube Builder framework. Pods in a StatefulSet use persistent storage, namely PersistentVolumes; behind the scenes, GKE allocates disks from Google Compute Engine for volume provisioning, and the `solr-pvc` volume is initially an empty directory. There is no concept of ordering pod initialization between StatefulSets in Kubernetes, which is why an initContainer is used to make Solr wait for ZooKeeper; the same mechanism bootstraps solr.xml into the empty persistent volume directory. ZooKeeper membership can't be dynamically reconfigured, so you will need to select an ensemble size that suits your use case up front; running the ensemble as a StatefulSet with a disruption budget protects it during planned maintenance. ZooKeeper servers keep their settings in a file named zoo.cfg. The Solr Helm chart has the option of integrating with Prometheus via the prometheus-exporter, and to get the most out of the Solr dashboard you will need an instance of Grafana installed, pointing to Prometheus as one of its data sources. Containers in the same pod share network and storage. You can also run a single-node Kubernetes cluster locally by having minikube create and configure a virtual machine. Don't worry too much about any warnings you see in the gcloud console UI while the pods are initializing. Kubernetes is an open-source orchestration system for containers: it handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches the user's declared intentions, which is also why it may move pods to balance cluster resource utilization. To create the environment, follow these steps: clone the repo and, if needed, scale the k8s cluster up to 5 nodes.
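The default Prometheus scrape annotations referred to above conventionally look like the following on the exporter's Service. The annotation keys are the common community convention, and the port value is an assumption about the exporter's configuration rather than a documented default.

```yaml
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: "/metrics"
    prometheus.io/port: "9983"
```

If your Prometheus deployment uses different annotation keys, adjust these to match its scrape configuration.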
