<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Featured LABS archives - Geko Cloud</title>
	<atom:link href="https://geko.cloud/en/blog/featured-labs/feed/" rel="self" type="application/rss+xml" />
	<link></link>
	<description>Cloud and DevOps consulting services</description>
	<lastBuildDate>Thu, 16 Dec 2021 10:09:30 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.5.7</generator>

<image>
	<url>https://geko.cloud/wp-content/uploads/2021/08/cropped-geko-fav-150x150.png</url>
	<title>Featured LABS archives - Geko Cloud</title>
	<link></link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>AWS Logs to OpenSearch via Kinesis</title>
		<link>https://geko.cloud/en/aws-logs-to-opensearch-via-kinesis/</link>
					<comments>https://geko.cloud/en/aws-logs-to-opensearch-via-kinesis/#respond</comments>
		
		<dc:creator><![CDATA[Iván González]]></dc:creator>
		<pubDate>Thu, 16 Dec 2021 10:09:30 +0000</pubDate>
				<category><![CDATA[Featured LABS]]></category>
		<category><![CDATA[Labs]]></category>
		<category><![CDATA[AWS]]></category>
		<category><![CDATA[Elasticsearch]]></category>
		<category><![CDATA[Kinesis]]></category>
		<guid isPermaLink="false">https://geko.cloud/?p=6737</guid>

					<description><![CDATA[<p>Introduction By default, the AWS Simple Email Service (SES) does not let us view logs of its actions and events; we can only see some metrics in CloudWatch. At Geko we have more than once needed to see SES logs, either to diagnose incidents or to better understand [&#8230;]</p>
<p>The post <a href="https://geko.cloud/en/aws-logs-to-opensearch-via-kinesis/">AWS Logs to OpenSearch via Kinesis</a> first appeared on <a href="https://geko.cloud/en/">Geko Cloud</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h2>Introduction</h2>
<p>By default, the AWS Simple Email Service (SES) does not let us view logs of its actions and events; we can only see some metrics in CloudWatch. At Geko we have more than once needed to see SES logs, either to diagnose incidents or to better understand the status of the service. To make this possible, we use AWS Kinesis. We are going to explain how we do it. Let&#8217;s begin!</p>
<h2>Kinesis configuration</h2>
<p>Kinesis is a data collector, processor and transmitter. The first step is to create a Delivery Stream: we access the <a href="https://aws.amazon.com/kinesis/">Kinesis</a> service, go to Delivery Streams and create one.</p>
<p><img fetchpriority="high" decoding="async" class="alignnone wp-image-6731 size-full" src="https://geko.cloud/wp-content/uploads/2021/12/1.png" alt="" width="1920" height="438" srcset="https://geko.cloud/wp-content/uploads/2021/12/1.png 1920w, https://geko.cloud/wp-content/uploads/2021/12/1-300x68.png 300w, https://geko.cloud/wp-content/uploads/2021/12/1-1024x234.png 1024w, https://geko.cloud/wp-content/uploads/2021/12/1-768x175.png 768w, https://geko.cloud/wp-content/uploads/2021/12/1-1536x350.png 1536w" sizes="(max-width: 1920px) 100vw, 1920px" /></p>
<p>In &#8220;Source&#8221; we choose &#8220;Direct PUT&#8221; and in &#8220;Destination&#8221;, &#8220;Amazon OpenSearch Service&#8221;. There are other destination options, such as Redshift, S3 or Dynatrace; all of them appear in the drop-down list.</p>
<p><img decoding="async" class="alignnone wp-image-6742 size-full" src="https://geko.cloud/wp-content/uploads/2021/12/2.png" alt="" width="914" height="522" srcset="https://geko.cloud/wp-content/uploads/2021/12/2.png 914w, https://geko.cloud/wp-content/uploads/2021/12/2-300x171.png 300w, https://geko.cloud/wp-content/uploads/2021/12/2-768x439.png 768w" sizes="(max-width: 914px) 100vw, 914px" /></p>
<p>We give the &#8220;Delivery Stream&#8221; object a name and add our OpenSearch domain. In our case, since we already have an OpenSearch domain created and operational, clicking &#8220;Browse&#8221; is enough to select it.</p>
<p><img decoding="async" class="alignnone wp-image-6772 size-full" src="https://geko.cloud/wp-content/uploads/2021/12/4.png" alt="" width="896" height="779" srcset="https://geko.cloud/wp-content/uploads/2021/12/4.png 896w, https://geko.cloud/wp-content/uploads/2021/12/4-300x261.png 300w, https://geko.cloud/wp-content/uploads/2021/12/4-768x668.png 768w" sizes="(max-width: 896px) 100vw, 896px" /></p>
<p>We add the index name that we want it to create.</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-6746 size-full" src="https://geko.cloud/wp-content/uploads/2021/12/6.png" alt="" width="897" height="768" srcset="https://geko.cloud/wp-content/uploads/2021/12/6.png 897w, https://geko.cloud/wp-content/uploads/2021/12/6-300x257.png 300w, https://geko.cloud/wp-content/uploads/2021/12/6-768x658.png 768w" sizes="(max-width: 897px) 100vw, 897px" /></p>
<p>By default it selects the VPC, Subnet and Security Group based on our OpenSearch domain.</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-6744 size-full" src="https://geko.cloud/wp-content/uploads/2021/12/7.png" alt="" width="899" height="685" srcset="https://geko.cloud/wp-content/uploads/2021/12/7.png 899w, https://geko.cloud/wp-content/uploads/2021/12/7-300x229.png 300w, https://geko.cloud/wp-content/uploads/2021/12/7-768x585.png 768w" sizes="(max-width: 899px) 100vw, 899px" /></p>
<p>Finally, we create the &#8220;Delivery Stream&#8221; object.</p>
<h2>SES Configuration</h2>
<p>Now we go to the SES service, open &#8220;Configuration Sets&#8221; and click &#8220;Create Set&#8221;.</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-6740 size-full" src="https://geko.cloud/wp-content/uploads/2021/12/9.png" alt="" width="1917" height="523" srcset="https://geko.cloud/wp-content/uploads/2021/12/9.png 1917w, https://geko.cloud/wp-content/uploads/2021/12/9-300x82.png 300w, https://geko.cloud/wp-content/uploads/2021/12/9-1024x279.png 1024w, https://geko.cloud/wp-content/uploads/2021/12/9-768x210.png 768w, https://geko.cloud/wp-content/uploads/2021/12/9-1536x419.png 1536w" sizes="(max-width: 1917px) 100vw, 1917px" />We add the Configuration Set name and create it.</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-6766 size-full" src="https://geko.cloud/wp-content/uploads/2021/12/10.png" alt="" width="903" height="487" srcset="https://geko.cloud/wp-content/uploads/2021/12/10.png 903w, https://geko.cloud/wp-content/uploads/2021/12/10-300x162.png 300w, https://geko.cloud/wp-content/uploads/2021/12/10-768x414.png 768w" sizes="(max-width: 903px) 100vw, 903px" /></p>
<p>Once the &#8220;Configuration Set&#8221; is created, we go to &#8220;Event Destination&#8221; and click &#8220;Add Destination&#8221; to create one.</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-6764 size-full" src="https://geko.cloud/wp-content/uploads/2021/12/11.png" alt="" width="1921" height="522" srcset="https://geko.cloud/wp-content/uploads/2021/12/11.png 1921w, https://geko.cloud/wp-content/uploads/2021/12/11-300x82.png 300w, https://geko.cloud/wp-content/uploads/2021/12/11-1024x278.png 1024w, https://geko.cloud/wp-content/uploads/2021/12/11-768x209.png 768w, https://geko.cloud/wp-content/uploads/2021/12/11-1536x417.png 1536w" sizes="(max-width: 1921px) 100vw, 1921px" /></p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-6762 size-full" src="https://geko.cloud/wp-content/uploads/2021/12/12.png" alt="" width="1537" height="512" srcset="https://geko.cloud/wp-content/uploads/2021/12/12.png 1537w, https://geko.cloud/wp-content/uploads/2021/12/12-300x100.png 300w, https://geko.cloud/wp-content/uploads/2021/12/12-1024x341.png 1024w, https://geko.cloud/wp-content/uploads/2021/12/12-768x256.png 768w" sizes="(max-width: 1537px) 100vw, 1537px" /></p>
<p>We select the type of event that we want and go to the next step.</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-6760 size-full" src="https://geko.cloud/wp-content/uploads/2021/12/13.png" alt="" width="825" height="783" srcset="https://geko.cloud/wp-content/uploads/2021/12/13.png 825w, https://geko.cloud/wp-content/uploads/2021/12/13-300x285.png 300w, https://geko.cloud/wp-content/uploads/2021/12/13-768x729.png 768w" sizes="(max-width: 825px) 100vw, 825px" /></p>
<p>We select &#8220;Amazon Kinesis Data Firehose&#8221;, give it a name and select the &#8220;Delivery Stream&#8221; we created previously.</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-6758 size-full" src="https://geko.cloud/wp-content/uploads/2021/12/14.png" alt="" width="818" height="744" srcset="https://geko.cloud/wp-content/uploads/2021/12/14.png 818w, https://geko.cloud/wp-content/uploads/2021/12/14-300x273.png 300w, https://geko.cloud/wp-content/uploads/2021/12/14-768x699.png 768w" sizes="(max-width: 818px) 100vw, 818px" /></p>
<p>&nbsp;</p>
<p>Watch out! Now we have to create an &#8220;IAM Role&#8221; so that SES can write to Firehose. We created it like this:</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-6756 size-full" src="https://geko.cloud/wp-content/uploads/2021/12/15.png" alt="" width="1596" height="539" srcset="https://geko.cloud/wp-content/uploads/2021/12/15.png 1596w, https://geko.cloud/wp-content/uploads/2021/12/15-300x101.png 300w, https://geko.cloud/wp-content/uploads/2021/12/15-1024x346.png 1024w, https://geko.cloud/wp-content/uploads/2021/12/15-768x259.png 768w, https://geko.cloud/wp-content/uploads/2021/12/15-1536x519.png 1536w" sizes="(max-width: 1596px) 100vw, 1596px" /></p>
<p>With this Policy:</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-6754 size-full" src="https://geko.cloud/wp-content/uploads/2021/12/16.png" alt="" width="1305" height="498" srcset="https://geko.cloud/wp-content/uploads/2021/12/16.png 1305w, https://geko.cloud/wp-content/uploads/2021/12/16-300x114.png 300w, https://geko.cloud/wp-content/uploads/2021/12/16-1024x391.png 1024w, https://geko.cloud/wp-content/uploads/2021/12/16-768x293.png 768w" sizes="(max-width: 1305px) 100vw, 1305px" /></p>
<p>And this Trust Relationship:</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-6752 size-full" src="https://geko.cloud/wp-content/uploads/2021/12/17.png" alt="" width="609" height="327" srcset="https://geko.cloud/wp-content/uploads/2021/12/17.png 609w, https://geko.cloud/wp-content/uploads/2021/12/17-300x161.png 300w" sizes="(max-width: 609px) 100vw, 609px" /></p>
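The Policy and Trust Relationship in the screenshots above can be sketched as JSON. This is a sketch only: the stream name, region and account ID below are placeholders, not the values from our account.

```shell
# Sketch of the IAM documents from the screenshots above.
# Stream name, region and account ID are placeholders.
cat > ses-firehose-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["firehose:PutRecord", "firehose:PutRecordBatch"],
      "Resource": "arn:aws:firehose:eu-west-1:111111111111:deliverystream/ses-logs"
    }
  ]
}
EOF

cat > ses-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ses.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# With the files in place, the role could be created like this (not run here):
# aws iam create-role --role-name ses-to-firehose \
#     --assume-role-policy-document file://ses-trust.json
# aws iam put-role-policy --role-name ses-to-firehose \
#     --policy-name ses-firehose-write \
#     --policy-document file://ses-firehose-policy.json
```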
<p>Finally, we create the &#8220;Configuration Set&#8221;.</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-6750 size-full" src="https://geko.cloud/wp-content/uploads/2021/12/18.png" alt="" width="822" height="780" srcset="https://geko.cloud/wp-content/uploads/2021/12/18.png 822w, https://geko.cloud/wp-content/uploads/2021/12/18-300x285.png 300w, https://geko.cloud/wp-content/uploads/2021/12/18-768x729.png 768w" sizes="(max-width: 822px) 100vw, 822px" /></p>
<p>&nbsp;</p>
<p>Now, to test it, we can send a test email from SES, assigning it the &#8220;Configuration Set&#8221; we have created. This &#8220;Configuration Set&#8221; can be applied to any Identity we have in SES. So let&#8217;s run a test by sending an email. Here is the log generated in our OpenSearch:</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-6786 size-full" src="https://geko.cloud/wp-content/uploads/2021/12/20.png" alt="" width="1580" height="303" srcset="https://geko.cloud/wp-content/uploads/2021/12/20.png 1580w, https://geko.cloud/wp-content/uploads/2021/12/20-300x58.png 300w, https://geko.cloud/wp-content/uploads/2021/12/20-1024x196.png 1024w, https://geko.cloud/wp-content/uploads/2021/12/20-768x147.png 768w, https://geko.cloud/wp-content/uploads/2021/12/20-1536x295.png 1536w" sizes="(max-width: 1580px) 100vw, 1580px" /></p>
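For reference, the documents that reach OpenSearch follow SES's event publishing format. A delivery event looks roughly like this; the sketch below is abridged and its values are invented for illustration, it is not a verbatim record from our test:

```shell
# Abridged sketch of a SES delivery event as published to Firehose;
# addresses, timestamps and timings are invented placeholders.
cat > ses-event-example.json <<'EOF'
{
  "eventType": "Delivery",
  "mail": {
    "timestamp": "2021-12-16T10:09:30.000Z",
    "source": "no-reply@example.com",
    "destination": ["user@example.com"]
  },
  "delivery": {
    "timestamp": "2021-12-16T10:09:31.000Z",
    "processingTimeMillis": 1042,
    "smtpResponse": "250 2.6.0 Ok"
  }
}
EOF
```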
<p>And in the Kinesis Delivery monitoring:</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-6748 size-full" src="https://geko.cloud/wp-content/uploads/2021/12/19.png" alt="" width="483" height="364" srcset="https://geko.cloud/wp-content/uploads/2021/12/19.png 483w, https://geko.cloud/wp-content/uploads/2021/12/19-300x226.png 300w" sizes="(max-width: 483px) 100vw, 483px" /></p>
<p>&nbsp;</p>
<p>From Geko we hope that, if you have come this far, this post is just what you were looking for! We also invite you to read our other <a href="https://geko.cloud/es/blog/labs/">labs posts</a>.</p>
<p>The post <a href="https://geko.cloud/en/aws-logs-to-opensearch-via-kinesis/">AWS Logs to OpenSearch via Kinesis</a> first appeared on <a href="https://geko.cloud/en/">Geko Cloud</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://geko.cloud/en/aws-logs-to-opensearch-via-kinesis/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>How to update Gitlab</title>
		<link>https://geko.cloud/en/how-to-update-gitlab/</link>
					<comments>https://geko.cloud/en/how-to-update-gitlab/#respond</comments>
		
		<dc:creator><![CDATA[Christian]]></dc:creator>
		<pubDate>Tue, 14 Dec 2021 14:58:17 +0000</pubDate>
				<category><![CDATA[Featured LABS]]></category>
		<category><![CDATA[Labs]]></category>
		<category><![CDATA[Git]]></category>
		<category><![CDATA[Gitlab]]></category>
		<guid isPermaLink="false">https://geko.cloud/?p=6628</guid>

					<description><![CDATA[<p>Recently, a vulnerability was detected in Gitlab affecting versions 13.10.3, 13.9.6 and 13.8.8. Our Gitlab may be even older than these, in which case we will have to update it to the most recent version through several intermediate versions. Next, we show the steps to update Gitlab on [&#8230;]</p>
<p>The post <a href="https://geko.cloud/en/how-to-update-gitlab/">How to update Gitlab</a> first appeared on <a href="https://geko.cloud/en/">Geko Cloud</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><span style="font-weight: 2000;">Recently, a <a href="https://www.rapid7.com/blog/post/2021/11/01/gitlab-unauthenticated-remote-code-execution-cve-2021-22205-exploited-in-the-wild/">vulnerability</a> has been detected in Gitlab that affects the following versions:</span></p>
<ul>
<li><strong>13.10.3</strong></li>
<li><strong>13.9.6</strong></li>
<li><strong>13.8.8</strong></li>
</ul>
<p><span style="font-weight: 2000;">Our Gitlab may be even older than these versions; in that case we will have to update it to the most recent version through several intermediate versions.</span></p>
<p><span style="font-weight: 2000;">Next, we will show the steps to update Gitlab on Ubuntu 18. We will start from gitlab-ee 12.7.5-ee and take it to version 14.4.2-ee.</span></p>
<p><span style="font-weight: 2000;">We assume that Gitlab was previously installed from the official repository, with its embedded PostgreSQL and Redis services.</span></p>
<h2>Previous steps</h2>
<p><span style="font-weight: 2000;">If we are working in the cloud and have our Gitlab installed on an EC2 instance, it is recommended to create an image of the instance and clone it into a new one on which to perform these migration steps.</span></p>
<p><span style="font-weight: 2000;">We will also need access to the Linux console of that instance and have root permissions.</span></p>
<h2>Updates + snapshots</h2>
<p><span style="font-weight: 2000;">We execute the following apt install commands in sequence, checking in each case that they finish correctly and without errors.</span></p>
<div class="wp-block-codemirror-blocks code-block ">
<pre class="CodeMirror" data-setting="{&quot;mode&quot;:&quot;shell&quot;,&quot;mime&quot;:&quot;text/x-sh&quot;,&quot;theme&quot;:&quot;material&quot;,&quot;lineNumbers&quot;:false,&quot;lineWrapping&quot;:true,&quot;styleActiveLine&quot;:false,&quot;readOnly&quot;:true,&quot;align&quot;:&quot;&quot;}">sudo apt install gitlab-ee=12.10.14-ee.0
sudo apt install gitlab-ee=13.0.14-ee.0
sudo apt install gitlab-ee=13.1.11-ee.0
sudo apt install gitlab-ee=13.8.8-ee.0
sudo apt install gitlab-ee=13.12.10-ee.0
sudo apt install gitlab-ee=13.12.12-ee.0</pre>
</div>
<p><span style="font-weight: 2000;">Once these steps are finished, we will have to execute the following command, since Gitlab needs to migrate its existing projects to the new <a href="https://docs.gitlab.com/ee/administration/raketasks/storage.html"><em>hashed_storage</em></a> system that the following versions use.</span></p>
<div class="wp-block-codemirror-blocks code-block">
<pre class="CodeMirror" data-setting="{&quot;mode&quot;:&quot;shell&quot;,&quot;mime&quot;:&quot;text/x-sh&quot;,&quot;theme&quot;:&quot;material&quot;,&quot;lineNumbers&quot;:false,&quot;lineWrapping&quot;:true,&quot;styleActiveLine&quot;:false,&quot;readOnly&quot;:true,&quot;align&quot;:&quot;&quot;}">sudo gitlab-rake gitlab:storage:migrate_to_hashed</pre>
</div>
<p><span style="font-weight: 2000;">At the end of the previous command, we must restart the instance and reconnect to it.</span></p>
<p><span style="font-weight: 2000;">At this point we have <strong>Gitlab 13.12.12</strong>.</span></p>
<h2>Let&#8217;s continue with the updates</h2>
<div class="wp-block-codemirror-blocks code-block ">
<pre class="CodeMirror" data-setting="{&quot;mode&quot;:&quot;shell&quot;,&quot;mime&quot;:&quot;text/x-sh&quot;,&quot;theme&quot;:&quot;material&quot;,&quot;lineNumbers&quot;:false,&quot;lineWrapping&quot;:true,&quot;styleActiveLine&quot;:false,&quot;readOnly&quot;:true,&quot;align&quot;:&quot;&quot;}">sudo apt install gitlab-ee=14.0.11-ee.0
</pre>
<p><span style="font-weight: 2000;">At the end of this step, we must verify that all migrations have completed successfully.</span></p>
<p><span style="font-weight: 2000;">Starting with this version, Gitlab provides us with a new tool that allows us to see if migrations are running. We must check that there are none in the queue and that all have finished without errors.</span></p>
<p><span style="font-weight: 2000;">We find this tool in: <i>administration&gt;monitoring&gt;Background Migrations</i>.</span></p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-6570" src="https://geko.cloud/wp-content/uploads/2021/12/migrations-gitlab-1-300x125.png" alt="gitlab" width="647" height="269" srcset="https://geko.cloud/wp-content/uploads/2021/12/migrations-gitlab-1-300x125.png 300w, https://geko.cloud/wp-content/uploads/2021/12/migrations-gitlab-1-1024x426.png 1024w, https://geko.cloud/wp-content/uploads/2021/12/migrations-gitlab-1-768x320.png 768w, https://geko.cloud/wp-content/uploads/2021/12/migrations-gitlab-1.png 1293w" sizes="(max-width: 647px) 100vw, 647px" /><br />
<span style="font-weight: 2000;">In addition, we will also check on <i>administration&gt;monitoring&gt;healthCheck </i>that Gitlab is in a “Healthy” state.</span></p>
<p><span style="font-weight: 2000;">At the end of this step, it is recommended to take a snapshot.</span></p>
<h2>We will continue with the following sequence of updates</h2>
<p><span style="font-weight: 2000;">After each one, we will repeat the exercise of checking that all <em>Background Migrations</em> have completed.</span></p>
<pre class="CodeMirror" data-setting="{&quot;mode&quot;:&quot;shell&quot;,&quot;mime&quot;:&quot;text/x-sh&quot;,&quot;theme&quot;:&quot;material&quot;,&quot;lineNumbers&quot;:false,&quot;lineWrapping&quot;:true,&quot;styleActiveLine&quot;:false,&quot;readOnly&quot;:true,&quot;align&quot;:&quot;&quot;}">sudo apt install gitlab-ee=14.1.6-ee.0
sudo apt install gitlab-ee=14.2.0-ee.0
sudo apt install gitlab-ee=14.2.6-ee.0
sudo apt install gitlab-ee=14.3.0-ee.0
sudo apt install gitlab-ee=14.3.4-ee.0
sudo apt install gitlab-ee=14.4.0-ee.0
sudo apt install gitlab-ee=14.4.2-ee.0
</pre>
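Taken together, the whole stepping-stone sequence can be generated with a small shell sketch. The helper below simply filters the version list used in this post against the currently installed version; the starting version passed to it is our example, not something the script detects:

```shell
# Print the remaining "apt install" steps from the sequence used in this
# post, skipping versions at or below the currently installed one.
upgrade_path() {
    current="$1"
    for v in 12.10.14-ee.0 13.0.14-ee.0 13.1.11-ee.0 13.8.8-ee.0 \
             13.12.10-ee.0 13.12.12-ee.0 14.0.11-ee.0 14.1.6-ee.0 \
             14.2.0-ee.0 14.2.6-ee.0 14.3.0-ee.0 14.3.4-ee.0 \
             14.4.0-ee.0 14.4.2-ee.0; do
        # sort -V compares version strings numerically; keep only the
        # steps strictly greater than the current version.
        if [ "$v" != "$current" ] && \
           [ "$(printf '%s\n%s\n' "$current" "$v" | sort -V | tail -n 1)" = "$v" ]; then
            echo "sudo apt install gitlab-ee=$v"
        fi
    done
}

# Example: full path starting from our 12.7.5-ee installation.
upgrade_path 12.7.5-ee.0
```

This only prints the commands; running them (and the snapshot/migration checks between steps) is still done by hand, as described above.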
<p><span style="font-weight: 2000;"><u>Another important tip</u>: in some versions prior to 14, we will see that the <strong>apt</strong> command has finished, but when we try to access Gitlab we get a <strong>502</strong> error. This happens because the aforementioned Background Migrations are running, but in those versions the tool that lets us observe their status did not yet exist. Let&#8217;s not despair, though, because we can watch the log and see when they are done!</span></p>
<p><span style="font-weight: 2000;">We can follow the last entries in the log like this and check that no migrations are still running:</span></p>
<div class="wp-block-codemirror-blocks code-block ">
<pre class="CodeMirror" data-setting="{&quot;mode&quot;:&quot;shell&quot;,&quot;mime&quot;:&quot;text/x-sh&quot;,&quot;theme&quot;:&quot;material&quot;,&quot;lineNumbers&quot;:false,&quot;lineWrapping&quot;:true,&quot;styleActiveLine&quot;:false,&quot;readOnly&quot;:true,&quot;align&quot;:&quot;&quot;}">sudo tail -f /var/log/gitlab/gitlab-rails/production.log</pre>
<p><span style="font-weight: 2000;">And that&#8217;s it, we&#8217;ve done it! Congratulations</span></p>
<p><span style="font-weight: 2000;">I hope this article has helped you to learn something new and continue to expand your knowledge.</span></p>
<p><span style="font-weight: 2000;">If you need information about the <a href="https://geko.cloud/es/devops/"><strong>DevOps world</strong></a><strong>,</strong> I invite you to <a href="https://geko.cloud/es/contacto/">contact us</a> and keep checking <a href="https://geko.cloud/en/blog/">our blog</a> to find other useful publications.</span></p>
<p><span style="font-weight: 2000;">Until next time!<img src="https://s.w.org/images/core/emoji/15.0.3/72x72/1f44b.png" alt="👋" class="wp-smiley" style="height: 1em; max-height: 1em;" /></span></p>
<p>&nbsp;</p>
</div>
</div>
<p>→ Post written by Christian Tagliapietra and Álvaro Abad.</p>
<p>The post <a href="https://geko.cloud/en/how-to-update-gitlab/">How to update Gitlab</a> first appeared on <a href="https://geko.cloud/en/">Geko Cloud</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://geko.cloud/en/how-to-update-gitlab/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>First steps with ArgoCD</title>
		<link>https://geko.cloud/en/firsts-steps-with-argocd/</link>
					<comments>https://geko.cloud/en/firsts-steps-with-argocd/#respond</comments>
		
		<dc:creator><![CDATA[Xènia Adan]]></dc:creator>
		<pubDate>Fri, 03 Dec 2021 13:21:29 +0000</pubDate>
				<category><![CDATA[Featured LABS]]></category>
		<category><![CDATA[Labs]]></category>
		<category><![CDATA[ArgoCD]]></category>
		<category><![CDATA[cicd]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<guid isPermaLink="false">https://geko.cloud/?p=6557</guid>

					<description><![CDATA[<p>Introduction In this article we will talk about one of the hottest tools for continuous integration and deployment (&#8220;CICD&#8221;) in Kubernetes: ArgoCD. In recent months, many leading Internet companies have publicly declared that they use ArgoCD to deploy applications in their clusters. You can see a list here. [&#8230;]</p>
<p>The post <a href="https://geko.cloud/en/firsts-steps-with-argocd/">First steps with ArgoCD</a> first appeared on <a href="https://geko.cloud/en/">Geko Cloud</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h2>Introduction</h2>
<p>In this article we will talk about one of the hottest tools for continuous integration and deployment (&#8220;<a href="https://www.redhat.com/en/topics/devops/what-is-ci-cd">CICD</a>&#8221;) in Kubernetes: <a href="https://argo-cd.readthedocs.io">ArgoCD</a>. In recent months, many leading Internet companies have publicly declared that they use ArgoCD to deploy applications in their clusters. <a href="https://github.com/argoproj/argo-cd/blob/master/USERS.md">You can see a list here.</a></p>
<p>To begin with, let&#8217;s review what ArgoCD is for and how far its functionalities go. Then we will see a typical use case of application deployment using ArgoCD and the advantages of its implementation. Finally, we will comment on the conclusions we have drawn in terms of pros and cons, and analyze which other tools complement ArgoCD to further optimize the continuous integration and deployment process.</p>
<h2>What is ArgoCD?</h2>
<p>ArgoCD is a tool that allows us to adopt <a href="https://www.redhat.com/en/topics/devops/what-is-gitops">GitOps</a> methodologies for the continuous deployment of applications in Kubernetes clusters.</p>
<p>The main feature is that ArgoCD synchronizes the state of the deployed applications with their respective manifests declared in git. This allows developers to deploy new versions of an application by simply modifying the git content, either with commits to development branches or by modifying the main branch.<br />
Once the code has been modified in git, ArgoCD detects, via webhook or periodic checks every few minutes, that the application manifests have changed. It then compares the manifests declared in git with those applied in the clusters and updates the latter until they are synchronized.</p>
<p>Its user-friendly interface lets us clearly visualize the content, structure and state of the clusters, as well as manipulate resources.</p>
<p>Can ArgoCD automate the entire CI/CD process of an application?</p>
<p>No. ArgoCD takes care of deploying the application once the artifact already exists in a container registry, such as Dockerhub or ECR. This implies that the application code has already been tested and containerized into an image. At the end of this article we will talk about the options that currently exist to accomplish this previous task in an automated GitOps way.</p>
<p>As we have already explained, ArgoCD synchronizes the state of deployed applications with their respective manifests declared in git. It does not refer to the git repository of the application code itself, but to a separate repository which, as best practices suggest, contains the application&#8217;s Kubernetes infrastructure code, in the form of <a href="https://github.com/argoproj/argo-cd/blob/master/USERS.md">helm charts, kustomize applications, ksonnet&#8230;</a></p>
<p>To better explain the main benefits ArgoCD offers, let&#8217;s look at a use case example.</p>
<h2>Using ArgoCD</h2>
<p>In this example we will see how ArgoCD can deploy both applications developed by third parties, which have their own helm chart maintained by another organization, and our own applications, for which we have defined the chart ourselves.</p>
<p>For the example we will deploy a monitoring stack consisting of Prometheus, Grafana and Thanos using their helm charts.</p>
<p>ArgoCD deploys applications through a custom object called <a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/#applications">Application</a>. This object has a source and a destination as attributes. The source can read several formats; in this example our Application objects will read and deploy helm charts both from a chart repository and from a git repository. The destination is the cluster to which the content of the source will be deployed. In the application configuration we can have ArgoCD automatically keep the state of the deployed Kubernetes objects synchronized with the configuration indicated in the source (charts/git). This option is very interesting because it ensures that ArgoCD re-checks every few minutes that everything is still in sync; by contrast, deploying applications directly with helm commands only ensures synchronization at the time of deployment.</p>
<p>Now that we have explained what the Application object is, we are going to create four of them for our monitoring stack. Why four applications if there will only be three services in the stack (Prometheus, Grafana and Thanos)?</p>
<p>ArgoCD also offers the possibility of creating groups of applications following the &#8220;<a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/cluster-bootstrapping/#app-of-apps-pattern">app of apps pattern</a>&#8221; concept: an ArgoCD application that deploys other applications, and so on recursively. In the case of our monitoring stack, we are going to create a fourth application that will deploy the other three; this parent application will be called &#8220;monitoring-stack&#8221;.</p>
<p>To create an application we can define an ArgoCD Application manifest, as indicated on <a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/cluster-bootstrapping/#app-of-apps-pattern">this page</a> of the documentation. We can also do it from the <a href="https://argo-cd.readthedocs.io/en/stable/getting_started/#creating-apps-via-cli">command line</a>. ArgoCD also has a great UI that allows you to create applications manually; you can see how <a href="https://argo-cd.readthedocs.io/en/stable/getting_started/#creating-apps-via-ui">here</a>.</p>
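As a sketch, a declarative manifest for the parent application could look like this. The repository URL, target revision, namespaces and sync options below are assumptions for illustration, not taken from our setup:

```shell
# Hypothetical Application manifest for the parent "monitoring-stack" app;
# repo URL, revision and namespaces are placeholders.
cat > monitoring-stack-app.yaml <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: monitoring-stack
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/infra/monitoring-stack.git
    targetRevision: main
    path: monitoring-stack
    helm:
      valueFiles:
        - prod_values.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated: {}
EOF
# kubectl apply -n argocd -f monitoring-stack-app.yaml   # (not run here)
```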
<p>The &#8220;monitoring-stack&#8221; application will point its source to a git repository with a Helm chart. This chart will contain the manifests of the other three applications in the &#8220;templates&#8221; directory in yaml format. These files are <a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/#applications">Application</a> object definitions that point to the relevant official Helm chart of each service. Using the &#8220;values files&#8221;, we will be able to deploy different versions in different environments.</p>
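The repository layout described above could be sketched like this, with each template itself being an Application that points at the service's official chart. The chart repository URL and the values keys are illustrative assumptions:

```shell
# Hypothetical layout of the monitoring-stack chart: one child Application
# per service in templates/, templated from the environment's values file.
mkdir -p monitoring-stack/templates
cat > monitoring-stack/Chart.yaml <<'EOF'
apiVersion: v2
name: monitoring-stack
version: 0.1.0
EOF
cat > monitoring-stack/templates/prometheus.yaml <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: prometheus
  namespace: argocd
spec:
  project: default
  source:
    # Official chart repository for the service (placeholder)
    repoURL: https://prometheus-community.github.io/helm-charts
    chart: prometheus
    # Chart version taken from the per-environment values file
    targetRevision: "{{ .Values.prometheus.chartVersion }}"
  destination:
    server: https://kubernetes.default.svc
    namespace: monitoring
EOF
ls monitoring-stack/templates/
```

The grafana and thanos templates would follow the same shape, each reading its own key from the values file.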
<figure id="attachment_5055" aria-describedby="caption-attachment-5055" style="width: 300px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="wp-image-5055 size-medium" src="https://geko.cloud/wp-content/uploads/2021/10/Screenshot-2021-10-29-at-13.39.28-300x248.png" alt="" width="300" height="248" srcset="https://geko.cloud/wp-content/uploads/2021/10/Screenshot-2021-10-29-at-13.39.28-300x248.png 300w, https://geko.cloud/wp-content/uploads/2021/10/Screenshot-2021-10-29-at-13.39.28.png 350w" sizes="(max-width: 300px) 100vw, 300px" /><figcaption id="caption-attachment-5055" class="wp-caption-text">Git repository containing monitoring-stack Helm chart. It consists of 3 applications defined in the directory monitoring-stack/templates/</figcaption></figure>
<p>Once the templates of the &#8220;monitoring-stack&#8221; chart have been defined, we will create the parent ArgoCD Application, pointing its source to the previously mentioned repository. ArgoCD will detect that it is a helm chart, and we can indicate the path of a specific values file, for example &#8220;prod_values.yaml&#8221;.</p>
<p>At the end of the manual configuration of the application, in the user interface we will see how all the created objects are represented, organized in a hierarchical way.</p>
<figure id="attachment_5057" aria-describedby="caption-attachment-5057" style="width: 742px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="wp-image-5057 size-full" src="https://geko.cloud/wp-content/uploads/2021/10/Screenshot-2021-10-29-at-12.35.05.png" alt="" width="742" height="343" srcset="https://geko.cloud/wp-content/uploads/2021/10/Screenshot-2021-10-29-at-12.35.05.png 742w, https://geko.cloud/wp-content/uploads/2021/10/Screenshot-2021-10-29-at-12.35.05-300x139.png 300w" sizes="(max-width: 742px) 100vw, 742px" /><figcaption id="caption-attachment-5057" class="wp-caption-text">The monitoring-stack application creates the three applications defined in the templates directory of the chart.</figcaption></figure>
<figure id="attachment_5065" aria-describedby="caption-attachment-5065" style="width: 800px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="wp-image-5065 size-large" src="https://geko.cloud/wp-content/uploads/2021/10/Screenshot-2021-10-29-at-14.40.16-1024x476.png" alt="" width="800" height="372" srcset="https://geko.cloud/wp-content/uploads/2021/10/Screenshot-2021-10-29-at-14.40.16-1024x476.png 1024w, https://geko.cloud/wp-content/uploads/2021/10/Screenshot-2021-10-29-at-14.40.16-300x139.png 300w, https://geko.cloud/wp-content/uploads/2021/10/Screenshot-2021-10-29-at-14.40.16-768x357.png 768w, https://geko.cloud/wp-content/uploads/2021/10/Screenshot-2021-10-29-at-14.40.16-1536x714.png 1536w, https://geko.cloud/wp-content/uploads/2021/10/Screenshot-2021-10-29-at-14.40.16.png 1585w" sizes="(max-width: 800px) 100vw, 800px" /><figcaption id="caption-attachment-5065" class="wp-caption-text">The grafana application has deployed its official Helm chart; through the UI we can see all the resources in operation. It also allows us to interact with them: for example, we can delete a pod and watch the deployment automatically create another one.</figcaption></figure>
<p>Since the applications are synchronized with our repository and the charts are parameterized with templates and values, deploying a new version of any of our applications only requires modifying the values file through git commits.<br />
ArgoCD will detect the changes in the repository and apply them to the Kubernetes cluster through a rolling-update deployment.</p>
<figure id="attachment_5067" aria-describedby="caption-attachment-5067" style="width: 292px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="wp-image-5067 size-full" src="https://geko.cloud/wp-content/uploads/2021/10/Screenshot-2021-10-29-at-14.54.37.png" alt="" width="292" height="170" /><figcaption id="caption-attachment-5067" class="wp-caption-text">chart versions defined in the file prod_values.yaml</figcaption></figure>
<p>As a note, using <a href="https://github.com/argoproj-labs/argocd-image-updater">ArgoCD Image Updater</a> can save us from doing this last step manually, and even from having to develop a complex pipeline to update the values.yaml file in git whenever we want to deploy a new image.<br />
This tool periodically queries the tags in our image repository looking for new artifacts to deploy. Once it finds one, it automates the deployment process by editing the git configuration with the name of the new tag.<br />
It is worth mentioning that there is not yet a stable version of ArgoCD Image Updater, but one is expected soon.</p>
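<p>For reference, Image Updater is driven by annotations on the Application resource; a minimal sketch (the image name and registry below are hypothetical) might look like:</p>

```yaml
# Hypothetical Application annotated for ArgoCD Image Updater.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: grafana
  namespace: argocd
  annotations:
    # Watch this image for new tags (registry/image are assumptions)
    argocd-image-updater.argoproj.io/image-list: grafana=registry.example.com/grafana
    # Write the new tag back to git instead of patching the live Application
    argocd-image-updater.argoproj.io/write-back-method: git
```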
<p>In this example we have created an application that points to a repository which in turn creates applications pointing to official Helm charts, but this hierarchy can be extended much further, following the &#8220;app of apps&#8221; pattern.</p>
<p>Another interesting feature of ArgoCD is that it allows us to deploy applications to different clusters. There are several ways to do this, but the most direct one is through the ApplicationSet resource.<br />
In its manifest we can specify a list of clusters to deploy to simultaneously, each pointing to a different path of the repository, so we can maintain a different version for each cluster.</p>
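<p>A minimal ApplicationSet sketch using the cluster generator (names and repository paths are hypothetical) could be:</p>

```yaml
# Hypothetical ApplicationSet: one Application per cluster registered in ArgoCD,
# each reading its own path (and therefore its own versions) from the repository.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: monitoring-stack
  namespace: argocd
spec:
  generators:
    - clusters: {}                # enumerates every registered cluster
  template:
    metadata:
      name: 'monitoring-stack-{{name}}'
    spec:
      project: default
      source:
        repoURL: https://example.com/our-org/monitoring-stack.git
        targetRevision: HEAD
        path: 'clusters/{{name}}' # a different path per cluster
      destination:
        server: '{{server}}'
        namespace: monitoring
```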
<p>ArgoCD&#8217;s relative ease of installation is another point in its favor; you can consult the <a href="https://argo-cd.readthedocs.io/en/stable/getting_started/">steps</a> in the official documentation.</p>
<h2>Automation of the entire CICD process with Argo tools</h2>
<p>If we want to go a step further and automate the entire CICD process in Kubernetes, we can complement ArgoCD with the rest of the tools from the <a href="https://argoproj.github.io/">Argo project</a>.<br />
By combining Argo Events, Argo Workflows, ArgoCD and Argo Rollouts, further automation is possible while following current continuous-integration best practices.<br />
Victor Farcic explains it very clearly in this <a href="https://www.youtube.com/watch?v=XNXJtxkUKeY&amp;t=277s&amp;ab_channel=DevOpsToolkit">video</a>.</p>
<p>As a solution to the added complexity of installing and managing all these Argo project tools, some applications that bundle this entire stack have already been released, allowing us to configure the integration and deployment pipelines from a simplified, higher-level layer. Below we mention a couple of them, although in this post we are not going to analyze their particular functionalities.</p>
<p><a href="https://devtron.ai/">Devtron</a> is an open source tool that builds on top of this Argo stack and other tools, and promises to let us automate the entire CICD process from its user interface. Devtron simplifies the configuration quite a lot, as we interact with the internal tools from a high-level layer without manually installing any of them. However, after testing it, we do not believe the tool is mature enough to be implemented in a production environment for the time being.</p>
<p>Similar to Devtron&#8217;s approach, <a href="https://codefresh.io/">Codefresh</a> also uses the whole Argo stack to automate integration and deployment. But apart from the fact that the tool is still in early access, a big difference is that it is offered as SaaS. As we can see in the pricing section, the full automation option is paid, and the price is not mentioned on the website.</p>
<h2>Conclusions</h2>
<p>ArgoCD is a very useful tool to automate the deployment process using GitOps best practices. Thanks to its implementation, developers can test new versions of applications more quickly and deploy to production safely once testing is complete. In addition, thanks to the auto-sync feature and its beautiful interface, ArgoCD allows us to keep track of the status of applications and their resources deployed in Kubernetes at all times. Combined with the other tools of the Argo project we can automate the entire CICD process (and many other utilities outside the scope of this post) following good practices for current standards.</p>
<p>On the downside, using ArgoCD introduces an extra layer of complexity into our configuration, as it has many different options and brings in custom objects and concepts we may not be familiar with yet. It can be &#8220;overkill&#8221; if we have a very small cluster with only a handful of applications.</p>
<p>The post <a href="https://geko.cloud/en/firsts-steps-with-argocd/">Firsts steps with ArgoCD</a> was first published on <a href="https://geko.cloud/en/">Geko Cloud</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://geko.cloud/en/firsts-steps-with-argocd/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Disaster recovery: expect the unexpected</title>
		<link>https://geko.cloud/en/disaster-recovery-expect-the-unexpected/</link>
					<comments>https://geko.cloud/en/disaster-recovery-expect-the-unexpected/#respond</comments>
		
		<dc:creator><![CDATA[Geko Cloud]]></dc:creator>
		<pubDate>Wed, 17 Nov 2021 10:49:39 +0000</pubDate>
				<category><![CDATA[DevSecOps]]></category>
		<category><![CDATA[Featured LABS]]></category>
		<category><![CDATA[Labs]]></category>
		<guid isPermaLink="false">https://geko.cloud/?p=6227</guid>

					<description><![CDATA[<p>Ben Franklin once said that nothing is certain except death and taxes. Today, I would add IT incidents to the list. Here at Geko we’ve had to go through our share of operations, IT and security incidents. It’s something that is just bound to happen for a number of reasons (which we’ll get into later), [&#8230;]</p>
<p>The post <a href="https://geko.cloud/en/disaster-recovery-expect-the-unexpected/">Disaster recovery: expect the unexpected</a> was first published on <a href="https://geko.cloud/en/">Geko Cloud</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><span style="font-weight: 400;" data-mce-style="font-weight: 400;">Ben Franklin once said that nothing is certain except death and taxes. Today, I would add IT incidents to the list.</span></p>
<p><img loading="lazy" decoding="async" class=" wp-image-6239 aligncenter" src="https://geko.cloud/wp-content/uploads/2021/11/disaster-recovery-image.png" alt="disaster recovery" width="728" height="343" data-mce-src="https://geko.cloud/wp-content/uploads/2021/11/disaster-recovery-image.png" srcset="https://geko.cloud/wp-content/uploads/2021/11/disaster-recovery-image.png 850w, https://geko.cloud/wp-content/uploads/2021/11/disaster-recovery-image-300x141.png 300w, https://geko.cloud/wp-content/uploads/2021/11/disaster-recovery-image-768x361.png 768w" sizes="(max-width: 728px) 100vw, 728px" /></p>
<p><span style="font-weight: 400;" data-mce-style="font-weight: 400;">Here at Geko we’ve had to go through our share of operations, IT and security incidents. It’s something that is just bound to happen for a number of reasons (which we’ll get into later), and we’ve come to adopt a lot of habits and practices that makes these far easier to quickly </span><b>detect</b><span style="font-weight: 400;" data-mce-style="font-weight: 400;">, </span><b>pinpoint</b><span style="font-weight: 400;" data-mce-style="font-weight: 400;">, </span><b>handle</b><span style="font-weight: 400;" data-mce-style="font-weight: 400;"> and </span><b>resolve</b><span style="font-weight: 400;" data-mce-style="font-weight: 400;"> incidents of all kinds. It becomes a lot more manageable once you assume these will happen, and </span><b>prepare</b><span style="font-weight: 400;" data-mce-style="font-weight: 400;"> for it. Can’t ever be too cautious when you’re talking about critical infrastructure for your entire team to work on, or your clients to access, it’s a “can’t afford to fail” scenario and you need to be ready for it.</span></p>
<p><span style="font-weight: 400;" data-mce-style="font-weight: 400;">There’s some countermeasures and checks you absolutely need to build into your infrastructure and ecosystem that makes this easier to handle:</span></p>
<ul>
<li style="font-weight: 400;" aria-level="1" data-mce-style="font-weight: 400;"><span style="font-weight: 400;" data-mce-style="font-weight: 400;">A robust </span><a href="https://geko.cloud/en/cloud-services/monitoring/" data-mce-href="https://geko.cloud/en/cloud-services/monitoring/"><b>monitoring</b></a><span style="font-weight: 400;" data-mce-style="font-weight: 400;"> platform</span></li>
<li style="font-weight: 400;" aria-level="1" data-mce-style="font-weight: 400;"><span style="font-weight: 400;" data-mce-style="font-weight: 400;">A sensible </span><b>alerting</b><span style="font-weight: 400;" data-mce-style="font-weight: 400;"> plan</span></li>
<li style="font-weight: 400;" aria-level="1" data-mce-style="font-weight: 400;"><span style="font-weight: 400;" data-mce-style="font-weight: 400;">Service </span><b>failover</b></li>
<li style="font-weight: 400;" aria-level="1" data-mce-style="font-weight: 400;"><span style="font-weight: 400;" data-mce-style="font-weight: 400;">data </span><b>snapshotting</b><span style="font-weight: 400;" data-mce-style="font-weight: 400;"> and </span><b>backups</b></li>
<li style="font-weight: 400;" aria-level="1" data-mce-style="font-weight: 400;"><span style="font-weight: 400;" data-mce-style="font-weight: 400;">A </span><b>disaster recovery</b><span style="font-weight: 400;" data-mce-style="font-weight: 400;"> plan</span></li>
</ul>
<p><span style="font-weight: 400;" data-mce-style="font-weight: 400;">Let’s go through each one of these and define why these are important.</span></p>
<h2><b>How do I know when it’s down?</b></h2>
<p><span style="font-weight: 400;" data-mce-style="font-weight: 400;">The very first step towards knowing your infrastructure has failed is knowing when it is </span><b>not</b><span style="font-weight: 400;" data-mce-style="font-weight: 400;"> failing. You need to build a system that constantly checks every important part of your ecosystem, so it notices deviations from that &#8220;working correctly&#8221; state. Status drift will absolutely be the first step of your fail state. Everything just works, then kinda works. Then one day it doesn&#8217;t, and it has deviated so much from the original state that you need to rebuild everything almost from scratch. You do not want to get here. So you monitor for system abnormalities. If something has to move, monitor it. If something doesn&#8217;t have to move, monitor it in case it moves.</span></p>
<h2><b>Do I just sit someone looking at metrics?</b></h2>
<p><span style="font-weight: 400;" data-mce-style="font-weight: 400;">A monitoring stack is not much use if it doesn&#8217;t yell at you when something is going haywire. So as you set up the monitoring infrastructure, add </span><b>alerting</b><span style="font-weight: 400;" data-mce-style="font-weight: 400;"> at the same pace. How you do this is up to you and depends on the urgency of the task. Is it a server with 70% of its disk full? Maybe send a Slack message about it to your IT team. Is the company&#8217;s application server not responding to pings? You probably want it to </span><b>call someone immediately</b><span style="font-weight: 400;" data-mce-style="font-weight: 400;"> to bring it back up as soon as possible. It all depends on the system&#8217;s urgency and how serious the problem indicator is. Your environment, your priorities.</span></p>
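<p>The routing described above can be sketched as a tiny shell helper (the thresholds and channel names below are arbitrary assumptions, not a prescription):</p>

```shell
#!/bin/sh
# Hypothetical sketch: map a disk-usage percentage to an alert channel.
classify() {
  if [ "$1" -ge 90 ]; then
    echo "page-oncall"      # critical: call someone immediately
  elif [ "$1" -ge 70 ]; then
    echo "slack"            # warning: a chat message is enough
  else
    echo "ok"               # within normal range, nothing to do
  fi
}

classify 72   # prints "slack"
classify 95   # prints "page-oncall"
```

<p>In a real setup the percentage would come from your monitoring system (e.g. a <code>df</code> check or an exporter metric) rather than a hard-coded argument.</p>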
<h2><b>But that doesn’t keep the service running, does it?</b></h2>
<p><span style="font-weight: 400;" data-mce-style="font-weight: 400;">Do you absolutely need a way to keep the service up even in the event of failure? Keep a failover system. Maybe you can skip calling about the ping-failing server, or you can move that &#8220;change drive&#8221; task down the priority list because you still have another one on the RAID for now. </span><b>Two is one, one is none</b><span style="font-weight: 400;" data-mce-style="font-weight: 400;">. Keep a failover for basically anything important, even if it&#8217;s a manual failover system. Have something lined up to quickly switch to and keep the service running.</span></p>
<h2><b>What do I do when I get a fail alarm?</b></h2>
<p><span style="font-weight: 400;" data-mce-style="font-weight: 400;">Let&#8217;s get into a disaster scenario for a second. Let&#8217;s say you ignored that S.M.A.R.T. alert for one day too long. Your main machine is </span><b><i>gone</i></b><span style="font-weight: 400;" data-mce-style="font-weight: 400;">; you can&#8217;t just press the &#8220;on&#8221; button for it to come back up and forget about it, and you somehow had your failover on that machine too, for example a scenario where your Proxmox machine bit the dust. Congratulations, you&#8217;re facing an </span><b>incident</b><span style="font-weight: 400;" data-mce-style="font-weight: 400;">. So, for this case, which is bound to happen, you&#8217;ve prepared backups (</span><b><i>hopefully</i></b><span style="font-weight: 400;" data-mce-style="font-weight: 400;">) and you can swap that drive and restore from them. You may lose a day of work, but that&#8217;s nothing compared to losing </span><b><i>everything</i></b><span style="font-weight: 400;" data-mce-style="font-weight: 400;"> and spending hundreds of hours rebuilding your company from scratch. It is especially important to remember the basic rule of backups, generally known as the </span><b>3-2-1</b><span style="font-weight: 400;" data-mce-style="font-weight: 400;"> rule:</span></p>
<ul>
<li style="font-weight: 400;" aria-level="1" data-mce-style="font-weight: 400;"><b>3 copies</b><span style="font-weight: 400;" data-mce-style="font-weight: 400;"> of your data</span></li>
<li style="font-weight: 400;" aria-level="1" data-mce-style="font-weight: 400;"><span style="font-weight: 400;" data-mce-style="font-weight: 400;">On </span><b>2 different types of storage</b><span style="font-weight: 400;" data-mce-style="font-weight: 400;"> media</span></li>
<li style="font-weight: 400;" aria-level="1" data-mce-style="font-weight: 400;"><b>At least one</b><span style="font-weight: 400;" data-mce-style="font-weight: 400;"> of them in an </span><b>offsite</b><span style="font-weight: 400;" data-mce-style="font-weight: 400;"> location</span></li>
</ul>
<p><span style="font-weight: 400;" data-mce-style="font-weight: 400;">So that, even in case of an especially bad incident, like a fire, you can just restore from the offsite backup (even though that may take substantially more time depending on your solution). Also, check that your backups work. </span><b>Please</b><span style="font-weight: 400;" data-mce-style="font-weight: 400;">. A backup is not a backup until you </span><b>test it</b><span style="font-weight: 400;" data-mce-style="font-weight: 400;">.</span></p>
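<p>As a trivial illustration of &#8220;a backup is not a backup until you test it&#8221; (the paths below are arbitrary), restoring the archive somewhere else and comparing it against the source is already far better than nothing:</p>

```shell
#!/bin/sh
# Sketch: create a backup archive, restore it elsewhere, verify a file survived.
set -e
mkdir -p /tmp/demo_src /tmp/demo_restore
echo "important data" > /tmp/demo_src/file.txt
tar -czf /tmp/demo_backup.tgz -C /tmp demo_src       # the "backup"
tar -xzf /tmp/demo_backup.tgz -C /tmp/demo_restore   # the "restore" test
cmp /tmp/demo_src/file.txt /tmp/demo_restore/demo_src/file.txt \
  && echo "backup verified"
```

<p>Real restore tests should of course run against your actual backup tooling and, periodically, against the offsite copy too.</p>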
<p><span style="font-weight: 400;" data-mce-style="font-weight: 400;">Also, as an extra,&nbsp; do not pull a </span><b>Michael Scott</b><span style="font-weight: 400;" data-mce-style="font-weight: 400;"> on your team.</span></p>
<p><img loading="lazy" decoding="async" class="size-full wp-image-6233 aligncenter" src="https://geko.cloud/wp-content/uploads/2021/11/calm-disaster-recovery.png" alt="mem" width="600" height="236" data-mce-src="https://geko.cloud/wp-content/uploads/2021/11/calm-disaster-recovery.png" srcset="https://geko.cloud/wp-content/uploads/2021/11/calm-disaster-recovery.png 600w, https://geko.cloud/wp-content/uploads/2021/11/calm-disaster-recovery-300x118.png 300w" sizes="(max-width: 600px) 100vw, 600px" /></p>
<h2><b>But I can&#8217;t plan for everything that&#8217;ll happen, can I?</b></h2>
<p><span style="font-weight: 400;" data-mce-style="font-weight: 400;">There&#8217;s a lot of things that can happen, and unfortunately you </span><b>can&#8217;t predict all of them</b><span style="font-weight: 400;" data-mce-style="font-weight: 400;">. Anything can happen, and the more complex your infrastructure setup is, the more points of failure there are in it, so you need to prepare as much as possible. Maybe you can&#8217;t get all of them nailed down, but the more you plan for, the better: if something happens, your on-call staff can just walk through a runbook in your documentation and fix the issue without much of a problem. This plan usually includes the scenarios you consider possible based on your infrastructure setup, the elements affected by each one, and the steps to fix it.</span></p>
<h2><b>Sounds like I need one of those.</b></h2>
<p><span style="font-weight: 400;" data-mce-style="font-weight: 400;">If an important element of your infrastructure fails, would you get a call, an email, or a Slack notification? Are you </span><b>sure</b><span style="font-weight: 400;" data-mce-style="font-weight: 400;"> your backups work? How resilient is your product to a drive failure? If these scenarios sound like a problem in your case, maybe you&#8217;d find it useful to </span><a href="https://geko.cloud/contact/" data-mce-href="https://geko.cloud/contact/"><span style="font-weight: 400;" data-mce-style="font-weight: 400;">contact us</span></a><span style="font-weight: 400;" data-mce-style="font-weight: 400;"> and we&#8217;ll talk about setting you up in a better state. You just have to make a choice: are you doing it now, or are you waiting until after your next IT incident? Remember Picard&#8217;s words on management:</span></p>
<p><img loading="lazy" decoding="async" class="size-large wp-image-6235 aligncenter" src="https://geko.cloud/wp-content/uploads/2021/11/frase-disaster-recovery-1024x253.png" alt="frase disaster recovery" width="800" height="198" data-mce-src="https://geko.cloud/wp-content/uploads/2021/11/frase-disaster-recovery-1024x253.png" srcset="https://geko.cloud/wp-content/uploads/2021/11/frase-disaster-recovery-1024x253.png 1024w, https://geko.cloud/wp-content/uploads/2021/11/frase-disaster-recovery-300x74.png 300w, https://geko.cloud/wp-content/uploads/2021/11/frase-disaster-recovery-768x190.png 768w, https://geko.cloud/wp-content/uploads/2021/11/frase-disaster-recovery.png 1486w" sizes="(max-width: 800px) 100vw, 800px" /></p>
<p>The post <a href="https://geko.cloud/en/disaster-recovery-expect-the-unexpected/">Disaster recovery: expect the unexpected</a> was first published on <a href="https://geko.cloud/en/">Geko Cloud</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://geko.cloud/en/disaster-recovery-expect-the-unexpected/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Error using AWS EFS for MySQL</title>
		<link>https://geko.cloud/en/error-using-aws-efs-for-mysql/</link>
					<comments>https://geko.cloud/en/error-using-aws-efs-for-mysql/#respond</comments>
		
		<dc:creator><![CDATA[Geko Cloud]]></dc:creator>
		<pubDate>Wed, 26 May 2021 14:02:34 +0000</pubDate>
				<category><![CDATA[Featured LABS]]></category>
		<category><![CDATA[Labs]]></category>
		<category><![CDATA[AWS]]></category>
		<category><![CDATA[MySQL]]></category>
		<guid isPermaLink="false">https://geko2.factoryfy.com/error-using-aws-efs-for-mysql/</guid>

					<description><![CDATA[<p>Introduction In this post we are going to see how to solve the following error that we face while working with MySQL databases using AWS EFS service for storage: ERROR 1030 (HY000) at line 1744: Got error 168 from storage engine Starting situation Our use case consisted of a Kubernetes EKS cluster, where we needed [&#8230;]</p>
<p>The post <a href="https://geko.cloud/en/error-using-aws-efs-for-mysql/">Error using AWS EFS for MySQL</a> was first published on <a href="https://geko.cloud/en/">Geko Cloud</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h3>Introduction</h3>
<p class="p1">In this post we are going to see how to solve the following error that we faced while working with <em><strong>MySQL</strong></em> databases using the <strong><em>AWS EFS</em></strong> service for storage:</p>
<div class="wp-block-codemirror-blocks code-block ">
<pre class="CodeMirror">ERROR 1030 (HY000) at line 1744: Got error 168 from storage engine</pre>
</div>
<h3>Starting situation</h3>
<p>Our use case consisted of a <em>Kubernetes EKS </em>cluster, where we needed to create and destroy new environments for development as quickly as possible. Each of these environments had to contain several <strong><em>MySQL</em> databases</strong>, in addition to other tools such as <em>Redis</em>, <em>RabbitMQ</em>, etc.</p>
<p>To do this we decided to use <em>Helm Charts</em> that deployed these tools as <em>StatefulSets</em>. At first, we did not specify any <em>StorageClass</em>, so the <em>EKS</em> default was used, which provisions <em>EBS gp2</em> (General Purpose <em>SSD</em>) volumes when creating <em>PersistentVolumeClaims</em>.</p>
<p>The problem with using <strong><em>EBS</em> volumes</strong> is that each volume is only available in a specific Availability Zone, so if the <em>Kubernetes</em> cluster needs to move a <em>pod</em> (for example, <em>MySQL</em>), it will not be able to reschedule it unless a worker node is available in the same Availability Zone as that <em>EBS</em> volume.</p>
<p>For this reason, we decided to use a new <strong><em>StorageClass</em> that uses <em>EFS</em></strong> instead of <em>EBS</em>. In this way, by having the <em>EFS</em> mounted on each worker node in the <a href="https://geko2.factoryfy.com/en/what-is-kubernetes/"><em>Kubernetes</em></a> cluster, the <em>pods</em> could move seamlessly between nodes, even if they were in different Availability Zones.</p>
<h3>Error with <em>EFS</em></h3>
<p>Since our use case required creating new environments as quickly as possible, when building a new <em>MySQL</em> database, we also executed an import of data from <em>sql</em> files to have an initial data set by default.</p>
<p>It was at this moment that we began to detect the problem: when importing this initial data, the following error appeared continuously in the logs:</p>
<div class="wp-block-codemirror-blocks code-block ">
<pre class="CodeMirror">ERROR 1030 (HY000) at line 1744: Got error 168 from storage engine
ERROR 1030 (HY000) at line 1744: Got error 168 from storage engine
ERROR 1030 (HY000) at line 1744: Got error 168 from storage engine
...</pre>
</div>
<p class="p1">This error began to appear after a certain number of <em>sql</em> statements had been executed, and from that moment on it was repeated until the end of the <em>sql</em> file.</p>
<h3>Resolution</h3>
<p>Upon investigation, we discovered that the problem was an <em>EFS</em> service limit. Specifically, the limit of 256 unique locked files. We can see this limit among the quotas described in the <em>AWS</em> documentation:</p>
<p><img loading="lazy" decoding="async" class="size-full wp-image-4667 aligncenter" src="https://geko2.factoryfy.com/wp-content/uploads/captura-de-pantalla-2021-05-25-a-las-13.47.29.png" alt="" width="1144" height="54" /></p>
<p><a href="https://docs.aws.amazon.com/efs/latest/ug/limits.html#limits-client-specific">https://docs.aws.amazon.com/efs/latest/ug/limits.html#limits-client-specific</a></p>
<p>Because our data import was trying to create more than 256 tables, this limit was reached and the error began to appear. The limit cannot be modified, but we were able to avoid it by tuning the <em>MySQL</em> configuration so that, as far as possible, those 256 locked files are never reached.</p>
<p>The <em>MySQL</em> parameter to modify is <strong><em>innodb_file_per_table</em></strong>. In our <em>MySQL</em> databases, this parameter was <strong>enabled by default</strong>. It causes an <em>.ibd</em> data file to be created for each table in the database instead of storing everything in a single shared data file. We can modify this parameter in the <em>MySQL</em> configuration file as follows:</p>
<div class="wp-block-codemirror-blocks code-block ">
<pre class="CodeMirror">[mysqld]
innodb_file_per_table=OFF</pre>
</div>
<p><a href="https://dev.mysql.com/doc/refman/5.7/en/innodb-file-per-table-tablespaces.html">https://dev.mysql.com/doc/refman/5.7/en/innodb-file-per-table-tablespaces.html</a></p>
<p>After disabling this parameter, we did not encounter the error again.</p>
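<p>For reference, the current value can also be checked from a MySQL client, and toggled at runtime (a sketch; the runtime change only affects tables created from that point on, so existing tables keep their own files until rebuilt):</p>

```sql
-- Check whether file-per-table tablespaces are enabled
SHOW VARIABLES LIKE 'innodb_file_per_table';

-- Disable at runtime; only tables created afterwards use the shared tablespace
SET GLOBAL innodb_file_per_table = OFF;
```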
<p>We want to thank the <em>ops.tips</em> blog for the post linked below, which helped us a lot to understand this error and find this solution.</p>
<p><a href="https://ops.tips/blog/limits-aws-efs-nfs-locks/">https://ops.tips/blog/limits-aws-efs-nfs-locks/</a></p>
<h3>Conclusions</h3>
<p>When working with <strong><em>MySQL</em> databases using <em>EFS</em></strong> for storage, we can run into errors upon reaching the limit of 256 locked files whenever we have schemas with a large number of tables or, in general, any system that requires locking large numbers of files simultaneously, as would be the case with <em>MongoDB</em>, <em>Oracle</em>, etc.</p>
<p>To avoid reaching this limit, we can disable the <em>MySQL</em> parameter <strong><em>innodb_file_per_table</em></strong> so that a data file is not created for each table.</p>
<hr />
<p>I hope you&#8217;ve enjoyed this post and I encourage you to <a href="https://geko.cloud/en/blog/labs/">check our blog for other posts</a> that you might find helpful. <a href="https://geko.cloud/en/contact/">Do not hesitate to contact us</a> if you would like us to help you on your projects.</p>
<p>See you on the next post!</p>
<p>The post <a href="https://geko.cloud/en/error-using-aws-efs-for-mysql/">Error using AWS EFS for MySQL</a> was first published on <a href="https://geko.cloud/en/">Geko Cloud</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://geko.cloud/en/error-using-aws-efs-for-mysql/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
