Monday 28 August 2017

Installing Force.com CLI

I have spent the better part of my morning lying flat on my back, head propped on my pillow, trying to resolve the Salesforce logs issue.
The biggest hurdle was installing the Force.com CLI on my Fedora machine. There isn't much help around the internet. It was a real struggle!!
I'm grateful to God that my morning turned out to be meaningful after all, as I found a very useful article to help.

Step 1
Install Node.js on Fedora (you'll need root; on newer Fedora releases, dnf has replaced yum):
sudo yum install nodejs

Step 2
Install the force CLI using npm:
npm install force-cli
(You may want to create the force directory in /usr/bin before running step 2. Alternatively, npm install -g force-cli installs it globally and puts the binary on your PATH.)
 

Monday 14 August 2017

Before I leave work, I've got some progress to share

So, as you all know, I've been in this pond of pulling Salesforce logs across to S3. It has been an amazing and heartbreaking adventure.
So far, I've been able to get through first with cURL scripts and finally with Python.
I found a helpful script on GitHub that I modified to make my own; if you want to look for it, it's called ELF.

I successfully got Salesforce logs onto my VM; the next hurdle was uploading them to S3. At first not much of it was automated (I thought that would be for another day).
But by the end of the day I had the SF logs flowing to S3 from my VM automatically. I set up a cron job for this, though I'm still working my way through the Lambda function (I guess because it sounds way too mathematical I have slightly given up on it, but I'll work hard to get a victory :) ). A trimmed-down sketch of the kind of script the cron job runs is just below.
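
Here is that trimmed-down sketch. Everything credential-ish (client id and secret, username, password, security token), the paths and the bucket name are placeholders, and the real ELF-derived script does a fair bit more; this only shows the shape of pull-from-Salesforce, push-to-S3:

# pull_logs.py - nightly pull of Salesforce event logs into S3.
# I kick this off from cron, e.g.: 0 6 * * * /usr/bin/python /home/me/pull_logs.py
import requests
import boto3

LOGIN_URL = "https://login.salesforce.com/services/oauth2/token"

def salesforce_session(client_id, client_secret, username, password, token):
    # Log in with the OAuth username-password flow;
    # returns (access_token, instance_url).
    resp = requests.post(LOGIN_URL, data={
        "grant_type": "password",
        "client_id": client_id,
        "client_secret": client_secret,
        "username": username,
        "password": password + token,  # Salesforce wants password + security token
    })
    resp.raise_for_status()
    body = resp.json()
    return body["access_token"], body["instance_url"]

def fetch_event_logs(access_token, instance_url):
    # Query the EventLogFile object and yield (id, event type, raw CSV bytes).
    headers = {"Authorization": "Bearer " + access_token}
    soql = ("SELECT Id, EventType, LogDate, LogFile FROM EventLogFile "
            "WHERE LogDate = YESTERDAY")
    resp = requests.get(instance_url + "/services/data/v40.0/query",
                        params={"q": soql}, headers=headers)
    resp.raise_for_status()
    for record in resp.json()["records"]:
        # The LogFile field comes back as a URL path to the actual CSV blob.
        log = requests.get(instance_url + record["LogFile"], headers=headers)
        log.raise_for_status()
        yield record["Id"], record["EventType"], log.content

def upload_to_s3(bucket, logs):
    # Push each log into S3 under an eventtype/id key.
    s3 = boto3.client("s3")
    for log_id, event_type, data in logs:
        s3.put_object(Bucket=bucket,
                      Key=event_type + "/" + log_id + ".csv",
                      Body=data)

if __name__ == "__main__":
    token, instance = salesforce_session("MY_CLIENT_ID", "MY_CLIENT_SECRET",
                                         "me@example.com", "MY_PASSWORD",
                                         "MY_SECURITY_TOKEN")
    upload_to_s3("my-sf-logs-bucket", fetch_event_logs(token, instance))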

That's my day in brief, after turning up at work way past midday. It's time to finally go home and get ready for tomorrow. A new day, fresh start, fresh tasks and more hurdles to overcome on the way to great things!

Solution 1 - Problems I encountered in my quest 'moving logs to S3'

IMPORTANT!!!
This is a continuation of my previous post.

Installing Heroku in Kali:
This was my biggest hurdle. I couldn't get it installed on Kali Linux (I'm still a novice in cyber security, pen testing and reverse engineering, which is why I'm a researcher). I searched everywhere for all the help I could get to make Heroku work.

Warning:
If you are doing this and you're tempted to use mv /source /dest/lib... don't!!! Heroku doesn't install that way.
You'll need to use the bundled install script instead: /usr/local/lib/heroku/install

Helpful resource

Tuesday 8 August 2017

Ways to move Salesforce Monitoring logs from Shield to S3

In one of my previous posts, I mentioned how I was struggling with the reality of moving logs from Salesforce Shield to Amazon S3.
After many attempts at Googling, plenty of random research, and some pestering of my manager with my findings, I came across something quite useful, I think.

Solution 1:
Shield logs can be sent to Heroku, so I created a Heroku account to move my logs there using a free plugin, and then from the plugin on to S3 (the plugin integrates nicely with S3). Heroku, however, has so far failed to work as it should, but we are getting there.

Solution 2, from my manager:
Use a cURL script to download the logs to the desktop, then upload them to S3 (I will try this if Solution 1 doesn't work).

Solution 3:
I found that I could use a Python script to pull the logs through to S3 directly, but the Lambda function side of AWS is a bit awry for me (maybe not the right word, but it will do). A minimal sketch of what a Lambda handler even looks like follows below.
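
Since "Lambda function" sounds scarier than it is, here is about the smallest one I can picture. A sketch rather than my actual function: the bucket name is a placeholder, the dummy upload stands in for the real log-pulling logic, and boto3 comes with Lambda's Python runtime so it needs no packaging.

import boto3

def lambda_handler(event, context):
    # lambda_handler(event, context) is the entry point AWS invokes,
    # for example on a CloudWatch scheduled event.
    s3 = boto3.client("s3")
    s3.put_object(Bucket="my-sf-logs-bucket",  # placeholder bucket name
                  Key="hello.txt",
                  Body=b"the real version would pull Salesforce logs here")
    return {"status": "uploaded"}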

Friday 4 August 2017

My mistake with Salesforce

I never fully understood the consequences of refreshing your Salesforce sandbox from production.
I learnt that this week, and tried everything under the sun to get away with it without affecting my organisation's customers.
Oh well, I learnt my lesson. I found out that it is possible to discard a refresh, but only once the refresh has gone through (that long wait of panic!). Everything on the staging sandbox remains the same, thank God!

For you to know:
If you don't already know: when the sandbox refresh is done, go to Setup --> Quick Find --> Sandboxes --> Discard.
The Discard option appears next to the Activate option once the refresh has gone through. DO NOT ACTIVATE if you don't want to lose data: activating copies your production data over your staging (or whichever) sandbox.

Wednesday 2 August 2017

Dear dear Readers

I know it can be rather frustrating to return to a blog of shared interests that you read, and find it hasn't been updated, not a picture nor a line of text.
I understand, dear reader, and I must offer my utmost apology for this stretch of time when I've been unable to click on "blogger" or bring myself to think of the phrases to explain my next "adventure".

This is my little note of apology, and I hope you understand. Since I got my new role, which has barely given me space for myself thanks to the job excitement, I have been incommunicado with you. I sincerely apologise, but I won't be too quick to add that it will never happen again, because I am human and we all make mistakes. Rather, I'll say that I will do my best to keep the experience here more up-to-date and, well, exciting (I should hope). Having said that, I've been up to some new and strangely exciting tasks of late.

I am doing something quite different at the moment, and I thought to share it because I do not know if I'll be able to keep my head above water. To save you from drowning in this sea of AWS and Salesforce with me, I've quickly stolen ten minutes away from my regular desk for my news desk.

So here:
I am working on moving Salesforce logs, which are stored in an app termed EM WAVE, to an Amazon S3 bucket.
As you may all have guessed, I'm pretty new to Amazon Web Services (a.k.a. AWS). I dived in when the AWS engineer bailed out on the organisation I work for. Well, not exactly, but kind of: he wasn't very happy with his job, and I could tell in my first week. Anyway, fast-forward to this week, and I have to create a Lambda function to move the Wave logs to S3; apparently I need a Python script for this. "Where is my Python knowledge stored again?"
What I've done so far is try it out with a test S3 bucket. I thought to install an app in my Salesforce sandbox to try it out; that was silly, because the event logs aren't stored in salesforce.com itself. I'm getting a little help from the Salesforce website and another resource it referred me to.
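
My first stab at the Python side is just listing which event log files exist at all. A sketch under assumptions: the library (simple_salesforce) is my pick, not something this post settled on, and the credentials are placeholders.

from simple_salesforce import Salesforce

# Placeholders: substitute your own username, password and security token.
sf = Salesforce(username="me@example.com",
                password="MY_PASSWORD",
                security_token="MY_TOKEN")

# EventLogFile is the object where the Event Monitoring (EM WAVE) logs live.
result = sf.query("SELECT Id, EventType, LogDate FROM EventLogFile "
                  "ORDER BY LogDate DESC LIMIT 10")
for record in result["records"]:
    print(record["EventType"], record["LogDate"], record["Id"])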

That's it so far. I'll update the blog about my big mistake with Salesforce, I promise.